[Binary artifact: tar archive of var/home/core/zuul-output/ (owner core:core) containing logs/kubelet.log.gz, a gzip-compressed kubelet.log. The compressed log contents are binary and not recoverable as text.]
{'_݃Ɩ`Mcu35O R5\q2!w>b c1v ߣ-y\9b]%2Z0\-LOhe_i12l""I]}`ϹQɥ샙rR_y@Cq;dMrm8" ;$GghqL~=L/ДgZh9+yDg#1 ar)( 7| Ӻi8f)آNXbo*OԳ3vjzrFɥ7_v9CCSWCs{9n!V8ԧbeLub<e쓟~o[VݲŅ2|6풣Ym;܎;@HCOƞ@{`4sdǿ|w?>~?RO釟]/8sb&ޙBfuw5oko5װFB9zw [rCvoeboOo_~$Cӟ|tkU$ǝx[0['30ٙd`:W<H&WӚd]OYIĎY87INPQG;Gfسx2julSa!Arzu9L{Rp!A)'X900K|\^R|kE (lI7&(AsebI(Pz0b{2غk3;L^*}T2^\z}DU]ք6#WF.WQ9;jW\[Oe0|⼡l%jp!0OVk;lϋJy FC3+Շ%׃  \!\t2Q2};\!@MW8T@v2u2 zzpL%c ;pU+p5 d7_#\>M+)#Br]+V0zpTJ-zzp%9UCp<:W\UVCL"=\BR\LNdB"L f*Yjp|7FyՑT nL՘Ѡ\/W9> K0̿e0bw2@,ߗZ8U)gy}Fί0"\; $Rr] ؐ\Y˦ie%ʶ[HbEeJĔN8S&&z` !]8Ygta$Wh]8SkġHq_.lNy`Ise13F2pYs;lD0=i؆9w3jN6aLW[/zW*7~يlP ZBҶ4NRjgA[ b)%7J\((m4NhdPBeLZ0OxdBOߒ]{ng:Df_G>^LL>^\gJfa%.jmOJ=I_bocN֛|"O\dyJw%|I>)j`i/v.NB~/qr`eKQ{BkV]Lå׫YZ-jRƵ[(dme٭%HkM'6DB-MEk<$с>ipTȬӮ1V2S (|w5?ǥ]ޏR\,|8*.\ݠzb.(V\~nZϣٻ8r$+'$#xk13Xlocv_2*w^b}YRY%%J* tr&Ōd2qْɾbɣJ#Jl{ʍG%F@J"{ui%_yXbmTsY0j->]7o?rw{}]}5R_~-7,c)'sV~vgT硏QN8Q'^/^ʚP^{M.. t_l'BHˠ;ZߝN)d,A h3h9۬ 1VN:mv0JChiV'T8B͟%Gl{)YC_`m)hુH -b*cmTUE  })g *c)QCfJ9GXT u9EHuPM+o.mɔٔ9+'|y܋R;|?~:m9w_mspuغU8-;2Ug>eԶ)bFXrquCXYĐ6& i=k=,[^rU]e';ngޯzxG|0vwwww/|Ȝ]JRXꞎ9agчw;d5=nͰfKm.Rۧ|ubWy+mgn7!n6 'Gj(}>(÷[KOri뀡!ECJB) -ݟنМ2]Uw>!ftQlfyo}W( DF(ȥ!X_S 5AR% LdXh]69V<>$S}Hx EGJThM?U?SIL+-|4*F*dHS,z X˖*PgM*Z޵a.Kh? }ݪ2</g/ b;N40)2Y{þC€Z((9[;Eh3٨@ߘPQF<L3<9ٛȾ S!V`tJUCgM;.<kvkՆjOvg!Uv 6QJգ cP\lC]a'j,PhpQL1GQr!e e}Mp$-"c~ΥSDZ;[D,dwƹ=PWVyn\,y ,:We=@TNdZ7M؎M^Us+ bN["5`&| juZtu1'=c2G5%PBCށ$Y[GD"PiGIp1rud4{͉W;>ޏY7 Xe$Ci5VP \C0JW'#[c̤ZI툈l"ݴv'm|Y<aq`JˠJ *i!ƆmI9l5wN( A67ZcN* 9V;v zh +6dʢ^DCsHM^$:GY'~xa!^}wWG d bb a@Tv ٵ7SN >7fܣ#G9Ga1,0L%do*=T%א*Ǻ&Ҥ]}'mFҰ~=;1x.bX|;7:˗ig,2HCtMSJ@:tjB2(#j_$Uj [#9hR6zI&(SvbE*Vh(YUl joMQ,fO -bޱZYF^O() dZ/c/1AC'~w^MJGcUSrC gȮ 5E8MOUʜ6<|;6'T؜Yi[u&d%hd!T1UCLPaj=m -$k2uhRȺR$JtUgŠ -gR-?h*NK,vХVǡ˫߽yrnwRч.4mz{nz3pi`;3«!~)~\ 濖Yޠ͹}rӄ-D8~ o?~L֚*Cq.|??J`te&Vz^'W8'm㤌 *WdNH֞rea]jz6k;"7?-?݂80 m[KۤMׁ A^ ˟ |ʽ;uo {t?7gw2 r4?ϖ|m'Iۇ1"o|&zV]•?Oy N $5qpql#?erj6Hv zzޛ] }%_cyW<?D"M瘢T z@&hmҞ+tEpZ l0{$2F1C!Б>Qq 9jRa޽YDeKpj49IkEbH.a+Tkŀ˾ 9${s]. tqhԻ0/a i-TjPʏAc"ZkE#JV}H$ $V}'T۠(ev|vq1)뚲6,hRm5jC mh[Yό,7ov-ڍ',Ta޵q$ۿB lH/uI{N_)HHV.ÇP9Hk X8隞ӧ^DWisMMR.>,GgCcJ a[cDV ugH&KZ{7rp)׮i+AgL@X_q5@\)+"'i}תRNM-{>F.o̸7w\WꥮLk~#2yxC3Ye! !ggs88!H0x`R)&d0>k.Ѵ7GHⴔ8RлLֽi+Bu^tervW Z-ܗ0'ӱzlHՆ] =u{hIn =5uÛͲ'Ci81FbG|}U'^kóB_?l%?FrL{ehC{覨qMӛ˽??%u￾?~ǟߕ8/|ûnj{h+$Dfk-Oo[kZ߼kapigFKX#7{|tvha1$ćo_~P]Ә՛ez֜nۧ{톱_!Ϧ{Cq/nQKRT,bClFօߕGb(|tMiVw;8q=m5s* ,#VN4&-ey)MV6v*xk:d+jszzC_&^?M~9rkWҝSGϧi؛P2E~9!z7M\c}T9w]{b҅yˍhWsr9w&lYR__?Jvzuz+瘮9BTUȉ4lHk{!%`O>)6܀ƅ=^t1BeUu:\)_s|ϗ_ڠcpvdD̓p3Dejf!A9$ "m26 RB K5>ƤnH?>}-fB);Bry!Gf<)j/Űr}Sr&*$VN\[-f=U2'rRsNIB3~٥ B<0(zr+ #[} EtF,aZD3eH\ēWldͭqI謾5O;dޤ 6.h82P"Em{v}԰#l/ŀgނ.TR`øKNI t{(@R<&v&2a:x`-G8'rKi}ExO>OYPvdڦo`h_{MˁD+$MK;56Os8`E ~qۆzXi%o+{ vI@up J+&9#*+7pU*ZpER*`\F2%e >nnATC;]Ze/cG z.FiH938;b%ҷ7~nAkӢnb(a]W/; ?9ܹNƇu)Y]+)*D XM.siFst73g&%zXtAj]o~NRUm)43z{mNolԒP@Xcrtrs6 =$0lo8F-QŝEJ; 9UkGpE {WE\WEZw[jz(ziz >\Mvpѫ'rɷ3DQʟ_?⵲ *%0WYEwhBHMo)|{d-k 4ZN@^ibT b l.!D#8z=\fcwT5&OvnUp<8AgLqIUrx āgh-NiM'6ʩcJs,~3M5ڈif, f9&fc8hd>@Cgml*IEwZSMOl|܄xq'3$J~! 
e+d!j0WtC[-h 2VծXGku`ցqr\&woDM"=g~[l6f~D6H貼qYl¥4`KO.dtqxJO{6TRLr )B;se1"ȊfWyL9"/y3z Gnjı2b05Y;4c(xfG8W1d2zrUY ,9B"3A^@ @A)nAļsx > 6s,R3X(0d8 2kM5Jb *jEI[>s)i}̼Yn-3o#v=89vؑ0rPWK9ş-% ʀ"rR ř#G9zYr].{Y"0d&V PS ^LvD u"ƪbfo:/)U,{*I>s*- HU2V,[,G2Gd:+!2s4L%a#6rD!%BH K˪TƬeg2DiZimؚ8sDBB[4Ųb{\Lͼ>ݟ\PN+ Rͻ䣣/D1DCmQqǓ) ~:-L 9%b匊Syl,PuO3m`xF#DA'@bLi IVy͎ȈgddUf km#9$@FGKqn7w CD&$e["=HJCRPه,s5=]էOy[i#1H:ƙCEeBC)U,$m@x 5"ZT(Q2r6Qw_},'TPd t2ʺ;1ZM0ZGޠ @aTn(O5ŅJ ?ggS-M(#@3=%) D<\+<3Z$P"uH.p0!vd/1d6[q!)B*pP Szsy $GD|0RƪbӈdTs KNs f6Wϫ!J*xK<.85g2g9'T3cQB9a'}"N?YF'9O'qdZ$ZqzBDxb'dgyqYERa1:XĄuhx=!G GMǧ7yG?I}=NNw=&`h *g[yWZ~+/|w5?x n n'u8NE5܌Rlo/x18~ĉj =}j놵wIk7˓=*bIip(|2]7=si8p}kmu=;d;x~oߺ8bm-~7g>nhG~jV1^q+(6joh8T] BiB4bpYh^Tl+D}U4qtwwQ}2*&)S4&Qo6?@ v{61 2_@ȅW5X?\ /(Ĥ#!R9K,P0NY& J"8fHR Sё>aq#SFQ ׌DVRJZ[g1*Y~MlN|#|يzܨk3C}zaC3#d.2T}T<.p SJ)B*l3Eb9̩5X{ˆT]F޹E!iE1=&]Gz\NE J#`Rd ܙ!ND$8*7 Z}xVP.H[]ʻ8tE./_pM 'qKۇ]_^}n>8|`,۹coϸ@# m/4-YפK6!INn3_/Vwr3.BNzِؖvX{w g fymŽp\h,.:uA Ȃi :?āqcpx*WԔ*x 2*+uPsk%LE Iy l`Th"m:aZBD@-'-3'f_r{aF@ 'fbCZ˩;=LC7J[@0R@a%ə">/3gG%|g%b烆&P9)2\s1eR &N &Ј2#E;k5( %lfv&Ύ Sg=D 51mm]~Ma^Φ4wgCqE-&k *Ѩ) XdISn$3"Xġ#MxŌ k}gZOGBt2&Խ@Qמm񿇐.xV[b ̿q3.%+ t93['F"傅@$D-t7]=ٖcphj ]->PRvjS\k tyoHAz\jl{BDL&xCN$DUK%."mYQ4NScp@ :g(@,6'QY-<ov&ΎmΝ=^YU¤^#~ޞ[Κ֒r}nIǬRLȭu{fX/[n]»zr9/!ݚ&FѪYf =磞Om(nUsjany+Av.v޺M.R;/<Ɇ5W5~9?-t?MKY{>j.fM]k욧ݽk4{>ik8ͤS_z@Ⲫq^浕=˃yV+du$| ʟo}ͫwv*P ЦM%&rN\jp|I~mBvL^.z4nNJ8 ƛ/WSϵ[AvuE{.ᵻ4n-bxA^ ˟;^rWچpΒA(>pa25TpKud}yNʛ@c5%r$IhNל8"\rAzbN r #0̔"Ηf(~P4& 6>+vUzJQ 4SdNQ#iM`SKت@%ztLX*c?.dE@,Q;p~}#ױ4|@G~е;pB[w'Vޮ w/{>2 uy)y?.ht*U@0^BrhG\_>~SG\9$gl7y&!PثE]&v; \uk fzpEEW΄ÿOO.]yZL㞿;suy;X4ZR%r?m} Vnݫy7ՙ+'_ϞnPwn^ԚbEeNC\:q޳Ju?>GB DV9gB5Fؼ!JfߔUU{'r?$_ٜG )P ke{75;\[.wѴ.:jNR'B/ݯ~R3Oi=xnägOzٛ/jEWl|v7?IϼwW_3p;gnޑgn?i{wˋo_-Q㛻Зv嚷{o3?%e*p"ssW(|Mw1ӊ KnrsSF4/slW4]5>Bx8^fyw~Vg(l#ަ_88@25(dE3iZC@.DGMmkl-8a Tѝ7G iL꽂)얜SїpSR5ݸ_VI8'qN9^<@)>C\GYb&'V@ҷ:Ƃ9q krruPh:Z8n9;NL}OA)R \d0lB`PL AiY)TtT3fQǔM;[]Po;z,{J5碡jʧcD榇aRjUI͋@eqpo7GC1B_BS-AX5]-JJHZ2 zoW&{5b Oе}V04Ot`t @̡t+ޢ@W/~K>r}6^PFnS)H6(S,cD7s3TsU<R5.4H!;G;ߒ9lBN 'A*N:t$.i~8rE&ZSckbE3YÚ8xw9;&ɗ9iwy?pF:j闥feCO~R)}psΝ*˹j8.g> ,lIZZ9u0Poxe*`&rbC֌]Όַ9P!ƤF_rqqjD}f.@ҠjŌ9̜ usVώ6溷Muo7py"u&ºqϼgmSˤ R<rVJNS̤ % J7P+'m&WD ³oۚ8}3N_@af.0sR_.Ff(YE{6T=ʹ$͸9m![Pt&l_K5n&Wl5nȁf!{D4s9CyӥzCh&;Om&k6{knFk$5G>,(}:hTqHDZΦK6fi`BO-\ؓ&­a*H Rq9;'R^ yƾX7ֶ3>+Eg =l͇;,Y~7'LA#0%'T)eNȘӅN2g_|8U4cwCC} =K 跂 JPEƊ1R)ă{9;_˝`(a0jÌ3ݸ(@o!gQV(ZCkȒq  [ YE͢ TRjJpq J@ҼI%alERD\Jx(8?ED8#⌈wiޞφ(E; =g<eESh읫 c1P#[c#YqF0P%\J[쏃DK4~խ!u}q㢟qqōLjL6V\3^#FӚze>bzSw)gR 3.>. 
Oa{< TI=e#T7CR9$Xoh2HG@W'6U& &emb*ڇ,R&k:/Ab2Zn3FiI;f!%2326Z$jO Pc`eR*Fk){̘UhaPʤzH/~⋎/M-_zG B/%U.}1^>|>(>NdWP&FcYELQ[!qԩKϑ;,r'e5s Xr\Qg|Q1*Sg$4<dUW<E<ƵRq6̜lWapԋU?>AiBe2^f FRDٺ2G}3Fٜ'y w>iX)In$t(q .@(ևB.*@/'ӷG18OSR;'Z\Z_.΀wa+1 6D $CL1;c['bňXh7#}/B" z]8p52Do_-lW'08oP9fo-sx ;}+Z#۸r +pQ3;Ww/(OL!kEɹq\Ԕs+>UƩxĶ.+8;$xIhwH֔=eL /c߼R~[wmmy`x 0LvgfAPBdkI翟bKKKLrܙ/jYf*n_Hc 5d/ ePɩ1613RPe}fU=:Vn_eN)fˑZ  PuZzSHmRXR* Qg,+Ke:C*ZiF&smXM,tsBZ^6c#z|-s/>j=|"yFUϨ=O"W/Rܿ*̲_"s}9V5ٳelAL1H|zbbww*g/-qyzF_8AE4,@%C̵1rde~ʌEnK'#Wnә{!UD# +L%,E-sm~Zl6{ s3<;>Y+sq>uwQ9g=/'g]K'{ bz{ͷīq`WǫKE+JIJ!%FZyVF!=dE]Gܭx:JsƱefŤx"Mcp&bJ\ZL:X`2*<(d6$ Ǹ0ЉfӵP",J04p u7Wb%"rfvo5tȒ ~ȋ4*0=~|Sb0^ ,L[~!pux5gĿe%gK&N!Dpi:錌Afqmjhfٺ(^kdzZΤ|iHK)+>r)ٯ<(&-vn TӜf Oh:>|z_ޗ~ׯݧUO4ha8{7ۻ߷ZV[v--lѵ5z1[b+N=zlWx|NF(8eX.VQ0I$gt !Ў:i7S'2.;vOi]~oOD"C'>Z)Y*˸YL\ԟ+A56tjVР&7Ϲf=0=_K%|g%<@#,: f8KRpd9z҃)'b RGm(e )Kn}Le4N #*HvL$ +%xVƿ2ch]~~=3.ۭĝ|rM1òM 0>y47gCIQJQhc!:K.K (uQ Kgj=$τ-ƒX2Ӹ7 ax൵:Tp7#,!R8B YI#vV M{V>}|,MGRt/SBh;.]._bmuZ:$͢)l .pòiNaI¶*1%,53l\2"h /%대LNgG7=<ߛQheMs"( QO!|N\(-$ľ dS:QײzxĖk}6@ktVrf3HXē\#뵾!)}WEn?j6 u'PT۳R2w^aċ/ݼ]hQ{q,Vl؉Ƚa&v" Vsu蚣Tt<ӟz\NKQv{6h, ɾ?%Wbb8Rh]Fc|p.ғ`DÕN,"Y`"OrƝ ߨ}tR'/h'A'.h7q=mAIkjJ>w(hzΙ1a ^n(5RP@EŦdH.>W؟*;2JeJ[91$2\ZR 0gɄJX-%.\,H~P9yp`1%kRq Zh(x(E5q6G)|'0i!>KqKLwD#:-T+ܦ@p4Y\,UUW^xYٛd4pMI2⡓uyJRKQ ]ޠhu4ξsI$"X@m5ӠrVI&ʐ |PG`RVfa(F$*E1RkF][o9+B^vضy)`grYl#d{غز-Ki[t%VUůW)c֡'kY4vZa8HB̝Ac[-+{my{1:J`A')AvLpA1h#%93fRPL0TZ*m| Vq!R/@fC&eFM|]!J,RPMZ{O8aϙcXM?P#B;xY @ICV.8$ +mP6PF.2ig;ԆD6ϐE[fknd1b{lXd`5q{ċLC/YMK_T_H n;ƃ.@BcI Rw\R b֩w'ҎmaeD#5S2v\"_rdF1z\e?>S#i ճVS1T@K9 &XS:p%664tZwVGNF2 w@&Š`9!W(-{@`d9&5Ia,SsWL8ˈ sHNȒڋ щ|.1&* 4#Ib/vam՗vʤut^,rorɴ@fbLP%6EJij_QA*Yn5mՌ۳6 ibېQYGTyp2In!" ctO]YDRh8)r&K!8&uflց܁wUgC;k7`f*9(j)ú#\] 2L )z .:c悑,1)z+YApPAg5M|_?@gz?l$.o !"ƙth4W1&#.Bh;U jI8 t,ES@=h$\ PT~ȋȜE>GDZSӪVۆ vJۆ۳E_&Ӵ5@49K5pHm 16qԳ\,)z+"$(7Z' ΜQ"l|lM8)CdFHZI%wJa1)r_ṭy,Nf!ZT#.}u2=Ҵ4O/%e%؛Nig#_ިz ^w9<-D-Qo0 Y}9& Q<|9肞yi02:F%X*uUE s;^U)pCNXTVlbHm9A7ky+e<2ɼm]>^R(5PP6 T,6 6Lb퐶8CZ͒ޥnܖݸ{6Mr:PЃT:fxYP6"giC')ڌzejթ{&GH|XnLsǪ^/vgB(l`sHUQG$MWK@kc*Kܖ:8yH\EN#B]HAYa*)rkWg_!tɷ(899ksݵga\慒豠σV|M2Bsg«P] %$#F%FZxYE+@|g; Ww6hՙM:smZ}l͍Un~.c{{u{e~]{BԊ-񀫻ē >nek\r`[ q);HrxOBroitoxYf?_w޷7ޞMf]g)\@Wjq?`hڷ]"Rgősy[ђdfeKndU3bu3L;,S [E=XzAr6O'\ݍ%W]-g\h乍Q,_}6&"xo8K=_I'1v/:鷟>|ߗ||f?QLrao>X_"jyӲZ޼iiaֳߠk+h.m㸘[[SwX]iݟt5kNrKd'tq/ξꇛRuIj]% X.}ٍ**%{ce4qGtww+P⛓]Rt8 {_Z|qF6?X1g ?[(X*D|T8񗕋#?>i8<I?qLF .(QtRXc9WGy|,hVwoq{ 9kנELh-HvΠX6lBx5FƷيzܨ7^Z݁|ɷR\h`~s+OK̍U&zY֨BMP5(T.)Qy'g^ܢMZQ=KU}eJOKRU-k0L)&OH[SY eiD%ǸVQ$ 1k_dz24wEq(_Y?^>^;zGyЛse9!z˟Nn}}Tjcb;}|ȪzG7OnV>8\eb@m>W)VY7K Œ imِ6;LPOKSNߧ~!x:QӍV6YhyH?,pCpz*WԔ&E:-3@P&aZOBɊ\$-F^h6, }fLy( 1W_0<ݓ`} X ͵9)j;F0T=B77k@ֹqZA58bXTJe5\xV#<0zbH}Y{@P2cf*(ubܦॕ2 LZQrR(q9QgDa$/xo5q6oi&~NJ>~f)?wB?/״zͰ~/ZOjPp/% $br]s$}ԢE1XV &B#]q: !hku*}NFCS1rxDGXBpPI#vX a+P릌q B ẙ*-d1!x}݊];fVeDt54%c^En )/IVeƵdyM /L<dqKd7;dz&Gs=+]Oy5ϣ(ʚJLQ6`e(W1LkEWnMb䍖L'C*> mc4lҐ2vY("`7l18Lk]G>i:1 e:|\&h`.|uv9p͇c p=98WQ|EC8\ۮxAQ} 1i;_riB\57Z K|i HrW|}l!|[Ox@dPP  DѵkZ/ Ť bQ@R'6#kl @]afLJ,) 2*.A{3$ccM!BPɇdn:cy;Ufsc:@o?v*p:^ƴj#Gt}ioܿW|gʚ2Z4q\_y^hC dP4;C$yE :Gr> +")ywYH/\"B&98.eB/PEϪ|eB)OGWhi].~)0J Rrg}芫, jsK3NoNg}u7o s>_y ٪Lo'yyZگ (<)~q (%w%*l\.s*6Nӿ́ ! 
k_3"mU|) }V-1|?g/e/+//m[/oCO׸},ȶC-ΞN.33v2 :'6{'Ww SmӮ|ˋmw *`i#|lPUb@p`pbR8QS㮖\8Tl'YP=u2lbsZYP6]Uee|q^L٦,$(t9R}N"f E$9E/;4f4gZA3 ؞O~3~1~ۜ20#rL_m--.=>cďtmkV8_f7\z or+Ș$ZfeUW ?;'@,xu00b;ʷ]Օ nYaruz0~9uYkeos.|puDsjSiez?5mԵmJڻrR/eYqsP=eϭQ V~1A*Rȃ,؃UJ{ ڢa$]D?~dzs+}8}E .Ycݧ?#<7Ur%9{2)\}_5(&_y/tru(ᡞLF/cE7W㶮2/>Sb 򧿏L>\V>'UnRVNmJzwzϰ\@Pw^{ߑkkk[$"qDю:QE;DNDю#-UzA(|1(RQ,--%hi0t z0 Fi)@kkkkkkkk?V"RoA0j]@q<׶v|{vO;}u/e9;j$8\O>ߝ+-0W{_Tb("XsQBʾdB0E3'%%a p_xoůx?[lgf߿WێNGwd ?Rs;}tK.m<S$6w5 ^ųLJ๎@c(R;uC*f\?~F8&'+xEJtT%&@J5ǓUMuE&9KZjȋJR@ u t"A@x_NF:.)w֫qb ӻWc1:32maAT?k.Ud@R^x1)Aulc>3=ЕV\ԷYJzj;^A{ zޥlwಣ2ĈvVQR6 :dYgq|7,qYZ 샯tMIy i҂ xk:Oh#;&{cTXcͻ٘{(T.qږȱNPtRDR2zvr,N >I(Ȕ:)E3m|I]LvR5j02.hg*)ŦQX™ u!vm_){2f $VD(M)X+(6VflVQ& Kqmg䆧h~yXxTFTUƇm7&iWۋԨVy2)Exv騻09Q1fq b࣎@v@ ȍ6SdȖJ4Q*J)}'*HFel&fX-l63vJm^m჊Q7HܲnQ_l/.>]Oo^6^DB@ !#j YgC]e;QL(jV kΞO UA5h0L\};AV?ٔ[fl2-G`[͎]jlU̓Ae E0I)8Gu쭖X,6P6!- VmDJd/:l֤,  GQ|TtjV1flSHX,bEEqm.;cT弰+KչQjm8O_.V~VHztD+3Z(=!)EڈN-eyx]siggTxaSȎ(xUѕLT8y'2 c瞶ؖE 'r[dM*N(#M(PΖ lS8[٨Wzj6Nԫ&*>ڲdk2 m )z .:k rtdլңA@KxL#JYҀOETKXplETP^dn8i>Vsb'}7F4ݻ,Cxe,X68ѹgO,JB Y \"ik[v?7"*j|m8ٕdpZӂM\[R`dsN=QUi6Cvl#nNj8!ÂCoM28Y:0d#4',>}Mb]~{S0R:#yQWʛF3[N~s55cى<ˍGy`R)gY)h±#Ⱦޘ(,ٻ6$UgG:g[CE2|X|_H8(,[tuwUsBnzEl Cz /)Oc.*5X@_ض+Eq+ i2R97EqG[TV36(.~_^5+n_%W9IyRK "I"` O *Љ/?|ƌ?L#LE>9\ FW ~7?NGa3ry}ͨjԾL jc\ DKfeXY-3É`Sb~L~ moli6&g7Iht5-= $ϦA5l2 Pi:ٻDž%+_ީ/ U?+tʷd޽'SP0,ˠ`v0n̮ьh (zBܚ1iiۂ-"g`[.[JL.0"&{[no=-'M&Z.4S <Cp[&Us$0ZWYXXHDcK)ܤH,,xG,#-rs-:q} 6\J:3i > +geoȌtţo#_# &1$ `7lTq$<X G1T"_>&w.@3U)?nIb;tWIտ 6ߝLS],abg[& n0YwsFZ3pTͨje\29O&ju7{ϒ2h)o\HgnU+;8hCw{c@ A&T91ER I4(,,\!';vIs͖{-P|gK9A=ʐU@$9,;ܼ}".y]˷V t}'%8C\D$ _2\txrYDE=!N smﶓ0˷Ho; <*ww<ÛOx> naf#[ 7V+Tx7!`LN1Dڢ* Sܵ)dbL4;ֵ\M ĉ"BL1jveZ&o8z\wx_po^^7'/_8Dǫo^®7I6&MXo@wM[5Wm5]- r;0hnD}CwRU}ZC Vq7>Wl󋲞\wU?RrP! 0À@_u:[Ll[me4qs)N,U̖T>7NҤ >4}%`3wHsr'5Y?2㳟?6"a%FBZ C0L[#꠩g)x4虒22hި{lHBԿ좇dǣI-aLDE6rςJ δF0T>jXeŨFh-?jjZEbt}69x xcʠS po4VR#ʷVx ET~sS*&ǸO8m!78Ŝ-:2koT9PufޱEj>o.ꐶ/ f`k#9oB!G2qV{tH'h,}#XTބ3fAbYiZކ7H.B?@E^žCAl0ɅnMK_o߀i7yA9~iXAA9k^}}. dv"gq d;VӶXv~iJJ7V^9'ƹg>?ky sZD)3lnɋӊEq&fwF6YU24NzgQcF12IQL\˝4T-WvKЭj]@Ejҭ.|=ܗh[Mwz,͛< IAY͠O$MI$# KpO+>%b+;XAyi059>x1Sх~F+Np{(lF.Ï8\nb EA@(Kd68Ѐ93NqWʗw' k^iVtʷd޽GZHfv5X`0n̮ьhB+DsrW{nz_J/gMiʻiB]] l"rlkX8Mp ɨ+—JGݱ%]o[XG5M]U(bt.r8.d kai?ol<~Dh7\ 1k_#=NI%-tW <33tf_a8ࢺ7'b͙pxH-L(ɫ.Ղp.2;v __FIԃӿ1ɡ)F:H"*0JCƊq3?H\]G_BHW[ǽ _s; ot$'IN,@7rL"Ku&&R2HRqa!H0<1"`)h>$:NVܣb_NM;dYK Ig3;T([ڋ?O`;SWw{, *@Y> {zu\/(]Ja` )9Dbh"wC:\Sr}- P;kHE>fSi>ٳX?L1ےcZ uVK \AUw_{Y7zO+<sWeARelW6F( j`: ߃\PG,-|',%^g/̜xxOtR FcXOm DJ%LԥT?dg9d=vo=׏u-?y'Ap])q>hH dJ"g:܁1E"B\BupkN !ÿ1q]6  ʌdqjQ"=&b 2qǕU( (dd}acp̺q?׷W-=8Dy8~s&4WrNmV!>sKF@ *".j\Zː|E?+ PeS2K0D ^C\q%MbEbtq q*&h`PLqr&ehw*GQ^Pp֗/&f`t@ylx6PO.g!Vdd־Z"<XsS3lrtSEHF\E$Tr6jFќD @gXBO9? =+"sCэZ'F_@s_WܳH~2nxh6uBۆpLxsFܬS=kZs_u3d!J_տտb)MnPۜ$7k@$hmڠhR$$e^"~l`5Yhs #ur)\DB-Mҍ)w\+ZWkmv6'|ԳW(Sg/U]0MyٕO^g<{o%R3/\ӳ5-nNkaRؤgM&InP\tkxUsq6O|h&4;N3!Gˑ[͟(6%w՚~+7]OoԀN&CcX:v_d 9Rc]Nd,Y) ^: 껯h(J_PحJt@1jk:|x~?_hǣH:kgdm38BR# #?ZP^շm6ȓ*mYwǽ &X:$>oz#o8v/e5弻r@-UF~ՕNТsQ\~ƣ{7r_1'cnI(r}Kt-b@!B쵆=r{šV~TUBv_h;6?eoGWg`pƈszy. 
JG ѯi CWF?-}k0^!f?h^5=_K&x ^)$mm-,m+C CDés f=mO/NƓYO6;i ͌7 @Ҷ1NF꜎unvew-a-2%E2*15 MΈ >2DSB[uW@M؎{ 2/ﳌ̲usexDR$$J8jbRF (#m/h|Iso!Z橏Hk T:RD†ZL͆OYg9YNz@ӓ{@/{%Oxtoc Qy=7iw!VAGB+psY gQ( <e;KR9!qz^ q ɠSbTrKi[A*٢R iƮXbᮼ (}~=$-gW3w{zzt)`)QNG!P(xHE0 wZP=C E{bMKG(<`]H"A5k|bl6_pe` "ӏ] #"TDNPv#2 6 Jh^eLZ0Oxdh4%D*8 $(hs) eTOmucUٌ_>D|qqh6bZ+.¸(*.V\\ HSD cPjHJQH.`9 4e֋ KiǮxXgVA|V%'GY"o]>TNT 1:Ϧ9qw/=LssC#^$|c_~~8i͞a()ߤ.Z$(YEX' -s6/gRX:rT@ dZJh `9pmaz/ lgr>fr,fߡS2~n'>Txz}'cC=xaS3;Rü䭦Rȵփ qm$ aRgvȑ~:d\x1OӃ>- AʂeI#ɷޟ`JmT*QRNѶd 2xBZ \&v0rސN IAFHy=ӥMظm9y*(f<kk_O(8kϻx֋MS{<@{01xWu73eJ+H8>U(s6Q ݥ>N6pCƥґgуQd;DjYC)\MD2ESS5ɹb&SJq.ֱlN b(K&VcmWT3AP DKytQl^|١zTf68Uz[NWo T>oʟzqfGĎ;p~p96 1t$@tևގ^E諒iJ:iUZO闖|Si_޹=fO`rԓ4/yғn_ϧݹ\+V+XJY4VjѬyQAeɉmI=WXK[xJ_{ŭRo5U$Lsp׏kx(+EGy(o#863!b(E9r&CGA^xux6&T;4 &g ܗXtNA72{ R Xi ۬8ĭZ\K6n=3sw xq~sJX$]}SggO~:bV,U55lhS5 '%uYqCw# F˧Y,ͬ)eXWkI7yVL[4yw%MϏL~҄O"SF߈F%Uj9Yoˇ>s^?,k٘\ ([J&9bz'Jono]q-w@"Qofs{u#lc4Jis6#3n=}лlo"޲x6˧f^&O* [T{1Sg.@\/l=ATb]:E)9\_k-|6^=aCU,_a_\'SV]~*c'xwmWNJSΦ1T R\Ak5樘LK^AŘX痣˗pFG5+Wo6{o \5@/~OI/?M> ^Ano;ED0嬯%;z\V\XEbnZ\s@z@nb*Z-kZ-~13?X/ W[ J`7~zA=eB/BJnSF8%,x)3?S)0Do%$Uh[9)Sj 0 Qkc+iQ&zȮ$.>f(p65Ve)NUf;&'LJ5%e6T$ 1s2"!pR9eL6ʤ},Get PNdSQ#DcB F:hJEGa_).E#bgĀ֕&׸C*⊉YS#(\%5SxG(ԛ_P0 tC]ٚr"p H01h6뭇h%rb|8t2shIܨ6!KM G&ZؚVt!N^1fΆ4'%Fǟ)=ԙǥ?a^QțEw)y@BijBׄ2-oJʟ4; VO(;nQDKblą.:JP ԻW,XzXb_kU Y6*,*UN9GKT-4#}Ħ\Sztt9&BqS07-$MBX4UtXRiL Nk-dr&i %XeJ. cš6OR[*1.dM(\2gFJKio {tlkV6X::?rr|J:~3|#*^ZԟCgk/>\,]Ss,wMn~_ÃzbvփS%aQTkT!΅2c1`#} =K6(;Q|΄؀zZF)2nGr-;;Em8`o=C+H&;A$n.9jl BX9T% =JO%rײ'͢ VX#BqJBXLѤZu0sERjӁqWql4iFo= z{* [T0dxNYQ kjX V 1Hsʄ66-\8VBpc B<ʜ z:YqlZ.umq1 Eqq[@jXDfX#FӚz J}WPw)  pkɌ;GyǶx{?amk49 eUwՏT|$Uߋj7"ڭ ;/ڭ8KgkoкM%ǩ۾w_mwp{"W4M}:8hOpxEܻ 2>uY`d[ T]IN_zjq!+ :2uCW`ͭ;X$-Tc֬Dwy)+߽sJ/ƨڌX% Z%'PgT D6k{{wkM뵎~Kw?xً'8S`o_/6J3T`jlVsdGs)U0yO}^zuѼ.O? -U)Hä!\] {C =ړnIT;2cg'[ݪNr3w/ͼ oF,Z1^9c7&@Bjjɤx `IkFh8I|omvDPIs*>IZLEj|uaqEh8#O:hTS|Lkjnڽy3۪4MГi^X|MI%x*7@"t }#0yx,J%_➔T✴"Xk"j\ l$"g}17kf,ё;ܥQ6Y"Nh 4EBDpHeޟ$֐hsvTbD!$Ft& Cq\ІHowDu*4UnF# dFeGdx@U{s[*NSOEeI Azn3py$2A3⩣FksicJ*SiXL1"z&5;:~hI W|ׄ|[\MB]VjBࣇ1lMh<(Q(} h dm@yO..ک7yߠ5>ZC\6l{$٫/].sw<~x;\I$rVf:Sӏ}FͪTZuI ͞UcϪK۹;a6MĉY i U (D.8P?-OGR@զDVoƹd^9$p>u.m7+3qܸ&[l8,8{w%c;okL m'<DVN)7*)6J :IS/,>H%BU0ՕsPf#‰F˹2D&eh UދfoSfx,MOpU'ü F3M mhn419'B"$Ηt?XCP3gC`R'jNsh2i7TED%J@Ny+=6 ͷYXZdVcMҴ: @Ck bz<*ݫt{miCrD`h"ѐ_8b\=8"$%QP6`gFk$7tD(ŞD(L5Sg~6-ϿGZq!d)BthNn0#{yG$&1IDOR}{`Q6gyfZsy˥=%Јx@Uq d2Muإ@ޜ" =&ȉ8~N Ow3tGqh-IqzBDxZNqB~~x?kQ!T_/c(~jv>6d>K8ϻ1?{v;>R"T\\yׯ Д'M'VDtPPscyl~oj.x2;b]&':S#y0Ja;ܜFדYSš-&wGρ=qVꞎVuVwӞ󽲛eeyh(8 ?>+;r5oKU,ШP+(6Q6V~~F\ G C q7 j_U^sYǯH  ^lNpzinaϯ@z1}?k+$~"^23o%&s YbqB'7XBp!J%vDwG:5C9[;y9b=e2q8qIi%)uJ!|uV H_By UˋW.| ΡVW`>⭌gW*d +﫝ߣO*~Akvwp䕝=I8x4n&ayӇY+oLvy6R˥Sd3F2I6T.zc4eqv" /"zAm U:oz>nhuv9NbZޑAX[I>zq=޴5~,U{g+ OAuWr>u2M[ ;^y]eN3k*l{F  {jcull!j5țvW~oq0vZ D |%ZXZG $ܵԉ <0bPdC\gn RY.&ѝEY`QE*JF J`R54rg68CSX-oJ)\]E1l;fQqʥOqTͰ]FЭ0ݪ:mw/2U}6FFO,ٗ5v|UzwbS^}~h~}}nfXB|>w93P%B۪ Ӷ ngL/q{+ 28emv'5 G JXF8^%v%'L6u]Nw-0^oAFeNyn-< !)kƘLE.iK H6^hݱ,!];zy|l"S,R#z@N ?=863:ɨarX{*m QHHsrͬC=Yz~B'6PE1mACJl `(QK@RpsY9-0ǃ@$SBe80tHfZ#Br93ƿY3kǕhYE}  -]wak--\yvyS: g\Q9Z `4;χeD^\JY QЄmsGuyƽc#8'!qW<".ʦdK0QC\q%MbEboMnﬨP GqH:o-_kT\֓u 0 fvqnOG,9s[-%[,q+ic2YdWEփpLӉ^R[wOSbpp:v=Ϧ5?ΑtGg!1#gb쌁U "9i8Թ(}7c1Zfj 톞6 A*Ad$5km:@N?Y:a>^qŇkw槗w?r󹘶8oxbgahW.0=o0Gez6(?IФyk:8Ҧf;5>p=;{vivEC巭!!C,K8fsjc s@XD bϵ;YxWG¬;j4gʘ쭷|W<4gFy^ٷ9nG]]]V`cP..4d 71xox|/C*W6d|‡2|NSo !Aj1aŸꝫNBT^kfE4 Êyw#3ޫ::"ur)\@DB-M<;2wƄ@ʃ F Y]UYB9 ǝFFJ{HJʐ "./0_Gq'AD*ιYR-ngQTMFZWzuhl׃-?dz]ͶGZ !őUiS8E k[EIAW97=*v;];Z7b/u(Z^ "FA*gK(#  QU&c*z-b1J&ijb (w[ %P$S69kE'cl1r@#m}an?^]ތWom:㎹t1#Vխft‡w=Z}duA"QҖ񁙺f ' !ǧ(^9ղuA7c0 \sNglX6 
c|9wDEieTgњHug[JOHM#t&ttuT{TӚ)z1zԯ}0_֗ENo^O.򞥦c`σ$d=|^?oϋ AFDqu6va GuYqͥP ( @ƞGKn^{~T4g^\޾qD`h"ѐbD}LO0& :X$Tkc)r`mN!h B2uG!B0KfNH]e5y* Uu]]e*Y+81R3iIb^W]=ڗ޼vQMQR<M2 P<! ZSȝR-mGt/$BT3y9 l(ѬL1zgѺ4GwE *a89- Z"= 3Kq&  ^<-RpCHc;t9Y-W~b2a\+!"RT2jvwh)Zq$ hh1jA*GHЋ@IgӯB{dkuY{ QIpnTxp`(sa"i/ZŠ_{;Rs%-0$GhdT2P7 3Ex,bhGJ㉣Jv`uς!۴VLW k @2)8XCҚ$l2yK[z\_+WE>^XVz8&%qu$~{^ _y}+,QR=VlG1_ bEqYk\|bvDV7T$Y+y{*'M!e+ Lʺ3)MƼ}F 7$ƽ{|-> 'OnwƪUN]R L'eF#F5\'b{%G9&*xE@Feդ\h&"Hp;m<8r. pwA!J"g?.^mpkI&D-Oyf0 .YʜxibrBOE)"v" qo:~P!Y-!J |Rhʤ  %"Z(2rJ'{< eV^X[zPJEfZiZG|WBEeFj*sH SMq= *=Azn^ 1ÍdLs}6[MuBSl1Ё)|OD.b0\W)"j9s̍0̯?3_˟7\VO곧nM/*"x72&<7h )(+rv6ܟ<^|J,M<[%7ZOHt6#P(&Rkr-.ȿobg.Ų|^_1aSoWV$*ʾ]֎;?;o>hI*p1Zᐫ&`=Yt{BZPsjZ1#Z|^/>4g5~MxtaܭmYm'?}-c$NΑ@<A3iygCx\C a%][ƽ+>(qKv{k)@1psnM TK]:Ȏ 4_AQ/*o]Tc*X_Rh!g mGZ&n8Jtwv8_mlNpy);ʰylї'P^Sbmi=E.*Rϔl.1H aT :y ] )V*#w>H7[xh)kƉK"@ N+)Ha%S [T@b %Qv c=ڤs=6`oux+ÌzFޑ`.z*\ gjzG*ҾXp(6X`AyQQR@D<]TMRpN8S$&}RC6DJleDAv0NQTU0:KJJMd:ȝ "tJT@BU+Bo+ru[i]]cDzMA;ǁ.x0v٢j=c: V\>gwxaejǎq;O8m+|д|ھ'|g?XyX,h!3?/Vwt.XzɡXe &RlhH;gzϔQvWmŽp]ⴀX\t`%,x# 5/W8ptb׉O*>5txx:(湵LE IkW60Dd* T@ kczdi)mx1Cμڒyfp/>1~;`oҹ3y?.P]Mt܁$0}cᩴD #V-RCzY _܆>?g{(y!%4l `(QK@RpsY9eR &N &P2#E=k5H %lF팜ñz܉&oM3-jc4t+i 0k5 _n6}.NyWMdpAC|0d,EQH4a :\2/l+^1@18 [9{vU6%CMajH+I/wLTiE'+}?CK_ݵe݁-6wSiid̳Ui?6e0?DZ2@K,"\be8Y#a՚2(GU`}sfD15O{|}Zۥjmuێut^p!x2f7y -U6xP=MGf]cjlޱx6iWڰ $?n8nhZn([:6a7{j|V]dsdo6 enLD~z- 6oP0ZU rJ1.:%| . j:H+ƍ@P\Xdp tZ3+a`EVy`uP8^EZS λZxewƄ@ʃ F"NxxOZr(xݚn3>ni]Fۼ&ՁL*a &e28\r2.WFGdQ2@ QĠ4}9x?#OT,Y, 2I)!kDh75/6<6{>;u٦a&^y,aQUyYz6CAU;99;'5wS:ISr11;pl>*_ZWK\_'ğ~d<0b_yg%mV@+�\ L3%|j"N|z3`oN+%1;}_5gIVEmMҺZ6Y#e; v)s]ɜsoU%Jq1;p!L"`q͍ ȇ6TB~V0뫌ֳZp^;]ohͶw2d*j"<s@+_8#^9T@>!%Q+'GѲw-D TZ%ƍϞP4F$p!\E'ZUK%. U)*Fi&ruPHY"uN<j[4rLv9;U3F}>)!Z*iWvz*(dּ;t%/C÷/cnޅwzr5/tqn_#r#iUjts$ ߧlmNhbsjanIy'7`.wiy|۸,祖a:ly=Oi;ӯgs=i-ͺY̚;_M?? n][5}vvȞ57|Ud_}Wߋ q < d+UQ?J~]w7 \d[u8_ >zvWnsb\qm$ aR sLxʈa"۽Aou肏$Ϝ*..ޅ9 csf ۮ;Wd$KbZ|n (??Zt[s, A΂&׏c nQC0,&KTVt ̈́RH#·j"ʀ#OLS-:k/\R"LČTz͉#2%'ơD A)2Sс(9 M|CQmjSX-X(ST0@%:G9,hUl*9# qQh*݌})-3 UOB릫ym+|"/ͭ>>k4}D){$HςW($#,)G*2.I '!&DR%$] j #F:Zh|IkͿh>C!Q 5PDpJ;.,]\e˧2Pԩ3>ydQeE:GB*SÉZmbD# +\`mÙwA- <ݖZsD +C┡^{-TH1N AR-9Tך3rvHVLvՅ[]Hi ^x~4ΖqKURoM_AaGɇp65D 3G$I/3LyD^ݽnL<B^! H II7xD&SmdȌʌ8VwrVYBeHOEP!*0)릩H+h`FMM4$#\2H?YcȢ5v5raO . 
Zܱ֖v`MiXIJP\[ňueLY(L02aU}JUC&dȁ(!]2Zda&H:.P%ɩ&a}&\aϚŸ+lFN#v1r@CV.8$r+mP6PF.2ig;ԆD2EϐE Yg͍6Fl F<̴ԁhVx:qɦzV֋Ӌ^ de<MNd$;H-AJ v'%eQ!f:zqgpb-TpóGPa;7|Tg~\M6/pZzύ_˫H66I(ф^$#d캼>Vs@Hs;TPNBr*P dL.`h:1+ bg!b̺o`>AI$#@GQYQ״ծ?$aLHģU!=/ۿXsΣF_~y{.,㺞*@V] )RZpI"7ԀNAjbUPZw]V;F2KBD 24ᘲ 3JKw`&!xI{wMT,ev!qNCq| xm$SRԮ]5tHRqRU8)ٗ#$W/ݢ&=}9qk/br> "B"\YhT9(ȉ$T[ҝVf{oF;9{IH1oCvGS5Rp2In!"-cxOY)?.ЎqbLCmC" qL* ̢Xde9FΚro@ZFǨt{W'Fg˴21uJ.ɚ Fē'd=r8 :)z *53󿲑غz#.7Jzgҡ\ŘF\ ѢwPԊO'3U PxAD%GgZD2ҥRGZD,JgG8(QLh-(uHR!x3m0MpND q7:b@)HLMh`5,:"KJA줭ގv>M9ޙXdSᓕo'Ev`u,Ȝbb R6[)LRtߵ2xYBK^oɿ(SݿIvA MRg˨L)51bic۵B8xysin*NXTVlbHhd j)#+UV=8e<2Im Jq!Q:R9P(X;kmD҆ 6LbmX1"ifICZŲ}A=O.u+#ay|t ӂՇƥ%>8W}3>8Upa轖V4A#M1:.:Oƙ|:Oo4c37> qQ $)8" H(&DUK.q["zx!q]6;<w!Bd%kIK]ʼ9kDYLKa{ -ύԕA<.vď]SCՋq* v\U`06ǫ#'y_&sb h5` B)j/ڟ_L G^xOWчs8Kj;{tH+χy;r}V=27[Գ%,@EKf(+&"g^"g__^8SI] ˤC{wIeI dә{s#ꆩ~WK_ 7ɗm0`PI|M+Ͻ)З7={3Gd^w58I-)\b{=CNYRɈQѪ؂~ 堕ItW@"}.G:gwy4>[9ʫOmϱifIIi5z_||4Nљs% ҢIȣl2גs7):?\7V()qcxH$8[ҥTY,3䎄dX9kvc.mA˲ zN >!9+,z ɜE2#(QUR&6 Nn(Y~$ J1Jb,Pʴ TtpnH" 2QF`@?| }< U6(r?O^I'KºVXoW 9kkm9R&%Rur]3[Nv,?ǎ۟4#HVV,$BCbCI{d&pJLN &KRڽd=I υa̶Il#>7W@ʎA^U<̤Iz} 'W{1)ӽ|V8 Gr9|7hB4UxpQ e |/ۃw[E2%׳S_>̛^jix餶z}]f4_m}NLH_Cs|-r*BuOko9N/]g):dnۅ"bE|rOr50b$ʑ@=aa$[9"| AL?hu`bD<:=rr4 v\5wJG<2"i0GW^=LNho1R0pϗjL?݋GgĎ?ݏ7wo~υ޽!'qdV$ -5ueiC+[Ƹ +mڣ8I7(TY_[ɱ}.׾zi"tr6t4WTt*X*VXU?/*ŶT(^iM']׻(]t8 iz_Z,PbF_1 _u0g ?[țX*Dq/+7~t`p7o4qLF .(Qt% E4̀ 1SSԻ#3m>eVw8a= T r hN+AY*[>xA BU9کzx7QW_K;0`qu %g Tn#Ag>gpogEQ5:a\΍ M.-ͬǪú=__3% s%9xFXBL آO8Rp@sJfLNWƿDIk602JnCe2N %*2)dge[5GDOWQٻ6$W?ecӋi v`<6yʂdRMJY,CTYm-V23*ˈ8sw1n#@,5ޟz,ɗ׵PRs'%w9QGB]s(}Qg=$τa/Pxc`80xe/8%i*Nz.gK 5dZjesiꆌ} V>>!wVti)M'+&>mW[qWFӲ"t 1-cxkVAJĸ \TVrpɐEQ#K$ +Lן=+g@0@y Y&s9gHq$nhYyeGU8NG/Y+@ VZHHGN*DE<9 ̛5.pdW|зPX!5 m"EJQmo&"+K5@p7to\M2!F4He.IcJGAFJ>+Њ_㻤H:<N#:Y@fiQ$hfɨL{/|!ߺOϣ r|,ܛɼBNHſL i! 0Ue!K-TӖD߮'WW,ݥѿ'q/l\%IH<S<uZ"Ӎ&1ɣ`V~ruqvĕ|YaZf>'%pSrW.MÊr^OnjG24nk\eowGЕ?*TDiwNμ[yv6m緝=ovLR7|(u6}S јv3g6g3k(ꥻ[<lx1u],υނ7{r4x惆#yDJH+ >s-dâ1N*4lfǵv,Q rjTqNz#^N Z]Me%'7r<ǃtQclH"*J'%潌I W |j]5ةj=efպv*ZQ/=VrS2T2f>V0+ܝ}etWQ2WJy"$ cd8F :QXGrz{1>ngYbM+xkc?1@FE$7~(rz""zuU_.{Wni].h~+TT41>JEryъ=Wmf7j)1=QoTCF+x_5hY-^vH _^NLZlM@K|2ɳ~N1Z5IKzhH =!:S~[.֏by;==ۉ=O6)FBP,d-d,2LZUeOI]XVRRanS18-H\d s29rґl5r,57%Ix>k}q+;&ǫ1⭭syp} 7cCLȍ].9mv\]sG1kI+Ẃ>%ÉȠ?!o}D"Pn+)o9i> l]xmpPK#t2Y38o5^7L]</E9Watg/n1wu,ȶqM[=qIR_092Vos%Z迤0ؾw?&^5ʐJnޟo7EI´VjuBB}dL^Gvv.]\/em>Ox@νӹQr#:N{>s }h-;caak<ݵСݷ:uٺ?N|EoDmE-=;7lǿ<+7SWh F4юؼ7&7{R[騳9Iޗ+q, v -|qR Ij˺ l Z3\{r-.Ө{[\Wd݇4zT@(ss9܉*^eoe#~\/aqZK /A9sr V(%c \0+f@۽OAEp&ٹnòN<.-0 5݄'+>ofkM#QHf|nd8bSG2$Xij?ߜs,׿{ПXhYm}6vw oPҔdrALBdBFFIǒW% [(- )yS5!l^bJ][#=^[Y)ÐzP(AY 7ғw^(%!p'bɹW(9'ki%̓XV۝>Ěr~zdSxXbDS%Dtܤ ;,1-Ue)$ZsE &jxgRbR')'%H[*Tjliq?UA$);$P, k$WHʘ zN fr{(U0°>c0FTJ![)w Ai̳db˙5uD1{-m*JI3f&S)^L]% U>Ef ~Vzr\*#א.yb˳ڮ,Y(=pqnи6sK0kAS lGNq JSi i0(SR.FG5fKvHϢ2F:Q[NcH"fC䉑ʭ _jqb"{xG7Y{e SQM6P(/nɠ#*JĬcUAM*mH yWLNx-$Jc+kL,,m<ڴMKI5>ٱɲVr&+իBGTEQ)Iwa @2: t+E%HVoAb>Oc$2&fAO &1[/&gp:Eŕt`X-"n*հg슅.r>`޻/ߗZ)X[֖Bwjßh#" t,ϵ:iNJK3&QN@Lʺq `VT:( 6L36 (v!eٌ~2K&fo4Zܱ+jʨ-4صy0 VA*J e Q-Y9d hgڛ|Z\3*7ȁ,d 9d҄51D&(R c$τ[~љc_슈Pa@bJz`i</ThiEu*"e$h gLi 8!Hn\F\p-H*)e2!6i1vf`5r6#O^'\-YKvESqj*gknV̐$[3z2DVB_;YN698>p7zb-+w;*vn8c7U|]xGEPW ܎Џ985lJ+/]^$mL@xuG0tyW`y $] dJ E $[}Q!C>GB%j yѦ #5dֽ: Wmvrfw>׋#ccJ'JaPZ \qL ĵ>0\7k(b2<>`}hIkJ ϵ)),,ZWW7oHw<-ұSU!mȗ߮5mQ:oOC5\[Pk)cuDM3LiU8\ZޔPah6Ҝ U}DpEK GW\WZľUrW\T8-կVNMkNG?.i&9C /bxѿ35a4\χFz>;pp;bpQ'mИo ֪#i%t!tuaP9w Feˬ}G?znL@?ތ{zPsO@K_FqzrSн&iK*uAm:{pg}VOOn=ⷲF'$uT* 1?fWWSw4` ӱLXs$l& ,gd^Ų4Yv&̯S\r | >SySzfH)}Ϊƺr@r @%fwx[?_mژ5#}pDVJ!X3ͱX)}J!*9Ӄf{5Y v\Z!H`9SZr%AO޵ك|!=*iOi~gPq#ePB{Oƾ*{!IAY p-`ХffYc8b[N(WQA*=Xn5]}<ٱR:fw'KxT04+`s2:c,^Qct5s$6R(8)r&K M& 
LDFd6JyrvVM=?3kuvrIbNZ%)Rt*FmZ'#Yrȣ譤*;PvFL |:$WOQ_YRBp @ QgΤZq !!s&0K/}ZՠVXN8isAȌp.,V+"@6mZX"2%Q80+Qq,`XkģLP]諭'Axih!hoFȤ5NscIm 01#ϊF6[k"B]SS1ELj*X㓕oMO'E9Q^dd?H-B@jy( c+Q Q'*FSj0SȨ$&f)t%W.1T`Nf !(eSݜ;' GcM;O.GL1@FE _;t-qvCxFe]̾-7 3v"uy=_q?ǰsr+#gCG[/OO?+$0E(9x)\8ڗ`pH]v2JF&̄ $ܓv=Dvt4;T, \U' G*@ 5u?Yu᳗4 | 498gkQ e&tI(|6. SgG*U{i`UCo_I@ zly9RopPVsWo8]/Lh|S(&;d@cc,a1Cp 7C<=`LܵJ{@*Ad˞Fd,2my XduVRVML WAIox`ȼMU(]cїU7 Y`,pOҘ'HJ3kcjƈ䡛-}Hh<-{VM볚W$&\|/$^Ac1EqSҶ^x_fpVe甌3*6@sOU&ɳ n8JL8ƙ|Ԉ=|,qc9 "2(9zO,&WZ1Xg섍 Ag@ P8 `Q x DY_m|1ܶ$-2qdEJo,^1wpָ`)V" 9%e/'V {t3|5bi+ߍvYU4_3[׻Ai= F_x|DJ`ut)qi\R&+ejs,B۵ٽ8D#YBqk> zAlLF>Z# |v6h9k ʦ쵈gQ$IH:] rI ]0XfY\IrܐeJ  ќk"4!CU@iv |6 ƺe{1:k\cISH uc9 kvLn8MTfPWQ?mߋ<*xf[~* TH'l<]=)~ǻ߿ߗ|O>pa>~|CΟh)p'5Oק>A[M۟oZVk[7- 4|v7|}.{|j˱m hjnm H/~3/B v1k7ڶOeM(BbƋQhT)U8 1!օ4b" 7FFwI|w%||RʳD2c&֣AdiLG!G0Zgr @W>Sg\Q67'm5w*K#$#VVSl@::;acAqxWӉ1;_S%SYRњD)DZ)T,ͩկ?LSd~>E?˭@[o`W?ig-SjϣrGz^n5fR R?ha!șmv,gI (<eʉaAȸI#ye+@xi !eɍe ȉ #?L$ +i\aƬwfPc2}P qP:Ra΢_ 0>1yw;)Ρ:Z5dﲜƈMpAkD;` y2CL#'.sЏ#tC:u&EOn[=oY+@ VZU2'#''3'( ) ZXG6X}5?@2XE?jv PT;R2^ċwnWAE!_~n\唺I& hs"/ sKXd{eZ1\;Tu y44Ft޳r]bCP$(b Ab&2ͽ GBgg0)xE]c#P I8`@gYQ`7fw{4OY-25DP=\7^/8\)~(ԯedu1q˨iϗ~[9:F: '7kN߮^SVV D;$4M}M0-VCv@^5" Նnuu*B{UQZkʙ}5XvW3GNP ;n_w(\OU*~#ЧXDBݪK"ipt7K<bbOUb:AS_8݅s3=uFݵSbGek8?uvw |fttM3@]}(eGeT&g]eY,*J:q4*;u.6"DՂa5䱫!ȭ<>9YH+~JpU`pdqrH$G߯z%Ms(kόfjzߒڜ@ϮB3B6,Bcgv\k2yY[(FcY8eTڊ luf7yhH$&E٣!t(݉2:$/;uЃ^+30䋦SuNuٚuDZ.1ky-XZi}wC:n&O~9 g$-Y!*4{L(gޜOǻML F@u;rd`T, J"@B-m IV8Ѳi$,F b:P?&E\t) ]0i ( /<'2b |0#k}:|N70P󳃛4x49=`ßL"7d]x|;8JBԛ'`f9饰~Sޏ.\0v^߻Gg2^|{-ɚ܄,CcFI. ӾoJv0u(u"iЎ']i(BA g\s-, dwѸ:TIeaQ)=$rHN;reyO$ww{9k |lcL1<8R J!\p ,6&p 0n`9f_ZW/\YII3f&S+9%)Uά%f ~zz\W.ibگaG `\ԡq6qzyO3Tp(y5ɡ4^CiJb!+u(EDJD7JلV-kX#=4GlI!` 'FV(6xQKj٦b ĠVI8^rUd$n 3 9ru Yy@fCmF^D cZ3*+j5q6+jj4mǮz.W';/[%G^N[,8*oQ0R9t(8:5$,V%HVoAb]tet1{ Z䂠' ꘭}G38J:K)Tmd&ljXXmf슅.rc[-oeW~zrI<= 7^^Mn_8b+@ǂ\v80Q0Z1p*G`R-SFfa(ΞIz6AWQg혱Y@U)MeĮ&fףbvƂ5;vEmYeڽl*H`^E)LpA>%o5AmL{ ^˘aF]9L!L&&HE#ap>f,SMZ{O8um 0v͏]*#"#1r@ii.8Dr,;F 5EVmHl`2 k!ZS7N۬4t.!e JkL-x+(@VL]\M͈2ѧ9Wbfɮh*"ZK F{m2 rr >WAӝ<'pTq;+;*Ҟ6[g?)Q kz>߳pn 6jBQI6&Jh:#d[$(R@!y!;|(nń M6 t 5Zo$pE[c(xf3SRT_Y+37:ZCWaJh4zX=3_|=6*tb^1IA8!y.ǔ@Mևq 4F)ciFXNևl@):A:! Ev]8xauhHʼlю%2_yxцzѧO|?jrҪ_>YZu;]jeP{o3_gK {HBFoU#] a2zsV0gAsTš^cB߇4xdS5E>w]]upqgsi˒;9=/#V׋15KP\bJɘ'Oiỳ{" L1FAwqjֻp ޅ9 C+xB,E`'[œa-*u"6= +dEaҐL"S"<\z7WJrvd®XHYWX\d)jv*pU5oR\zzEped~)඼Ol*7mwtq>a)Vox5Y <o饑FsHg3pppu[[rw~o#ww{5l_}0q诿,Hw&4pَч<oax.Fד۫p.S!2f?}^}e:jcågA`6KOYu/T{}r3i,h@Xcd8 -*AjVTl"EJP m Z= lq:*BEJ=\R[~z'0#E{\zZ'RpW]?=E- nxR%l>+yt$iMޭu^AӦ݌xP R\S`uJ+29nj4~yYթ,eQ,r.Іd?L ~˟~3s6 Qۼ`M $b9@z.uIuT:J㐽G^V͘o~7, Lz:E|q{s)`.2fsutxp$P@FXGgK uF#gɨ 0L|ŅQ}{{Fj)a,;a{Na&EÅ9]u$d0K/go۶ Rm/.z9nO=@ꄌ<㿈+T"-EʥE?2mRzz5o{TLGB/+TPGU0F6o4;2U#l_AD wJ]oVʢ :*iL.tNFH 1hBLB' ]&+$ )Imd묭U`{n1! 
5Zi&sm.jl.|B]1# WZ̈́ffէoW4˓+f_se|z.U]gV]>Q^"D=۸:;uzgfvocJ'sRcPd:C43E%)c )'1"&z'ߞuFt.J1 V5$Ϧmf5yxaNJ+u, :Q .(vLD3{D8"<0xeQF선A.lɖRKlZ$z*#}wgV0M}%.7jI'=};`6nle!U-#3WSh2CG( ,^O^zVBOQvAw֤YϤ5pbfǵ&Cg;^Ѹ73VyMS\'9}]y3$Yy!:C!MU=^g FuXDg &}QUG (Kq\}3A$j[ %Gl.e_xcTw]o^t|_E~cI~GCMINR]6g\ܦ$rDp|q4Gٗiqkn2Cv\\$ǞnOo<~~菓VG` Xi}VvSVӦN)3<Xl:хIT"NUWPGbz4e2y8]Aïjv5Xk=Q4U"I<++&55-ۈ~f6 +TJRg;_{Xz t]OTT*x?W7"ydaE99':,f~+:Vű;#(e'CL$Vј2dYa_6h{Uqգu/ق[aq&E+s0yiXGMq$n%Vy]Ijsm܁ U PX%šW&>oL[~kæצIJ'Mr?.Yj^L 6oce&HVֱ^2Pkmx4 kq|CFMʵ`.?xF{O^}yA&T91ER 4`) PQj"S-wy~zΣyP 0.yYuynNP͡Y-jAV5B&7uE)%θDa4㙒f(-0TҌvGwOMk .||DE=!N fl/:bdSJlHSJE rWU)Le1s#L/G-cVOZiM*S фAN"&uzz#+@'A2 ǨU1kQRk R5l`>9kƺ/|gtsPyy(VX\ˇvk}_nxxWzOVwݚz˭EXJ5qkE=FPv$9vBh%G9 B9Uw@p!wyҺm5h)  3âҌ;$vD0`H&V;AccP ] ؅ͺV0QpFDPՔ *x1Žj`ڳ?hZg#9/7[%pψKzEdX88Rt(e3­@4Fj<x:؀USjAJ7qpt~6Ya BD"4gv>YYeU>0 IbcK5ݫӟw{ N ̂᠏LO!c Mk14WCκY53v1nSvoqV9*]n߇Voȴũ Z($'>W̨\=*֥]J(A菓Zn~\٪aqE)aVh$50ăbrTM=czYLIC|(V.Zv DPd#,xL nJ裆5e4B2/vySWFW*w qS^rӝI'pއy|x{5A88C5Ԅ抧ql&S<`s-:2+= buV:p4蔶]ff`h;oB!3qV{1gN N5h4^&tPX8^}?ߝy['CH,$AtDv*re \>ݣIq54, 8]H^"7㔴iH& )bc}2aqXʊͲUn~Wo 9\y`F8L0gqi/9v $g\Kr1q4g6CRb[/#!@زcr\Z+8.Dr(Lef[[gBȩw>N|Rգؿt|Kktfv2~]_N_/^2L tJUμ(}1:0y FT!V TS\RWqϧ_+M~A[ago濝gخ SkҖvtfbaf;YK-o6vgWY[ 2sPzQHz>׀YβX;<]!=/ف -ߡ\жc&ӕƢb:$Aȴ]EEtaa\"EUy4nI]5QϾðu DiD̓,3&DN srHcI=-Q*=Xl,/takknm{AVȁB,'Zk}{2?œҁd1!!P۠d%X_)!{[>|d83`OλR$BCN %R]琾sH{"W OGK!nl|؅ ] vO{v: ~,ar>(yp*һ7M!a c˙] 9kwr=zgukk=IdUpln3bƔPWKGFkͭ5q;KǼ-\[WTwwLW a"KI#8m&|%|1;dPΩD2-7bPV!z|َGs2C2cB'HZPMRV(]\y)GnmT ) XVʛluT,p-",7aa2}5q= k%uO+Y]jl4֋<%uAzQk=6.?6v)XvNX;ԱZ{j39<;@g>.趟;W{] ?{U9 9D*ʠA"Yy\$[!Nȱt4sy>rl >}`8m`)j!nKN.pC%߳7u͏ߛpZօRϲxo/^NO烫ڂ\y+'fWJ%EIdX~lTLFы\ 0(4 Sué{P # X.=p^)KՇ5eQ.9ˈlTV:4ѳ_k*ɖoPi(Nds^Yd?H-lbiYRx(n(0V c팭;w:ZӐ}ZxIЋYYX;ރn[U+q$Jɪ a(69P ֋MVtyViqƆZynq*:VR7 P%P2}WΥ^3R-d.V*f G%y߾HZpyjG5-j}9o!KY$wF)KN^ Wk΄% ac9w+ÍQBFT F{r-]V% "; ZLD],n4҉LʌBL ϸFcPZRI1 R B\"p,G8tBdn|ywRv;Gq[߾oH(5-0)i" oD1T㘃rIh|FӑgG*u Ktb`;ɿ,G8TEm/^Käc@ eY1$7"¾cFt=SD1c^a:F)0O"0U?pU5pUD>++`}qm_H:WEJzpezW$0* \i:\)5j4W洑pܠJ<]r;~5yGu.N@ԱLV?0?XM'ŭM;?V昱7B#Z0Fk#v15eW37~e nL@^REyFg뒄U'gJt,ҔK.. 
mQy@$' LXd6yYSiM?pB%ZYz{0y6cy^dF՗Z֬L^wfQo[]KM Aغ0 ,Ht*|t>3+ ͬRͱu1ey̢fcXϲ P9^'엑*o,7 hQtFޟ"s"iAu=:+R*5Dg0:2="O7o8t}9\޼ap(q'8P!㤔,0+3վS%69).hKMgE-Frhӻ+9[gUq}5妰|BH Rv5$jԎS&c9,trvozzεEasؘ`s/LZ&EJ6BlF=T** \x(I lWW(mH\t_H+tH)W6GpEW wU񮊴D]R^!\)+wEou`WEZU҈^%\+2HPi@vWWOOC P ~΢%2bB]6[ \ PQUnV:_~~薛?>8/dU?O4;gthius3+.0PG:" ܅ uq#yqw?W3Fy͠Ә^aە?s{KsE:pDְ]-FSKR<h w(B sRvE +)1=^+5.(nFs|`2W17ׇr# 0MNf NJ0'QFxkoAKRUIMy]w5q xޡU+FŲ{O֞O.C$@7 $j@ $wt W2;z}"+hDwh7h/G{/xsT6cg\HZnz|c4鱴W;˻q?RfkS\m ulct̒a2%2zNmd_r1UwвӾ޽p*i9Ch: Z +Tr0f5(jKz X̘\fHZ_r(3Ԣ<8VK11kO(fmFqɶ'ck Aa&\|Qgk*0%B̵ L4VUilن-9lbTx qu4~_0a ͔]h3ep)%XX -%$ڙ)A_*$70b.d]=Al:d[L¶a əi0d9YO=7 ͥx0fM5ETk ]05v:hPeXFL}j'v Δ~YP4AZe ,KT[  =h AJ@ka24_vя݊l|Z J Tj5 %~dp-C^m {AicmMsluX,ĭd] Dqah a6ψkcpɣ IL~'XҲ5 { c uhP'w/CAQI|*v3%Nx X97~\*`P2rl(`踓 $Ia\`8xZU~%ٕ}X;N͐jPo]*BA.&:FŬA0ཀྵsMp` %ZMz!BEG:Qhl AN&]ŊVGUU]% ٷ NqVt0Wl&7KJLޢ5ضS",wK-ŌLۓAvH =>6VcÓHcEєB>lXx?v./ގbE]n1ќ<48jG2M0BCLan_ش..T񂷪`Vm뻶GZXI| (#Ky$ *}vrWT$]+O.ֺ ؤEtX񇞐v  ]Q1@J 52؇䉫1P0n0G0/\5᫮@-3ԭo2h w3Dh4եE,ԀGwf7v#Vxe;&{k_6H jwulSvsg'0%$,MXuOnBvhc.m*5`}XQr}\yb#ܬ~ _I aʠvp.%Bm9GKշbhǎ@`7;RP6aQ @9jEkӬyvN2БF{>nP:L/dM3rƖ $CA {(FwpaNS cE9aQg4h!0c[;rZ4=k#iIp5"k*i ޺ཚ";BZfB[Hn|}Aώ[ȃG:XwM<]ٌWmi/ޜ[Εdc`00-|lыm䄲 ]4PŴf1009yZ8[7[9/-k b̆z[0g#,{.zjeE y8lT;K~A؍:zLt.JtVFr VRkKXv<#l[ƱBV&-aK}ZOnĝF9/zSCwLf*UP8'cdT(1#L #zIv3T],kP~x"mLu1c@s6)z߰rJM@xETAYmx*J5${dރ٧?joǨP k (R^tnca2kV3_,N M F;`%p]{:ﰞQpJC1@bj=…0y7C\fK/ypU]Y bՆXMzIٲH*8ėn  V KkRC\~AF+{Gc#9Y`j?.]%At}z]9%Q&Ф =70cm`Ln\?r++q61!p٦R6J)=:@rt{ܩ거q.l\؇g*'0n^Mh=M!QJ49 iHs@49 iHs@49 iHs@49 iHs@49 iHs@49 iXs@Q*kEy}=9 p䀄瀢<1怘b49 iHs@49 iHs@49 iHs@49 iHs@49 iHs@49 k^ܚr@by=9 u~59 yF>!is@fiHs@49 iHs@49 iHs@49 iHs@49 iHs@49 iHs@怎5L-`S4tD9 tĚ49 iHs@49 iHs@49 iHs@49 iHs@49 iHs@49 K%/XN^x*Uo_w_~u1['" vM ;g|l p#&9|l nYcK[cO;-R>Ŀ|'⺔75;dI]f*WƟ'?:9?V(:MXLrXKG>Zfyqe|v{f['׳ۥL:)p_>:vO NN>~I\4>E.Zhl.0ml<Ѵ<v>Y o ~w76~vi7OMҮe']0u{+]=vmp~sX ]n26:] JJWGHW.Mw{*]pϛ+f-tlӕtut%08z h]NW@IA1EiM jk+AKJP*]%]M^`jJFtut1iMtֻЕJPS:BtfZӓA.'>t]+f6b7֜_129GQ_7;kN݇(ûqy]lzen.Aۋþqs)/m}áF!o^_]\)!|kǞ>gwnj4Ѡ, {qLj.o1s#umF^$) ٰy3,M̲,7ɶ jYvư`ogV6Jyn`%F)7G[lȼ"w(ո3WަpLPݙl܊J?iͫy*.NS1b8FfO?'zΓzxfYlͼ^ _z-В;^ M<+{Mhi׸=Qy{t{3Bӥѕf5tBW@?tS6t,ot%SX ] \^9:]%[tut坉ή78^t)BW (JWHW>[IXO@޽;;"@7CC8ڇ˳u1k(b՗~zq$pӋ~:zaľ{&u}mam?/)?!?B>tM+>Y] c?wm#I1 ~?w{]0 g,jEɉ_5Iɴ%ٲDrBd٬ǯaVFP]lͲ}@44P"[Sg(6\\KGt*i@yoUGY4 iiHr=+LφIKURſquLXs| W"SZsu>ii UY&j]IZ#hG7llϲdkZ"X'*rDPK19t$j!T#`ՓQW\J>YzwˌߣD"NcrGr9UˌߣD'OgeӻJԒHTR֫Q])ZQ 9U`1;V*%WݥݚSU 닱(&6<;ʪ@Haȏ_L5g+OOv:CV5[},+rV\~t%LEUA9μ1FmUK/W6(Xg=V(5}Qn/vfzVdo^'_Q'-:5|ZFխiԔiHb܇1% ?"F*yѲ^V[Ⱦ?H ,kYnBDlcTL4ۼbwSHՄoAO|uqyR5i'+C>13wї7oA02 2qdΨ҈Ȝ ,Z}}>/k?ڗSP'K#W* _ߊYoY}1ThFo*RUn,tvG{V-^۠ds_%Umu}</n]57eHׯlm:{w;з~ǮcҲVy]ֲP˚thr\Hj&",cXFACL#e"WX8Aq⭖t@G-\$ s`MrO 71ʁӅ(O @DŽRncQ( Ȣ`;X+\$JOFrVki\R+ñih^>;] `l\*k]A .JXjգԯ5*v汳tᕙ.iQV\dUC1 Uु98^"}jR-*+t60ၖ@b>{5U4ڠxv'})!]4xZl^jxU:;=c"O/)p~r@?XV fq:czQMggJF 3=&F#F7Ǚ319lѡ̝zpNt3EG#Q3=u68¼+H L,t @}8SGC7‭iUɉ'{gqoR0#4'(xAc83rnǙ^> tHl\iۛf©p]H~3`}hz>0P P*D G«qM9=V{GJHk _pra@}= Q108Eḙ#:F j/m$ztP0?yM>,Q+mϲcX-ٔZ(ڜPK1R,}餾I*Tw ;~ /E5fYuhOmyͭN9I >g›\Ys%cq<~jzc(u4혡xjNޒ"3½ 8eXńF\W:̑F#Uu:N( 40)k4PA۠FH"I`ci9I1in;#yCf8R2o`_1lH]6IVͭGӲ}1%Nthh+SydmIr11;D8GDx;̸@jδVdG t/IbZۓ^N>at(44_V8{CX%ʙAPhRZ"$<# FT8G"W h ԁѴc9댜-嬒߀UZQC6"c",xFxvZ 0A9Qty xo2a 6L4Y/]!؃E~d[ HF`/n$ *(yYL, AVgPFIKxx9swa=Ro#q'ਥ XA"sM@<,f֑(l>SmT ,%oE)1-#0kyz,ƭϳ(4-yC6fZI"Vk,aQGoiR(g!9su7#ͧsP3UhGƋ$[<(cI!5;%]18Cvֱtqt^&+"yU$o,F/K7FW@"/G MR{/'/\Y0Qjs~;HX̵Q*g A8S<7v:4<ˋGٻe^Vx}("ĽQIGh !w"sN:v[c(^iGj~9^ wE%K 1댜-v/^SH$w6o+lWm8БD{CGWd|p"gkp -%&i3}zpH`M\bi,*xT@p[ Q wޑhH6Cnd =!"li;SDcK)ܤ"!s@uB3rnGŁe4ݝ8U%X$\*zW =4)ɇϼ+LEV$:XI]EM q Q-UNAo߈CӴr62+rp+^|7H%'=enF/aјvx 
sZD)3ln΋i)Z{f?Wګ7˵&4BE$qsL{ĵI !<c\n:$&Μ܂)6!ߧ<,#7=y$bOnqޠgsCzsO Q8K]RԤeH˞+SH s u-l*xJ}p(64Fl@ ccqb@{GO⭡/aR/1re)V2JhA YhRPZu0ip3rnǿMq`OQ7/c47OըwQxɇ ED昅d+SUnJl(&í^#y*=g^9IK08P[,"/Z~^1|O 3g'8Iu9 K,A@Sh@ SI15NC4PA}j\$Q{hiF-Q@ #qXg=V(5N+RR+4%]6g\zs9̿l8dOn<-~,ul [_l 7v aZsF:;Nst OuVoa긧;̤//'oUK~) _33mUdϪT+X\L1(Yf4//^{QNRE?{Uwa|>ًٗ@ #r&#pM2 !PF\&X36)KfqȪ3Uӫg+ _uȍ@Ls0])lJXHx`zQq 鯬W>kjLQQlf6 g峊tҺU%߶b-_^[t 7tuזT|`@?K=8 E),=7*z$:Cx~߾[*ΆxJgMu_'e/ꖿ}4傐ѧ&EOZː gU`9+&4< ?_鮊ԯ[<fKnzs0M!: 'mG`-с8ӓ}40m}ȋAlʬySE,lUHw*&)O)_1 qt7^_/q? */JPސ.A%LD7k;W6ڡظΫe L6X7dpȪR@[*2~;mjLƥDU5^ h^XUqgv5Qbfu槽h|r܍/j]oGWwU2`rNnY,p@DX&R=~ÇP4մF8L {z L𙃷 8PX&Z;KZj0xp8=8/Ҳ[ЁoEQ8ILHE-Ά$!J˪$F+iӫdc%53W:["Je]ҰivZ[fwZ4hORb 9}p tIx1< $?NpE_5:ѥAmDz#*HLH:q ˸A3Tɇ;ij^z$;I$4-Ѳ Ύ$rާj w9l?Y5lLF>Z# Ȏ&lP6eEdAYw w.U3&e Vһ:FD#cIkLn8MTf'$ ])D%LL <xTy8f!c# TWqzFގ>1}(\G]f0 1~'rw7~vFS?cؑЌ3Z#a|or4H#0f,V"EE.Y|0?^uδL{.\=`,9M#<@qN1&.)q"|%G6sb!IVsK%f:0mdNՌN-C9WGSWh>wjtZC%W6#|y|F*q%Q*jruM+z~Yg45j%oϴz]̛w.ϖ^o)J4bl^톣Dm >s+d%`%"l TbT'FMe&NίV=ns~|{!ZxVgґҰ$,[i1?bm-7U$_ܬӣ08#qӯ}{P~?ޞpaN^듷oYO4)pk݅_#:p~ݛjMS{MKSC#w{W}Hovsk@ȩ&<(zhSgMbQ\}el߲n* 1!cΠ 1,g(g1L&K`1f>9:#]@aeV}GC#?iCT泌7ZIJ:Ҙc NYSYv,*⪚LqeJxGň>{b{Nc֫ʝ1L0݃a|R(|zVZorrcdI<P4SdNhM 6^zd!(NCG#޲(}vN`ScB>ҡ#k.)Җ,&2D'R* t\b q%;箛L>^=Kכ7E(Y.2E1h4mwoO]~n>y7~t`< yNxĮz]2B, vu1@>⇘P[?ʮƎn͍uטq-q!*T[ff=++`P/鄆xki}Kߜ "l.gҳQK'hDbc&#LV:^P~')2.qy`裱&P@ 0d똄Q DrǏT[R=B.!2I̓.J $\K Y,o ꖖW/%^DG޸[z?]=Â?MI=LVea].뭀vKSmYiWHT.ntg*_o 6^\^:B:P7fO*:w./ ]L1t/A^˟W38yihm*A^~ZjF\eV/juv |a|_R?8Z*1dJA|]G#$QExft|Vؽ ^5tV5ܔ|. @*MAZ{+>@ RCx#gό#nN1|"b(-3PiL翟^N1Z5IkPzl\{/=!2$hSݬ04:AUo9bIg_C_[;)SB6: 0T`̣ePUu'`\JJfa۔G>刉"ٜ!Gwy@Cϭ"( a`Ћ>bF bAOG-YdTc߄b.`:fEt)W82ږZwJ5[XM2-|/^~(ȸ:<\ϻqAϧ˯b+@ǂ\v84Ql(e佐YjS @V` # C!{. *b6A+Zب3q;flXx!eluaA1{`kIǾV*[m`@Z@U@e ⨖jYcIaogdK&Ph!0K]Fr$LȐ#I%0ApL%4s>fY RM*jy~UƾXjE8Xm$=\pD,KㅊZEȠNe#Hm`)čD6+!bgHق @]P0Do-b_[eYڢW&hV 14~H!0HD)15IiT @H޻@HQ $"[Q!C>GB%ZZo9'Ngc(f ZB (4ĖiK$J8  8ug@șwM?])Q_LݸT<9d6;tnNV~OKccW(W Jɠ0R$4N搼ȕ.ǔևB suC *Ĉ:ey, }hQP ϵS CպQײI\-ڱ)+>Eћѱ1oGZ?k ,-qݍ ܵ&w;rW~D*v{w{/O5bU  6zs#`"̆,;bP BrÛ4E2P%<s$7hheqNz:![%>/Mk&-MZKSOκ~X=GHSN"3%Oeh& Ja8*ga#XXMUG{`FA uQ9z] 6*$G18-)0UTV:$`l- K ޑ  ('E[Xeu|G .8R TJUHhL zz\[ u$pyfS`Üb~W 򙘥6VvfK3]LR ;kAI结YH=%Ls]U7|~. 
V$?N\k~Ia]$?NJg7$_^YzQ BgB"ΒboON(s^h>P_2ŘCoC:֏EF^sy9m>١og>57X|Ѭz2Ό[fm8?Ѩ~l},Ee\}vy2#O7f}(ֱƚt53D `GIP<{gsLغ״[:z )LB`+$% y᳐A}2%䚊3c6eV~sTJc+E[2G..LS<:5>ǣ+W'6)~V/Leݧ}/>ה!im 6Пǖs9w[XhWr^lbL!i7m$,l#t 1bC\rp)>;K8z3`2٢ʄPPD'B0ڄCr^ۡi>G䆐Xn{짭]_2UxK+{+FѦt2T gn#QW:n6zu[.u~7<㑋_vʷ_|-Õ<Wpg7{oy9Sg_|˅N0RbcSsனvE꘧t N3?77x/<~6 &ɢ@yБs*`  u8Dfs)Sͧ/A7>>0ǚtTZ V;dȰ2 !jȰ2 !jȰ2 Y,mjڀPʜ](PhPZTW1N :q)kx6Q$\|(+r^QY0 S/!u|}LM { -OǧW?[;2jbhR]DT)K2B%3TQ3&j/]e[A *9&b G85^3 <$\z{$dŧ*nE%_y ?^n5^ζZc|Ÿ~{j*ﰣ@&f"@&u;$͘kEz+4ml-)CV+RRȡMG;R ؜4:HY/jmCOfX-l63-r+W=;c^_l@/>N^6AҪd p3FxFΘuQaAv# m;kh kd/$%4Uj6d HVd s;|QPy) [flbP̣AfǾV[7zi4hրxbꙭ*g)ltQ(66ZJS{X@ѡfȌ %0&-ۚ#m1LT 7g=I}09McǾ[D,`7vaʳ=fJ%!1a wv]T&YtQ)< EI!AK_:ΕjgXA6H#*}@F1ԺZ3q[Odԑtf|L{fɾv58.nDYe鄌VWd ^s=;>IWӝO dxx48pOLXZ,/ jl )ѣt/nw ۵?}΂OiܢnBKZv!w*-(MCB*8@HytpDplBR$Wd E)SPRj|r5 utx2 Q;{6ځ<QF]۝7kw4NG'fb/RF>)Æ-cvM|GWCkJrM=7[OvF2JR.YXSGJ'T r4uY=9G׶(>VEC>$[KJRs%Al]Ů8;7//O{4[KQ%9et#jd+s7?[ocN=NN)W\HJ%Jb "%1BIdu.gųdUƲ ISh 9KD`/ \I|sr̉eIl&ΎkrL"sMɚT7坖QY tɠlĭlP>QC*P"uXIR蔭TWP>)rdvvVr;pޞ21=AF'f3I *Ow':{>K8C/&'0ʃ هZz'O~?㤵pXiIla+֋^2#4KV2؄z/+,w~-;y;I١zhv3 jqv4ˊ\]Oә}7,?ѨmGG<Gy5^z9a<͑?N~ħ9]vCl:߯Z486`(u]jx48~e=u'kII%* ) 4'9<"C‚" "c|n[~P}GY-=cM ]=8|,6jW2I>fq%[;ٺ8dxiGyf傠پvNG9.j:#1 hG l1Vڠt¨@!muå(R%0 e 0M.rɖ6'q}O0P}',F L;UPѪ0pw0detBa,l2څAy(B34;kcଠ3#&oRry)AMhvSFD*@sd(V.%E#GQD!2yKbXSA)H[$jP6SLI!ջmFJk =LXw|i (s icTXKkL-tb -bް='3|A5`+z5|.3!foq~:|M 7δ`t3^{h MYM C(ƙ2=zc™{g($YrQRB¤ɀ.)P)H@E#Li%'1KP>IWe1I$OeoJc >&)p&@0Һ+qv#BrϖW/󬖮{zs|.u+z{jz39_PT[W-j]>^56ǫj~jTTv.9K0PǃPc(ԱGhjbiKAp-Z[DaR":#ђ% 3%X Kw^JYKi#BdgZm&vDYm1efbL^_NK^wy' GҞ~\ⷕ Q$'H pBb I@#r<;QyxU<mږF={cMF*W@dg u0{hm jBD|KrZ/!RW\Y3qvԳ^5'&*> dk-Ef'om6R R)0{LT2  -Uy3t8U^lnhоX#( JU8}Z͠](ah wmmɩUS^ۻv$DcmE] QdSA$g lWp*½FQ)7M-mi%dNZڳ\c$V؆E3CJZ[6*U&KAX`:7H cID$W Țw)Lrg F%85,fNiE:D+G8I<ʹg҇۶S^E\^.#@%%%SJy 3âҌ;d)p0 Ae 7qwǦh<Ihw֭2 Q512 ;Yq=ce"az$V*̡xb+DV3FQDƌY)ՑIY8#ޚD鎂tاEL(bR,R_(؝`Pb6*o#Z)'|q(vm5c#ЀHTq?\hP -EDâ+4I]\qoS$ ̸0`ojȥ=SǞ"ȟ{}ۍ@墮L֓;PGOΞӮnHw7uv,o@}Lk3f1hqh4ts zˆVilhe$ú}1te+)"6k1)솛 ߛ&f'7G/x2׿:z&ߟy V ,aEV&ׇ~W5vMۢkj·wC0~Ϫ8QoЎzmgbT5נ۟|l6]:L|KlӦ2Ϩ*5ULۤʝZ(!gtŅ_qwz{!F5qtDoQyڄ iUt$R4,`F0GEϔA;ij#7ԧIj c"RE({URp7T G OY1jZhd(;٩x+띯וo/U90a.xI8ՑhNsll[`yHF@1gjh]ߛ\20īP(vZކ" CG0).-aM6AEqjZG<'o'&U[PnoH7GkXD^fWw55S]3!&ׇ 3//f\ذ-UbVӶYf>z_#BbGEⶪ4c0xOz9-J-AEoCl=Į㟫&E0ε&4BE$qs LdJI\˝4%HFe-K%RzsH73g`Qh%+\o?~/BO4GI?7»Dϫ}`lKM~#Lfa*Fpp:=`AoP Fq~ya*ԶQϩ䢦L ^ hZu9k{cm jH K;Rp: ]5hwumz8?$6~kݰv?7B|:Km=[4(\X'U|AESul(R^NpQ dn,ΦTr{W}zgMMʍUm%e6I>B0% "ZpV!HIḇ*r;qV~Ӊv'6Reu^3Q61j!ĽwT8n UF,B0# be}0z)#DĀ5  I=뽑e:+Y(}uZͥ'mw{QBc/_q^ޯmTcNjۓ7EKZӴ{μr 4 #Q4B]?÷PJ NҴo9 v&F MjJ* K3BXHh&CD{fhpNJ7hb  aFU)iOΦ];nבt6Iڻ)媃5y1ɻ2|rŃvXH(ՂyM3x@XPLڲ-)qs%YxʼnϪSdiލJz+ 6:jc1o8.&01 eӋIqǗAsEqa>Jf()"*̌`01S\0=&qPa&;CB_ EӢCf?01tQw2Fva;ٓԇOjt}jކWKEUA9μ1fb Ti,^٠8# e&Y]߫RR_~LS1ElKƅ*$#`:,~,Fab9O.kuѸӔqGǎSŰ3Zs ,uq 0>r j fc*4͔QULz_iz$?NGΌ~.SNS30}<-}k5M?>6|%?osZNR<I옪?N< |> WQ-#_uLlJ*( ?$Ñvi208OV(/SX7~\.Nje|7)7f\1mIU㇔֔bs cqA#X g5W)~ղqP_5 M5E8-dLzsҢL1)yR-hQ':Wlϸ2]5Wi^ , tݲ{BhUW3?`3E,~Al~X}4ܴkUʫG]Ym}gT'?NqB+3o'r흭*P~QOiS.A%TLlY)7Pre7Z~wh6n67Җfѻ6__5|d2nGcVngu-76~;]1-.U&o\96갾ŝy0F Gd$G~Y\r O(~=1/"/Մ*g#VWPF\,AQ :J-C%_X2zzC7xvH \%O2$`I"K-7tpzc$J8rE{{!}w/U; ? 
qQJWICJ?!` Xml u`D.UO"QŮC$*7!{307 A J]Ŋ3ĥw3d5dd!ߓG܇ ]'-Ϧ#XG'<03w,{Bx7!`LN1DEU֧Q\ 1K$b"Xoz `qĉ"BL1jveZ"o8z<@-7H|gFˋDȕ7,/m]${n-j.j׊(p{H9vBh%G9 B9Wo8%$Áҍ\I[zGDJNKd@Tfi%1  H&nwwcP#P컘˻ º82)T<8Yq=c{/Ba6IJe"_M\ϵ4m#H Vp$$(ga S#=qF( Rt EL(bR,R_X{MDPhh냠à7gPLkFpaoB-DÍD(k@<&)XJ0T3ۻriԱ{61̯#"vX\r&SJhl8)8F"21DS70UNlP-@ Xs(85iatHp=I&ןQ?{F}Nx) f;A0yZڂmɑ$-vKl%EliDH">XuzGRysh zy]ZL{ {hjMtxvryvRṯ̌q9Q)reMS̛6V [(rTiӴt)r͝wדEpP͙[.gMrs۷ E8_{G@HKM֚@,鲭^ ',O16ْ7ٿOV5w##k5֖jJm8o= >|~uv^^Kްٛx`K%5/vp8~:ON?ޟ eO>V/0iGc{ICݟtk53zzaپ >RT2*D#rG![W/M<۩(%&ZZCrX "j0NY&x a $@ JS8t^UO?qa=?)kƉK"@ N+)Ha%S WT@bSdcg<=h%?+֢r#Ag>W( }{:.96gےBH@݆.R0d Q\Xdp*T^kfE4 X*p m?8,hEZS 1PK0;cBΊ F8Bdi-h=rAQ1{UYzwI ftqIi) sA*a4xI̓FGg\ =(/(SMSA{t6E>v:aWPܿqKs? qzD2lj_~fǣa#$da4Np^ԒIQ\KRg#U :i LB IB#Z?*lN`et[~YpwkypԺazޮއRQmzw> fjv׬3`(Cw^9KrPop4nP -B+qR} e8kdelHUeS @uU6H^Cvu#z9 #qEcg"qZíƟO?S~UTT T|0S#Piylνe$ܖB # Vq39$=  QU&c*mb1J&ijb (w[ %P$S69kE'cE1q6}c[؎VgܼSNZ&^(%(qEw Q$-wkLHڅhp`2Qf$w5)ٮgffq/Dƃ36 <3T È5i">*ەG|. ݝJ{${|;cEy ?o.NR<{u)>e%q09RA0LJa Oq͍N@sbD%rM‡󞏠ݓMH xm(PA1d7 oV~޶\/BZsi`ٌK*kH/qAt))'{5EJmM\E!Y"'D'33,btB' x "WƁ>PBZA= 14H6F /g=-_ /fO< IzY6"'\%c nQapHOCՉ3LEp6ʼnu۷-AiA ǩm}NLTR>A4*Yd<D:# $-zJ)W8GF DDd2PAR!J%.dE@,Q;D d7hbpC~0~` 3--A[KgZdH(`ڡm\{ק ,XDv`ѩTW ^IN9*h.$=N?A$= _|!eixncbr` UY!r_|bGj2aVXl3>PYSf#M?#>%w[J'w6PFHe8u`(1S!Ӕ0T$52J1k`u4[mMht*'UĖ(%gyӧ68bWw{crNn|&fsB`!4DWkVPPW R8R-obJw4lBƔ OThHZs]#h>HRc|^/]hҡZT>2bAr_Yd%7Бo}B`K$&$HǑT(:Q#e¸uHHmo!Z橏hάt"8'j1qOx:",1?[qr'9Y|-*yp2V#~u(V`E- Ls؅X$H:*F μë@4L;CH\ m!J!$8S'zP!ǐ}X[Jo0*Қ8x*ta.deu!.ܩ.wrqK憁F7c?5YM^\ ^ǓW%@KI,XVm5"ֈiN#<GE \;A -iPʤc1G&"Di} ȍNMs.e=-GɔQRţ>'H OgFzS;ԋY}Z,6JՋ^^0,RMSD cԐpG'p#s2Ahʬ)E: 78Vp˽i^6Jg ~|&G{?֦imm&g_.Et$Y+yam_V&kp,wߑV=tl/#T$ӎE^%yA6rFQ kfL|p)^wL%vtx2@xu$wIc2hU p_(ԿGM\^yoǟvE// v/!+z'Oؿq[ig#&dc ΋Z2I8Jk94L8<qx?52;8lOSLx![Vxo  6 IxM'ϻYv7|={njv>4ꧏE^靏ه5z47+ܢ^Bò)ݦ~ػ^^_P/zS׃n饔 ܦP(>0T`!+@\iRpBBz|ŧS?VpC`"HI7;:9 "4M9~Q&*"N˲'rKP}D)t<?έr_X`L4oE a7ɾL]qێ}N2U:mt,[<}T?VQbnF J 0ENKt5wǒ#RAB !/' )߳չSL u*uh=;yRV-[4C(9@8*+wDC.Fh#7{ӫi%ǹjhb,5? 
͊yM]қ!{KnΌ7\rXieQH5ݒs yP9ږj_[[r#g RG_R jjgk/,H1U\mDN.+YqqdnS!0 ' {pzn鲹S3^Aȣ ,)46J(ˊe UHFOhV:MqR^|AWEܨYM#:UyEcXqe%(Z\pmqqU/gO+wwxHr??SΔ@T%be5ۘMer{%dc6E)*S6ފi韛 L ѺȡO<}8jlmEK tCP TrȲrRȧ9 xcU,lr0#3E+Xt ]d۔?Iu$xvCB78瑾{(Ñ _8,3f)Pd{# ـ< 5?ex<֌N8rCn(Oni XتU]K(T'Hdιȭ-SeYZ`eY STeK_WVJW)o+BV_cWKN4;@ QLޟo{ \yf= ?~Niէhi{{ݮxlrLDޑG}f4bnhk%uVhgV &JQru\^yS.ו654hcK:SvJ27]6tѶ*FϦPbV he!=EL]djdh49;GK^V mߜnη>P_6ik^5%9BN@r}SB^+y VɩVRi2x7+ h6i= <`Rg^gQmo&oTgW=R\~!ir\eFWUQUcJ>iH>Sa.J ((B<¡,[䔌0Fj] .7%m#miue) =?@*!/5DP6F4((e!{_6 (Nflq49D3B-ýE^U/y}gurHmԭo||$sӂ`#*Â)c-U*;RT|gwh4:"\90BhprWV :X Ws ٙ:.awOiyfOێ7zW52 Ǡ 7⇿}fqyZ_Z$ dReZv)TYov{%*W'mCEWYU}{}V-]/r2̧YwFu?#7m(?f9p140kt.f8K-$5]\vV2m7O..붫mxg<'9ʁ,"kT2E!uܬ^P29kGc`k1Xn<>+1Xc@ DŽ+LC*\\)cU:XNz=R=M4JuWlzF'/_]kկߴofI^]m$|stϹܾḩ|meÅ[aO&.S?LwpvJaqV J=j)mDbG+/N+Vk1t\Jf+eW$D+e HT-f+m""\A},"ܾq*O!4F+ Hz0JV%&\W :%# v"\\R;Sz(j>΂pEX:+V]Y@'\W*ongKg%GyY:!~k]^/?)fnŴu97ef 4UY0W&2G~8;3u:1v |yu>KN\rET՗f_oQ}c/:Rt %6E+Ś 1CI91:^>_]g|l16"ZSsh&HySDWٯܨ}ww4+{&+;~m =:(Z|/BSji̔ (*\߱G[<4W?w|uOGTԅl=Q{%:sW !#M'zG5Y}Ef!EL:ڐYQkm~P=L^M%]W >*fŝN9~*F >.n?xyPqyX|Um\YmŻ79ߝ:VtNٝ!.f }_/V}{RT 'a䭟rx.On1݆m"{j8oSj.Ы'nIi|{~}ZmwCF?3xq^ϏhbmS-mږ_'{&?7ǛvY#AS*zZZ4;^zC|_=-"[P{ FygYrsˀ#F|p.X &}'ÓZTyW)3}[/UZFg4xJ@x~Y-_`TH^tOK@ L\`cU*0=CLs[pܩzV=sQvQN(ϛ46W!;/ UL ,5ײxVpXm%I6͵f5=MoB* Ʃq5H;LWT WfLաR*%"  Xz WR p5C\)}7器";Lz!B Wsĕۘ+ Y.F]Z>t\Jf+#=D+]<2 Dl"4 $\WjW,wDaj}Km qe6G+Q0rł+R R+Vt q# 2\\) YU>K\!ȌzABݱKITJ1U1=ͿiKz] ЩvACTO#:cdXNU&(:#FB4U:jzt*;|l>j)!$DWł+V0t\J+T'%3%BKtޡV($Z:=Ov9LYF12:&KfI1X-:Uc@pBƴY[ XWpHL ίWRj9} vuz"z7 WvlաV G+e,b`Ct.jR3;iU>\ZBp5C\i'płE+k YTNLz\ҩpE ,+O0jMkW&\W(X'dƂ+V"t\JYzW$'/Ls<}:XJl*!"\k HV:X W3 duDⳄD<˕&\کjSp5K\TMD"ă+ .\ZBijfp`0JĂ+V쬷&\WNZ2"\`Phpj W2jFb>Zf!I?Vr:}Y1%RD&R* Ym^%4)#9O?iۥ:r?<2 "ckY ~$k}O% 6bqʉߏ Sk&7L%YOmwwL_p;O"K7O'b"کnrjZ8\ŏG7:n7sNԩrIM{P^=(U:%_OO|m#ʋ8/G@mO%Fk ?c۲%P.VE^~{ǍV|;},G<0Tw?f.'W;l/֭Y7vj~=wwmGeJ@MXڭ8lmkW"{Ò.%S7Y"lex\>}z5wuoGd7O~YP>9Udhi8E 4?O?gb^}"~X_cw9rty/Wrs iӖ}.ooT-{2ShF՘=a )cT8>\uFwVUu<ϰv&6%~Xhn\}" TĚHz`v6U&![-sbLD1ҁݜh}k5D*@U2,F$Fèw2R9Eg&bj ~jE5K[[jn\u g()3 4b@~._훋TO.`8 ,s.̱f'G}j/I4W-wvZxN5bU71HGI+*ۯYG[M1ɇ%]DDNE[RHqu~0Z#M‚ڈj`~yHVհ9ns3lm6m'r5ʓ`OZ)Vէ@sZu)jԍ:쐒cfC)$X ufIQ(!Q!S"kCu:v4&GxHh.2rN3jSؽjؠA[w Z y9xC9ӦG2~0P8jeWTHc1ltp DWX$NCqaj,ag+3\7˦ A@p_9 Vq`kޘuPPPB ׎]SPP|I']X\l<5m54FRĊ ̆vmBCl\+`AVPꛢDd(M TBM'Idd0*zbrze ++eq2FN[@h iNBB("2m ؙ\mѪPFtѷ=t,r84iZȠ 0(crf3RUǎYdFǘ@Qc it"B#Nȿh>`V;ΗL<,Wm\mezs>D^FtjA#5.0Amz`-dxs:p4(6B[0uJ`"gIW=$B+ut2: Ρ&i`Bgąs՜ (DMʫ 70iFih^Bd@Mƛւ +w6UAi YVt@; ZfSZش =W&= M F9`%#p"{j̚h=MGPrm⊕d 7 NCiކ`ݬ1jŀrvi" J.YyЬI4M]hEWKB4.?!t+kp N)sOAAh85AۣyΆAԒ]ZdXYj4b Mn 19{wޟm~nK-sNi:T?q ,'qw[K9[htq1s~H&|f Ǽeqf+nlΞ=#Ol\_DZRq޶W -Y>; c;88߷m8wc]m3:9ޓ:fh*,53 O"1ad(S1$> H|@$> H|@$> H|@$> H|@$> H|@$> 5|@zPS9uWAH|@$> H|@$> H|@$> H|@$> H|@$> H|@|@|@pƶ̣1z> ?+$> H|@$> H|@$> H|@$> H|@$> H|@$> =UNdw<>06-(z>q\$> H|@$> H|@$> H|@$> H|@$> H|@$> H|@z*>ֳ﫱^yq|5Q׷G?~.Vۇ[@H Gd[>ҀkԱؖZ6ݶ4PZ-`[`鵂GDWoՀKGCWvJ\ѕAGjhj Xj{Eӕ'"Up昚AG  ] @S+5VK϶^ =w\mח>k}q]CCߎ=vB}MOUNOꢮexѶeguVr89$ø.[!p+W ׫1H&voV|}A?a,f@_)xo!&kiwݖ4/OSǝD;c>s'~l\vȴ{:Ͼ(pd|~@~(c 4Vp|@k@h'Y8p4t5p,t5:zt5Pz+tSp]g:p- ] t5Pz {tMsIFu^>rxOgoۏ|sEy5gn zq_Z1XR3NLp;p4}= T}Iv7zVHx!mp7bj]obllZ/0jkAAoz X뇖_{a/+ểUT5 D?<E7_2y*;_/6߼O׉s8yHR. 
oo9V*ZcSt˘j_ Lnece7l{5~<$MXTԇ%YQ'te觻4вC4~HlVXU,V>yg]i羟wfHڤ X5g;ՍFԞ2JZfK6>:]f?QGZ!+]4LDWXpn ]eq8]et qNj]eOhZ]eheUFhKWϐ8W42`up*Pw(t VDWnp 4&tQ23+4Dsݜ ӦUF (iҕ0F7w%9*}]G}tQr3+& Tc*)tѪUFەgIWQu' ;E]i:m֌HuM|clW W7B1]VVM'm[-EV'0J@Z +ZMd4F,>ƸD 4Dj #ir)CW.4f9tQ*շBW@vkz ]"?>]ˎKW{=Rl(^ t5{4=D\={~p%m ]eLԝ2J-]=CbseDWi ]e1tт;]eBt Kl]!`=UEW.UF+kOWjHW ʀMs|UF{DY-]= ]  D*p%i ]e֝agHW DWw1t=){Oڴt '`puc *NW%m箞%]VVks)1]F[ҷlE٣ܚ*xS`ڳ`= cLnin jIz5uh+IYkP@=ӦJG+mJ)EȂfbUX/3q[?-ӸHw}=J҉Ŕmid /0T`$dv,Zy '$>|RϞ|ŬiurCkS`oM,?y3 e7}7|˅ge<l1TȜ *&XQP=OE >|?3o˄ ,?{2?ӓ  ev5)S3Dhd:47$y8-ddoS{kdDZd/K"?eB唗Z!}6oyF:rB cE0*'S]R#ɝ"ˬ~;c]Ct`h`{#cԂTyrB@kUe<ωz>rcZU ')&)I2\^I`R2'Ojf?Ά7wa]*u2٥tզ]yڽAznWp4J2* h@d# 8X(>ZE1ŁQ.FlIv{!NYgpjU~]9.W Z1^9c7&@^c $)e:$WؓĚM8O[knFMG,u1ʺCWl||n|xqEh8#O v $ u3AI.Ӛ7aTc^\:*!:9NNy`z +EJ1e/Tc}٧p._WernF0na~1g (>]0i,dWhsFs㥉)M< ډ*<\$J_O(n6k#!CZ)RJ04E2iojEB !JD@޿R pc١⏃8 nѰouXw:jjQ1&Ҵ:VBEѰFj*n@@Nh{S M˩B# yjH :8 s@yϴ `{zkH{C.q[G3=DQu')sRw8 όIg ߋ)"2I|Y9a|a}iw|w{eI9TU{} P 'sÙ_]ȓSQB9a >wx]; wckIYG'$:N(ORX WX!=qUcEq0)òv> f[>ǫ΅-d^KX?,~[z|zz=;>Tp)TГI;Y.T4%%`YQmc!qZJ=ԇZowfiFu/6|u8t6[nvei&gv~lɉ5's:ߖ ۞ '[Y,oC K̵Guݹ. --eՔpu]q^ш  >? G-O1WڥlaD% fw| W/U~~L|חG?c Lǁ Co/=k^Y֘"ka ![m>h|Kcki@2 jqYG:XAv:af~Uϓ(Qt硤aB`ۼpy ^G~&6Irأ.3xRbґ|!R9K,5't,q $$.)V*GD(=ÂjxuzIE-L\3N\bpZIF +AhRH[YJcTRN% ;:û>Ɵ7z9|ijcݰG;e!Wr~;wy=]jix!:՝uHRE:,PJn*0:F %&2d3BDN Hu!T n)lk & ٮ~8퇒]#1Η8L|4EXo-*@u=X|ZOx~Nu1cya{X#f9\;_% 3^ -,A/%֯lT[GZx{43DBtJDJF0hQj,)M2y&|obX^ 0kku,} '%)Nd).J*ꆌ}EK>c,}gMRȱ3Bt~ܚ]b;,j6|2,Op7, et KUq LbɁqGDzye7<5<4g-fyy7ZY})!JFZ9 O!Cr>'.b^b_d)pè7zxq`>-Ye9]*'#'"#aOjNP\"'&,XմyH[e-p(`hPE=. 5,!+F(KЉfVX_n Jr"+-sXM&ZbGŽC ;3 ~ F*m7"+׃d=g x8Z~N'JXǚ?>9o?p|Eu-!;dGqw 7a4\AT"0 BP6?]A6i!OD/ub apF擳zF-Vw^rR琼 s0J/4a-ȠegGdgݜ ^3əԦ-m H7*KHMqvdr$r>U7^ldŌb\^SkӜIY#H_!$Z+_g[yt9Φ kphg4%ƚZ>/do)eruK|n\ XܰĽNe"D~+?[Μy执komE,jN >sNfˢZ1ȲAJ 3*0_?;#"*mU q3ɸAYDي0BڄRy}^v%eo2C~C_ >w\M=z".{EG~3uEٍn2x~5!52iz|'Vv8)L3 ] ҥg}M$g']rtM¥( %swimɓgɗr]1چs ^uolƠ/<;6;xW? nx Û엫u/h4.&ӫ-fYƫ;z1AgoXqo4# %iZۿWJK- OFaVŃMI4XlQ T ':O+'>ӃY2L{D1Gs@ 2dSv$[3>QESߴ9xFWUSin=={@*4@*˷i|N 8 eo$:8) c8b>.+>Wz+a׃+1MZl{u #쏷TSLFz-d5Ry ^y&B>Q1]e<؞D$xR)I& pxB$ɬJp18&y U%И2EbR”2-w)E/9{ǥL\dd9r d `6 5(': E%!hXcmIp>Wie1l s ?U?6P5Ǜ..oy@ތύY4XlJI":! QDcޜ%؊ ٜ{,C 5RAKH2Df2{6)%1g`Vu.&%o]DY`L)0j$oY)YԝYVaW/&vFR#;FK b%kH2$oi%tQk|ݶņrһգɶ!2=SQ٢(!r<\rFf´PpL "HCoM$hzt%HS+ AIS7) , g`yJAsL'iFcI )^m2!I7_tYuuY}9bb:$XH1pNRo)"6ӡì"wj֫v`4qgc1#U"df$ ibaI(.2='@=MQ* u:6Mpl G,\:k֡a1_~n0<;F 6Tɘxі GKq<~iƥ5=&߸Md!9`,9>qLFLe<.9y> t&)Y2SX,c$͜gPhcڠ>y5Ĺķ,4@2Y;!`hP˥ R鍦5 erk Я1{ bА3^7R+=Z%Rr>:i \i=JP\C.xfMc! 6@D@~V|e+YZ 3܁r^+aL}?q%?,U0R=_~f$SEVN4~a1M42Ch\PhPJL= ؾXQ aAcI^h&)7A !3Qh1Jd퉽eXb6zLA)r46mf-Cs(Cl|~ڵ٩,8XBn,驅*cx@R ف n6(g('8&3p\eo=:9;D h岱m6Rfi@R3%v g}z\sOVzNM wz_;B1g>\Cg}+8<Š {ɡ8oȡ%ufzu(e# A+Vّh|p.Ig4\JR IˬT!rBY/ =Mnm5&x {J`ܷQ]'"=ru0hM Y)%_ffH.TĪ$3N0=ooBRْk/bBxHғ \{1̆ʊZM튚ޖ fJ4Mor=CBq 髨ҫSuף?"jwXQGa0$ P(*a"x: $'wn&ǘgVZY3pA'21;/6g&E5 cTmXM;ajx-um!-ܫ-VAa~^ENH>Ůݝ5hp4| gbk@dAxnD4ɠ(Dm5+UQ!g y/TVTx . 
XȒ8{+b6A21벐O9̂-v5q[l?hlP 6VG$Hu*k:VsF Ko+&P]YsG+?̺:2`l#3΃GᨓHJroV AMPv)ݍ̯C˘aF]9L̐yA&MX J$a"HE83Fȩ&cL8aԗ̠0cWDʈ="xo$9>g 6KtHxF֩lP& Θ@qBfds [0TR\dBm%ńyX"4T{[>uVӒ]qTEq{@ƽ6h Be9'C'*Hs .`I'Ҏ]V<Ulzւя/(,yգ'Gv6#MFZ)м{5QO@@BDPgZz |灐|!%$p /2Ds4h+T)hm JL 1R}fWY'lfm6 p/ǟ,Vws0}ߟ}|N҉y$Q) J "qC+\0)ևq KPSfGJua9-PوswAT6j*s(pEWEJ=\BB&`WȀ1~0pUĕp(pUEw*R|p2 lzU^!\i pઈ{8H vH}W WV ~337C[%wOWܴO'ǃ7OAw8_?$ax1o&塑FsD:N(}nwo%` gW=VYU{EJ;`V(b8jq>q:I8ZFr  x5g8t'|1 g4fw5,fQ~J}FEV~nlJbut̷p:wDBnV jQi+Wz}cFoHbc5Bpwܲ\I~H.QXKTĵ@}-Rnkb˩oS(?s AsZ~L?MK6џ~H߬R} סyJoKov<$_l'ljk˂UL >Nʍ'{`+zyZdճL8r|d1L53U;2nSj.?E_|gӆKek'ld̽rBr%6{c:D:,U2 g%Sn:\^6:oSo>^#>cjZ`RRֈ&P]掛%dٸLElTArqV1U^`DVww$o| ufb${my{_xd#Pz3vBg&i옓udF tu}' ~ׅbzB4bOTj'Y;x3E-KQJJaCUP:2oSW "R ҏuf*CV sk Yx 46G&l;h ~Ք-f^Wj<}J_HTsSJr)f4'ba6kXgjS26Ψ=W&2*:'j.gkľӈ=I<{1pFuYQ'd@ +d )=B*_sSxS TZ:msZ$/WV*},eҗTt1 m1:0=*&OGL=o$6gם3Nu.H&s#Jq ,x G-,CK=wFKN3o(:(IBT$O&IOmu+ ,#xpZrCyQ :IoVnD42̩s: )C j#믿'ZB*٦hۥqY/U{Eh9 "c"kr71p@ŒyyhҽVuς{;;RJL cd VElNFg%(i" {nmDzZH,ȉ~R{4<+L2B 0MJ( 8[YkW3kurraKۘ@IdBa wE7VE&cML?uTӐ]*[MЋyYX !M{H:ImؓHu@Bl2&Ie͓Ofn1}A➵ "WHaȍu4 Z J4 L}t_` R8ǔ8)2ѓ˨$&fɑt%9cN4BPʦ.q$^e61x>ҫ@;*fB6*Z\$_Ԯ8[H_Iӝ!emK :8xU Jҭ7>w,^+Npvu :Ǟ; >;eb>59jmcQQDhmN:!Ǜĉ*--Go,P0"*ږXܝB PWb.^+u8کX}%lm ӳCh&!:J8˓6I09s,ˈKJk9vR.ԵukK}Vuliڨ7O((Ӡln NxʚyEkBL.қ4-ѹnztMrnގƓqǶTnAo>r+GVIn{b}NuXt뜧aɎ/):,73v{v{eKUy⭆mmT泌pHuI'P.8k4? } NR*௰.87R>پp(q#4k`4 nh Xcd;OڑWy%n(>f=>f=>f=ڠcpvd 9 +墵"0 _oOQh6w[輥{ofN(ʱ.ѱ?>} c/?"{j/䩡5~}`rzMg~6@z&pH*5V!4h*urޟ닺=g/ا_O9p6p>SU#91xn8Zg}r&b]Y;;@z@rbc*RI#ye+@xiRR\Q[ @TX%32(tq%2&բwVT[hoZdn0=9Zǒt?(+~yGZe!ZC.4 #B4%J,%䙎g$Pty*+cT,q OZJdx.gK\BRXb Z\tRݸ{k'{tNGz-Sz9'&N|u%#̪WDm&vÍ \, EJ6&ƕdb48 ,QX":3-Nc=#׆lm4;Ȓ&G:G74>. x.;{s* ׯM\ SW"/Y&4.b2|φժG'ŵ+y\fWΏKg{ۺW{>0@pM{ov.08kI#Y~IZICsșy,MfE0)D0M̻Lb3#ɘ|o~yևW7qӠԅ0({{tB[x}bgg3Rw Suצ"01M{CePNgul2[6{9_a71Ie<ֳ"nMg&n&}esulGm;ui 5NF'?۞379ꍧkt~[@eVH >s-ddI2e7ƱlJ 5*0AlHÐLZr.,&@'*,d($B,YLhx*阄679ZvɢZlЦTS~U v,˭m:TEƄZlEѢ@2g2 ɜEi){#"Yf%VIH"m4M88wyœ9[Bb1Rv%s(&i#-`pnH"#AH(# 0Cý Se(b޷>n;%DO1a*hU:HRB$2Ȱ $?^[&I2NŠy&BIm3q06 A(kvg&:(|bY %6Z2 FJNaeLRpFS0~5IcG0I#5>0%t9~h.isS7 ztZEo^x~VFj]rpKލQ+rCf4,^"vE{)m!Ɣj9̯[ԷuS=y>ղqvR`-޹I>vs+.M:ɫS7YLYF  #lpefyodiZvz>ѳ1Ɨ^NO`GШIsr2RV>=:ͣrѴ;ǨS0SM<񏮳^W߾~o{s˫7]4)(C$X|4 ??#7CjCx%C+o0Ÿ- %7xw(-7&q1Qw2gwqῥ bBl J13]bn[N]k|8) G,;怩`D6RKƘEИ ԎN*ixGd>kYg!EF+Z9 1-2tikMvK{*:{hT'r4PhV%sםťΝ.;Uםqݱ:$_uvnv[g\d (,4{Q{(P, iۑ;VL@Emd!m(ש䝀J)7icLU.[|hj# [L,Gn&yo9jE9Yꄮl#V#gC/  dw ȝ_ljQ>o`.]{YmvM.y˶O? N;D$K$Uk]1և9A\l0ҺZW ,nxp6nWM~oHT0yj~NW7> x~:ɛ Tdһ)q^qP`ϛnz : iz@3ȳ'h3CA͟ -;dBS<yNNLQJYm4,h@X ©d8u@<ᱢ &UOnSҮ^)h|^d׵jĽsf61x%]62#OnbΣßx;4/NA/&Rj<9,i*?ڕd 8n&sݻwkdj #2 ̫o՝G]Cp&⇳ slPKWPk*@/mVz0Ȥ?IEry.ъak06FJpV7ܔTc`G}vԃkVuZY`t1oggѵ,pZK"8.ybndn"Hm^۴P.Kiُk7{si*JwӋd$o"U2H#q~Q;憹98*A䢥L3IQ $KE=9 dz}ϽqBf }P׉O Y! ޶Ɇv =k7ɋi|g7/n_׋яG_ۨE䧶|B͟ ˫ɬS/hѫu7gW]rQ&WHϿ;7 Nߔ-~gK ?8Ni{…;46YM`ؠd%3pG~r';خ^) Ǔi/4D&)7%R]V Z€ʐcrYFce! 12:'W . m5W1O hоlC6yhn̄ظӮFywpy'P͋dZRV ],a`9KA .XGl=u4ɪJ|!TґP\e Fz ~丸`ųV^Z08=AܚGX*XP2L6+N]c!@IJv+{x%#J%h|6EYԈteC 'FV(6px562c=: '@PQGX@BVJ,[2hBEIM* $6$mV$"pU0 95 -Y~7zp2kT-K,[%OZ§zgO#aٞk N"(PJ]#H+|hQI6"xԒHV@fhr{ʕ1K.HzA;bLk8W2)ښ95c=[.BQWr~Ѕ;/3*@ aijYoMAΦƳ/\ck@ǂ܈hq 8pKJ2y/TVT僲<RUe\LҨR) h&chu)`ƮFz1ƂjmYYk˃>X+Ѐ4F (%-\FVsJ 83%^1Ax#c!d 9251E4dgǬ" IXk W#g>lAuQǶ*kD8hăF\(qO3H͒B/t4 5YE:Q 2dh3 6N $Y.#\.z- $472ardń9^#~8MԞlj:qɶz+EuЋH"qdnF+Df$Y 􉬸 bwBN6*swNC[YnyV1Igq E%[w~|"GMV~,e|Nwh `?Lj]i ɴyWv)`!9B9B^8Br *_L3+t2)(c(3/Cg6Cbu_xdG֝u\C~y ٬O>}#ZmIk&ɔRAJ$@rH*Z1-!Üs]W n Kb2<lr>AI2 x Ev]9xauzny;"K6en&^v~ }3}֢U ,-p aW+ռȽ7>V=|![HMA؆(+ yfZiBbWCwOSAUQ!mQ(+eZsWde]`ܞ2,bߩ|-z/A;nrr Vh-c 8!<* } dд?I>svs=ԈoVD,}U?2h=9y>yӥj},AG! 
kRy8ݝ-%,-"o^z}zӂe6<LN h$H)J M!s q*N  CI~\Yܥ'yb?ԕuFg>ԣOicԍt~;-ͽ1s'q>$ҵ,$׹Y1r=Cfh'K%Lƪ:Lyt/\Pvk# p_*ڱUn z -TeT't}[F(k̺3ZֺU!߹Ssdn1C+wml:;/ߦyƟ}/- 2hv[@0?SĿRpYqd ,?5_il7޺i&>Hh?Qi%B1o05mX9mR ˎ d*KP2t1Z8Qϕ,s*^ 8i_-zk&̟MΫ:̼Rܡ&L"C碣f7 "U֌=H}(M'yOx0!PI385 esoE ?s;GҧqL;Y{F͇?*q3,dewU0CP|k)P9]9Ҡg$DSgF׷%'MXu (:=C\g{n~wqBeuXChk:f0t5FC=ʮ](U+׀$&t㑨vK^,a.TF:%ETR嶃@ƊDPD &eP2#ol#Nt]YRPnd)K.sv)aSB.hv}p'M#fz]`}N>`e=YkRfܖQNFZq&+ uh"[Eyfu"q5\q}ko7F1j[񕺢b>vj(6]sE+#;'&9 Tx9S<Z9wPv߀;'slgNBΙJ9ijbLa;'`ǓjlK1Ѹ-=V*C9 l9iR#B-*Xvl[;}7ܞ aJaUmՂd<  )*8B`!T 5V,;w9:oCcR?j,Ay'1x/WpmDB+,Ic3}{5y.7`s.nΠ\P)jʷ2 ɒ ɮoWŸ!߭ F9EYq~\'p2%旗p ldgIù~"TzzjK*2mCv[cyLɪe[Pvk|H"_PU?xZ*s/A,eyǟCڹ"iwA:чX7 [n2(:ccZ 1܃Ƭ[0e[hN!r;Ч <[ ʨNX L` nnUhwEX]KṔ-$DZBc_}mQN::aB`2Xj^kjB6pBC.n)HH^o\ŠF bjP [%-X~^m@qjus9q^>p@LWVʲ+AB{{ox?_8X&zqkdһgX GCb" m@r8  h Y@R #8~(")ـ#2]C֑=׍'R`xvӿ@.%w6+FI S%zöahɍb,ӂl(gX,1XkB[nw!r0hzzi__Aݎ~=Ac;0ao0l&m2 ÄQxAJ @b 2%\k$ ! *R?{ճo~X`_J(pPjbi%6 oR&m !Z<ݷi-;#/M 74V0SgugH$URsIvʟ&SRgERCyʠm30nDn^m@0x7OB#j}a  !2Ps(Zjb+i<%رGe Cmd@RJSϿҐy,3i+ZNEma(k iIm*QJUE825àBӂSex2hsˑdNNG@RkoF'Omc2zxK~/֭p6ͦW||NG Bĺ^2BENTShGцeū͙t6L0y \.K}qXv닒:oߴzc'4hv@tFpc Fn7҇ i^Dha՚$&C9Mm^d7ވ[0ZԎ[{QZJ|%@'Z5-l0Lv p&% Fw۪OL'=UkƖZ1,Qs85wCbTyS3 sC6M^QEdkE6d(.+p^:5owLNݑT]~=P[^ǿ_ " 5N@?>?\,haq jB:sxLLsN'AE\4mP9'<އr:C wGu?3\)o1\;mQ/^en2C{\_-X:!c.j7N֯Jٻ?A_r|v極5TjQߚW]m[cb.<-evI/.÷IV ߱~~D֧-ewcEnTBB:֩kPi1hsKR?v? hfjfbO.7e`5w`j-)ho(QqyZ%ZB8UL9Ͳ0H Llkv: ηsm3[nl?n8sq5ؾOO7g,f,+pmğ`bNonu(ehFDFZ/7ʭ= 8de|#,ӯ~|w7/Z^]6f\n ݳ3k :2ڣ=$t ÅyipƔ θ:KIu\0$?EˮԨSʨ~wq~ W#pgOwz3;U|Ut?>+7ʒdo#ʘ֟w.}:NL||}R>|Ww|pIisd8P &du5YQUlsH:9ߟCzDXu֘V73QH5i͔*Ƥ@S21+|Iiv[?͏`~cBK6D:Q*YJ0R@P{T(d&mC;Jht4SgК;߬ZyYARG=G+(bH)r2.1J xRD]l@gFi}\S}tHЀTIQ]E{%q_*X4oFN\H.Q ,1ܢ o%cJmIA"[ShK )M^&㍱=$B4&mtlRX2b0hxݻ*gn#ꯃW5ʣ•4)Z*T $]C(tK|E+8mBI*@TGUPJ77L ΚD@'_'_M=PTƷQ:0-Ҙ̆}V(խjJ e-@AfHh ^ d^%V~R7@8Kh"uծJ<, G`,Wơڈ(DŽE T`lq8TˈL:6[W:X8E br 0ߗE8N곀֖ (Ɉ <Z_[HRQ8y%%Sʘ/b ~Jaa08. T"x*=@52*8d"p^⸰ai@q KH'eYNDJW< om  >O~:ҤHpJQD**a6YKa2) 'Q Oľ #n׍]@,!Z% 0"1w@ڧ4yIyH! %:.p /sh0'25k)`*H==ѓV nK)2vر-Pxk3vDŽBbFP,,[#ڷsPD pd.)063i*V@JPHƁI5a|rs]g&2PqtJN#0zRT$zq`P;-,Q[ZS1}@ QVy(Y-y`4h&p31m'尯?;HzR[2p ] [S|L*oIU$w*T@=} B0 3E3':RnC#뛶u2l{6v"+TOc N1֦{/ߤ HI:8%d:7B^w5l^}iR5W js|ݧOo;k87X>|v\ٓ-t]>ws~kԶK= mE/ZW[b/Z_[܋ז+͍-dY_ bO~GӞ-v^I_ /u<<9ލWlz[0]ړ9TK=^o= Q@mlO_aQn6;|Kܸ]ǽkA ~*8='H]_7K>7v~vlkAPz|7IoN~>|B\NOეeBkn׻E*ŏo>3/W:;̬n6Q]]ֺz.DHOU =?k,7qV.USCnt|ݓe+LpH4#?*СiM+}i bsWm4 T^2/WO0}R/WL 5YS5sD_\ I\ Z(4+M*p^D徎( ?2mc.;H_{JFU}Z?֦+p+GNjL1UJ}ۙKsf[PŻ+|W"/y}fpvg{28Zv!~VvefIK?)Vؿ`a$ VJýj޲7{՟-w^yt)zLtQgktQviejuDpZm^9*3,$nv1{}G_\]x-޸pmzeyd1j=nz߼2;;6谩5q 0~}oi~5#ak,lf'k#͇F?DŽ%Q!'$+Qc=+SGOdzW =z(\vyCD[Ŕ/~:#_mMBg{ݞo$'C) !_xnS޿k/ݶ&=n7}Vgݶn6D|!UL9./mMBg{ݞo_2k^G3wr? Gsgkz8Y'x8-N|qٰ M-R!Pbwz7Wu Zk+Kܹxuϫv6ܸي,W0j:VQz/\;@LO2?:3PԘ evuln1&ZOBCZ7X߫o8dܧ+<+3PfIA]AR:n]".w!hxv烸vxN2'>]ap: qCgE\P$QIbha\Yơ3 E'^Ƶ\nl[:<>Y\s6=uo{|ݴN>߼ ؃}` [;rM7%n׷Pҽi&w-ewZH5{b볕)IoxQ7{r4=t;}{avhNG>N/Λ]2#e.=ûo 8vA~X9iS:c8yȧ+w<[Gb9\6=9M9`$[ 0. 
F_◤Ħ]-;R[ ǻ77IhlK鉸9֒u:X3r<b0_`话&]d-fvx<;37H ; n/N~+ 5ޕZ 'f-l[J ".9 1oxg܉z_ft0Ÿ=5G]MЯLdjt<ӽ \-q;WLmfj_ W;3^y9ԭZ շsI>vK |N/+J?OOZ:D4~?(hsr_;)0r6~_ݷ71o7w<RT~?{{1G!>z.:rY@e҇׳r<`8Mp^9rLh$ &+hdNg I"f4!xҾ?LRxh4֮\r9&?/OWH~;7Uds: ”h4ONb*ozZ%/$*\6XuE|߉P>$5i/ɔmj%I :QZ-*jY|Wz N>gKVD3Mr%YxRf"d>hgV&a6jc`}Zl7HހZ&*WD>-Xe&gi UEKFo5*7a (9HT:גrH*4FߤnUS)|VZ6pPUէ`{AVF*BÛ +P,>#QB]$22VR)" #xl ,V5u!'EP@/mL#/)>;/uZs0Fh@[bDExaAlDzM| Um -D) r>:%q0 )PkTR*U$^G(Gs5&BCKns55J -Y7J :vE-P{D("MFoa]^GE%aBk֖J|BJMU@Q"-y-AӒ-F2I-sTQfH:4(U/{?a zoXIve^Pn/VJ5$Y.@YrEo-FcJ(mrnCt6)P%sSW uRS"䊜H}FgU>!.#yaRj b0QR6z@g=U^cJI=([*ՓdyM8>E-5k,R h'wR5VzV ^3 k"@|% /MDrA!N,*HL B`|*6#ѶUUU, =āH&d />JFh?h0|L ҤeWٕ|n 0mp9xUi&\BՄ>c õ@hC_Woiu%r)0k)*acԴr%9TH\AQkKU$O FԒ*X|.@1?0. Rj  Tچe ȆvE(T>^4p\0i, xejr ($d8ee`NkղIR侠yb  e%r㲬U!TrE`L`3ڵA.C, L:=46#]3cEfyPQRZ/tAz;4&s2 O03p"8kF0TAxAkQk PtlPS5 #T k5 U2"2Hb0<ʰ*@ Z 9k Mp;#[Y{ŌUDm&8ip>2QDH0!"M@ j̰gyg:ԧ`Ö8z@2Ԭ(0+'#h`fS!yCs X!̠B~fąsjN\QӪAU<* Y" Woaq%$ A}Y%P Vv< om ^X:?("~T}%E-AMR t H7VEhvpy5-d!x2kxhDP(c ѣP.O96HA/9І(T—蠻%_>&#T˺]Je`PJZ%l-ѧEk@.Ia:L 1^k30i mB@CJ߲JÔ':V&j$P@0ufU:9 4IB#llef =Mg'QEEՖTOH!JJ8O%%om2&L3{FfĖT2ksE5g"Wns)I*WY.T`=@e e 5@J̨'2(X 7lc1l]@ rV"⤩$SIL \3`Q +U C 2?PeQƈ0RLBG4yUX+~ܯI(ߤ J> Q:9ڔ9bxG N})F箹Vpﵙm~GWKn^uKpc?z9Vmkk\[>Dd^8}yi/[Q:2:켵|ykso->[ɵZ!毕GX-*\I74WGѻz_][a6\il!ګũΉӣ9N~~r++Ʈk7 [n9 ~xJҒEX9Aizǯ>,ئN{؝n6pŸ/ݟs=~$3w4!*UK&$*$<,Mkj?< Qx!꒲8O ̾[5߫w]49r.\s˵Xn[|=gzF_;-UއzX؍EcwvϾ4 *"eŒDd2=Y_DFFFDƱ;w?mM`~ v_.L !YB&(]R%_&KQga.۝r]n|N#Bqy诛bP13|f.:d,߶\DlW^<:LQP{%D5|#,dj״`,W`1R57[ AOFF FS:?R{Qqk6ܚbG(]EH`N(8 Q:GÅRb 6xeD|H #JR|B߭W5ΥlfcB'uÐ40!𧤶 !TI  9JQږژ &` cCp4u)#7E];s/' Cf}<<ِ18FreT""4#&Q*C`tcXlD8:%%5RnVLV6!&1M >Dh aQ  ~ ])DdhrJu8 & &.g,NdaFc#Aht( D)m݌dn7 nW]ohnٍMJ\(1N?vx ~lLӄ^$=kI(A*" 18@I#Pۦ"( #ٗpm{EĈQ;NZfP 4p"S,L$HPHnOdi~𫵄H6IN`&+mLN285 D UAԊ9!Oo^ - 0y?,n{p_`#r0?ĝIG*j\qeWGsw?(+WCɘs0*1np9{`(#z0l0_T_,|RCrp+be& gvd) Gc0ãdanuk6 7Oն]ea6*&$#hF(BKaLk򉎩0<@ \;Q&9ªo vA17keaiT*^à dc"}ݚ_PN}B.Hy2 eI/]Oȏ2E<~lmlAHW*)pzCr?DZc#dKj&A%)9mŽ,ΧAh2@X̿~Y2y{$t0/渏J 1y6+}by2`chϰ֥Ҝ.xAV+)-0~Jw `e5d g#nXadG9#Gl4(A7#ǣdY4yQH2xpjǶM'Mk}bDߠ3T1hdϣ'7.$RuJN8ҶW~5ITjeYn%kO S慄)K=[Ї v쀞zX6TJOHi= ?Z<~ԆoDyzse80SX=AKG]ē^-iҎAFʶ!wO?}ysp̘@=s;  ‡Jtf4yY?< XqFxOJ} ƱrO/OGj?ʹֺ{i$1sA|Hι hQJ 82OtD&3tHYYLj=9>U.$|TRP^~~}c!u, B^MhvlFy9'*=,.Rt;|F~b/~_OJ)d46rX}} AW88\ȓnTCrz μޕ:R;#K+)$ps6 gg^r޸[juX=9ڶ`ų~j$Ep=1u{{aGs M=/pXa: B A\BBJS@1vi7R;^R  T-ODF`="&\8 W+_w_-o}9_:5ճw) y a Q>2$4,#Jr#'"&ƒov6BU/bB"r5'++=vŹM&t(.}h܊'Ak<-ˬXX[nhf_Y.?2jV9FOwU땜ݪ 3FZrl4|-]P C)&r[Lʳ\-EuVW4$3/FHK;JTKGۜn4$4_gRݥ8rMo azim>KҞ.΅"|:]&sMR`RXC]kWJD><}^|u R!&.6Kwknwi_#:^.MaG f=wڲbnK _orHzm_{O Ts["w/ Ȋ@_4]x1Bdyo7odMϐ43~Q7YmZ7,}5i*u5lr8_# a"poV띤ǗT,J1xDIZբr?y8eX]h邕 {@5Rb)Jy #iRF]R4xJŮb#*O)(RqmD  t/37 *ƌMdy(kD9I>qJ9]]Sy)3t}{3. \R;{'[sA)*+Q/w2{rĨ*!۴ӥ41sP7=cD!/rpZY>r!HPu$iPҁ$&WM K R C6w =ĚL[{SюnZ3Jd;p3; 6rͽJY72nO`HuIDZhRRBu![>6tBh|KB @Gy~#5ړ/%ʂ޽y0\\Pcp~/FE$"sGyRq)y =TJ)UttGv!hp%G+sj٩%TqIn DiԳ=b5֥NyK{]د=l<߭Mn2/Ma%}sq$PkXa]̔ڳ0[3zmCh-rZȗ4@SDؠB$G]pԗ,nQYNy6rЈZt!XXjhׇ͕B*/:ׂk/=*ߐOqҞB#ݫxēa.aKB''zKW7B*L}M/H^14LXр-5׍t/ap`Ưؽ3 ւzmOoa|~l$xlAc& SwZ2AyQa2K'@=F9πח^=Ú@7={0/ֲSjK(r0ua._M+)OaT٧p6!LZs3ϋ|K8 6-dW**^D$M9Ř~Ϊ3it;b!̮d,[f[S'AV536\ X{i_hffʙ:p֔ G/;h?O)QW&xE"7xluz'k^u۶/AQsA{O5}0[1Ltlq4OLhv*t/I2EA8@TaeW[m]v'R[ 6lmO_c{aP<:xv9\i9%Wj83YԲHXi !pׁlV̛(q7. 
8sAp# _i25o z-wU `j̑fg"JfBmvhmK=xO,Ix8Nc|%ftޕnv!E/6=q' ASy/^$4P}) nk>x^O!剌3xL>hT2YV:%:(G/UX'J4\k8IFOC3δ)uHP 5=œ6mu奈oBpM\;F d W{"eYwP*}kvoҞLᳱ)pKO-fu51BW}%QoK)=ļ~rڎ>%wR Kw}fQ{Su5L}#p?IGO%XmAץ~ c: %Yn9p ^_itÛ{]|.m'$]-شXc/7wwXРc;pyh4X S_|c z]Uf hy|sJR{nc$ Z*,>yann~~̾s U&P#B$QQB17E/N  kk`Wuw[ ]/ҰUrZ҈=l;fkZ;Y+ʭ)RG4&<*扥`H  d44@T P$P '2p XlJ"JJ(Ea@u vp7Wi$(dI`ZMYtwZ6%O/]8@;cRLؓPA0xX|:c=B@@98ƚ^;C^ &#Z ’pԓ3 N&4Ώ }ďA~ =~))~Wp{3tBDJ]9@wEc}M@X?g-oIJfl"Q)C&r6F+Oݚji\P щ>; Ƙ<r _~v $h@!GqPE9Z^([S;O$ɶkw#AQe=<(Jt?'yͩȩs$92 񉼍J$~X"FIo6 ;|ʭOص4/j׎?yaa"^ĹN.:l wJ4tY .N#!g沫C-g^Z3{SB!TaT1F{o~3UFU(B~bg/#g/Q{gT5ԷGg^39ߌƩ5b9\1.uG%}DOu-[úq%lw+(ֽ8hՃTYKxG9OCE2I-V@N[^5T~ꇘTt:KYy pi03.̑ tlp?{|\$ } fA»7h{#+Lj hk;9\8ꣷw9a~?['n8l?|yWWCP+ }qH&~Ct x"j7&Lh>TX B ]Ka%.5ޝj~0{T:>`/MC7o (ӈ~s>~O=_{C=fP%U!7{QZ^  A䰱zFxVz qӺŸ9MXw. Ptl nwr ;.}Z)SrBZY{jDozyc(fzЗ'9[ȵ[ !W>QG7c^Dǀvlx)h;C:#! > iMGa؛zrim,=;|I D0^_O\+4ch8l[WantARټT!?'vf4=upWrT.CC`H2 ?^o^r!S#ًaxDq8 b|lĀRJzw^$45)'~!o81NW6 8_,^,2eOET?^M'6c< K 4șa@} t{cB?^^Ag37wc??< 2`Iį]x/1,U}%`]:M6&Bz>aUy ʢkKh}.*'\>hț =x:b K=azxl6lD,.d+(g#UVh@"^B]\կz!r.6..X8?}{b+j,0wp2hn},&q2Kqhopw Y8.{ziz~Ȟ.r1&E-a$s@#*KRfD*dLVoxR aI92S33e2iMg-U!A.Ö Ͱ~(cg42aGj`mOv *)2g7.0ɭ WM% '4lUI^eJI-K.zG $E`@4v>SKCXC!#jgUq 4_RO@ZPkk(Z5+B(W6D X[ܫUnkrCr՘P"%ܬn@.7xʚRU4vRUT9\3=,oLR)*nFbUejSg}^qV n48=irtJ܍R{jAt2H'1ֆ"J?!6O+x3NY槤,ܶ&>EiV;]W^uٞlo2Poj֩j#JŽ_3r3;eo΄ޤwq>z)$o^v3H.P?ZP!Y+#U;1)˸i2Izwc-'O!ytFIbof~id-Jѡ`J" މ!#_MŸo|`f(ԣLb P;rZO`J][&@[%HB qJ&o4QNGo߆"& f+5Apr:T8Qvം)D =JN "@GTeL8[vE^qe}Bf>X]㷯k+Cw9};c~a[*qxfzsAR`ր sbEKlR$$^d3!cY y`0vl9qzfpߗ'GŁ%oa`'QN"!SRSuHjhЎ@{v6jâв+X=H;ٍh*:ghL׬*qNv=FpҩRKp<Pˠxfno^| o):? {)t^Hץ_7/n`q%jHhbCsQ#M2<vƤ,$1*3PH0He˔e !%i% V֦تV[AAYcjfZ (o]v2cZZ3L2ޕ$"`a9Àڞ`x{g ̃UjTv7RIRRIRVnRQ̌"##2 tbYhAc x o N`2k̅<1j .X{e:,AA9]=zsv0fŰ33B*~[T5ԁ7/5=*+wFB m9.>|-. 5itTGe (ti6MrFcyZkOV|w~ >Q ΣI=_;V֟Xٛ3kVCv84ZW%P-~?NqOEXµ`RH7[7{s4ںil(u5Z](n<;xقv\vBE0Ն66cg͹y{vl3*ΐm6i%J%@(ZkzVe ۳fr5y-}ke]hY{TeBAwW'L6GF8K@myF˪d|vBaRX<6=Yl;p֘fm洭fv]]}~^ VM !Ђ"MWS:FyEPK\D):G]6?F\8]6jFDGkz)sTz%fۇ2D$BB+N!Yg;%( MXڊKEVΎå*v;>zٮ2 E$MH&8 l*kPJDM0#Sv01e&z{ɎilXhxEWfӨҢ'^lʄD])o&v=}-Wэ7h hO| n'&оB[( c\Hͤ&cLgJõ(J tA. )0 Fj= >ct8vp1U-{Cۯw5U z5c9j,p]6<]VUɮ`ŔN֋1!N@ڮ^, 0 .Ob5l+rv*N܅*ppjn^datټx?P\ ]#h%`w  v ՀEƼaqq!yCM0#ׄT]z[  :[#*sb(јSXsrA;,%{MYN2K%XAUƨ2 AIg*R {,`iy6>2ĻQk8S:NR؂>kyՌq1C;2d?/" Ub3%Zd8c&ey6_9|}8Cpke<9;Soٍɤ|Q{E^rE-W9Cx}kY}5G Hю@kǃEOA,be/XPI";u%eW*bGmQ8Hf2DUPܺb8Bǂ])8qA;\̳q"#q "Bj@Rj{<~bu=N?q°ּ 7٘ps8!,%"Xu\m!9|ș5pA{q7熪9~MyQ0Bqa'5eD+Cd#B* C6*<0=B 8$lk̸HH,`XL[2ILqr9fP0RK! 
aH( 0,eRurs?5zf[f-t[O.ocPCkeRS?z|?=]`Oo߼D^#UR7w>B&7'}m>Q6ozjP^yZ &{,rK`2x~]TXvmP^NkX4GZL}n3m9fnIϼ1c$r(ض^|1RXpjQ֌qHRk+[=$j^V`RRvV󲗵W, ]N;`J:ؾ m Dᒆ]rbc$ZQ醈L m4vl1"?OC A_"y9h~+~Fce]@Q˂Fc=_ k|v'`bZ$ I$9X as/rJBX< i-vlO1qJ1erXʄՠلeV(@;*>@)>A;8^qPSxQ bCjX} kOd-AhxZ $ m\hk.j lz]/1߮9T6 5$=yݕj!#^ LMeOnUvX3ƐwI+)xG镚1cнc+}ACT-P}_rvvG@BWx1Ajj_ť/;㫙GK5*)6r%E=<Ļg"b?ƣWsgVD JBLN(jΝٴQ)@3~9Y0#RsxwO"g?Dw.]<ݻQ'@T#=P=ӽ>q D%U?DFtti(ńvunFw>RZ-{> '2JzaRsJnp#NAj|2PG7c:eŧ86*S QF*ǯuu:86ҬSpݺ"5&S戓d4+ ]K@Ios=h`׳ZXm NrlngkI**z*+뺨7c溽>hq6:EPdzdfBrWEs-0QcrL>ŨR<R x;4˙n!X*n2J5=]wL#^鸐)K%iCj$fuXQl<G;řa|p$t%K% 9f9GWIp &KG @!v'I3m0ε+^q>+{K+[Е9T ҍzQn)U.HaC`,N\BbDZ4|Q- sĭa "b)9v8TdgL=HEZsS,qރA2 r^&84( ԹWBr&f Qp nEM7 5 ^ȵ${OƑ YgV"䉉;#ϋ7uIBd}<8nTh:ʫ2i,#͞ )~OǰRșqtWh,D֒"J)b,hho6zϳP[DNt`u+ЁBp[`6"c_PŐ_ 4$ub%E^9 e:_XT[DD[L绿UھoP chQTeHg (' t&\A;9dhO_1} ĩ" Ƴ{+s1 ̌};7&yDKB3g^8NЅzs`' Op?4]4;^]OW|G5u)?K\pg5Vt$!.E2U}=Xnذ>hR4gn{Znju} i0>ﶿL|RDnHȁhLծnUPtK@ܖۀWprP) g3OߙT^?BB\Dd o@n4h$:hd_FN"Xu !.E2U=Zi͡dBM"#ĵq&{%HXI%S9ԊZK[Am!C/EBY.S\SJ(Gg\/U8wPkB{aJiD>߮vgAѦڭ 9p-)at͡Ԉ =N/s S\$48ʕ^zcGk@ Pd `CM@SK" ttRWh G7UP;!|F &@ 齄,?P:MK3ؿ=z).wQ~#BB\Ddʐ[e&ڭMDmٝv+FvBB\DKdJR{sYi VJ&S6mQҵnň6n]HȁhL +lalzr^8 o!D!@ ~4CpKI6{5pVUH\d < m?(kxϕ@D fp{ yTR4Q.hc5}mۿfN _r"Z"SZ}$KMً'4ht;@)FqW$E@4ٴfvvCڭ vwgwF: eJ3nY_=۷?^͔\,*5VZk?\^+{+)dTmYDȯd>{-`*Eۧ #Ff\2ħ20D0xqttvuY-la;b$V|f0ڿ)^kCKZN@ s$"k=;`K^` fmƖ#EUo=X(,Ds%KZN6ަXcYb4;K:% ]vȒw R%*kN 4dli9itѪVE(.c9Cس`nн-}FGҡg?f%!;Xd c7}c;7},F3) =?|sK+Lg@ew(;}G~򙈛f2ht)n,a4jL$E>~[ܻ xսEt͡;q}8fGzc4UhaΆb2gk`P@89W7.W>E)NFs.)/+f:'bW8󑷸g_svx@OГ'0&'^#1C½ ЊL`0\ZJ>Z Onwk;e8/71F}ЌQkLUyuv';Ku[DuqP-G+9{jCCaOIƊB5sFk1 >h[/7o^FSZۍz}#K/:'ZZA^oͫA6UpuGQ{M3@"5 5hbQc&)q?Y#;oѨuqe`uZHJ)OMjBp7#QIK E%[MQ) 0G6 6Mr!VcZG~>JkO20<RAt4g|6I|)bNId<=gvĸ h׍Љ.&LץeZQ(@ASҁpW|b~6@ 7y;OG+B:2F!M^EyymnvMhڮ{qCf)Ọ=iihĚ'O)$&je=~(8#[=͡M)!j(U?(sʿ0])bZpR՟RspZJy ܽl,bP3J({ևI [E*[4] LFUF)ڤJIE΍c In(*UHj"q :3mAEtصx&g/c9&X ^Zbm-åPH-5F RI\U $RD+XjiE696ŦG=M\:,L-#s^'e}~sqcoĜ5xڌRTesQ!'9B~Ԝ+B_rlпA~]/j EHmTh_V!QB5XzIhiZ(M&Zΐ#$42JTXup 8J _)mENV_ziDǪCI ^Lig s*I{:"3$ Ke 5 Xqvq}M.lŮmhh4SQdC1B$i X p@]cz)2e0"RGbd|KT__0ʸ߆ x0ZhSq= iw+? Wg%(dk>&1g>;"Kj%#HS5L6(b<:2B&yF;b 6bWOEt1gUw'5ѽ;s:mP@zR]WO/-o4M<^V, (ؖO^R;(B><>FZEӃd"&.TT&d7Z{υ \i\&)ͭsAz" ecDgp㙴'Պ%D2{l@bj6V 1҄.0wxu19T"qDyuP$!+)A 1;'W2v&jokSp(RYB*K7[yk. *{]XO/FCw=A[.7뭭=Ͷ4lkOf?DZkIh@IJqQp/1nJP\J⢍>++"_^ M&ʸz5p:D`x7czZUcfQ(~ >r˝D&V%&T\n**-ڇGۇ.'53Qab Nz`,\9 r07I㦣3m+JFk:ijMg@UTl;Hxe@4m$름I X{ xMdG#)"N<DU9K3;!tխuGZ7ݣ,#`Ip3.$JȩY\ DB%CG$"]{o76p MIP4ILPQ&;2ͅ蠘/SpS/eXbB] /3ǂ3m :t˙coqL~Y&gg_.-*`(L;vŚ@ zmDexXZ;!4w/K&9QEŐPF3ZGo'}fJR? ?Fk*1֧ĺNFc D[h T" Z4'Ρce!M܇GjQ5_4#<'y䳨iFiʈ$VZk )M%9p0O~HYLXFpl"@%w'!<\+$$x]}@ZbEddtWܲUxG[.c*H5"cv1Y j>͒[P _I1iF-\̕ ]?F*W(|ԕASBD[.%oTLdKQ]X^+s#mW)УNojUG-(iDzfe ]D= .}{<䫵ffPBgz9_!끋^c1ܰǞ y&iY"UC`%+##32RJfBn&#e\AL1.f"+9y%P#8g{0_ xbViM5AF[׸ Mֻqg3iu ^q_Ǿ٠Lsf,Qt1d<ƾ\>:-( AiW2<.[xBl(NvPp4D;cE|XNuPp>BGmj n IAёύdmokL>W_Ԇ%Gù$y袵GE%#![ +k17~o# Jv7suC< Tw]?J+O^޾[/[Dе?M8iH8=QDQ"aX s@;%j1S?lb$$jj9 tkm{ہ+4},5x Ai}1OS`- )Ӕ5'0 l\hL3YK,D5.4Nrd`6_aHpβ/ucm;`[(ye<+My0tl<ܔNz[vJxo..or8/1kr_d5d~zY^XEj]7_\,F7 7E"><  lűH :DXCuvwAKi?$hx)z+ tǘ=GMhxMװ Lf)tDyvC^uztAlHHcvТI3iؤlu= NŸ9)>^Ӑ0/FF.f%e"F0D6u.?w›y<*a%@2IT9\{򎵫!Ah4Ƌd1sm>\瓫Kp}+%\Wao&'Qp@ޞY̒6jUca79 Jx!%y_v+I5o.QZȞCz iZ9& +SQ0ҪG8j߼"0[ A+\C4Y{{i41!6~6 El%8.#IUP߭/Y.^.M jZf;QJpVS!E/D]aWos7\3@_ ôdLYX=ʱ־GUNʂejͽ-QK_aۦr;TPmFHB7@j @FQ3&dƻ@DViAcQ1uXbÉ;.TZn/K;wLQp o'92 YLx"0}A]F>=BNk6n 2.cتapV16l*|cozU)nh|Sj[+|KZ/W?йd Dp%?~N(ey%[9ZEiCYоF铭PK[E6ohǗd* 1Rn*$ՎT Zb :a Elb[k#UT][dU 7)5+ A\߶V$׷o+SXr  cg(t>i:|6)eMHat6h0o-oTWm3R)7mo#f6|}isJ P=VFb(=6X%Fٽf0*>4V\ ΢u}D'g؀?ڼ KA<IC!&O_nq~g&8mҀbx<%2cx;~r6t έBi4HLҢ& BA(! 
ot{60SlLmbH`.aU+ޥ[F u$Ѽ:\g0H)ϙ4*ÈIеhp!'\Hh*w_f(KD^׈TZ?y gLk1!睑sdEVua^ #UѪ_i_ JSIgÁ ?6!?);kquZ\W n:@U+7Hq#FȽ$hhU;ʔ3c)+Ut:.oz]T@WeM[/Nb&r4Pc7ak_!Pkto!N즄*-G,ꩫ_|h>0b8[-ޯ> 7$C CPR9D ͯSh~B_WC@L0$BJF aL:^J{0/ /Mc3;P|oU ~rk|hMB&hv-',F+MLQI+$ǭ.4>,1:CNJiԩ܏0fH[u,4A0wV?BP7DP!={ !R /@#ۑzKC5A10`b47e K5. A}13f4Q1d8U3RGi"UB'^g"u[ \(,;|EKR$ND#wQE=SizXjt.P+B:R5F&}$b9.ace֮1]Ў&>\("|O-L$xИ-ꗘ !rx` BiàQ?18?V?UD Ae NSJ۪=5Ɉ9jCHu@]#0pQ(_]2;mcuy*&S;mUbI{w7G|`$ dbABCtf[0&í^CΤ Ȕ+gDMt+pȰ m,XDls*^[pOmW,FH4e#@ 6&`cRwTx=HAQM BI7nX ?u[ 砼 J0H$SFG w̃M7i6L8GƀFh\|~oIa,҄*g#VWP(`) P!uZt܎J g΀pRs8Fib`cr $r*yHM}CiØ<|PA=h,̸3"< GT,@ xw4۷L|71Y>,u/\^b (_71]d04l\\N5l͜3*t77+Ӹ QkdN#Cw:QxGT2#CCSëMk{Մ,۲"m]mz~:e;<\|%yYЮ_ShQBs ia̋vJhg)0`bX9Hb8_̮q_Ka!dܲB, 㲲{K0B+W_8JP zʢxēSĤ?D#iiJ6$h֞E-Iׇhf,Dcu45BjiBlSnj+S8շ|O?yR塋H|ue׽٧dRDt)~yEY'r釋R'VGhߕ+L&)ӱU ,1NFM}YM&Շfz{HeW(eeѹBY]rS3_HPD4"pK;ɵQ((2q:~!O))O J1aDŽ˲mٽP#ʲ:5᭺mS.{5w<-BH>H*~C醝Ǣbm͒Peh.jC*çFkk=t8&_ރ̲4="lxq>BUeUkh8PF8NK5MS&] 489~ 3qJ:|1ɓi¯hg_uu-Iπ.缬>1C,dgRC-ʊA~FSO:[vJ)<QP*̼hYS9hY ^CZe6{/s]{Oϻ.R.F`nvEǸl":7v1ǿe+˪[@ejLߎA>4_ʥYq5Z+IX}qXa%Ǻ"VZ6R g3iߎMx(la>}jc 6xGw/8kk$lW?\lґrVm_3wjVi.y]*(WFd "'Yloā חn䤾֘"Caoۆۉvth3j 7 7ef~ҷUV(Ƨόroש.^*ݣ:K63^]ڭnRsG{n(y9NTuhDceUEHybQrݘ-dG4o(BsG AAN_XȎ=(yVؒdϥ9GIbF"XZGyɣd7-X,Bʳ '.KVvz#ɢ˞5 X(aFK,L\QrBMvڏ_8'?F(Ѝ։։c 2s.j9斣9p˘[{ 6>G!GnͧO;Q$ycjVtda"eppѶ퟼>IZ|mGm7[C5$uS_A‰ZaZv$E=kV0$.kTzekDoUT>DxC7+s!%^U\C$nx%*2,^}sAoJf m8]C]@֤&]I DBew6*h7;?{هt e )Z;7Pwr_zj2=ݙ I`W,F Yx|[wo]$I𺣬wp?wm-o_/-;!t$ 3BȽНwڂ+T+B[ ΂ZwF[S_K؏^!mo?gs[׵ZhsN^ymT*]I]Vq Z.ShvQMIv7"l.\?tb9Ŷ,7rym1#lO=͵"qc%ǽw< ĥ=s] D] λl728BKB_y6ЅgJ$]8Y5nN,+_M/f x3w0"OvLȥ3%F$y $$k fY5e!n|oLk8βx`[3F0a^{><,>pOןsZ2f^>34)"Q3Y|ijGQ)r!H8cc֐`4Z҄%QDFcN%B A92r+mDPx!=‚"f[K2Ga'Ed4TA`ZCnb R'~~rҚ\?? MPF i@s1f ,i"> ƅ$~ N-@1O c=g3엽^1 #@g&20N ,kF9u>U6h0SVy}m ! sMH',VVZj\is wz;}(>+~}5;ICi x^vrE'9uidh)qǙSXA0|& CϸANJX=$S%ѩu 3 $i<%|@we`gp+cPsS\珑ߩא{d$R^2Rp01 u<&Q F2ЃD+_3J4"5vD9آV\==<,G$ؠx!F[Oq)PRg8V£H|mA.W{XN"J*^,&܍fO$d^, &v^!Iyv<6F*S ?%z(j%y` ]eۘ<,܍4R5q%ps ղgW.ٶ,??JdC%\zrrTNWp-%SGY7mu+A).p[4mW% Z*$䅋h">k]n6XmbWԫۺ3(hݪ.dJW֭bPDtʶu;\2r*ec֭^AVp-%S fiI %)3=FT5h̴ y"ZF(᭟-[nݶŠmcvin7z֭@Z*$䅋h~V}ݺe6ne1(":eźnD[nUH R2%XW߬Y7e+ne1(":eźnEU9-jݪ.ed5Joݺ1 V"S]!W^JPc֭@Z*$䅋h)vAenw[Y&H-mD !ܭV V a9m[݊nEY :k%U[}T&;=iLhR[R&h^nUC MY2⨻nRMCLnBM0ztg`Hd7ͬU F w3kZ<5qnfY@0Vf0Yf֪!ͬ6Y{ 3kk&nf CĻnfJM T݉kU7ͬU bߓ8infY@ 7F5Z7V&hŏofעv3k/`fM4['j~]4 Л(=wxr1wZj`֠ůD_:HOM/-t1ゃ'\`q5@v%xTGFwC^ dʔ©p 8gHp  H!K!ՖygcRMXk9wc0VE2idj-fkʂ H!zaD2l;Ni\ 0+,"d꜑GNƩ ſ{ _t[d=ݿy~g7{i~I^ ̏׫|V(15-Ӊ`̾0Ɔ/mn y۠oore77"Kpwo~A2WݯYi~Nbj}v`vz?|o͍?[>Ok &V }%!&a<~`8K/} ч^I!zEVcmI!nܨS;eG2wqwX%~5Cy2a KLSӫh\mH^W:"s\!,D eJ1HzyTHx&eqY!d4N޼.z1[|2%~fOssv48ӓf|o 灛/bd :GedUV#Aj6,L7fdsXF͊-Y|tt/MU:*sZUx0os&>v6orE4].Ƌ-՘%赑2K$*Gvë %?{WǑJ/^l5Hzر/yIͦMɲMXI$4#Ȉ8bJp(]R@7 * RwE)T!!16_ڨ=Zrԣ0z AgѺ6q5jjnmHDɵ]$=oVoS(lG4nK2Gn/!IwIe{7ȲBs*R/&59c! 
)~S+JN D !%o%3JFɔ 9yA**=BX9vJI Sc$URqxW?w%zxtE(Ƀm╞v*oӏGNȤz^)δ.fwQwi:Oӹx/.WfX3 HD'U l4˂"AB9m MM1PD'OTI(.δ<.BM찼FƑePeyΪ(%% T\ ZQ1jK@TKs5"6W"v[!$j4ikYm*`eQc C[.;P) @/6keu.ٙE(TG TH} RZRmURz/`:1[fkMmft7)LĔ-ы-V:驷īPF7{FѸ"ؒsgdt0[3a4b632%Zّ;O6/ =g~Pif~t5{܏ȭZ+5A_@8$&| HTjA:AF&=ٛՐ ]j·ѯ Y#iahއNڞ~۳Ջ6z_׿g/o-@RBp7H]_Jj0KGA7ǫuxcRHID qUa0h~5I|871Eh/-.h?+q$oMM޶Z/?w*`&D(/f}bݵڜywXƧC:k]Rt@<^,V|Q(pj*z>^,Q-+O =|1)7GTuن2y+lUJV7 mDF8"|{ TW+o]"e;Z~ݵS\= zV=-!4 s>T%skL۩_"ޗ]|r%1Bkajg}w_֋f6cZ * c90D$BF 縻y~ko@!)#ӓq(z4d|@K(чLLS;_f#׼yKß9"--A: 1u`4R)$7.KU: >Hʔt;e* ٘I_)b^.y$QѢ?Wԗ%!::m{m$~64Tۊq䡩Oہ<<mSZ~'SbkԀ1{ =Эgw85Bk%pj85@y|VXQ Mkh=V7%8,؃Y>]م?6eUw(l߱$?m2V'={c֤/U *h'$''۩kߋe@8ԌUt;Ņ)R{ ooC֋8oCP=|n}wmΕ=q-t&@ڳm8F6s}؆Wyn-bhn7=}DK鏏rjЗ˛%EWonD5Dq!qW8x&bŁ9 IX=3c&'A \p7M:iu>Y1%g3_g@JWjXl r2FB;9ryYN@%k)A!,u#Lq`mZ9eս3w9u`,[C[%ꤎuЯP===_1)OHcpTYն9fv'7񾙧'UT~V%B>cˮWƱyl;O >v[.~}%7B2^Z(p}vUDYca݄'Jc7"^J^Q\-:!HTH׌ː`}3cUAZ9\w8a*~BU0MX1Jx joVLOyn3Tu"@KWq1X&ZJ]Nv9R$aLoWk>_?"q_8$dvt+S5*Ln TG.]%Gݹ*L3}ӬVʯ-69ךFq褐徥dAy2:PZ3V4U@@ cN.f˟]{t Q (]"F8ЎQEkP͉יeBlZN7|VsWy|Z.~=gщZsNatO_QRw #jp0bufgڻ3 %KCa BRJ]Oք8gB;%̀m,pvH02 LCXԒM_YЁ9^?P'x-hj1YMjM!.)V$4txُBm?+U1oN5Fiej,xnK8M)U4JS%YmE QJJ61`VSx@# ŝdgۉZgɟBo(kBdÄ8$]!=&I9?9Im9`'2 A$?7)S nKg%dRQm97.99S"I µ Mi9sQ@:-=6 p%kmͭV:IT̪OAI`Q *[c,H*$4ƵL>g?ΝfqI6ٜ=6ߥEoBIN^ *R||!;FȸgLd3 l ׹ѵDBUeIYI(q>H- AP#xta falh@PNxh0zZpOM"0,1x-oH "Ҏ|B7NP6ȇ_MoP{QRi9jWcɢ拱MC$ VC@ G!:1xeBk6k?R|1vXTj ,Hc Lh3PF45&%%P{`)WQaS%%I9ډ@if>S36)cfT@%87\vSt aFXrdlD HMA %Ƒz1H -xXx~|R+̗P{g>gZl{GgT3Dt{GIYCE?D4;Ӏ5mәaʇ-Sǘze8Y*31A߀mӿ1HiNt)*V:~L`8nP&r~Gqq٨Ztbѓ^!@#ց"Ɩn FzmȾtkxx}+[Ű/oyUiB]7CmT$k;YdQ2.lHa HuiCd`5$fު˚);{dysD{ Qr_m 2Hvhd#oyśKDbla%24ͺIc'$#}n3xחI*&r}`4P!\DJ#J_9Bz4]]Pٖ(~j2D^E8sZX׷ɷXE5:}jT.34tSVMsGG!—gŇsbLK ElKY^^m]/?bq, BBgR؀/$7d;%9k9TK '1!G`w|W=: '>f!}`MΆ<8 WaI諂xbY7#zg.4j!PGlr)KIQ"IgJOP W @ZwtAȵ "8*S,53Ѳ]Ys6+*L͝}q]ߗNTjte H[l)MR(J@nwm",888aN22AsA/e( 6M$f@J)_(јb1i2E<y*R ? 壮3K`9bXg;%UN-u z)h3f>yAB^ycTsv`Q@`w}y'L lB `1Yl5ˍ&fn֚ΰQ2T\`8tK򉿂[8uDX ` | , ^[,]k̏ YbY+$Q&>3Â5v(@0V {F6MR)Ɣmy S3{YżM]//-r%7{&DxW* aJʞd'tՕkPD*5,swf71O>>Bgf>SNo*OV};o'fђ@J^psݯU$x8Z|>}pS'rPS TXXW=(e7xM'rMcwxE=[9ժAyp6f3c5`|sD`GHyWW3uj-:o5v?Gv\h_y!!?koS wJ.`C} gt* p J!Y2gmRkNncz >>ll~D{_scqA܏fOR3#(Gz'\e|f4XqΌx[~wnOT$TE^J~wjs}B]R_pk:}3;g-K<%$Cs$dw虐 ])rף^JHFEOyoTܦWsz(GP%.+PgqML\gF*wfq"JƱBS\Z5Auy3{ >_r$ar0?\k}).<~;{HtI%Z_Pg\sqj,RI/=j3I#LV$,fRuo)nsaq9Lq iI&:*NRsK5P"I_Z*R)[kG ` AZdnnv |h^rzKApLrLaf8As\jUJ1"TaK5T QL$z V):]pD*L0X3yJ͉@gZ׽TJ*k|2sȕ ߧV8FcQ`O~$Kx1/==,Z]|dM֛j83^=N?-QZU(\]:˱ǁǾF! rlnx"9SE!2*T920\<{R4chQrѥjM$FT ټg ty/eqTz((+E!E̎cbýӛ(a>etBrPz8> 8r>A)RJJN0=3j/{W1j9"R իY,8c؀rmDuhj4\: \:b-)#_5Kՠ!]K׹P c&ҟkżȈ^]M^ ٩V܈L0JMT R}߀}?`Z mDj8_ %bIwEeu%ۃ(I.@g^G )歽%E ,`LJx_֮y"2R{b7\b0T E^-79r̗CيSH=AC??wzu}jGleeYv;/fPԷq(cg`LZWu²SGz#0S{6*$d(A`UNZq?ƲF\؞HnA9ke/-mmJQ ,TL 6RzTgŸV{ĿԷII0c$ v+)})NQIVU{Cĺ(!a-]8Top3.0kz}S>j..+6:|hgQRwE @6Mɝ.Q&w7:|һץVŠ<KF) fɢ R }8A( ٳY[Z-P^q}r-B&9,ai-1KP]gpLN6߻ 8jEȕ nk HH`I˭WT%@l9{9ƕSuY{1zXdl^>/`1/t$΀w׵9Po>ΧKjպϼt!qtqXL./1C+%,=v>6Z ǹw L$OQ K܉`o'dϑ%δkLXNm-/P8ʌLh8$ЬJW#RQ#;ګ-x "D4-WpzY4\k2^cW"Hqb'JeNb˅dX uEEHuѺW;S +Թv=d9zE(\}w G뫁:iRH;BaI,UIuδ2c,Բ9dl 4pg(P¥p!10*|fS4DBh)黨 /ovP &K1BRZ2w "6< \C8U$}+) cRkb}"Eu)l-_5ZfZ}!uTP85(7(aDPNfRTdD2DI>!TI[+ҊP¼SW_-H+4O!OQ?}!W`@}W\ RV2I׃.hq+%z1rNsA͉^@Jː,R&.tfBrD~T[(`&ᗚiw(cB ] @4d$ŰHzEJ-3ϲ]ҺDaYM1*~6[lÌNnyE㟫Ze*ul?] `bء"|塪{ ֒yhҽw8As\3l䶰p2Ooa*7S ϖK/XLۦ5Ӄ~fz9tKӛPK0x , ʧ{iԩE5ꭗuiV8BVjwnt83 ]J{>tv`J1%Cı z7X2t8 0U^{s_fh^zk7# K*%m.3)c(&SKp1љŜPp&:\H+ǫFƣ3ܜ28npDҺdFmtn'-<|?Dwy=z TgBgM^;{),Rm A+2JXEWF4F-0DQ% wo;2( L:mn^\Y]vO=٢B*%Y40agGoi7&+7'nNM.i&k߅+P!_8FGŹ8A9v ubhNy޴[BC[Dr$p&BÈ4gDaI\eSmئ&9BG˅ aacri +-. 
-ajeUVpXѦ^Y熈 N MQJhZmzfɯjuq&ƒl^??9kBZm{T ϲh%; 0E ''K=*(rvDBLH)复%ʑ!Oie's^ʮn+KVs\t>98 ~ic-ML1VI6遟GLXkif?UEIH/NW -)\6mEt gTb+(LR8Y&YQc,j%_4du:;߄L."܇(W&2J]0KIb +L6\W*T yaDR#/D`׾f0z"Aj稐=+$og.ӛ_h.~'e_ŠN_fT R!ܖam[<`#I_vXř4ai<%[#!/)*J4 Ȫ/"Ȉ1oֹ H.?-h/K W|7kCmynI@o6+2[.Y~>1݈N嶿tΒIbʎ˧r}8AVc>~ʗMd\S|rz$ GJ#VҒ@c iQ-.xO ;繖 sw4|ּPO? ƾ9/Lf-˾?gfެ9Byx7ڂV4B$jC&xXvQn'+V׵;+.H> 슶,]~^~477QbuYܻ_ -It{=-\Bp)ٜҢ5]D_/cGn~wk7}Z,eZzٽp[A)IBОGsX^~SGx֬y*qϧ=d 53zfԷaJa$7C,k&* p2#F(4R`I5}{dJ+/ҪŃԋ=(ypet3#M x3K1bGO7y~[TbbF~Ox57U󎯁Wهz^U=x-*^[a؏WQ1՝ZױAS_0ˡΙϦcijQŘQ³7;plv.yN,0ER_0ҿ{1[640e߯o(;Ζ!W9+3~y?+`ٿܸ,&_O&]k+P m7TRX\%ϔRk5?|3ҏQs8>iaBأ{-ؒ=u5mgƙ<ۈV' ,6NGc 茎p)@O S<ʈ% SQݲq0SRB̴qb6JG^!H-#D I1<#W2dtGk,FO ]<$. @z0PB`ZǔIʙثG},-uAd_^wyrR_J Vz `s<݀qMq^g0h+w^1Xe# N*ti~XL.DE ;JЋvhj҆hf}H_'GyxLdf /NV(?<[-cbMF9'U }MU%t=]L߳NHxǟ?^Y.ZpϔT֔#CI)춦:%՜*[; QXƊ=vq\%*nR8#ǂ*aX.PF`cedg*Z"Aټ2X2}q*EJ!+R+*d$9)B$7ۃ$V7.'V+~uZ3tZH97')k)˙KzhI@6$(pѪR0OL+6 ݦ̘JXKS +Rs 109 eND ȅY\-rvL}o/3c|TB჆BD:;L)Heȼj-\Qe)cmRtmpK u˕|R7#2eJv4/xwFד>no@l n^+1p2T q&ſwQp-݂ejm{{^ ww ӫ<됺ï9oQ>NxӅɗo'+Q3aɑaIJF`ScY4z5źk&Q61L6 ̣xd5k:s̛xËV/A') Nv>O;`ɟ٬Jf;%L.q::K,oBIǖqO@Je/"íѲ2@[Ⴑ] @WxQQ`T6[n1KM^XTĭh1duI\!Iɭ QΰQNu3lϰ rXVPMmƍS5ffL-d[.(k!JfB:fzV*7éiU8w_Ƹ! ACpH"$_ Xw"*0X,SsV,A[$`-qLg (sHH΋զ֊ Ѝj۾]㮄NU ͕Ŕi#5]\> $p)"^`^2GOƎ .!$>?r wٳD'_&z0zR@7Ȍ]> ւKn.[wd<y;C :st\VLZu@t628vwy ?mLka?g/D\[ kHU#עQao1dDEJ QzbiRmKA1AL(w oOd.a;TL}uoe ]M?;L6~8okᑋEXAB J-[Dp }H d| k(46εB'nD4n$l23N;b\ȼs8=[@ 4[] z`X̀`>Y`I1L֌t~Fŕ9ƛR?d4GVHΈoZ(wP4p^Yz?toZi;;NOn_A.i"u A/T{ T[xv-{dwBbH0~/.|ş6 bF՗2dŏ_rz2pob]v7$C7lgeN‡1Fv:zuK}pFD7rdHUrG{D䥑YA ,2ZvD!嶡f_Ɩ}$0Ȍҵ uk9ekUC{ b7'L-:,Efg! 0Y7|o̸V J˭&ee[T )i53;IXkFI~n[QfhF^M-HZeUՇSDAc]($#pA搑u ~$c< +lSӑ|>e4Qa%/= #By2epčAI [cѵcL dh5d)R#tV,nu 郾дRO.$IeÚfl_MӜ OMg`FM7<ց4r0dծybs 5(e*2w̯>M:zGϏW wޞ)rW7ϒ5b2)c KwNX ཀྵȭw=laRqK5G~Vj5x%ٳ޵ƕ#"%@" %Y' Ny ^֥ew 濧$K}.Mem5YucU;CxNP:}NAghSlU#{f-M}6w#:?AK'7́,S 3kl|qut+ޛ.Q.WqYjdV:ςnD=W`Wv{l`5݉^*U> 0'NV9 jDsbJ $GqGd uSEzPt|y A"N|MimLGmA2̝x>:6^/VWElHꚄz ;:x=]> QZ ^/͙|big +8qdr8[߆64amBy4>6&H? _~LN'ì5/bs s BRJ<\#_%(+$ w5;(g ݃5 b-+ 5~M TIvN??#3LQtHS[n-#}v=Cn`Ov)n4qk)(#dshApDiE9{E=TW֨YmJ9L!HXu=@b׼5ҨȒjINd.k̈́TVx H[is/uvcQ ksxݣȶ]ou8w!$%BKMN}nQvE*Vr ^c)j@iY4Y'WK/MZ4AniEH%x2O&r3klJ{a•c2spE(%zEr`xe. 
W M_bŎ/VlWx=EypF.tҬBnQM AuZљOi5)S8: )Bz~ ek-굶XZ/g*H/o,V`?GwG+a1m3xZW]?mNQػyQv?ss]pA,51<-'_n}kٴ32{W|/vXӸEk監IZZ޹Y-ZOY].64|,.o'‘WF)[)/f??Ҝ_juzwD@'v!-S%kDY Gs*x*SsKt(0X(َkyM}1f1K @U֑H9aAr G=[Jo*f(fʠey}fu ]h'5\cO[@.AI=I9縫u!g2]n&N7m0i)ӎʝܳZ<_4)4~3)M¦ Wԁ(t,a¼QfX]E#凐'nGJƆ#t7Hf"j_΂Q?hg#cm1c+`\KXXϴq|galaNvQsn'C4&."hE`j #97hӡ*wklmdnvX kEgEIHjQ +h+=6yi{Fҗ̧ Z_۱vi)p'%|>IwUz<+߲ҝeP*pH3jeػG҉˗ 1titt3% T?-~6Gq d@z 'mVcX*vAR)DsĜGl- ]m[.[U2AMvUn$}FҳÁV*a DZql)VPG` ( h drL.3+S XR&r9O~ClA6I9n4"xPgomm hpq"45kYb@t;nݹV^&p; 76 M4L 7pGS[p9#Dkx?gFeF,Z 5j~aݷf|DX>&Oy0 P?ho '(MyɊdN*mtt++Q]~}spG`fmA+!i5\:XuzM+zp:sz2Ͱm^>6x[c BV oͱ U;dD @0D Ya=xcFwk߭c}6GTY Yp CstadD_"ZΗDŽc΂5Ex)j0=0$eFr);4־JpJE4jO!t N GhN^(+OIZ$.z TPtETڟ6nn==sh:^A4cg܂xռZK~3_>~ਹͳ` 59aBrFY!;AM qVa"_6hpyp6-֚nׯ2lN>' ʖSIA(0jdCXl.?xSK$TƓP""*ݏy^T4r[d1*aAr042lԙj9# VW4eY L2 k:YMG_kZ;b{;u´m{S ӭwL}Oy0ovm3x gavyE.jSx(qm.Mf!>&b59tvq;@vP;P떑67maﴒj2h##I+KJEQe$STF2uln2Etgj֋߼v㶋Z `PlR؂gf>M odR`IZqcto$h%[Z..vT?%mu\([w E Nºc);:c 6Ao/#44!tA7R:xK#R޵7q,BP]:( p$u*vfgfb!aIc$X=0[r$ӯ_OOCpP;[ʫTIʫ^^:(y[:^"VӸx`򶅵J0ŔOxvY~ [C-Re0g aoD `hcx%r8WȊU[!+^A:(bch@= ̆ '0WJb1r DX+jK)5@V.`ki37:tTtT ^:C;KA44i&Ф MNX؊3o:ó,l@d냱}Pst{g@]pG ~90ԇ[҄n4W+ߌMsxs>7N{{o59=oy|p?滽4[4ד= w{ >in~o;~)ӱ|:1y>o=؋ @~ۺ_xֿ۶ms_WH1Bwou2K0'!Jθw>1&&i<х 2_ooS 0;vI/?2ó+ *>ή͹_ w\]{crl coh_ۑ'L}?v͎Z5^I%5'FMHMg?lسa4F5կ J/s(KK恒wR:>: M?A=lξ듐 wN0.y~;eʅ3 gkvw?ѝӐM)sLcF^:G%ۃTˤڛ6ko`GTp <ۿE*utn$z ~"nF v懛L׋췛=kχ}ОƟo}w^߾w#]jiw/^OǗݽowǽވ= ?O\~dJ:߷4tFZ߻H/1 &0Y;{{}u߾ Ad=|صkz'XRZ3Vq ugGo?ˈB2Kpt wK'2.5N[nw'_yOnEunɖ2*B CsF^Y_PLTrod> E |j_\?")Kn~;=|OLo[vj;/+ͰЖdm6$ LHcg"T3 mCMTQ8J3jPÄ&0 5La`BB V,eFRD$QRb}ǜC { %$T e q1M6pAyLML1BR]8x$G7T-sB<9YrJQ \h8YU[a11HhĐX4m2Gx,u`RL]@԰ {i^zpH]{Q!ֻ͓T-/F7ھ}xqNQAO|M[ꉄ nF\fP#ÚJߚ_QcfW &KI2&`2{Tv~Dɧ=# R=6L{l.x6ySĀ;)Oj8pxv!Gh-)ٛb9^U( TmJ8w0i0 KPDMl$+b4uV$!qaׁhϮ>:^@{h=yօS:֓qQG!xeq y*QynuNSőXl=/fu)@)a|*$]|}y ;! j! sn 1)%RQ(%RQpUε(B۠mԃB=(ԃB=([.VPBaLjJ:t1E\'/rjņ"mΒq,o$ ǔAI&Ppu:YGw N(_V%"$ IإU9ĞdA<[m:SlLjFojFoGkV˭ \yqGPQ/ PQ/Vթb *HQWL NpkCLlLʐ $Xs9"q'Gi18LƘzƙ4XoaF_I]}WqEDžP(/ P(/%KU"R2%:Lw] FN.e [V(ޜjQ(LQ(Le S!nRΕKj,C/3+۸qJ.p, Sl\vV#Co333-dsAaH,*1BRX\Z7L1$N६":+Mc&*IWK#wjzGwjzGٚy(\ vȨCWWAܸnETHPEL΃+>ÔTH_j' <^*T2v5"p`438JCB\;VEyhh@ `c%I<=t ˼f hfP.FXiPs @|͙if&hZs8É;$dO46L:3o읇]F̋+dRu#9 j&GBݐ a88G-ynjM2"Jh$"XSUQ4*tWX9!~Qeۘg܉g`s4.i(Ӣw95Y-F4Ս(I7$lvsp߈|ZY9UX<OYԵ*XzIlb+e=PV!S` b2GN{::'ѻDsU}VpÎbĴ>+t؜c` zs7`7'ٞd4:׳|C*. "gKȧ|l Sk&1Y( a$YI,.T;ﱩ9&Xٻ r& -3VQ*M\VqrX2O,`P$`9Kv vڈJK0F2Ӿ]RnVOYs+Z

9/B< H8W"͒rX/I Xra`rWQ=E$ΞN?p?a2, _lB/K5KK5{TLŽ/+%VGQhVCI K84E1SmWBwo'i Z AaμeI0QTIQ&4 ;%Qd73/Q^re<ϞuCE4=[fX|<\>Ģ~/..F˚>KKsO'Hk!ʶj~dzEWKX3$ Vk#Nb&WYarrb>49'@t'tGwqoaO/l6;mWawqTpY6-n{W39t"09#'Ycj.i81QI55!H%*6S{?^|u}u__br 6p2UtxWwȗ+A2D:M<)rR+'W#.Rj"PJb(@c(O9Lu'|s9h 4͟7L) q tq- uc"왞B X@u8,xPN%gƫ ?XAsÀ AѧI"N^9"qh `M KrY;ňg( ۈѦGL#H=Ro#Ov' -ݙe2ï[#19oI/#EBE8qk>3);ٳYH]BO3[1_i0FI)4v1 gT"b s;˅D@ٖX'YmjS~\ujSF?C?3£ciI4}y$Ys9tĈst諎BKNSw >);Y@sNOv{oE;m.i9Jx)ἳDO'h1b(eb!" WŐj &`eb DCF-VG:tQ.SBX>ס:tQ.EX^bEYbЉ<:|a ,6aeF + jER`ыrBy^ s`O'@5Эn ttg XJ%O" a.!,޵uѿ"CѦ>fgv aE$6_"اE[~$E{g)ˢ+e+&H^.Μ9gqU4VCbJC! 96|/4*^9wsGGY-N{fف%-.bJ+H&Qyqh¤XOK#YCn0;a_MoNN yrһm-#wh#N%՞`L 6|O(jF4~sC>yVp݊w#מ/oVޮSqEZ<)t1f)kCMZ*W,ru*H\ LQ7 Dn8;A_=wv'hwv'hw"A.=w+~M1hJ׵vTSOiGg?$-k5?V hEPN߶v/R|0ꅰv"#eQi,XE%ksa ׆1ACy~ H#ꬉ:uڑ5])RY+OG h/*^Σ;MNm_;Y&,=vH gm,,轲\Cls.:2*yP؞duP:n\&}_o2Ůb%U,>|X*nvWyv]cAo"DKlqZP;tUTV.}Z@0[;g_/:Vtwwt"h|Ƶ՜)U8O (^ؚzlj0i}ZBl:;?_y`vgvgnN,1g@eܫ0L笭C]UY('@Cp/`v-_UКT6f?\-Znݚֵ`~d7u#Ci2m5`8ۣ㽟>z3 ;n{=ԓk{@:c79}/z7_(N$8K/,ҳ-V٤Ig٪WQƴK>'mA' $Pm&MLIos1[7UU'_U*33)3S7obFuGH٥<7Wc<}Ã{郇şξ}\ӓY~/ፍ>i4&"o!ll |Y~;LϽ*̮VY,eե*wkKr _BҪ= ַW/^~sM<֗Ov[. \,/V}O1t/^>*mP@9R=HA*%!' !2 6˹>QY`/*^_]m_]8;.XәY?5Yͣ{J^B;Hu`RX)igšAg(G’˜ T!(  )jvfg*DYNbn>ֶ.+bw3-dtL}-$ w1 iAi y^ NMNj).A:uObp"6~qMgOԝp~0NEs0^w!!R1Q1j%n@콸PYy7%_8*4XJj`J֠0>ľ-ׁd|_ĢQ$le7m{EjJfg w8.%1x%1 eZ2@gґ Cуөr6,3i2$$UB&!^KjV,7x!UH˯&)/C„:v,w`"ѢIt(Ԑ1%rp\tƔ4xUH ;.-W&}Ьi o s?-6fF(so^,0Ayʺ/d-:;Jv ^%&+1K)EtQ}5¾jBѸ"G 5ָ#gM{/Vvi7l6dk 'QH^8HmБdS\YB)RZ+FQUT1K,J&I$Fh;eR@ش$/l~H1pR(MLRuŶaEƨhɨR% 4i%1&C1jp5)T" \.ER62U ?F3FkOϊLJ~QmIl5В]_̊Z7.G}7k5‡ Ǐ/طf5+tCqC-{u|_ 2݇_[-r=M}^JTŷ^+;Y +sf0N>O>֜#!eJ^+w ?L֭΁$U]hȉ5Xa{4)0(8Me8qR%nt/db!Y4P֎B9: kBRJ$+IEYC꩔jqBE %\16UL- N9 $;"ZޠW1xqr!řvL^W#_d"(Y&V*j貳pSJE0͹za%̎$ n-Pz.Y(Ɠ<_A[# }zXCBHƅ$:B|I"\y8sE!8@9S ѹ@Nƶ\(qD)GA#GIqj>`ְ}63Kt!p[K0H{ E;]"X`*+ecH&)Fϔ1h05IHȉsT9\gpot 5u~@)dy$V͐!Y*Ĝ%?5S\K*TWR[ә)-!-lEB PW1*&])a?Af&-Pu~E>Nňɯ1K{y1ljFZK@ya]"@_40JX+éUmIFLڗΘSa!&{#?kL>wã\LpՒҁf᱒햇ǣ񥱱57hwɆK!(JvD{iΆ 3V)"Bڔꋄ"]QcȌv $ΤH`ݏ]]!)oD;%aK$}V"9[];-HڲKTH:e^.gg0hpܴ=mU콃S@P;IJo"l~PYd+J(EJ8!X$h"%a`ҵRlceJHlŮ@ܾCA}m k-E5(֑=|Q3ت,7dFLFvnP!Y0܂q[W3=dg<0 .$mONyZ,\ ri>T(Zՠl]D.k1 vlcgeLfüֈ:8띀r&y`R`LI$_*SӊX[o[ԟ-Ksښ+:ֶ@V+%%cZ d[ГWXFk{X9XHm_$C lIg6uj㽉یdvt$A:&F2l?qA/Ҷ'qc+q$nEJ-q?C "9$b|B# ml!)Qnv ޤ>cqRf8Ƌ7B0IV-cfg(xhUEpu0lӦu2AA)6m kU ^(gr +u6IXs‘,0Z]n3,J1i=$wK&awBs=n0&?vME;4&][u;\cD?THW-}2Im *[$6f"bI,HI}iK2m{ڴ2⺋vY J~isGn;ΐ$^mǽHftvR0f#ޜLOOl|Oyj0cZݼjkImHB {FA/9\;wdzQU~J+FsnցoK<8:zy\hv jr~M$o.DK9O{’w$E7]ܾB32 9%(aHTnuceg.vsv숝G_h2=Iy*NIًlYo`0Zv^,$/g0&Ѥlc~yա7|POvWk2+ѣAe`5^7Yl%j@'V[:2a 6mow _^"J_ikcfO2M)&N%T%w,|! `7>Xu^ąM|olQkq)(89‰ w ; umWh֝_1-vȧSwz+|gŲK$Q/a}+*~Ύ=mzn$A3w:B׫zUNuvZFL w/'G"D1*cٻ8r$W ~d+Hc{tc1ѻO^ x5%u%*UeyTO[Ne2 _4ZV7sH1z {|g?{A7]l>6E]W~SޯdZv+ O>->;"sR[&yVm(ĉ6a9|Wcz{M0b &ӽB[}9qf3/#@y}vS{JmXg8'B/*P+wQELH = 뀡1DAK$Sr鴓;S{iqݴU1A:d UI}1G7]7=H>s$=}:TKej_G?j_zcX:e/Z&h\3Ϡwu*T^*qNcSTO̎"wik'FcO /`߱t_G/x%rk:O͵E}f͌ixnߪ[1hՎ[ě,d[ߞ5L {zȌҵ/1!wƠ< 1ۏ8Ɵ_m,ꢊ5"q/e㼵MP!JLE)j6@6RCUX3h؂vJt3}UhUD@;X$rOr>Mᜭ:Ѳ4B6h,8 ! U^IԔɺtN(t[S;v&] 8&] y1W תt"F`r\d2@Z|)аO_qu׻t4䶫&>7o4,`Ԓ4iإ)\A T ֹ^}R)br ,rW%z%"T τ a$Az?JRJo+d j dEXѡ%*EQd\N_D[!OXyM&)]=;QX}{U"xH]. 9H_(c]L&@!,+Z p-\N8p[T.7U;CI7yޯmH1JFf4UdQt^uܔ.VȔ-1Z o&b RH4RIG S !9NB(eHb=\)[,PS[(ҏ%kF}H|QD +J6q}H1φ %|չhe324zN:%^Kɕ*lV)K @KAhc!qwO-R#&b(6xi }pi2'Hr(<%8㾛_0q_m. 
X-'QYr^a#ZqG@>wV;-^X%xU ݽj[IFpzec>u|mbi4uJ[ 6oV̀$|}viAC=δ<@s>zNoM:H5zmy':{RJ`7a30W%o^; ?=I.}^_۵s^SxM.=ŅjXҺ|]FB϶qgCڼZ >8Lo2ݟSEXC/_>&c뵻މ֔o6o~kfgO?_*^8(SB%C"HB9&[fPH6B_M A|*-Py7m_S.*"5:3*PȋdTҀ$o> 3&jyKū<4-6@y>NH2)^{9f%#+R#qwր^ZKIMF&8,\+0XER9Ul&kJǪJ&G ['!gO&B`(ݴR J2v8JN{Av:l y)ߝ_r鲲N_բBJ*M҂ۿ[rC_j(w:+OlݶSHh{ 6T8gҳ@%`<%([L^366۷n# iկ#I12lu[K9}7BfsuφVC IC(iORY<{a0p8.p=3=]kԧj칺`|0@{'wEޯ#.a?#.'?ӥ5sݑL/7@0X=݆cPuh:E_ jKzc`=4{`JW޾J5 "c4#mVRA,+b`bS=“[WgME_֥+V=yH[t#&ݛ$͛g6ϟĥs!= ]_>{ȫpؤ'^g.暍6n.k/(YЂV=55~ڻ$.YU)%`t~Ɏ"RϘh1P9=%uo``y\Gkm-C1nv(HFplHaG \l0 #( #dP%{WrXV+o:l笰Jʁw` )tB3,\wY_:D4`3vۖdJfʈ>@2 dNiQ'-߯#v`y`= ?[d|Ι=D2ŬsI6NmN~a1IV3TJL=XNRv7>g[⑆S/ڛ|$&v)oUI`4SX.rqO/?ߜ.n!?.n0[1 9sO>xjF?h3 ߞf̿"P𧴺,Ýge=:˼:~,sb[j1E!zaB>>rvnȌU,@%@O%RdSN;vS+ <6 r1WèLЅLޡ|B^ MCp KuAEuh!N$6ޣ|[> iAn OQ8 1TE2"t$\3-BoԤDUx7覠[18tOfU?j-i⣃AuuwmH_e60d .;;_&fli%9`%%nI--_Q#@brWbU??ۧuJ:S*TLg۳TzJNN5DBiY2 B Iď3K*c(s%ՂU:&~0GʒAnbݸZ؉'A஢[7@/ L- yTUԵ n&@2zۂ[攼Pmq92h{%kˍYz3gcXmz8ͬ'rKVpf~ j]f}` ݺg|/-Wݱ%g`3e̝G8ڦzLk/˜mN<8kKkl:'{=;>+7Б$E4H u8]+[(>F֢G`yAc-|v!!.A2%Q罠CFv GtBרZl\B٘v ~Q_CB.\Dϖ)d<2(_Rrd9g>~8W rXRB>IRR%I]VU?ڦy}*ܲ!j.|t\,'#Vgr^ ՍJHJ=ӱQ㪶$?5ˋ= S?,Al5@ m ;~Vo`f9dU1uܫ$߀4\@Fc O4754X~L(Lt0ī`!>j"i~u{>Tc\` jJN^7̔* Pn\(Wp pθL!T:SX`nBm=Kr$J4SaRԄQ2Ejyqc6eb p )}OCx'B15h/yYC-E}[ p BHvd'C15h6* =n/bHȅ2FM8}3f9q~+'=l1[킆QԨZݽ7Iz]ƅB7۵C^f/r<{dC 9n\*/Z|vYzC55)N`N0P4~_ة$H.HOY>~V ZLd2->iIes9ĩU9? #ktr n.J;Y_^n.ӟ[ b 30y ͧb920+pԹc/09.֕Y2ģg`6z4U̵ki= -:A(ޘ4zX w 1qn J;sV{? >j !Ps!$X)D-J.32 f( T "gaJpfZ0ϧZxdHVs dQXOc B˺siO8Hur_E٘g;7#1F{0r9pfmJXB$J# Ȓ_%-%89%PӖ⊹KxVAsp :s$1䜗(a 1Rqf ʫ |S~ZŒʋ6XUyHʋO")B~fۖW$Hܦ%h-E`JJ8˜kwC{-II`ihTv`zp%tN`;zDL|}L8ك|Q7Yn <ۘzյ^UQ^lA&&2Q&8^XMgfnī6|ibH\$ɟz=2 \d|FkTfoԯI?riPfJuukZZ|11 lj&d^K|^o\%j<ҏV=5TxjJ}.=l,m5bX=|_xq=g]Vzzgs{Ggzǻy^^;?SxS gr!,JdcjRp wiĠqs7d< >'ODqMl<'a5jqDR6h/^pJQ*Ϛ¯1v'eJH0~i 2asE'T/jﶍTxq%B`uz9$͗X>$?؉؉؉߸(~'dR42au.()/IRk08 ̤ <3H hK j{kowVh] !8tѶWs/]_14ɂ?_>8B\Ǔ9vnعcFndQ>2hݭ0%(X`# 0Fe@9dS97n/ S- Gk9$-e"xq TD2ms4zZ#*Tz;.z>A#Bz=iER~RLN⑱1 ObiA4R)M(ܧ|%gxc$j1W b^4I2xvNՂh ~ޮQC,z:݇( h?tJ("",e2#w,RIp*qBRapCy/;VH/N_bl(h Z CXsd~3$@֑i+]/I5IB E=)l[$DڎG]! Qs|d7ܠ@^Sp)`bp٧ID)VJܨ*y\̫FBuX%/R{ܩ@*Sj4 P J #TX+ϬɔeB%'7"P&cD>mww/H9#`Ďdw^k|ʏ7h:OuyϏ #BUk!2eM%hz v]5!6u=Dv.ć.и.}q,5!;-wp7OC %R 6|Ա]*;}\3)B{BJ_cAcO =|}R`os7ҝwr UFs{syAz_e,8B>`Q;TQ\>y;?ܾFi}^z >]X9} ڟuxWa0`H*d6r!<Zg 2\ADM?ׄ̽] Mr`(L$Pm2zvKL ("fLV.,W0@  r )}z=49Om{`Z?Bj2lY{aEݖs8ohoܢtoqaOW ^c+`\}^Lf(!74ӄ@+)3[Iy9LC([baeLǑ~QB#+2.ӣ=&Q^ͪȿkVwya/h/'Sr5p豦??|TS{m| P@0KP\Rd\XJh@"YBIePJT;K# :)AݧVe֐ݔ5NJ̍zmp&t8GrSK>W|B /^]o3Ov/pQ3ԬʇAa0sVmYrq:;oJA],ďLj؇0D9_bu  'q?DU# Baܺhnڭ,rt8^N]=4'J$zpإCXgPdA@D$XhÔ!ϬnwŅ9b)%M)]Rt3'|/W.Rrs5_TOn$>~z@+JrtyY%{|HoG|\8XV7FM|4{\Wo,V D?Vp)nr5]zu;&j:Zh럿>.jÉdft&d_^m֯gW7WO%dX~*o2$)NhdAAx*^86훫'o|im7W[\5=^`O^kĈL߽\|_E||΃|_(u0؎&WI!S{t-,39&J$sꯌPZ^QZh#1y2_҄QQC8C@1 :P IVZ]koF+q0_CN ޜ&HzA$Ȓ*v\#̒Pd̋(-;|fw쮏&"& B :˘eӷP̟UpꁨE3fݩp$ٞRs1?T1prD]sPDšU=|2:pV.jqt(:u$Ћ \Fa|B꒣,YQ/)Ê+ sRm`$!c!=Ȋe)_0ʨ9@R%H QQEi; ]gN:% k~d^Մ5Q?&*@I:. 
dcc~sÎ髋B\2V,#dEsRid5x,Tjͱ`5RsT}YY58UHDUmf% GJ':5-ZЈqwe#< h) ELE_rTſ:B}p_/ =fy) ~QO';u/Iv؆ uTF]e}M?at'VuBx@cNPkj Vf4b.@ُwGjY 8%b-6cp%gngED%Ŝ<.gL QACiV3OFpfR.l*ҵ2?^d,eO x 3ef|rs"j)%)*Y0̥<,.6< `oC3kYvbY.IuE_ ZP`‘*Y}q.aaJ;Ϥ2Crv=6tq"CJMV!dw5c<4]dxrI//mJ3=kyd2e\69'|sYcmE4xaBUv{;H2: =%C*Os$5 |yH+RWɌ.[Uw9i`s 3ȱ?<[w=H*0gw* [^\#$EjIM=,3l`4D:.z՜Y α**44z rV$&vuԽ uӒS*IFKLxy;+Y7}o7V=lSQ:kd!&Fs(#ʗgJ +mxIu)8ng,<]k$t)Z1aZ[&Qix)8Bl a#w>i {/.EH&.!XbGTG݆x]#7@ &T2*Aȇ ҡC/5O 1D5 Eo3:qyNr,fZhfTp?`;Z4+3̟zp= Rc𝇅 UFl:TI!СuVG]8u u^ZrɇMjlD|Jd~uI=`S+ܰO=AZ !pY6~-R+R&M_E\Aerib۵A(4%aYiG/ &4qtC/֌c:+IQivN en1qTE,o\W6)Gd2]L6|C}T 71{mY·jmVFnXڴ,|/\LtV54](GRc/KH+ zmI{*LM?/9Zq&J|>ޢ0)Io";( pݠnB+P cGBL\QlrFQZ{(FZUD;1FuvHVfJ`;Iy߾rKz)EEXek7a ьoUNUϷtQXXp)t7OB ?&-v[:K[g@#$椃}'UZJV0P+i[?(lptR圾GM:ԃFwWU'UN9* t7 d=X䤂 uBb- O5|/ ?5U :Z_ɫ丼*RUnU Ym,^U=Yf̓o<a=o2U5x˾UyB}'=͆-[Փ$qZ֞uT퓎'LC7/qoiWmwa`lY2hR he٭@4ThIٍo7zm!Bw'[0 Z?K|.O_.A@ |bt1n-_[ׅFF#K[?&i3W<+1\ECx9& 'L+W0-x=ߘ^y#\s >}u'=8xZ,\g z׶ӇuL>9};j swO_,?h_~_|~*juÕvN~t'Np8b e:oè!T_E՛϶M6_ /FُPg;-o ^bsnM-uNuiq40ۺfOȌ(Jݴ\"SVilHÐŞKHj_cOr;*kՂ {a?-/7v}HIO¥XfjE$C+9mvC8)@?$*|˸?% n . kB2) MWA _Ed7.hH~C7|vHCu<'PXfՄ7<7q' [†۰ mLk^0e6/;vp86/l~ 4Uj*[7k 1 \fyɅߋF~.&F5ƺ|o.$c =(\Q a?~w!}ȭvkOBQ+q gc F1t+?("C&9Rs܇!|BQw2&(o dTaѱ1GBP PL ĊFG&&?A}oV ̔Ldy+j>&1C8&%C$5DJ5+tXQ*\e}NoJielWs+ɹݕJrݕ$|Ҝ0Co} [dů:0BZ %"B"|V|}8ΣF ).v5h),l/STQۂ&>(=0ׂU~WDg|Y U~_f0^aL,ӏs1# _q9*&G8M $"61!UUpLV~m}dC>ne{ph`ph$g"Xh)}%"C!<5JL+7+K\gڃc/V>ι|'k[g+)bpYD("141y_~}ڻK R(i|x6óM.%oDl#.Mߜ4?{׶ɍcBo;&xÄ݇ޙy^%Tjgc}*I]7ILefIrVyH.lg'θS}u r$I&Ȟa]!{Y !$}D ^ d-q{ e:jw]jFW5zԆV8 B͊1-VZYId:{:gذ)- 6Ȃ;V S>uPu@hB19H$tLad! [rD\dm<9RY/۸3Kѥ(yf@f6i&!"I>oGPߴ}=]O'}~I jM`u?}RΒ-<`ih;ffs)'φ|XnA,+ڱoβEu7ˏ+}pNzZ};9Z<; n_7d!zɷ?67 <)qg(.TCSdž.\kc*MQgeC '+b3Ao˪w(zeILLmpeӍc~iFX 6:P;s`%UAjκĐ0/qk$YsT'"l61kEJ\ɂ)@m!fLTH>kO~۳nZ}ƸV4UG>`h#% #o~5I3߄=bqBeGx;F|[s!kѓcr ,O[:-cAiL;Y)  ) h;o>J%Z˻~5WQtš\um:%]mX}ޟH6s˟ӧutC jX&95`=TI=?hY%2'mZ_+0;tvy6^->:CGxv(hʄպcȨ'O׾d?P|)a=jg2t4v̎-( @WrûwiԚEx'ԃŴH8tA[Y#͗hڣ9}pI&[0ȡE;QmMD,Cwww,"!ʝ[htyۓh;By$=&"nK("+`a 4ݼ\~o>cnecb?JWc9fiC~{؆;?;ƶ_~7ХF~;d NS"5iզ^ѪxRs Q]o$GIA C{_ǧv$R|Jv{0hZכ-lc1B0M|0ճm'ܧ)}%9eZqߋQ _vĻ+({#R!@Ik]ceFɗ>51R'BFih$a_S:N"j|uBh.Y 0ƧQ5g!e\:BĒvw+F Ƽw+CuH͔[>kau9wnұר{ڄꖮFXמ;Fum2|+6 }5 MA]=.9yFv;6=vڽDOH=TQH-oRïPH W&='{Wfzi6% :??Rmd|z2=: $|,{*@0zF;=OGL:O6)hG_{݀8zM:}8MNb~+96>q:$#@NGYHUXsr1&B!f]C3t 8joO^K"%I?G X+pcD1zZD/t`a:pz#ʇ[N~Na*-AÔmRkBPJ>urYiKML*dQ}[!.Yve.,dm쁞uB 2Gcz ip@#7n۠x)ph kR"<&'e.08L8FcZ=!If@;/;*9j=Ī D2M'Q\,g~v^#)H. 遙cS`<\%ʋzؑ%fE\$r r.{'6@1" x]d` SD$eR}?y79+9ՎFr Rcg'wS3fX u ȱNIɃbD*J6s02[[d-4\т^ &57ʡ 2Rb&+n!i8U{g5=Dm kQnA{w\uCQ$C陶i$ )ƣ"yJazl&F\2q` &U:`RE79TDQ{d^do-RNĘ#)J)@yoKt2  {k`]^]o OQy6G5\= hIeEP/G:}_E"Q<4$fO2-p&Hơ\}{tuY ׋w|Ses钟Hu;9ި5_9ӫnxsvvzg8z/(ӳO'a%_EՒGJp.n1U'} )aoe({4_Dw;Q'M:&-'ןݙ0 ؕX+uGҚ.jʹC &˦v>`|`כ x2U-`"6 OYDo>)xpXY\$O?or -Xc&~#,tE'bBgwG=`3Ĩ;_gHEr&2!sl$8%0s#rF[:@ =aϖ3JO#QgAv4cﶯ\7ct=0u/ եn|;aqﶖPXnݕ4j/cFV_軫aK-Prء~{]/'j|NЊձ}X8~ybY0}bǫ6%?]wc/{SsZqc^|<-ne-ߝ+*.n7v>ߏ>߽,ox˽ߏҩ?l:q~nT:Z.J.o'@r[xAԔϏy-eSP'Gw. {L4+{ɕ.Fm;]T(35~ $k7|}sƒ>D3H(ûqGnΗmmLI5NIj"'{dpGבAiW I;۴I4GNt 1u7HAlIS -Zv*lӒ^nW7s6>sꔋ,s.uWP`ewūQYVH31,ne_%XdpW"o]|J0z8QLpo~E݆}ifgk&gJ?"3H~jrӅ_ E~=Lt]_]] tr,'b'(6!( ͤĊb!aOÃa/Vy|[kAXq(^G`aZxޟ7TB(%*,`vٷ:Jʺ:BEMn Kͫ Y MYyN=@ CI;d;xV4%Ӿ9kN?Hu.Oұɑv}NtRQM+ `4WO7y6d,.@Ă`# =e;.kF֓B%bR)o)I)3N(T YR(!;1TcVX.uum51cg w PMzfyĂ G&6I)I!0W4 5#aN 88JHlΰ'v먂T*2 `6+r.rHYI#p+ ֢:"ʈƚ^X$#5/iL 'UD- O9ń^O!X2f5Ut:0P=e8d"Jˁ80M5뱃NkRsȦH[OFIBzJ:{E+ [LH '!IGA: s;ػڦh{R*A ƏtSlgZ &} ^U?}~#ڕa3a=Ÿ{5nQ)t;RX`bh߻crHU{Ԯc ^U_/UVxwTկd py:PxMݛd#˞CB)5Ke!j5gke砇);`)gk)^wK %V"Xʋxv|-&  &o8OV!Uɯ; k5=0ag} mEO MxTp_ƊZ6H?O0?XBoC T(BԊ,}ZJqS437UsG`¨j m ÆEk% T6bH8mb2 6В046! 
3B0 _!mn3 <\(Ft5 l}jֆ\ļ4"M oǹ5lp&*LUaT+UCw@Tyg:avVթ?AU8(kPH 33U`#r-M#oP]}lC8!8絹j &DTU3_ SbAL EYpHPD& "U-`ѭ|ҨaE0Y<]jN t`ar̶%c9bs&=PC1M}.(:~1m~$pGMSFOWvOZ}]R"b-Z x֒\-%x :׹"VYFuٍ1lx~nAS8y-H5K:#?O>.~d|;ʳ0%y{Cǻ5rFCP\Bͯmf ,_e*}A 5הzldw} ( D:ե?/W#p:kTf,MRXf!7Yw)xV1P Mn0gdyjB〯]b lj]_>/Dy1Yc`z8F\o'8?aU??u`K:dsb&MR4 7KqzTүOu%߼> vxo6vG\ s9*u{B|%W={wWC;ep_edozZ/~ \vsE+mv+z SWeա.dɝRA~onOn{ws/2\J5{DG+qO_qx]֒Ӟ~+ڈ2S@$HG(4Pi,˭WށRr&e\FhIN -x<`udgQL m? m, !ЅAK#\vt/U*1V S62]aR滅[0 ]R4"!Av*1 U 8*;-TnafRP[PdeGN X g{L1rL(B~oxA`﫭'\I3?ϧ|0囒 +V Glbc. *#.r3E n Wʅõp藸6|p?oa# 6I`h ʹJgMz&i…! Ff殅a`ÉfTdzw&<3+02M"*:ќ0ԬtJsU+{ZXyE.oݯg2$GqnL ]>萗[2uN`>X0Ε>xn͘aHTM'YBF\[\/)@")@g>$T_`| g]^ɾ.z']_:s H%BپQ8Sft N@9P~cGގ.K%(z1Q 2C4śHo3~:tQݓQa*wM=bDDKjxglhuQ@z<5T ??u԰MPݮ#R@d1/ȂU{'NowQTVQWw݊Sv~x䕋q뻻m:ȘNy{L*kz_~pɾ9ze׃lkLzhL2RIFY[i[yaAR-מ~?8>x<|̄8!w,+yXv[?%hBګQ z_’_~\rr=8_& r|EZA)x,FIO:*rQI@sHaR&ygree:faЕҕCVuv蟩ZmPm+D2@;d CN AH6ZxnReM* Ӕ\LAq*Nih.雿p42TרC*̮ / eDQٰ cI^xZH|n~V$GǸΖ npaб3jO.ˌA5V ]g3(~byRXIO/K\u$!o\D[ɔ`p; ccnm1k4nGZ}[n]Hуe B+}PSd5tT5Fjxru(\ ۠suHd%vwu~뵷FnN ԷW̅-lWJ)dɤsa#rR3}.3ϿBMw8*y޾x Eq̭4.yƔϳp²кV30"ϝwV@ByJhE4Ye\Ly4ӊHYFaQi})tڹ,m)`kȍ(Gm2.אSFiR19KSϔiC5V1:<͖u*cNB#3@'g0?\b"7?aVweSNKO Yju Hr< qu6R 1(LbQ9q @kҤgBEOo*G2 mkBSô"Lhqxh 8w Pe~X&{aH5^^o҄tU)ukވ ZݲV\͝mpwAk+3Ghm9Bst}:q}+׭+>\haJЀ1 c=7_Y*,Ӄc5ujQAd=9YM8Ѧ!ZC4GAӼ^! @*4h NYI^e:ޚ.{#x-B9Njۈ&q4vkJ(JTM/%GZ(PֻkR3&{dK@SfDU֌9xsR*k~/ȒJ/]Yue VO~.z{mpT vheQ"IK5RWo(Y\1jd2tY3}2wqqb>l ˉ]x-5&B$A2!45#r'-O.hɡD SB #߄7-?G6= SJ3MFLETptE:r|tWv 0ճϯ' lRc=cdog`bohR q&I\Ԭdjšdwb֩;=H r ow1F"e>3[X-&3Gػ1f`g/n6M2hd= g+MI.ph "ͬJr%Df̣>4˵k1@@`|s)x;'YVشq5͙[3iB1ٔȵVv+pLBs'59g? fN0$zgK݅i^RSٴu&!DT:{\[@g#6rdCzov~5"YwMܚP0õ:DZ}zpfVU˯5ܖtzIFK_UE&^6mTfuPް5MGj;s\b@w#ۛ.ӂdSv:^ӹEVT=R,LpU"/m &0ok͛ eJ۶>v<RE%㱏ū3i&5E%&)%C~jP!ыfF(.[/Y%r ;j81Ht]$i֠>ٖqgù JsAJY^gmH_C]ϴ>Kܢ8 rȒ~,S/k)>$;jF(gvwsYɫ{E`Iao_ ]RQHy4kJ޺+ǘ@.{̏D J5}ξ!V0䱌]1a"aGhFĊ*! 7L(-σJ)3 <* ӆZRH$F1O" 3Sl#\f&a X$"S3q(>Z+F+ YY.p$`%A{"iЖ' b#a$H8PJ$9<,@]b.L(ƐH`xBbF qETL@t[3ÉJH!l6če t"OBk!"2v"u4MaI#l->FQxvE&%y&4yp(Eq ➺OA9P$A( N"&da_0CʩF17B o%07<CSh^y1H&x[N ÑB!'7a4 5)mC(B4I?0dqZEBř\P 8`BFsiU#0E+f!DɲĊ(gdq`"FɁl R=}rIxUj+K^\KQ"z\ "J/,L blALAlb0sNBNBlL0ӠxY e\ʝe *VJV:;% Kt0 -,&kW~ÉҮQg)9 }~ŪBuHjxHW҅)qT CoEB#=O%Oth{ Jy MӧW ;j.NѯE_~-qm([蜸{('qlYIBڳ9Ha/r?];1QVvߟ Mw.߻HDODL1OoOc|q?,+AӅ8jBMkYkwu{?qTU2wl[Ntz[;/c\kc;4ώ7(}bp R}܀ukѳ7D7_L'08ǟ2U3\5vP}] ߀D0FNaCwO`ۃ)qU~xyxp:k%|1[ۿiuґ~@lX:Cn/͟_X7Ukz@ws7^0j[hWw/bP___ټnYlks?G/+vt_(4)uߡif Lރ眐\߹I8jϜ?E}3Z6=&o1!UqޤZ 2a=s 5h}OEj'ϥecgR~oVO40f W ܔdl$3"\O24E.pp3VWh7$Z֘5c` z[m" 8wu([`HQ$e0nm؛|FYUuiUog K,.`DaVҷ>'0sTh@O48nlg$0XEω<\noͬ9V|Hif|SG!EY;vNƑ1-efѲ%U`o5.CkC"h c\צ|\c^~ kt>aImP2YQP|iyU4G)#Yұwi!֖)+7~"-TUaYp_rbݽG`ծŽe~&"nI2 "{YuTXgz>cڙ3զ+N8BLJ1̣@( ?I#IhbZ0ϸ3Zu9c TO<$ۓD}ҟr"XPnT@MD P#Td$cܓpzD(K V"G#ʋ2ii"*vvi 5%ȽhL;SrQItB'oUXƽZN\x`c-QMkh(e0~z>]B@zs B[.̎a,ǪR=k>ؾIݻi,R3Q IgWsqnasA,^jXg oP?Ԅrq1FRY@ֵq] m4`(AwkQ`iY׷F9H,J.Zg49V*IFLSDEA &b^|ZaK R.,ЄQUm{@rm>Y1$U`~gy,((crWZWٝ'CtFWc}]YSݫj)/THꪻWLk(!:B+P4ԛ 5R!) CqfNEDBDpK XG4A*{AX*}aEDgR~8 eweۗrQWQϰKU{QkmH˞]}C i؈A f"Q IY俟7gzf8Q"H3]Uu]1gDJ⎂3eƘĔ拣^?A/y X5^NPw~4yjBqd CIJHi7w T^juz}yWpU5ڜ&1==l!>^y#2Ǵ(C,j# *^;XO'73S@Y/V|x?y HEWT/,[ZˌZQ8ٛ7%(k΋@tXR- Joo1M$R4-_eA"%If^VȮ(2{=x 饡ϝ_'*f㤸Z?&t[8Eq 2EiE4z';0ґc7cb t7GZ;gV$Z*ِq[TWT|hg(e)r)v9BUdb8qxڙ,uX\b4!`B|;O^ X_|<Y3sec~+M߂ȅ_6Lg!dvR$>\*J\%W+W+7wt;Tb2J.s\j5 liID#Fi B2w1%_"syznG ^{0>R$Α Di6SfHP"9V (q"(Q֊uT|8!Ȫ**rVEY&P L c\ l$f,ZʅdݛQ|M>XH3p -8{ur64H  :E/ S J)"~X%(BJX ICWE0.b%BAkR"i\< J k|rrnbpK5@lT+ p k(c`gL[E:a@ lS2< HDfVqq0C4) -|Q#bA660.@2hfĠm2eZE2e3Ɉr:80R!9)'oDs/ATN%~I A) à*=RF0YF L/%8"O&E,9]`$F;n=@g6`,AIa}4)d5 ZG)#7_ aG]pGOa`q3 d|o}8ChR #+A3xe! 
6CI%\@ '$3qZF+A 0 )2:"63(\SPF뫎S' }S'@k2Dh EP0^odN`qX:F WVBHBB+hۿ aJ!0PS+-Lh RZ!0e<)a*u|YrOڸ[ &`=#\+q#'>F@2HA?Ab},*rŏesUg#9-h$LrՕΣ0!Ʌ1JXH 3 /E -')WLcN"+`_Pj^iK8)B0 s&8%D08}MLQrr Nse끿_$L,O“slugtN"S=~~I./>?)5_q^>z? @!"GPlɕB4g')ο6?:%p?ߏо pm Z *ܜDad&O8nyV?I$۔űmrD QmR rDz-G sRd+5OrK)B2ծFJ_!HŖ>WG&cQUēm/\q @iSb& "xI{llLgpE^3LUzqKq Pqq]h 7sA dj<6x>b#9R,&SrKAAܢrO>I#jٰ;hsV#8QJ VזrfDRJ$_[IjzA j+O$e۟U${U3CS4|K@5ZRڏ<JY;\PS*|CS/<hEPg1}!鲝> n@1\_Cwg i,[Kk~TXe9ZqҴRs`3qX;<_+´oK&57 ejYVlm¤ffmk+Uy6N~C&^kNIV!쀐^i;IkK;ᖦol6Q1}{W۷:ĞBTTɩ֝;>ՒjA( ݧ7,!RRu7%ϼEyg8Z?,֝l' CX *֝/n'-AN[ES Ym ɗmߵ ˾|XZhk~ hNZG6ND-jV+&ձW1k0N2GY'yAND9¥'& Ŧ _U˷f/.%'({T}R,%d_XknlZXQJ7(QZ6CLh~b>ne^|ؗ@7o/~k1%}"o[(TKL85&SO\Gn7-r5~Xc ƣ駰88?1 Jk"jo wKC,#Jm}qqNr:?G[PJnj,صz3 }<+?Mn>AHA#k)@y ),&SHi2Xi @A =|@'3H>`+oXLS5!H48fcB$HJ('c jA x6 U1gn \ЕswU0׫YdaTh7v&{50HB9~|9LRM珟_ˋO0SLl:yػ6dWdGۗKr N6/{ HL(J&){E3dⰻ޿/(#¢%_]x;/hP"';\ok+b'pItNnٕ$LցW5Z,RCEϬotA0.#yuV^m&;o,݂kRqSO-8_a՜*;k^)[nBw|T+^[{A"(HګKm9>:]c<]wj]>!a{ƧF:vǽ:V`$% ]/u^h;gg1 BQ$ NU- ԧW&~'2Zɓ6i I)J6{/noWhbnD[KB MH*bx*R4e|W(P="x\)JNP:\l)RK= $\9Lc슨ds;ds;]vu>ti  IdM5q;;&nqqܬ\/jE>kS9=sZєϩG%к۾)e3}\\!, #/> ={zL{cnQ`ޫnΪj(n٥*86D [D'>w[Eʈr41Hbb!SKʴU\lD Bg&@v[~:/Ǹ< Pd:5ϑ;R"أHmt> k/I%W孷:Һ @~ 0b끖!;!u.~ɭOQrɥ+q']Cn1__*}T-Z;6\hD.U4>m,?ئNj,Zqk&h} bj`FS.YmWes< w$]NFrVJR? QW"XN]l%E\ 8t nUֱ4΍ˑ| r*`BTQIF9IM9Li[\3X{Ah9+ fpu9;C k gܙnvUֳ?/g]>Lg3UWzwpx;{犪%ٴݏq sț7gAuw^ɍl\\HL@DX;Mg7ms =JRB_@'IdTnelfؿc5Ͽ/n'5@qmgVP]hMn#;=a*ڠ =hA\JTKw,HtM*IkiauBۨpX ^ lZtxp%Ĕ}~A.8• Δ1D箧OuYG$!e%D2Q)O6dmmZR¢YO DDjlNias( };&nHhU$jlY̖Dl]~fNMkڨ"$FkIBLvCceM$"{?b$Q'(p2D"լq լU V6 d޷Wdf 'LDr~o+!Bd(mcLcmq:A1>xjşQ2b5P@B=4>hQn_r; mxҩ4dM݃OQsk1̤nǕ!GS}㏛:$+X$WͮDRxX' Щ5m:lAϩh<| V4J2zbH_Bo5iqF`&2Ė4xz0)Qc).ʘUYΌEnbBc*3KvzT@M| n< q:n;@Ǔy{O5r)HQZAW 1jLD2ѭKh|0nNSHRHU֖vk izSu\^i>_"鮃YHߖAo}z$䞃wdPd섙P>Y4+%s 1{y!_lHGvlR_I zSN{j 4#̘R m~Z Fs.aD {#&^?ZHRڳggl%@1|F,R y;_4Hh_˝; `Rofݑjd\xm BKi$Ej6ZA\ϱaLaz3!srxNg}^eo*,|[+ء_m~lUL8NbňsQtsTcd O{TFȰG#}gpBq>Z,nr)9zWȅZG۷҇0y niZq{/YB ^'M1_QN;)0}]ᚨ> C;8D`iqs*kt shS%>ȶ9z/8@My< "D%{i#[<[exgM'_ ԝ_j?2xkMZDʔIcx` xI OD:<5 IٗXA;e·}Î;Eeˡү<7ӍoWѮ>' 3R]߼8J|^CXm%%kVCWЎ V' ntl%bU{Eڷ8AbVpenn W փ)$#ԯ(dDC *.RYr52n̰faQB-* ңvUH@zsjj|U׭gݨ:I`kARB#4~KdQ?d'yiBc7KrQAVCR|Y bMDK|FAf:?ZDWYǩj13Y/ uJqʗ,_l(yX)Shӏ7|/%xKy .2z֒rw3޿'#¢[7o3wtGZ]2D\05Q́K&n ө+{Y̖pgH ^\mX ;=*$B69_xrn1'}\el^M\eM JLlP{eT+1SQL.yH;**KX2 BY*J%2$dRrDdoɏ^JF%?oެWzl%_fD*Ue ʯ]d+?nZGN$[v.EWxZq\}$(GLW0M$!*aT J 8fYcTk(Mu,4KBŞ/%--z?ί Z=!A5CBM@Jkl%!@EeلSȸTHNsſ߶k#er%\e+WYU>yh)M9g?R O%hڂ`aX,8 kNTv9Xxo7蕵0s&m{ H@:XN!n}ȷrϰǿՂi ֪;Pݾ% [d p˒2 6AdۗB_pq= x ºdBZʃx—83{uZ| 7[ÊٺF<Ј_7m:ȸ4yO)@ohߏɃh\\x,^*y3)$2ev^@4hRZNMP`ݼDt'ַڪ YJ;Df NR Xqk}2z2qyZRA\Eזǯ-Ekk'^[S-YϘzi;gzޓ0 ա m[%X 1B M~x4Bi= .$$(ka`"=E` #/CgTI$ݦd J(YuJ6euH"GРKiYRqCP݅G(Md³en YMhȌJ^).-337=bT.2R*o=$ax6cO> ; N-w=gÁ̺N;IgW;/;oӛV8Z\eiXuR}ݱ\2\Ys{Vx4RSZGngTr '0}۶Ru:D3(Mɧ< e|| evKAW#-X;Ӂ C>e&4h| y&a/˜B0yʠAo?;X#p2C$h3(4G@V |khPUڵs/E0֬ԡKBeo6X ֒t1=Xo;C@ FV˲31;b12=KcD[z<ÝTnRÖ#5{[qА+2InPޜՊ7m|Nc>]CL`uHZ#ۏ(kc91i`uLFMpTCiV&۴κ{8kx`g֦}Yxyܒ֪#֚wk4B0+O=h*jvZYV+$mw(#ODW? 
m( u\ou-Qpj8Ϊgc(AӉ_֪;sz9dܿ o[ބ™eCc;4 F$Սʽ9jΝxb5/(C&|[]B Q+5,xM3(CU/ &7} 7MA!R"d|VƱ޳nȗ쑍17w.Ĵ嫷FU nGL>{LD)YY"`4VcE!xk7VnZX" /O{.V kjV ;/ggN  \Z_8B&6t7;d;,8-6,ȶɧ"$q` pSs7~hN?^LT/CkܘgNLGuEZLN6Gݜ0 k(+ -[֏㣰+VX ,4 jWG}3sсѻ(ל-`\.OؒRS&fL<)^輲;QZ5OdIϻ'%I>D=\tzjs1Sx]SN:#P6}}\1e-}p76f yH*V ## 3>L3̩<_@"sJM=m{FeO/@~RB8I ۡ7u6ˁL.P=]hF9XB Ƿf6k;de8#y7Y`+Uw.-prhɗ<.L&|毩Uinޗk@yp,8gsU]qlnV7$+֕&Ҏ#áiZp9d1kjW+9k |O.ۤ{/o: }0&'gjOs텯N>U?^5'OW/_^N҃q7>> '3Vn?;<˫W\g|;$٧AVH ^KmbCopg;^jߓ9dof~-~N{3OĚKߛ=?3`^kz\ix4K;SNʜ_j,w:50ps'89[uSEwԁpѺ$Ӓ^{)K/  F~f0(3Cl9SV+1jc;`L`V _/wo_/BOA/WCo^ŃOoMa`tғd]$o0D G52d<.db8yx~ NQ8ɁO]4὘L.q\z LG01+B7o~?JUNT7Z#f':#վbdI }k;@1Ѵl/LR3q@i\:\,,)5_KU[Pcnxkᥦs^₶UuNxoy!C.IsX5G% ʻnV>n+ D˖?wBPpa-nZ]|]]8 9EIuЈɆ=pATo6mnm+O7x[MDԑo*dDh<>sլuLFM VKq3*fٙY뜣n9Gr4#}j(=(M[KZfCi=˰liia$'=O0f"DBPP`IKa GaC1égӓH yQq)RцW]Xpxґ*HGqFL4gE\pO@C = J-e 3חKZ{\8X_n;UazpC/.7wfG޽{Wq7bQEǃB~w2gLWǣ)2%…'g=y mWŐ9N)UJFl52GeiTZe'm(P hb6~!۾/JN~ (cλ Ly.?>0x8n#Iﶴczo3 zw^ ‡i5;eAZC Ҡ4eٝoM?R}9 FќkNifŗ[SrId<ӷcnrP䶷.-筟bcz^T/G!ah{l|2ԏLl ˸B#;h71s1re _aEw;v#"gAdωaB 1L+O!eDUn=K8w&Qr)Sĸp$S25`LZ] ABc{GC7خ,$aa(BC$|,5eڄ:bY܏X04땄J!-[Rq= -8dȕ"{UNcQhb\v'k'LwH;5v-QWwC)o(Z};1;eX(1҇e[dգe[`B1tWdr(^{bVFѢ$'iđ|RIy> |FClgс4~b)QșL$}A#OfAKHMv Ւ|D_([ԅrxEPQc+_x!؏*'ڼYB/Meh/{WHn/!hߋ5|I%Y< 6#%n[vgwƣVTX$Oa3QKÍp*0.b}"69d g A(NzF#N޵wwNBPqI +1$;mGE0c2n*8ny)Dpı77 Cp0(mT/_x]QU{my*,u#=z"q7q*18XtTuu1=aVVYNH"(8((YСVe;)tAtdN@ sOQYihcj gi(iVjU0{ )%SV (#ӍWU#H1vz؛9v _Ͷ^+=P c Yv]ҚbvNĐ2O)%Jǭr@" LmP"5U1񼗽x=>y9W*E-k%JPpYP ; :Ϸ;וeӷ[4C3r쁝Dž 9ʁ MB\ihyG Lj"bдwKE)&^1PO1S 1AQ3)qB%^M7ipܤqSk80 e:(- #*1 $~ZSZ %6ԩA6a Lc0 Ay$eiJfT<UqrDQԝ{ވ,Ft.1cNBbC&u]TWY%9^%i#9‚b]!dcx\Nx\W9EDDxDDW^V =`kWaW-)Uեc.>5_fZ\oYdcuuq/?~Йhշ/vB::pLN/Ij\%QXwZH@qͪmY9_,d`mw'C:Jƫmo7enhcʐή[nyv$axdַqBٌ嫧y=9\=1(7t9;xTގ? 8q A)hcx] @ Zq .${aGaa LLyȴQk/=PBlVgp G*phKu8+7B)h:Z:LR`C9K|@+=0*t qpd+ro*XzE#Tgc 4{bKN|VeET=w6FlvcW.BUt]Oae6@qM4==pf饡*;]:,dϘ^dS'BRsш`#N(eg #u<5V6=[(%ɜIvIa&4u. P╧yIKaP04Q$`5S'YN(5TRs}v2k"8%@🃕l%r*)`l 'Hp̶[^X{lbj4P'&7)yP)AaĢ,@=w(raRTiK‰F$*Ѽ tY!ќ9'2Ǩ BZqV:@#i-4dlћ;!GS:9OK&2_ZHE.pLX{5 Se[^ﻢyw-|&){nkoO7)+|դ*--;~}[i)C|xw?_J8h_>f߿|~Yy`l]Gc_>]^Z$s"}]<[m|b낐_ܛkF<÷|plKEpCb/~Kir$.Wt_?m/]v4j:0iydR# N|/M: ͏$u@'T@Y:\ ~m<EH\>kh"ǃGtNISZ=]:BW9]^؈Šk^ؿ{>?Xj`qbCF,͛>ۣ$A!Zy}D%-#hm77_(E[nykC40K CCR/U|a ( +(8"G'Q*͘q<RCzv ^׿nɭ[wo &J b&&rpv^4P6QG]X:mgVw ˚";a__&RO_슝7_|b._}v.̽oa_:cˏu.~eV/,MpB`#Yw.~N%sޛd=qavpwo?Oa~+?, 띧W4+4'Cn54,; 20{e̳<.޽eC;盕7I,aSl>ee)c) yz*+' Q~97jHh4kO6H1 6& y\ xChd \~ + z2p..fPZ|g<]幅F9AJ!M_>]e/ RҪjHj*49-xttmal@Rzl#(%(kIٮ;%AT㯠1ZgPp.y=PPI%YNX.(En+tѧle6pK D8,(%KaJ`+Դ8֤ztvpq DbuJ5 vR#z&jedy. 
*t-뗮7b2' Qi@AJ1@R q&f@!%0ETAŻC~ߤmn%A]mčFP)܆OviBt̒5!5ӵ<azxhqC,aثHBEb7 [4rӌc'wĆpII@f@Z] gttYi^E2K,ulS bD-PDy㞥] ԍll6{3ib{ 1URN5>}9Zj=FFqPR f5D}h)zf:{MB]x^_HFTXOq߄}sCPGLRr149AO+g+iLRkcuQEPз!yRo}Q&4MB2W*gQo +SolPLA_hwZh錈Lm `Dby?Ċy.%6g/3f H-6[4FK%Jb ,s$@@@!tqԕ`S\޹ww׌ PDvN5'tqą[Mi 9 ^J|net;&(`)SqۣoV⤼7ZaWMkIb: J},ژ20(Ձ{fylHk6_i"ww|߽vHf"sk2uL!F2*pB3iW!yE5) U|RόjckR^[},\GWv1bjQ8W#qN ,z`Sw0rP!r}CvZ2AA}ܼ3]+Nm4Πpv<./b"hwLF).t=PLpGB!BJUIGPP?V~)9U!ܿo#XjfY!~0X~Cs'E%&ܤ4FrSM#Y!Nx8sX,*8.5P*L( Yd Xjϩ?\": YyLjtѵ:qlJS߬rQO"|L0:îS,rjj*0d͚=ZTWk72ņi7lBd^S"*+tǿbu7n$Z]GQZΣq).QĖ.ƸC1G1gPf_SPk]?؄AI`CMMߓsL*CARTI1% xpTGB1 @Жl4kEuBN׿Du[JU/ZEAMQa(Rq?.ZA-/gzH_!0;#iF^ fÎSܗH07nf`I0Ū/"ȈrV2.-ZY5 ;\vF0W .Ln4[~&}۱YRHgBkM-2\>`>AuǢm`x.׎>#|xZ~LYg4 "vB"zF)*V[;TT|oހh%IJ^1VB[] δ%SgSbn_%{#?ԕ.MFՄJs.AmV^%ٮ9faaX.R hu&q 'Q5AS˙Vx"K ^0&,jYQʣ~OH%[G`PS|HQH,OZ5.$Lj.p#(0NhhEyP:CSʓ̘@WR@v)&S 8LL5y@H;֐^dT%7Dco2j볮3|+Ie&bDSB5J]r)% {\q6Wd(VoQ}wMiysX]:gdqy%cJTe<6^2 T}lQIӇtnLЉX~hܒI)\JT WR*jK.\nMU狷bibYF]*se6aD243ff_?{,rPNu庠Z5ouv͍Xv*aILiB?*SGo`9<+TΚC!ݜHC&^5'2u o"P]~&=߫FDk1u)GU))9׶'TTW㾭щ.k!n qx_}GzVMy #*S {T!(">@Jbq1*R󠍽ZQCŨ_n*%!>y[h?9řo{%kQ׺Hw>UsW퍊(B5Pmlec OύhjGgz\4/Y#҅w= b*1Ă;GS^vx[yYZ3v5Gg. hW懋@ɬNy<0u7iCMf--h׊Qs.%ĕ9lL]_{{twkKә:'8[\Rn{9+.%}8a8ѤrbT.( m}9$P@w< +%.D)"#C5ZK9B60aR}618:^>GfaP|qVN4Qj*5=i>}kZhLc?VBUT}e)sg|*.ub#`~_Q[2kƟy]v^We;j;qᬑdBVGe: /91JOC$b9G>?[򟷣|-z;*>LG kghB9JHXy/84~K $C~L&CU;P|2p@Ig4RVO4V/ӹUs ZEb nq06d \ w2FDm J^(9_6nvuSD9|8,@\+ZIk8:)񲁞 ӛJC/F?]LE͍b>oaXh(4HO4aJpgxir%6J71690:oYF0a?A ג#C2N:-w $NCz Elbճt!,ӕ5K:>PmH$SP"aIB4P"B;HEtf/MpK]B S ^O\Ib T6F}@ =\)~||"8YP Dj ɨXD4>AGd^DDUDoSlhFd7F6]Ɂl4(8&56#ZS#KBChfE%DˤO{l=JУՃ_y<9~M)w20O?>^= w] _{;>'*Io>\0taˏ6؛;5SaEʟ(e}sc2=rw)-G~Ng~T9߹{뇇[$pmf~{tOeNTJv%}uG.pp2Ef{<`Ai&J:ڗBe5PHX#/h=YLv'oBj7L):J)Z**;@2gl0Hk)$ ,25V1>;"(Mw :d̂#6b G2]U5В ɪZi$f0O// :wB󡝾;0O9&!RrS'd9!R ;Y3bdקJ?n}^]) ɺIqM͟^VNhu9W Yo=ʹJ,h/K~k 6VM8ŎvYToo:˞Pfܠ].r\ M]" `&5Cp 򩟌zϷ4;13_Xu1x7v1՛dN(P+u 3&pOv#7~X {gh ˿E;uZŃ~MqyrU֞9v.kOl<y +6];7#`E)p$rս3s3/v9J AGu+>eO{I. CVfmwmoJ X"SdʓǏ{׃wۻ0&匝o㗱G?7CCc8 ;E(&CX bSLvNٍAS#ucpMvUc$osQITyoPNJ$Q8> YE>h4v4= c 㷘J( $(,A 꼥[oT keR}LXɨѨ49g4:Bd4\Jo9tC0j.!%f)?{㶱0/}6bo=F`@6m-4I?ՔFCI)^DC(Uu]2&*e){O%4x Cc8ԌB7-E5z9" g)Bt8tFS@Aj26aeR1$@eʄRY ГC5Zס QeJGі8eіI'('F[e hA*v&\e &Ռct傧$Ң-t25bTKtK@4 H j@eōA:0$S]H w!n] 6۵Q,f뙰z "n qIÜ; Mh$HӇ,HlwT`d=n2 7 w?i y06wys`4Zr7ݻ 5o(2?Øo|~V%K'\<$ uEz8`JQ4ݧx݄^Gmz[24gƛi⤒K7iz38kB?GiDsH)]Juu7#eWWߢ1WGbh;_-Q%CjKd7cfޠ97V8IeVr: _״6 U\hn+`T&jéfW&6]yF0T{z ƀkE Dd-Rۘ;;Cl5fk>1@ĵ8A?~,^G_3ӵS'8BfVQW@ iN H5bm٪^[:!D ҙ t9nԘY1MR=:MaKΔ;k)B:m< =P&L4IV Jx+U-O&K\BX0h%¥a&IMYWTZ S)`/#0/pS7B]i9ԋ 9kAzeX fhM.*vg(ql`Ŵ|zn ~sb{oU(J[ 5ߔCٰF1Ոsm!mFROOŠ'Uh:H9;8FqJqI%h6>1Id12p\>ʥR@jɄQ>qŽJI=xzw~ hr$?($}\$֍1cb](zo"W}jo(iaŵ+1&bS& #[ >uYHfn`xʝGa2OL @KIQɅϘ]q8`5e.'U*CNY>pAͽaR1s0 cYo(XrS@LcbxB '*},IDx`M֮ qclCz$ca0V0 H=>yCh6YSHX&D$)3tLyTȬǟՏ̽! .ݛc@HOM5نz5<{WzZSL~K*M௳i2AňǸO{P1(M]Q3ȍ 5Q @VRWXF_h;HPuwzjxUCY~@u l /3kcLǕ[5ʍHswZJv];qb7c̛y*Ai{gg` rחm|.pgV=aAq0Shu9vׄ"[(:U fܧF M2(ĔGERtR?T2MuFUvXRZ =I;˝S^f8ח{T!SEQX8ӜKTh-KCX%L&ҔJb-^քݜ<'r--,pv[>jBRbp,5^@Օ鄤 p2 cxJJ0%D֢[NPNqF5U`\ ^;!uJ $D"(h kʢdf$Yv ZBO2Ų,Skee!DaSpK92Ò{@ͤqJQ48jp:K.CgT$DeDkBk( vw/zݭ#e3%3T%拇O\8&ڡͫtL-,kXL=8z.d> [M+{; 4PqVx-fnc"(J sˁLg T<, }T aT(х$*4F` btzDɕ9C˽߈j܈CGWR@z*XvS¥:,+ ,;K".R *kV)2HVE7V2" |"a|8ՐHe jeBHUC8b%P"欶=[vSFH(5DݮI&U6>OEϓl-fhQ#&tqF17n3? 
_-onv|=or:Ϸĺ%|f?ٻ`g vhg7w@#:'ٸrt߼,ףYN]عFh2cķZgy3JPq;}'LW09SGըi/[Ǣ m)B_Й{\-pɁECUçY+d5馇4+_jA9-T& #{NH-$3UgꠧAgP^ZZ}Ƌ 2Ҝ3[tސ1wRVKM/$5kCTjg8D nHKqif?["34$`yFVcMFf#⪚QQyh޳R3Y+ v.]Kh8&k$ڕh)IBƛ0 :/>!+1ھbB1E7&X_;h1A6cIHWQi XsS~oAWܫz ;d4}ƚBͳ[?'`O 3QB>,dmJ6,WP?3r)=%QaS8i-=kQ ݪuaKhx݊:ڭ)1uOM5NnmHw.dJ¥%7Jos%sigH5p WJP̀h̶6J_E[;V֧(uHtOmKS1̏L`X.KeV',9|tJOt1rkA htjb8Mf Sg(e9$AaQZ=d8{-5iVGH4rl.2GRd^@mgCHaw^{x}{Ҷ:,▏Z?.hzx~BMt`Sr˨%T U4heFۯk;\Μ ~AB|2 );oj;f^{keuݻ?okهǻd|\ d`H? [~ &F C!)Xzww|dٚ,{7[K!PFm$j! 00 iƌ!YьRKI8!΢"ZF;S(332eo+/u5q;uĕSԤ15RFA=H"KS#Zކ'G"T+멷F;5aH_*Iw9  *K|chZC~>+!OvLͫ/(54tRX7><Ӊ'5]S:bLj4eV$$^!K6s)䯫uU'I 0< JX,gJ%7covo>/Yrd&W}<y2r9bz^13ii?g=o~}/G^a}}B(TbP5K%O(O\3E-GrmB#*ɲV1V*`=Ke8y94 c=tWA׾"=k35g"Ub@`/nw-RW w:.o5ζ5 pÌcPQg`1#9 &\w믛{`S}=P=β>Z^z6?3]9ӋH&(" F)o?կ]feد2/ Z TL u&x92 FPնr ^krٻF$W>SRGztx}ɃFV-$!gQ$%ﬓE5-YUQ_DFFESe3ke3knkuuXVN_> Sٵ}/tS-US`k:骃1ڵYĠ%TԜ~~A(Ꞹv*ˌW42zI@ֈbܕ:iX 3)%s /*Ց+8>S m}V @fTL9AAvLdw# tZT}r #%)2:&[|?Q(ϧOMV.),G@(гmYNj&婔ZblUlV#$RrrYJ?)~V 嘧0t c!aIrҁqza( VT\,K"^&'/[ϩ2WβKͽx 5kgd*Wy(T+a= s?wER(iHĸ P(1Ƞ$0Sj52+t=CS09Z`+$FY3"GAk8 )`YX!<\·#|S항BH`uf$F[)Tipp) BH*@RL7`k'wk0ܭHJkje#G~MmO$Uߎ+p mg#]RƔ^mk]y9I " )+qiF19Z8&mOH6EhX +$fL=եF:f7Tt٘rqͷgg $:zAT7׀"րno|LӚݼyAHIY5ld}r{]ȵo|0-֬ux!~V&B N;BF)+b٨!%IXvMz=lz 9yXsRΖ3FPV8ZmCk*D#%W>/{_U$DI(1IfaO62IG r. hXbû{8&AX+  A "h' | A 6*8S#eཏ`CLh)8uo1 82`g< aM ӣA7 gՉꁀI\;|BP*[_USlkSTj^K'+8fT$:f ȟ}$dX3(ԎPez<߼9FRmH*g"A B~ed24d=%gfg_&'kbMR7/6KnWE/#?z}?YL ғC*5s/xY5:hCO5*i!^tPՁsBź20uOs&ڠƳ2/ܑ7&'ᔰKQStSyVڧwBBt4yZHn·, ᒿϖ[)Va^#.RCO!I3cQC+*w*$;H7a*YJx{Z(}ݧz/8™AlIjĂK j!9/C1sJ]_ Ьnr (@a9'E&"(;xK?[Stdgc'7B9}5&,2{U P>4aEtv7wiګy;c;(<ːˏʏ:)m84+>.F^܍zwi6M<ؿu-p /TSɅF危 $I %qH2( PnU:;7j€ rD!s9/j dUQ<59od8|QR 'L=: g pDb6qe |v?~3KYUW],f_z3_zn`m\Ta}|y`>e0p< N(=Y -c+pVFi>r)p"`EPʧ2MJxAǐRQ$HQK"h&a:c@Yi a<kٮ`ށL+K5s"ҕM*W8[Hb9I, DeCR1YST[!(a2`.圥֤dH*?$Rr E=o_f1ɆwO ςU랛/B.oB&ȯ)G[c~2z v}i0M^U$$yj{>z݇`2~>_o-N?=^-~ tU${ g >-Q<]^W]PSM?޾#L;n*0\0:ř[Yta5"|ތhV&3Z5Hl`&AzzuvE۹^=k 'M3< >9.b1w%dSոKD^Q$uU([hD⟿(<Ӑk=rH޿bEM.1=ru;nju;ދ:G A wWW MTK%GklSkTH(C=6ca4u-F]l4Jƨ[F6;e3tT("~ICNf(֒49) LUJrc ~}21P>DV} %3kN!-N! ;|y< V5և*%!1'ZD)"}kW_f< tz!"IVfspVQa]aRR:Br8"ȦOƺ]?6k5h~=UٍwOCb_]ַnO RJJd.\|4⏊!ͽv4} Y&'-*+RZӊI/.)ߌf7Ķ'!io4ţUy$@C~ {{ -ǝL|`yz6 9!>x9oMz-/5ά=Ӯdž:~1h;MaqwXMyޤ!X5G=m J{Yq<ѹ4}m)!\i?PwX&*tq'!]{ p&펿0+dty ѨX/vrո)Btu"+-cZ[{]XCqZN&N8ٙ Ne`$ 1;kN;'V  BBRAtZ]*Y::b@N됭N@ Na"eJi# Nl RT J HỶ:J08 ŠVG)a*r@4HwK?OiSO6leA@"2cVs@ƱH vUH; $rϫ!pq u'!.OzX~&Q9&Ģbj[n:jk!qx֔o8k4֘ҸI ʲ0%1 A "b9TdbGD~x_?˰YZZlHdчP/J"KI<У褾[T'ȍ"nbDլ,YڢXsW[YʉH:TNS4Hӟ"nt +a=NBGCC= ]! N$f^v L 0*AɂiP Jja3ϖUw둺/:ec!}E\2Cʴ$ȷa/{-$ԟ)-\t5Io9V[pFWf}EZ.^`FWI%A+BRFd"}v9:O*ZKd'7q JUS08Xg(9uZPnr:R DI[D/4[F5D @BÌz `#2@B}N9 I-9~Ku$Vts5Q=GÂ!z GaX_qL&d9Pi5"FX( VʁZTt:-b+d*92Ke ii| 1&~U-3%€Q/Rhv; gW^oFղ&aFz\Pwsn⺛'̧g;~x3 ebw~˱ 54 & FUA\_uՃ1 KEL8Ei[棐i8p&`a&;w *;x;ZiPٙLnWܯ_N=pk<}̕/~RС7խ-~ds Fp_dv5u,o-KLuFO>ñjyn+tz#;T<ҐrͲQ{78R#z\ RL':ޭC~S:nMnMX+7 ^(Q=GnNnuP(JnMnMX+7,(Ls1H16x-{ƕIm{쎦z&,䕛6Eh'3tJDt;ʪ\j8?[d,ӻſ ;=~{qIO/ǣreXL⩺\y.dǯUyM,!dD'sA>U W>H* d#,# Z !m@ "b> PT2f4>܆m|>T٪TD\m`쀥\:4 U(R']aQ!S9¨v\8[&1U~>CeI x1 > PYAC%!|jWIK +Kýq,d"KzI XRroh+V1C6X0!=)/)li=XUY1նʔB cB8*i4eƖ<-*<]]q]#j>0fa/CZ VOeA~_ xIŽY<y/0YJjxSO> %C?h|㱟Q}'\Ke/ܼƊ)HaA.c=">2bB8]{A`1ޮpd:}{?r>1yXFߧt֗LZ 'ȡs|RWsܻ-Z,&$HS(H@Q;/R>d,Ui(BSwbjG8"}of/\ '~>t TB]I!3胄܀w7wTi-zFsTecɝ-zKhe1K䠙R~nlO"p01z .!e7^E0 gWJ"! 
var/home/core/zuul-output/logs/kubelet.log0000644000000000000000005611170615140112004017670 0ustar rootroot
Feb 02 10:38:39 crc systemd[1]: Starting Kubernetes Kubelet...
Feb 02 10:38:39 crc restorecon[4664]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to
system_u:object_r:container_file_t:s0:c574,c582 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 02 
10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c440,c975 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 10:38:39 crc restorecon[4664]: 
/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c4,c22 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 02 10:38:39 crc 
restorecon[4664]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 02 10:38:39 crc restorecon[4664]: 
/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c968,c969 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Feb 02 10:38:39 crc restorecon[4664]: 
/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 
02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 02 10:38:39 
crc restorecon[4664]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 02 10:38:39 crc restorecon[4664]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 02 10:38:39 crc restorecon[4664]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/{..data,operator-config.yaml} not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle{,/..data,/..2025_02_23_05_23_57.573792656,/..2025_02_23_05_23_57.573792656/service-ca.crt,/service-ca.crt} not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle{,/..2025_02_23_05_22_30.3254245399,/..2025_02_23_05_22_30.3254245399/ca-bundle.crt,/..data,/ca-bundle.crt} not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/{136c9b42,98a1575b,cac69136,5deb77a7,2ae53400} not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config{,/..2025_02_23_05_22_30.3608339744,/..2025_02_23_05_22_30.3608339744/operator-config.yaml,/..data,/operator-config.yaml} not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/{e46f2326,dc688d3c,3497c3cd,177eb008} not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config{,/..2025_02_23_05_22_30.3819292994,/..2025_02_23_05_22_30.3819292994/config.yaml,/..data,/config.yaml} not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/{af5a2afa,d780cb1f,49b0f374,26fbb125} not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca{,/..2025_02_23_05_22_30.3244779536,/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem,/..data,/tls-ca-bundle.pem} not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/{cf14125a,b7f86972,e51d739c,88ba6a69,669a9acf,5cd51231,75349ec7,15c26839,45023dcd,2bb66a50} not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/{64d03bdd,ab8e7ca0,bb9be25f} not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca{,/..2025_02_23_05_22_30.2034221258,/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem,/..data,/tls-ca-bundle.pem} not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/{9a0b61d3,d471b9d2,8cb76b8e} not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/{11a00840,ec355a92,992f735e} not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config{,/..2025_02_23_05_22_30.1782968797,/..2025_02_23_05_22_30.1782968797/config.yaml,/..data,/config.yaml} not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/{d59cdbbc,72133ff0,c56c834c,d13724c7,0a498258} not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/{fa471982,fc900d92,fa7d68da} not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/{4bacf9b4,424021b1,fc2e31a3} not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/{f51eefac,c8997f2f,7481f599} not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle{,/..2025_02_23_05_22_49.2255460704,/..2025_02_23_05_22_49.2255460704/ca-bundle.crt,/..data,/ca-bundle.crt} not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/{fdafea19,d0e1c571,ee398915,682bb6b8} not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/{setup/a3e67855,etcd-ensure-env-vars/7796fdab,etcd-resources-copy/5508e3e6,etcdctl/8bc85570,etcd/9e1a6043,etcd-metrics/971cc9f6,etcd-readyz/1c192745,etcd-rev/e7b978ac} not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/{setup/a989f289,etcd-ensure-env-vars/dcdb5f19,etcd-resources-copy/160585de,etcdctl/a5861c91,etcd/c1aba1c2,etcd-metrics/8f2e3dcf,etcd-readyz/5209e501,etcd-rev/c64304a1} not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/{setup/915431bd,etcd-ensure-env-vars/a3aaa88c,etcd-resources-copy/e99f8da3,etcdctl/84db1135,etcd/d55ccd6d,etcd-metrics/ceb35e9c,etcd-readyz/f83de4df,etcd-rev/5384386b} not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/{multus-admission-controller/cce3e3ff,kube-rbac-proxy/740f573e} not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/{multus-admission-controller/8fb75465,kube-rbac-proxy/32fd1134} not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/{0a861bd3,80363026,bfa952a8} not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config{,/..2025_02_23_05_33_31.2122464563,/..2025_02_23_05_33_31.2122464563/config-file.yaml,/..data,/config-file.yaml} not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config{,/..2025_02_23_05_33_31.333075221,/..data} not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/{kube-rbac-proxy/793bf43d,machine-approver-controller/c12c7d86} not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/{kube-rbac-proxy/7db1bb6e,machine-approver-controller/36c4a773,machine-approver-controller/4c1e98ae} not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/{kube-rbac-proxy/4f6a0368,machine-approver-controller/a4c8115c} not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/{setup/7db1802e,kube-apiserver/a008a7ab,kube-apiserver-cert-syncer/2c836bac,kube-apiserver-cert-regeneration-controller/0ce62299,kube-apiserver-insecure-readyz/945d2457,kube-apiserver-check-endpoints/7d5c1dd8} not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities{,/copy-content} not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content{,/catalog} not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/{3scale-operator,advanced-cluster-management,amq-broker-rhel8,amq-online,amq-streams,amq-streams-console,amq7-interconnect-operator,ansible-automation-platform-operator,ansible-cloud-addons-operator,apicast-operator,apicurio-registry-3,authorino-operator,aws-load-balancer-operator,bamoe-businessautomation-operator,bamoe-kogito-operator}{,/catalog.json} not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator{,/index.json} not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/{businessautomation-operator,cephcsi-operator,cincinnati-operator,cluster-kube-descheduler-operator,cluster-logging,cluster-observability-operator,compliance-operator,container-security-operator,costmanagement-metrics-operator,cryostat-operator,datagrid,devspaces,devworkspace-operator,dpu-network-operator,eap,elasticsearch-operator,external-dns-operator,fence-agents-remediation,file-integrity-operator,fuse-apicurito,fuse-console,fuse-online,gatekeeper-operator-product,jaeger-product,jws-operator,kernel-module-management,kernel-module-management-hub,kiali-ossm,kubevirt-hyperconverged,logic-operator-rhel8,loki-operator,lvms-operator,machine-deletion-remediation,mcg-operator,mta-operator,mtc-operator,mtr-operator,mtv-operator,multicluster-engine,netobserv-operator,node-healthcheck-operator,node-maintenance-operator,node-observability-operator,ocs-client-operator,ocs-operator,odf-csi-addons-operator,odf-multicluster-orchestrator,odf-operator,odf-prometheus-operator,odr-cluster-operator,odr-hub-operator}{,/catalog.json} not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator{,/bundle-v1.15.0.json,/channel.json,/package.json} not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/{openshift-custom-metrics-autoscaler-operator,openshift-gitops-operator}{,/catalog.json} not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]:
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c377,c642 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 02 10:38:39 crc restorecon[4664]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 10:38:39 crc restorecon[4664]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c0,c25 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 02 10:38:39 crc restorecon[4664]: 
/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 02 10:38:39 crc restorecon[4664]: 
/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 10:38:39 crc restorecon[4664]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 
10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 10:38:39 crc 
restorecon[4664]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 02 10:38:39 crc restorecon[4664]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Feb 02 10:38:39 crc restorecon[4664]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 02 10:38:39 crc restorecon[4664]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c37,c572 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 02 10:38:39 crc restorecon[4664]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 
10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 
10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 10:38:39 crc 
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 02 10:38:39 crc restorecon[4664]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 02 10:38:39 crc restorecon[4664]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0
Feb 02 10:38:40 crc kubenswrapper[4782]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 02 10:38:40 crc kubenswrapper[4782]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Feb 02 10:38:40 crc kubenswrapper[4782]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 02 10:38:40 crc kubenswrapper[4782]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 02 10:38:40 crc kubenswrapper[4782]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Feb 02 10:38:40 crc kubenswrapper[4782]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.589393    4782 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.594806    4782 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.594825    4782 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.594830    4782 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.594835    4782 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.594839    4782 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.594844    4782 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.594848    4782 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.594854    4782 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.594860    4782 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.594865    4782 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.594875    4782 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.594883    4782 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.594888    4782 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.594894    4782 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.594898    4782 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.594901    4782 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.594905    4782 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.594908    4782 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.594912    4782 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.594917    4782 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.594922    4782 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.594926    4782 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.594930    4782 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.594935    4782 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.594939    4782 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.594951    4782 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.594956    4782 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.594961    4782 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.594965    4782 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.594969    4782 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.594973    4782 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.594977    4782 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.594981    4782 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.594985    4782 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.594989    4782 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.594993    4782 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.594996    4782 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.595001    4782 feature_gate.go:330] unrecognized feature gate: Example
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.595006    4782 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.595010    4782 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.595014    4782 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.595017    4782 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.595021    4782 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.595025    4782 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.595028    4782 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
unrecognized feature gate: NewOLM Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.595037 4782 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.595044 4782 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.595052 4782 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.595058 4782 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.595064 4782 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.595069 4782 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.595074 4782 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.595079 4782 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.595083 4782 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.595088 4782 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.595092 4782 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.595097 4782 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.595101 4782 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.595104 4782 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.595108 4782 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.595111 4782 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.595115 4782 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.595118 4782 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.595123 4782 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.595128 4782 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.595132 4782 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.595135 4782 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.595139 4782 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.595142 4782 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.595146 4782 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595238    4782 flags.go:64] FLAG: --address="0.0.0.0"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595247    4782 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595257    4782 flags.go:64] FLAG: --anonymous-auth="true"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595263    4782 flags.go:64] FLAG: --application-metrics-count-limit="100"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595269    4782 flags.go:64] FLAG: --authentication-token-webhook="false"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595273    4782 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595280    4782 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595286    4782 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595291    4782 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595295    4782 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595300    4782 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595305    4782 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595309    4782 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595313    4782 flags.go:64] FLAG: --cgroup-root=""
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595317    4782 flags.go:64] FLAG: --cgroups-per-qos="true"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595322    4782 flags.go:64] FLAG: --client-ca-file=""
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595326    4782 flags.go:64] FLAG: --cloud-config=""
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595330    4782 flags.go:64] FLAG: --cloud-provider=""
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595334    4782 flags.go:64] FLAG: --cluster-dns="[]"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595340    4782 flags.go:64] FLAG: --cluster-domain=""
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595344    4782 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595348    4782 flags.go:64] FLAG: --config-dir=""
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595352    4782 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595358    4782 flags.go:64] FLAG: --container-log-max-files="5"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595364    4782 flags.go:64] FLAG: --container-log-max-size="10Mi"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595369    4782 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595376    4782 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595381    4782 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595385    4782 flags.go:64] FLAG: --contention-profiling="false"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595389    4782 flags.go:64] FLAG: --cpu-cfs-quota="true"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595394    4782 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595399    4782 flags.go:64] FLAG: --cpu-manager-policy="none"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595404    4782 flags.go:64] FLAG: --cpu-manager-policy-options=""
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595409    4782 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595414    4782 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595418    4782 flags.go:64] FLAG: --enable-debugging-handlers="true"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595424    4782 flags.go:64] FLAG: --enable-load-reader="false"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595429    4782 flags.go:64] FLAG: --enable-server="true"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595434    4782 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595441    4782 flags.go:64] FLAG: --event-burst="100"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595445    4782 flags.go:64] FLAG: --event-qps="50"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595450    4782 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595455    4782 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595461    4782 flags.go:64] FLAG: --eviction-hard=""
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595472    4782 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595477    4782 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595482    4782 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595486    4782 flags.go:64] FLAG: --eviction-soft=""
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595491    4782 flags.go:64] FLAG: --eviction-soft-grace-period=""
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595495    4782 flags.go:64] FLAG: --exit-on-lock-contention="false"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595500    4782 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595505    4782 flags.go:64] FLAG: --experimental-mounter-path=""
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595509    4782 flags.go:64] FLAG: --fail-cgroupv1="false"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595513    4782 flags.go:64] FLAG: --fail-swap-on="true"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595518    4782 flags.go:64] FLAG: --feature-gates=""
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595524    4782 flags.go:64] FLAG: --file-check-frequency="20s"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595528    4782 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595533    4782 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595538    4782 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595543    4782 flags.go:64] FLAG: --healthz-port="10248"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595548    4782 flags.go:64] FLAG: --help="false"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595552    4782 flags.go:64] FLAG: --hostname-override=""
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595557    4782 flags.go:64] FLAG: --housekeeping-interval="10s"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595564    4782 flags.go:64] FLAG: --http-check-frequency="20s"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595569    4782 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595574    4782 flags.go:64] FLAG: --image-credential-provider-config=""
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595579    4782 flags.go:64] FLAG: --image-gc-high-threshold="85"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595583    4782 flags.go:64] FLAG: --image-gc-low-threshold="80"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595588    4782 flags.go:64] FLAG: --image-service-endpoint=""
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595592    4782 flags.go:64] FLAG: --kernel-memcg-notification="false"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595596    4782 flags.go:64] FLAG: --kube-api-burst="100"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595601    4782 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595606    4782 flags.go:64] FLAG: --kube-api-qps="50"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595610    4782 flags.go:64] FLAG: --kube-reserved=""
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595614    4782 flags.go:64] FLAG: --kube-reserved-cgroup=""
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595619    4782 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595623    4782 flags.go:64] FLAG: --kubelet-cgroups=""
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595627    4782 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595632    4782 flags.go:64] FLAG: --lock-file=""
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595653    4782 flags.go:64] FLAG: --log-cadvisor-usage="false"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595658    4782 flags.go:64] FLAG: --log-flush-frequency="5s"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595663    4782 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595670    4782 flags.go:64] FLAG: --log-json-split-stream="false"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595676    4782 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595682    4782 flags.go:64] FLAG: --log-text-split-stream="false"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595695    4782 flags.go:64] FLAG: --logging-format="text"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595702    4782 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595708    4782 flags.go:64] FLAG: --make-iptables-util-chains="true"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595714    4782 flags.go:64] FLAG: --manifest-url=""
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595719    4782 flags.go:64] FLAG: --manifest-url-header=""
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595727    4782 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595732    4782 flags.go:64] FLAG: --max-open-files="1000000"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595739    4782 flags.go:64] FLAG: --max-pods="110"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595745    4782 flags.go:64] FLAG: --maximum-dead-containers="-1"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595750    4782 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595756    4782 flags.go:64] FLAG: --memory-manager-policy="None"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595761    4782 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595767    4782 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595772    4782 flags.go:64] FLAG: --node-ip="192.168.126.11"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595777    4782 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595788    4782 flags.go:64] FLAG: --node-status-max-images="50"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595792    4782 flags.go:64] FLAG: --node-status-update-frequency="10s"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595797    4782 flags.go:64] FLAG: --oom-score-adj="-999"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595802    4782 flags.go:64] FLAG: --pod-cidr=""
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595806    4782 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595813    4782 flags.go:64] FLAG: --pod-manifest-path=""
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595818    4782 flags.go:64] FLAG: --pod-max-pids="-1"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595822    4782 flags.go:64] FLAG: --pods-per-core="0"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595827    4782 flags.go:64] FLAG: --port="10250"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595831    4782 flags.go:64] FLAG: --protect-kernel-defaults="false"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595836    4782 flags.go:64] FLAG: --provider-id=""
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595840    4782 flags.go:64] FLAG: --qos-reserved=""
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595844    4782 flags.go:64] FLAG: --read-only-port="10255"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595849    4782 flags.go:64] FLAG: --register-node="true"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595853    4782 flags.go:64] FLAG: --register-schedulable="true"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595860    4782 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595877    4782 flags.go:64] FLAG: --registry-burst="10"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595884    4782 flags.go:64] FLAG: --registry-qps="5"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595891    4782 flags.go:64] FLAG: --reserved-cpus=""
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595896    4782 flags.go:64] FLAG: --reserved-memory=""
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595904    4782 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595908    4782 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595913    4782 flags.go:64] FLAG: --rotate-certificates="false"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595917    4782 flags.go:64] FLAG: --rotate-server-certificates="false"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595923    4782 flags.go:64] FLAG: --runonce="false"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595927    4782 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595932    4782 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595937    4782 flags.go:64] FLAG: --seccomp-default="false"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595941    4782 flags.go:64] FLAG: --serialize-image-pulls="true"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595946    4782 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595951    4782 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595956    4782 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595961    4782 flags.go:64] FLAG: --storage-driver-password="root"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595965    4782 flags.go:64] FLAG: --storage-driver-secure="false"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595969    4782 flags.go:64] FLAG: --storage-driver-table="stats"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595974    4782 flags.go:64] FLAG: --storage-driver-user="root"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595978    4782 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595983    4782 flags.go:64] FLAG: --sync-frequency="1m0s"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595987    4782 flags.go:64] FLAG: --system-cgroups=""
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595991    4782 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.595998    4782 flags.go:64] FLAG: --system-reserved-cgroup=""
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.596003    4782 flags.go:64] FLAG: --tls-cert-file=""
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.596007    4782 flags.go:64] FLAG: --tls-cipher-suites="[]"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.596015    4782 flags.go:64] FLAG: --tls-min-version=""
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.596019    4782 flags.go:64] FLAG: --tls-private-key-file=""
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.596023    4782 flags.go:64] FLAG: --topology-manager-policy="none"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.596027    4782 flags.go:64] FLAG: --topology-manager-policy-options=""
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.596032    4782 flags.go:64] FLAG: --topology-manager-scope="container"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.596037    4782 flags.go:64] FLAG: --v="2"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.596044    4782 flags.go:64] FLAG: --version="false"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.596050    4782 flags.go:64] FLAG: --vmodule=""
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.596057    4782 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.596061    4782 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.596162    4782 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.596166    4782 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.596171    4782 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.596175    4782 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.596179    4782 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.596183    4782 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.596186    4782 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.596190    4782 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.596193    4782 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.596197    4782 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.596201    4782 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.596204    4782 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.596208    4782 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.596212    4782 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.596216    4782 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.596219    4782 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.596223    4782 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.596226    4782 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.596230    4782 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.596233    4782 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.596237    4782 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.596240    4782 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.596244    4782 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.596247    4782 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.596251    4782 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.596255    4782 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.596258    4782 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.596262    4782 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.596267    4782 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.596272    4782 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.596276    4782 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.596280    4782 feature_gate.go:330] unrecognized feature gate: Example
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.596285    4782 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.596290    4782 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.596295    4782 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.596299    4782 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.596303    4782 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.596307    4782 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.596311    4782 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.596315    4782 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.596319    4782 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.596322    4782 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.596326    4782 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.596329    4782 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.596333    4782 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.596337    4782 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.596340    4782 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.596343    4782 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.596347    4782 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.596351    4782 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.596355    4782 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.596359    4782 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.596363    4782 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.596366    4782 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.596370    4782 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.596374    4782 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.596378    4782 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.596382    4782 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.596386    4782 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.596389    4782 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.596402    4782 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.596407    4782 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.596411    4782 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.596415    4782 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.596420    4782 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.596424    4782 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.596428    4782 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.596431    4782 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.596435    4782 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.596439    4782 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.596444    4782 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.596451    4782 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.608414    4782 server.go:491] "Kubelet version" kubeletVersion="v1.31.5"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.608459    4782 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.608528    4782 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.608536    4782 feature_gate.go:330] unrecognized feature gate: Example
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.608542    4782 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.608547    4782 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.608550    4782 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.608557    4782 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.608561    4782 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.608566    4782 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.608570    4782 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.608574    4782 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.608578    4782 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.608583    4782 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.608589    4782 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.608594    4782 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.608598    4782 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.608602    4782 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.608606    4782 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.608612    4782 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.608618    4782 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.608624    4782 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.608631    4782 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.608651    4782 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.608657    4782 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.608662    4782 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.608666    4782 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.608670    4782 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.608674    4782 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.608678    4782 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.608682    4782 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.608687    4782 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.608692    4782 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.608697    4782 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.608701    4782 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.608707    4782 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.608713    4782 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.608717    4782 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.608721    4782 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.608724    4782 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.608728    4782 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.608732    4782 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.608736    4782 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.608740    4782 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.608743    4782 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.608749    4782 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.608753    4782 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.608757    4782 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.608762    4782 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.608766    4782 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.608770    4782 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.608775    4782 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.608779    4782 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.608783    4782 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.608788    4782 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.608792    4782 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.608795    4782 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.608800    4782 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.608803    4782 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.608807    4782 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.608811    4782 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.608815    4782 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.608819    4782 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.608823    4782 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.608827    4782 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.608831    4782 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.608835    4782 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.608841    4782 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.608845    4782 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.608849    4782 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.608853    4782 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.608857    4782 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.608862    4782 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.608869    4782 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.608997    4782 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.609005    4782 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.609012    4782 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.609018    4782 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.609025    4782 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.609030    4782 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.609035    4782 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.609039    4782 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.609045    4782 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.609050    4782 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.609055    4782 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.609059    4782 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.609063    4782 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.609068    4782 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.609073    4782 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.609078    4782 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.609082    4782 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.609087    4782 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.609091    4782 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.609095    4782 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.609099    4782 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.609103    4782 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.609107    4782 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.609111    4782 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.609114    4782 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.609119    4782 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.609124    4782 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.609128    4782 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.609131    4782 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.609136    4782 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.609139    4782 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.609143    4782 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.609147    4782 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.609151    4782 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.609156    4782 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.609160    4782 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.609165    4782 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.609169    4782 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.609173    4782 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.609177    4782 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.609180    4782 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.609184    4782 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.609188    4782 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.609192    4782 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.609195    4782 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.609200    4782 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.609203    4782 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.609207    4782 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.609211    4782 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.609215    4782 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.609220    4782 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.609225    4782 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.609230    4782 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.609234    4782 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.609237    4782 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.609241    4782 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.609245    4782 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.609250    4782 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.609254    4782 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.609258    4782 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.609261    4782 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.609265    4782 feature_gate.go:330] unrecognized feature gate: Example
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.609269    4782 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.609273    4782 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.609276    4782 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.609280    4782 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.609285    4782 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.609290    4782 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.609294    4782 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.609299    4782 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.609305    4782 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.609313    4782 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.609545    4782 server.go:940] "Client rotation is on, will bootstrap in background"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.614053    4782 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.614217    4782 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.616195    4782 server.go:997] "Starting client certificate rotation"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.616224    4782 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.617397    4782 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-12-19 16:00:12.029869556 +0000 UTC
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.617554    4782 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.646959    4782 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Feb 02 10:38:40 crc kubenswrapper[4782]: E0202 10:38:40.650168    4782 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.650917    4782 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.666571    4782 log.go:25] "Validated CRI v1 runtime API"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.703533    4782 log.go:25] "Validated CRI v1 image API"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.705629    4782 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.711496    4782 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-02-02-10-32-28-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3]
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.711545    4782 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}]
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.724131    4782 manager.go:217] Machine: {Timestamp:2026-02-02 10:38:40.721559158 +0000 UTC m=+0.605751884 CPUVendorID:AuthenticAMD NumCores:8 NumPhysicalCores:1 NumSockets:8 CpuFrequency:2800000 MemoryCapacity:25199480832 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:b85e9547-662e-4455-bbaa-2d2f2aaad904 BootID:9f06aea5-54f4-4b11-8fec-22fbe76ec89b Filesystems:[{Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:2519945216 Type:vfs Inodes:615221 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:3076108 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:12599738368 Type:vfs Inodes:3076108 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:5039898624 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:12599742464 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:429496729600 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:9d:ee:be Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:9d:ee:be Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:bb:86:68 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:d0:22:e2 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:f7:8c:d8 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:bf:a9:f8 Speed:-1 Mtu:1496} {Name:ens7.23 MacAddress:52:54:00:af:55:de Speed:-1 Mtu:1496} {Name:eth10 MacAddress:d2:11:6b:87:92:1a Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:a6:9a:8d:f2:ed:24 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:25199480832 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.724330    4782 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.724555    4782 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.724903    4782 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.725121    4782 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.725167    4782 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.725361    4782 topology_manager.go:138] "Creating topology manager with none policy"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.725371    4782 container_manager_linux.go:303] "Creating device plugin manager"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.725897    4782 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.725927    4782 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.726201    4782 state_mem.go:36] "Initialized new in-memory state store"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.726291    4782 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.730299    4782 kubelet.go:418] "Attempting to sync node with API server"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.730332    4782 kubelet.go:313]
"Adding static pod path" path="/etc/kubernetes/manifests" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.730360 4782 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.730374 4782 kubelet.go:324] "Adding apiserver pod source" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.730388 4782 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.735458 4782 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.736408 4782 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.147:6443: connect: connection refused Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.736413 4782 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.147:6443: connect: connection refused Feb 02 10:38:40 crc kubenswrapper[4782]: E0202 10:38:40.736505 4782 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Feb 02 10:38:40 crc kubenswrapper[4782]: E0202 10:38:40.736524 4782 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.737447 4782 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". 
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.742263 4782 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.744145 4782 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.744190 4782 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.744207 4782 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.744221 4782 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.744244 4782 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.744259 4782 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.744276 4782 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.744299 4782 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.744318 4782 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.744332 4782 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.744354 4782 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.744368 4782 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.746755 4782 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.747952 4782 server.go:1280] "Started kubelet"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.748421 4782 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.147:6443: connect: connection refused
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.750259 4782 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.750252 4782 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Feb 02 10:38:40 crc systemd[1]: Started Kubernetes Kubelet.
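With "Started kubelet" logged, two serving endpoints come up: the main kubelet API on 0.0.0.0:10250 and a local gRPC service for pod resources, which the ratelimit line above caps at qps=100 with 10 burst tokens. The next entry shows its unix socket; a minimal consumer of that API (a sketch assuming the google.golang.org/grpc and k8s.io/kubelet modules, run as root on the node) could look like:

    package main

    import (
        "context"
        "fmt"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        podresourcesv1 "k8s.io/kubelet/pkg/apis/podresources/v1"
    )

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
        defer cancel()
        // Socket path as logged by server.go:236 below; local, unauthenticated gRPC.
        conn, err := grpc.DialContext(ctx, "unix:///var/lib/kubelet/pod-resources/kubelet.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()), grpc.WithBlock())
        if err != nil {
            panic(err)
        }
        defer conn.Close()

        // List every pod the kubelet knows about along with its container resources.
        client := podresourcesv1.NewPodResourcesListerClient(conn)
        resp, err := client.List(ctx, &podresourcesv1.ListPodResourcesRequest{})
        if err != nil {
            panic(err)
        }
        for _, p := range resp.GetPodResources() {
            fmt.Printf("%s/%s: %d containers\n", p.GetNamespace(), p.GetName(), len(p.GetContainers()))
        }
    }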
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.751102 4782 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.753154 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.753398 4782 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.759375 4782 volume_manager.go:287] "The desired_state_of_world populator starts"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.759401 4782 volume_manager.go:289] "Starting Kubelet Volume Manager"
Feb 02 10:38:40 crc kubenswrapper[4782]: E0202 10:38:40.759592 4782 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.759896 4782 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.760751 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 05:05:35.587244519 +0000 UTC
Feb 02 10:38:40 crc kubenswrapper[4782]: E0202 10:38:40.759165 4782 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.147:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.189067be566ae12c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 10:38:40.747897132 +0000 UTC m=+0.632089888,LastTimestamp:2026-02-02 10:38:40.747897132 +0000 UTC m=+0.632089888,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 02 10:38:40 crc kubenswrapper[4782]: E0202 10:38:40.760787 4782 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.147:6443: connect: connection refused" interval="200ms"
Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.762583 4782 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.147:6443: connect: connection refused
Feb 02 10:38:40 crc kubenswrapper[4782]: E0202 10:38:40.762766 4782 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.768190 4782 server.go:460] "Adding debug handlers to kubelet server"
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.769539 4782 factory.go:55] Registering systemd factory
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.769590 4782 factory.go:221] Registration of the systemd container factory successfully
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.772084 4782 factory.go:153] Registering CRI-O factory
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.772216 4782 factory.go:221] Registration of the crio container factory successfully
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.772505 4782 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.772607 4782 factory.go:103] Registering Raw factory
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.772703 4782 manager.go:1196] Started watching for new ooms in manager
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.773787 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext=""
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.773850 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext=""
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.773866 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext=""
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.773881 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext=""
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.773893 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext=""
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.773930 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext=""
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.773945 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext=""
Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.773958 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext=""
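The flood of reconstruct.go:130 entries that follows is the volume manager rebuilding its actual state of the world after the restart: every mount it finds under the pod directories on disk is added as "uncertain" until it can be verified once the API server is reachable. A rough sketch of the directory scan involved (paths are the kubelet defaults; the real reconstructor also parses vol_data.json and plugin-specific metadata, which this sketch omits):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func main() {
        // Default kubelet root; matches the "Using root directory" entry above.
        root := "/var/lib/kubelet/pods"
        pods, err := os.ReadDir(root)
        if err != nil {
            panic(err)
        }
        for _, pod := range pods {
            // Plugin directories use the escaped form, e.g. kubernetes.io~secret.
            volRoot := filepath.Join(root, pod.Name(), "volumes")
            plugins, err := os.ReadDir(volRoot)
            if err != nil {
                continue // pod has no volumes directory
            }
            for _, plug := range plugins {
                vols, _ := os.ReadDir(filepath.Join(volRoot, plug.Name()))
                for _, v := range vols {
                    // Each hit corresponds roughly to one "Volume is marked as
                    // uncertain" log entry: pod UID + plugin + volume name.
                    fmt.Printf("pod=%s plugin=%s volume=%s\n", pod.Name(), plug.Name(), v.Name())
                }
            }
        }
    }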
podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.773990 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.774004 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.774018 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.774032 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.774093 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.774107 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.774120 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.774167 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.774182 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.774194 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.774207 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" 
volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.774226 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.774240 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.774254 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.774269 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.774284 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.774301 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.774323 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.774341 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.774361 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.774377 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.774394 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" 
volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.774445 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.774463 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.774481 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.774499 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.774514 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.774531 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.774545 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.774560 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.774575 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.774589 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.774603 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" 
volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.774618 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.774633 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.774669 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.774687 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.774701 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.774715 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.774731 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.774745 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.774761 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.774773 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.774795 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" 
volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.774814 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.774829 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.774844 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.774859 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.774880 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.774894 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.774909 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.774923 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.774939 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.774954 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.774960 4782 manager.go:319] Starting recovery of all containers Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.774969 4782 reconstruct.go:130] "Volume is marked as uncertain and added 
into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.777496 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.778456 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.779508 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.779550 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.779572 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.782559 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.782648 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.782729 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.782830 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.782897 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.782968 4782 reconstruct.go:130] "Volume is marked as uncertain and 
added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.783055 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.783160 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.783228 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.783287 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.783352 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.783416 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.783481 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.783545 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.783609 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.783699 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.783762 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.783833 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.783900 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.784014 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.784095 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.784156 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.784227 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.784287 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.784342 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.784398 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.784455 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.784513 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.784581 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.784652 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.784716 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.784777 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.784833 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.784898 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.784958 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.785058 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.785126 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.785185 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.785249 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.785307 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.785364 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.785422 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.785477 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.785536 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.785612 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.785686 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.785744 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.785800 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.785856 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.785926 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.785988 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.786042 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.786098 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.786179 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.786245 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.786303 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.786358 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.786414 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.786471 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.786534 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.786602 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" 
volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.786727 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.786802 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.786871 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.786932 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.791303 4782 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.791391 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.791425 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.791451 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.791474 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.791498 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.791525 4782 reconstruct.go:130] "Volume is 
marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.791551 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.791575 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.791599 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.791626 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.791687 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.791718 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.791766 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.791798 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.791831 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.791861 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.791889 4782 reconstruct.go:130] "Volume is marked as uncertain and added 
into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.791915 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.791945 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.791974 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.792004 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.792033 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.792064 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.792094 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.792125 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.792382 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.792434 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.792458 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.792480 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.792502 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.792525 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.792548 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.792569 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.792590 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.792612 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.792632 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.792691 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.792744 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.792770 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.792798 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.792827 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.792864 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.792890 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.792920 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.792950 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.792979 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.793011 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.793041 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.793072 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.793102 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.793131 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.793162 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.793191 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.793217 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.793246 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.793273 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.793302 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.793332 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.793365 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.793396 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.793429 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" 
volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.793458 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.793489 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.793523 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.793554 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.793585 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.793631 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.793695 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.793729 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.793758 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.793788 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.793822 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" 
volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.793856 4782 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.793899 4782 reconstruct.go:97] "Volume reconstruction finished" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.793918 4782 reconciler.go:26] "Reconciler: start to sync state" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.799960 4782 manager.go:324] Recovery completed Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.810568 4782 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.815224 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.815267 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.815288 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.817481 4782 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.819500 4782 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.819615 4782 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.819734 4782 kubelet.go:2335] "Starting kubelet main sync loop" Feb 02 10:38:40 crc kubenswrapper[4782]: E0202 10:38:40.819980 4782 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 02 10:38:40 crc kubenswrapper[4782]: W0202 10:38:40.822076 4782 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.147:6443: connect: connection refused Feb 02 10:38:40 crc kubenswrapper[4782]: E0202 10:38:40.822324 4782 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.824337 4782 cpu_manager.go:225] "Starting CPU manager" policy="none" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.824435 4782 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.824515 4782 state_mem.go:36] "Initialized new in-memory state store" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.844727 4782 policy_none.go:49] "None policy: Start" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.846176 4782 memory_manager.go:170] "Starting 
memorymanager" policy="None" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.846217 4782 state_mem.go:35] "Initializing new in-memory state store" Feb 02 10:38:40 crc kubenswrapper[4782]: E0202 10:38:40.860439 4782 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.894803 4782 manager.go:334] "Starting Device Plugin manager" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.894859 4782 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.894870 4782 server.go:79] "Starting device plugin registration server" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.895343 4782 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.895358 4782 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.895751 4782 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.895832 4782 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.895851 4782 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 02 10:38:40 crc kubenswrapper[4782]: E0202 10:38:40.905354 4782 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.920752 4782 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.920853 4782 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.922900 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.922935 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.922947 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.923050 4782 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.923468 4782 util.go:30] "No sandbox for pod can be found. 
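The "SyncLoop ADD" with source="file" above is the static pod path: these five control plane pods come from manifests on the node's local disk rather than from the API server, which is exactly why the kubelet can start etcd and kube-apiserver while every API call is still being refused. The "No sandbox for pod can be found" lines that follow just mean the freshly restarted runtime has no existing sandboxes for them. A sketch of the file-source idea, assuming the conventional staticPodPath (the real path comes from the kubelet configuration, not from this log):

// staticpods_sketch.go: list manifests that would feed "SyncLoop ADD" source="file".
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	dir := "/etc/kubernetes/manifests" // assumed default staticPodPath
	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	for _, e := range entries {
		if e.IsDir() {
			continue
		}
		// Each manifest file becomes one ADD event with source="file".
		fmt.Println("static pod manifest:", filepath.Join(dir, e.Name()))
	}
}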
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.923553 4782 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.923931 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.923962 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.923977 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.924089 4782 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.924272 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.924332 4782 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.924661 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.924691 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.924704 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.924822 4782 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.924897 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.924920 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.924929 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.925283 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.925322 4782 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.925562 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.925581 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.925589 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.925726 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.925736 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.925745 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.925841 4782 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.925926 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.925953 4782 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.926245 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.926263 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.926272 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.926391 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.926429 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.926443 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.926582 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.926611 4782 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.926898 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.926917 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.926926 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.927403 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.927424 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.927432 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:38:40 crc kubenswrapper[4782]: E0202 10:38:40.961487 4782 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.147:6443: connect: connection refused" interval="400ms" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.997359 4782 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.997417 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.997468 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.997495 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.997516 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.997538 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: 
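The controller.go:145 entry above is the node-lease heartbeat failing while the API server is down, with a retry interval of 400ms; the same error appears later in this log (at 10:38:41.362955) with interval="800ms", consistent with a doubling backoff between attempts. An illustrative retry loop with that shape, not the controller's actual code:

// lease_backoff_sketch.go: doubling retry interval, as suggested by the log.
package main

import (
	"errors"
	"fmt"
	"time"
)

// ensureLease is a hypothetical stand-in for the real API call; it fails
// while the API server is unreachable.
func ensureLease() error { return errors.New("connect: connection refused") }

func main() {
	interval := 400 * time.Millisecond // first retry interval seen in the log
	maxInterval := 7 * time.Second     // cap is an assumption, not from this log
	for attempt := 1; attempt <= 5; attempt++ {
		if err := ensureLease(); err != nil {
			fmt.Printf("lease attempt %d failed (%v), retrying in %s\n", attempt, err, interval)
			time.Sleep(interval)
			if interval *= 2; interval > maxInterval {
				interval = maxInterval
			}
			continue
		}
		fmt.Println("lease ensured")
		return
	}
}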
\"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.997559 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.997579 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.997601 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.997688 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.997759 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.997814 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.997855 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.997901 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.997938 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: 
\"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 10:38:40 crc kubenswrapper[4782]: I0202 10:38:40.998005 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 10:38:41 crc kubenswrapper[4782]: I0202 10:38:41.000578 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:38:41 crc kubenswrapper[4782]: I0202 10:38:41.000619 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:38:41 crc kubenswrapper[4782]: I0202 10:38:41.000632 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:38:41 crc kubenswrapper[4782]: I0202 10:38:41.000683 4782 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 02 10:38:41 crc kubenswrapper[4782]: E0202 10:38:41.001123 4782 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.147:6443: connect: connection refused" node="crc" Feb 02 10:38:41 crc kubenswrapper[4782]: I0202 10:38:41.098894 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 10:38:41 crc kubenswrapper[4782]: I0202 10:38:41.098945 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 10:38:41 crc kubenswrapper[4782]: I0202 10:38:41.098966 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 10:38:41 crc kubenswrapper[4782]: I0202 10:38:41.098981 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 10:38:41 crc kubenswrapper[4782]: I0202 10:38:41.099001 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 02 10:38:41 crc kubenswrapper[4782]: I0202 10:38:41.099018 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: 
\"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 02 10:38:41 crc kubenswrapper[4782]: I0202 10:38:41.099033 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 10:38:41 crc kubenswrapper[4782]: I0202 10:38:41.099049 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 10:38:41 crc kubenswrapper[4782]: I0202 10:38:41.099064 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 10:38:41 crc kubenswrapper[4782]: I0202 10:38:41.099078 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 10:38:41 crc kubenswrapper[4782]: I0202 10:38:41.099093 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 02 10:38:41 crc kubenswrapper[4782]: I0202 10:38:41.099109 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 02 10:38:41 crc kubenswrapper[4782]: I0202 10:38:41.099118 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 10:38:41 crc kubenswrapper[4782]: I0202 10:38:41.099163 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 10:38:41 crc kubenswrapper[4782]: I0202 10:38:41.099201 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 10:38:41 crc kubenswrapper[4782]: I0202 10:38:41.099135 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 10:38:41 crc kubenswrapper[4782]: I0202 10:38:41.099149 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 02 10:38:41 crc kubenswrapper[4782]: I0202 10:38:41.099232 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 10:38:41 crc kubenswrapper[4782]: I0202 10:38:41.099241 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 10:38:41 crc kubenswrapper[4782]: I0202 10:38:41.099279 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 10:38:41 crc kubenswrapper[4782]: I0202 10:38:41.099189 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 10:38:41 crc kubenswrapper[4782]: I0202 10:38:41.099196 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 02 10:38:41 crc kubenswrapper[4782]: I0202 10:38:41.099210 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 10:38:41 crc kubenswrapper[4782]: I0202 10:38:41.099345 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 10:38:41 crc kubenswrapper[4782]: I0202 10:38:41.099347 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 02 10:38:41 crc kubenswrapper[4782]: I0202 10:38:41.099361 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 02 10:38:41 crc kubenswrapper[4782]: I0202 10:38:41.099355 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 10:38:41 crc kubenswrapper[4782]: I0202 10:38:41.099383 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 10:38:41 crc kubenswrapper[4782]: I0202 10:38:41.099516 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 10:38:41 crc kubenswrapper[4782]: I0202 10:38:41.099130 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 10:38:41 crc kubenswrapper[4782]: I0202 10:38:41.202032 4782 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 10:38:41 crc kubenswrapper[4782]: I0202 10:38:41.203668 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:38:41 crc kubenswrapper[4782]: I0202 10:38:41.203725 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:38:41 crc kubenswrapper[4782]: I0202 10:38:41.203736 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:38:41 crc kubenswrapper[4782]: I0202 10:38:41.203760 4782 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 02 10:38:41 crc kubenswrapper[4782]: E0202 10:38:41.204069 4782 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.147:6443: connect: connection refused" node="crc" Feb 02 10:38:41 crc kubenswrapper[4782]: I0202 10:38:41.254002 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 02 10:38:41 crc kubenswrapper[4782]: I0202 10:38:41.260545 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 02 10:38:41 crc kubenswrapper[4782]: I0202 10:38:41.281684 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 10:38:41 crc kubenswrapper[4782]: I0202 10:38:41.307984 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 10:38:41 crc kubenswrapper[4782]: I0202 10:38:41.313519 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 02 10:38:41 crc kubenswrapper[4782]: W0202 10:38:41.314672 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-d2dfd48b7a8991519e79cf891212615207531c563922d7ea8469e2a2a9528eb2 WatchSource:0}: Error finding container d2dfd48b7a8991519e79cf891212615207531c563922d7ea8469e2a2a9528eb2: Status 404 returned error can't find the container with id d2dfd48b7a8991519e79cf891212615207531c563922d7ea8469e2a2a9528eb2 Feb 02 10:38:41 crc kubenswrapper[4782]: W0202 10:38:41.315130 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-d719f76f70a8a4073068478013e93bb109862e474c549a8b98071fa332d723de WatchSource:0}: Error finding container d719f76f70a8a4073068478013e93bb109862e474c549a8b98071fa332d723de: Status 404 returned error can't find the container with id d719f76f70a8a4073068478013e93bb109862e474c549a8b98071fa332d723de Feb 02 10:38:41 crc kubenswrapper[4782]: W0202 10:38:41.320001 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-a64c6ae9b4c805bb828af3c4df452a41e97a76c50cef1fa957bda220f2c22661 WatchSource:0}: Error finding container a64c6ae9b4c805bb828af3c4df452a41e97a76c50cef1fa957bda220f2c22661: Status 404 returned error can't find the container with id a64c6ae9b4c805bb828af3c4df452a41e97a76c50cef1fa957bda220f2c22661 Feb 02 10:38:41 crc kubenswrapper[4782]: W0202 10:38:41.323903 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-903fb7d89ee39c80c4e9fdbefe41d47c94527055ae8d99cf00d11c7467c54b8f WatchSource:0}: Error finding container 903fb7d89ee39c80c4e9fdbefe41d47c94527055ae8d99cf00d11c7467c54b8f: Status 404 returned error can't find the container with id 903fb7d89ee39c80c4e9fdbefe41d47c94527055ae8d99cf00d11c7467c54b8f Feb 02 10:38:41 crc kubenswrapper[4782]: W0202 10:38:41.330374 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-13fad8c88b566c8e02ec0d97c7debb99483ec5a33c6a0841c33e1c17304a68e5 WatchSource:0}: Error finding container 13fad8c88b566c8e02ec0d97c7debb99483ec5a33c6a0841c33e1c17304a68e5: Status 404 returned error can't find the container with id 13fad8c88b566c8e02ec0d97c7debb99483ec5a33c6a0841c33e1c17304a68e5 Feb 02 10:38:41 crc kubenswrapper[4782]: E0202 10:38:41.362955 4782 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.147:6443: connect: connection refused" interval="800ms" Feb 02 10:38:41 crc kubenswrapper[4782]: W0202 10:38:41.588324 4782 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get 
"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.147:6443: connect: connection refused Feb 02 10:38:41 crc kubenswrapper[4782]: E0202 10:38:41.588420 4782 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Feb 02 10:38:41 crc kubenswrapper[4782]: I0202 10:38:41.604705 4782 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 10:38:41 crc kubenswrapper[4782]: I0202 10:38:41.606156 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:38:41 crc kubenswrapper[4782]: I0202 10:38:41.606191 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:38:41 crc kubenswrapper[4782]: I0202 10:38:41.606200 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:38:41 crc kubenswrapper[4782]: I0202 10:38:41.606221 4782 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 02 10:38:41 crc kubenswrapper[4782]: E0202 10:38:41.606518 4782 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.147:6443: connect: connection refused" node="crc" Feb 02 10:38:41 crc kubenswrapper[4782]: I0202 10:38:41.750623 4782 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.147:6443: connect: connection refused Feb 02 10:38:41 crc kubenswrapper[4782]: I0202 10:38:41.761681 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 20:29:57.806259927 +0000 UTC Feb 02 10:38:41 crc kubenswrapper[4782]: I0202 10:38:41.824581 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"a64c6ae9b4c805bb828af3c4df452a41e97a76c50cef1fa957bda220f2c22661"} Feb 02 10:38:41 crc kubenswrapper[4782]: I0202 10:38:41.825382 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"d719f76f70a8a4073068478013e93bb109862e474c549a8b98071fa332d723de"} Feb 02 10:38:41 crc kubenswrapper[4782]: I0202 10:38:41.826081 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"d2dfd48b7a8991519e79cf891212615207531c563922d7ea8469e2a2a9528eb2"} Feb 02 10:38:41 crc kubenswrapper[4782]: I0202 10:38:41.826754 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"13fad8c88b566c8e02ec0d97c7debb99483ec5a33c6a0841c33e1c17304a68e5"} Feb 02 10:38:41 crc kubenswrapper[4782]: I0202 10:38:41.827361 4782 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"903fb7d89ee39c80c4e9fdbefe41d47c94527055ae8d99cf00d11c7467c54b8f"} Feb 02 10:38:41 crc kubenswrapper[4782]: W0202 10:38:41.853574 4782 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.147:6443: connect: connection refused Feb 02 10:38:41 crc kubenswrapper[4782]: E0202 10:38:41.853873 4782 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Feb 02 10:38:42 crc kubenswrapper[4782]: W0202 10:38:42.153432 4782 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.147:6443: connect: connection refused Feb 02 10:38:42 crc kubenswrapper[4782]: E0202 10:38:42.153563 4782 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Feb 02 10:38:42 crc kubenswrapper[4782]: E0202 10:38:42.163581 4782 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.147:6443: connect: connection refused" interval="1.6s" Feb 02 10:38:42 crc kubenswrapper[4782]: W0202 10:38:42.210414 4782 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.147:6443: connect: connection refused Feb 02 10:38:42 crc kubenswrapper[4782]: E0202 10:38:42.210523 4782 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Feb 02 10:38:42 crc kubenswrapper[4782]: I0202 10:38:42.406954 4782 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 10:38:42 crc kubenswrapper[4782]: I0202 10:38:42.408194 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:38:42 crc kubenswrapper[4782]: I0202 10:38:42.408256 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:38:42 crc kubenswrapper[4782]: I0202 10:38:42.408266 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:38:42 crc kubenswrapper[4782]: I0202 10:38:42.408285 4782 
kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 02 10:38:42 crc kubenswrapper[4782]: E0202 10:38:42.408821 4782 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.147:6443: connect: connection refused" node="crc" Feb 02 10:38:42 crc kubenswrapper[4782]: I0202 10:38:42.717971 4782 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 02 10:38:42 crc kubenswrapper[4782]: E0202 10:38:42.718959 4782 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Feb 02 10:38:42 crc kubenswrapper[4782]: I0202 10:38:42.749879 4782 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.147:6443: connect: connection refused Feb 02 10:38:42 crc kubenswrapper[4782]: I0202 10:38:42.761980 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 15:48:40.389130723 +0000 UTC Feb 02 10:38:42 crc kubenswrapper[4782]: I0202 10:38:42.833051 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"745ed5274a247d61c320043043077f9ecb39fa3734db15aefa07a3bbb6225c26"} Feb 02 10:38:42 crc kubenswrapper[4782]: I0202 10:38:42.833099 4782 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 10:38:42 crc kubenswrapper[4782]: I0202 10:38:42.833110 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"e3c260f298316c830b06f5e3634cd6c09bd10bbaad77b46e427eb4b039eebb53"} Feb 02 10:38:42 crc kubenswrapper[4782]: I0202 10:38:42.833131 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"638971b143e4defe2f04fba48e554aff3023d52863a0eb6f36c29d394de7eeb8"} Feb 02 10:38:42 crc kubenswrapper[4782]: I0202 10:38:42.833151 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"18f8ad22dcd04bba0dd895d584affcd359ac6221153a18ee542dee979b9efe25"} Feb 02 10:38:42 crc kubenswrapper[4782]: I0202 10:38:42.834024 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:38:42 crc kubenswrapper[4782]: I0202 10:38:42.834062 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:38:42 crc kubenswrapper[4782]: I0202 10:38:42.834075 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:38:42 crc 
kubenswrapper[4782]: I0202 10:38:42.835459 4782 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e" exitCode=0 Feb 02 10:38:42 crc kubenswrapper[4782]: I0202 10:38:42.835536 4782 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 10:38:42 crc kubenswrapper[4782]: I0202 10:38:42.835578 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e"} Feb 02 10:38:42 crc kubenswrapper[4782]: I0202 10:38:42.838692 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:38:42 crc kubenswrapper[4782]: I0202 10:38:42.838731 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:38:42 crc kubenswrapper[4782]: I0202 10:38:42.838741 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:38:42 crc kubenswrapper[4782]: I0202 10:38:42.839719 4782 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444" exitCode=0 Feb 02 10:38:42 crc kubenswrapper[4782]: I0202 10:38:42.839793 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444"} Feb 02 10:38:42 crc kubenswrapper[4782]: I0202 10:38:42.839845 4782 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 10:38:42 crc kubenswrapper[4782]: I0202 10:38:42.840082 4782 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 10:38:42 crc kubenswrapper[4782]: I0202 10:38:42.840787 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:38:42 crc kubenswrapper[4782]: I0202 10:38:42.840810 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:38:42 crc kubenswrapper[4782]: I0202 10:38:42.840820 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:38:42 crc kubenswrapper[4782]: I0202 10:38:42.840861 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:38:42 crc kubenswrapper[4782]: I0202 10:38:42.840890 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:38:42 crc kubenswrapper[4782]: I0202 10:38:42.840900 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:38:42 crc kubenswrapper[4782]: I0202 10:38:42.841416 4782 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="4740cbe92e6575bbdc497589f7ef325d88070385a35c69f1ddf7ca0865bb2624" exitCode=0 Feb 02 10:38:42 crc kubenswrapper[4782]: I0202 10:38:42.841541 4782 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 
02 10:38:42 crc kubenswrapper[4782]: I0202 10:38:42.841556 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"4740cbe92e6575bbdc497589f7ef325d88070385a35c69f1ddf7ca0865bb2624"} Feb 02 10:38:42 crc kubenswrapper[4782]: I0202 10:38:42.842673 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:38:42 crc kubenswrapper[4782]: I0202 10:38:42.842695 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:38:42 crc kubenswrapper[4782]: I0202 10:38:42.842704 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:38:42 crc kubenswrapper[4782]: I0202 10:38:42.844605 4782 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="0d8715a950ba202dd87b57bd0b7465a0ca0648a865e89ee9bd94848c15675501" exitCode=0 Feb 02 10:38:42 crc kubenswrapper[4782]: I0202 10:38:42.844652 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"0d8715a950ba202dd87b57bd0b7465a0ca0648a865e89ee9bd94848c15675501"} Feb 02 10:38:42 crc kubenswrapper[4782]: I0202 10:38:42.844743 4782 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 10:38:42 crc kubenswrapper[4782]: I0202 10:38:42.845410 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:38:42 crc kubenswrapper[4782]: I0202 10:38:42.845433 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:38:42 crc kubenswrapper[4782]: I0202 10:38:42.845441 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:38:43 crc kubenswrapper[4782]: W0202 10:38:43.228453 4782 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.147:6443: connect: connection refused Feb 02 10:38:43 crc kubenswrapper[4782]: E0202 10:38:43.228518 4782 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Feb 02 10:38:43 crc kubenswrapper[4782]: I0202 10:38:43.375321 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 10:38:43 crc kubenswrapper[4782]: I0202 10:38:43.400602 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 10:38:43 crc kubenswrapper[4782]: I0202 10:38:43.409297 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 10:38:43 crc kubenswrapper[4782]: W0202 10:38:43.734099 4782 reflector.go:561] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.147:6443: connect: connection refused Feb 02 10:38:43 crc kubenswrapper[4782]: E0202 10:38:43.735167 4782 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Feb 02 10:38:43 crc kubenswrapper[4782]: I0202 10:38:43.748980 4782 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.147:6443: connect: connection refused Feb 02 10:38:43 crc kubenswrapper[4782]: I0202 10:38:43.762318 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 09:22:58.686815138 +0000 UTC Feb 02 10:38:43 crc kubenswrapper[4782]: E0202 10:38:43.764963 4782 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.147:6443: connect: connection refused" interval="3.2s" Feb 02 10:38:43 crc kubenswrapper[4782]: I0202 10:38:43.850326 4782 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c" exitCode=0 Feb 02 10:38:43 crc kubenswrapper[4782]: I0202 10:38:43.850436 4782 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 10:38:43 crc kubenswrapper[4782]: I0202 10:38:43.850445 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c"} Feb 02 10:38:43 crc kubenswrapper[4782]: I0202 10:38:43.851918 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:38:43 crc kubenswrapper[4782]: I0202 10:38:43.851961 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:38:43 crc kubenswrapper[4782]: I0202 10:38:43.851977 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:38:43 crc kubenswrapper[4782]: I0202 10:38:43.854191 4782 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 10:38:43 crc kubenswrapper[4782]: I0202 10:38:43.854193 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"ada79c9ce4369d59a46cc02abd6cf48de6fdc8fdbe39ce9111864f17c15d7b42"} Feb 02 10:38:43 crc kubenswrapper[4782]: I0202 10:38:43.855265 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:38:43 crc kubenswrapper[4782]: I0202 10:38:43.855310 4782 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasNoDiskPressure" Feb 02 10:38:43 crc kubenswrapper[4782]: I0202 10:38:43.855322 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:38:43 crc kubenswrapper[4782]: I0202 10:38:43.862955 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"7a72caeec33753f69102774c7bb1501dd1c0f304ab8e821616a7d6748b4b6a23"} Feb 02 10:38:43 crc kubenswrapper[4782]: I0202 10:38:43.862996 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"f7d3a0cdcdd628fdec78799be1bb9aeab47b7566b765ba0b033b9e925ece0be6"} Feb 02 10:38:43 crc kubenswrapper[4782]: I0202 10:38:43.863008 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"a1f97a7bc0ebb9c8dca5e77de93b5ad8744a3ed0a3939e31500e0bb10648b1c9"} Feb 02 10:38:43 crc kubenswrapper[4782]: I0202 10:38:43.863016 4782 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 10:38:43 crc kubenswrapper[4782]: I0202 10:38:43.863880 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:38:43 crc kubenswrapper[4782]: I0202 10:38:43.863916 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:38:43 crc kubenswrapper[4782]: I0202 10:38:43.863929 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:38:43 crc kubenswrapper[4782]: I0202 10:38:43.865925 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"f26b0186a9067ba0a405e849b58fc2f39e93d443ddf69ebd0d4fd4877f45e805"} Feb 02 10:38:43 crc kubenswrapper[4782]: I0202 10:38:43.865956 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"9de866cf92cbd982802a4bfa9c7f6b9839c69efb72d01f3f52e65e2355d47ded"} Feb 02 10:38:43 crc kubenswrapper[4782]: I0202 10:38:43.865970 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"1a81a5434992af16888e8ed5ec9555e57dd4485dd6c396816421ad07c181dae8"} Feb 02 10:38:43 crc kubenswrapper[4782]: I0202 10:38:43.865981 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"66dc97daee138ae6c8403d57638f6bd3cc1c5a08ce7e1631129017dc7046ebc0"} Feb 02 10:38:43 crc kubenswrapper[4782]: I0202 10:38:43.866029 4782 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 10:38:43 crc kubenswrapper[4782]: I0202 10:38:43.866862 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:38:43 crc kubenswrapper[4782]: I0202 10:38:43.866898 4782 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:38:43 crc kubenswrapper[4782]: I0202 10:38:43.866911 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:38:44 crc kubenswrapper[4782]: I0202 10:38:44.009549 4782 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 10:38:44 crc kubenswrapper[4782]: I0202 10:38:44.010924 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:38:44 crc kubenswrapper[4782]: I0202 10:38:44.010956 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:38:44 crc kubenswrapper[4782]: I0202 10:38:44.010965 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:38:44 crc kubenswrapper[4782]: I0202 10:38:44.010987 4782 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 02 10:38:44 crc kubenswrapper[4782]: E0202 10:38:44.011431 4782 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.147:6443: connect: connection refused" node="crc" Feb 02 10:38:44 crc kubenswrapper[4782]: W0202 10:38:44.092238 4782 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.147:6443: connect: connection refused Feb 02 10:38:44 crc kubenswrapper[4782]: E0202 10:38:44.092318 4782 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Feb 02 10:38:44 crc kubenswrapper[4782]: I0202 10:38:44.748948 4782 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.147:6443: connect: connection refused Feb 02 10:38:44 crc kubenswrapper[4782]: I0202 10:38:44.762721 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 09:06:12.892004579 +0000 UTC Feb 02 10:38:44 crc kubenswrapper[4782]: I0202 10:38:44.766120 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 10:38:44 crc kubenswrapper[4782]: I0202 10:38:44.871564 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"1c05a84703a5d8931496441fab6db1ce52e01b71ffd0ee27cc5fc62a163a428a"} Feb 02 10:38:44 crc kubenswrapper[4782]: I0202 10:38:44.871707 4782 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 10:38:44 crc kubenswrapper[4782]: I0202 10:38:44.872495 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:38:44 crc kubenswrapper[4782]: I0202 10:38:44.872518 4782 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:38:44 crc kubenswrapper[4782]: I0202 10:38:44.872526 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:38:44 crc kubenswrapper[4782]: I0202 10:38:44.874329 4782 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920" exitCode=0 Feb 02 10:38:44 crc kubenswrapper[4782]: I0202 10:38:44.874419 4782 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 10:38:44 crc kubenswrapper[4782]: I0202 10:38:44.874827 4782 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 10:38:44 crc kubenswrapper[4782]: I0202 10:38:44.875088 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920"} Feb 02 10:38:44 crc kubenswrapper[4782]: I0202 10:38:44.875140 4782 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 10:38:44 crc kubenswrapper[4782]: I0202 10:38:44.875447 4782 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 10:38:44 crc kubenswrapper[4782]: I0202 10:38:44.875755 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 02 10:38:44 crc kubenswrapper[4782]: I0202 10:38:44.879339 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:38:44 crc kubenswrapper[4782]: I0202 10:38:44.879366 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:38:44 crc kubenswrapper[4782]: I0202 10:38:44.879376 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:38:44 crc kubenswrapper[4782]: I0202 10:38:44.879874 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:38:44 crc kubenswrapper[4782]: I0202 10:38:44.879893 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:38:44 crc kubenswrapper[4782]: I0202 10:38:44.879900 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:38:44 crc kubenswrapper[4782]: I0202 10:38:44.880308 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:38:44 crc kubenswrapper[4782]: I0202 10:38:44.880328 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:38:44 crc kubenswrapper[4782]: I0202 10:38:44.880335 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:38:44 crc kubenswrapper[4782]: I0202 10:38:44.880803 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:38:44 crc kubenswrapper[4782]: I0202 10:38:44.880819 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 10:38:44 crc kubenswrapper[4782]: I0202 10:38:44.880827 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:38:44 crc kubenswrapper[4782]: W0202 10:38:44.945078 4782 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.147:6443: connect: connection refused Feb 02 10:38:44 crc kubenswrapper[4782]: E0202 10:38:44.945144 4782 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Feb 02 10:38:45 crc kubenswrapper[4782]: I0202 10:38:45.762897 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 09:48:06.159362388 +0000 UTC Feb 02 10:38:45 crc kubenswrapper[4782]: I0202 10:38:45.878496 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 02 10:38:45 crc kubenswrapper[4782]: I0202 10:38:45.882005 4782 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="1c05a84703a5d8931496441fab6db1ce52e01b71ffd0ee27cc5fc62a163a428a" exitCode=255 Feb 02 10:38:45 crc kubenswrapper[4782]: I0202 10:38:45.882093 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"1c05a84703a5d8931496441fab6db1ce52e01b71ffd0ee27cc5fc62a163a428a"} Feb 02 10:38:45 crc kubenswrapper[4782]: I0202 10:38:45.882263 4782 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 10:38:45 crc kubenswrapper[4782]: I0202 10:38:45.883106 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:38:45 crc kubenswrapper[4782]: I0202 10:38:45.883138 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:38:45 crc kubenswrapper[4782]: I0202 10:38:45.883151 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:38:45 crc kubenswrapper[4782]: I0202 10:38:45.883943 4782 scope.go:117] "RemoveContainer" containerID="1c05a84703a5d8931496441fab6db1ce52e01b71ffd0ee27cc5fc62a163a428a" Feb 02 10:38:45 crc kubenswrapper[4782]: I0202 10:38:45.893165 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"667df661cba8be98b22059905ac9fe87f5d2af093d3184cfead863474441574f"} Feb 02 10:38:45 crc kubenswrapper[4782]: I0202 10:38:45.893238 4782 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 10:38:45 crc kubenswrapper[4782]: I0202 10:38:45.893349 4782 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 10:38:45 crc kubenswrapper[4782]: I0202 10:38:45.893242 
4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"10b2c4d81a795c40e715af0855002f75e3018dc14560eb97b551186a48c52744"} Feb 02 10:38:45 crc kubenswrapper[4782]: I0202 10:38:45.893772 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"7303c0a57d425d661744c913b418fb647290581f745f973af1507563fa70b860"} Feb 02 10:38:45 crc kubenswrapper[4782]: I0202 10:38:45.893803 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"58cd2e688da72a74340b25a755164b61039f7615ccda8b24b12b2b4025f89584"} Feb 02 10:38:45 crc kubenswrapper[4782]: I0202 10:38:45.894424 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:38:45 crc kubenswrapper[4782]: I0202 10:38:45.894458 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:38:45 crc kubenswrapper[4782]: I0202 10:38:45.894470 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:38:45 crc kubenswrapper[4782]: I0202 10:38:45.895210 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:38:45 crc kubenswrapper[4782]: I0202 10:38:45.895247 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:38:45 crc kubenswrapper[4782]: I0202 10:38:45.895262 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:38:46 crc kubenswrapper[4782]: I0202 10:38:46.016528 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 10:38:46 crc kubenswrapper[4782]: I0202 10:38:46.763605 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 23:51:57.560792971 +0000 UTC Feb 02 10:38:46 crc kubenswrapper[4782]: I0202 10:38:46.811832 4782 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 02 10:38:46 crc kubenswrapper[4782]: I0202 10:38:46.898147 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 02 10:38:46 crc kubenswrapper[4782]: I0202 10:38:46.899924 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"b089d9f482be084f3c2be3bc369e38cef6607d91af0c82b338da5c29883f6057"} Feb 02 10:38:46 crc kubenswrapper[4782]: I0202 10:38:46.900018 4782 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 10:38:46 crc kubenswrapper[4782]: I0202 10:38:46.900061 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 10:38:46 crc kubenswrapper[4782]: I0202 10:38:46.901412 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:38:46 crc 
kubenswrapper[4782]: I0202 10:38:46.901442 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:38:46 crc kubenswrapper[4782]: I0202 10:38:46.901452 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:38:46 crc kubenswrapper[4782]: I0202 10:38:46.904167 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"dfe76e80657cb875a6dd9b9c977e6120a670fc5cd1dad2e0346b21a467b00f54"} Feb 02 10:38:46 crc kubenswrapper[4782]: I0202 10:38:46.904312 4782 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 10:38:46 crc kubenswrapper[4782]: I0202 10:38:46.905231 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:38:46 crc kubenswrapper[4782]: I0202 10:38:46.905348 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:38:46 crc kubenswrapper[4782]: I0202 10:38:46.905428 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:38:47 crc kubenswrapper[4782]: I0202 10:38:47.211623 4782 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 10:38:47 crc kubenswrapper[4782]: I0202 10:38:47.213426 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:38:47 crc kubenswrapper[4782]: I0202 10:38:47.213468 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:38:47 crc kubenswrapper[4782]: I0202 10:38:47.213479 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:38:47 crc kubenswrapper[4782]: I0202 10:38:47.213507 4782 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 02 10:38:47 crc kubenswrapper[4782]: I0202 10:38:47.763790 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 18:04:58.144457459 +0000 UTC Feb 02 10:38:47 crc kubenswrapper[4782]: I0202 10:38:47.767070 4782 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 02 10:38:47 crc kubenswrapper[4782]: I0202 10:38:47.767177 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 02 10:38:47 crc kubenswrapper[4782]: I0202 10:38:47.907293 4782 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 10:38:47 crc kubenswrapper[4782]: I0202 10:38:47.907354 4782 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 10:38:47 crc kubenswrapper[4782]: I0202 
10:38:47.907972 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 10:38:47 crc kubenswrapper[4782]: I0202 10:38:47.908766 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:38:47 crc kubenswrapper[4782]: I0202 10:38:47.908856 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:38:47 crc kubenswrapper[4782]: I0202 10:38:47.908877 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:38:47 crc kubenswrapper[4782]: I0202 10:38:47.908766 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:38:47 crc kubenswrapper[4782]: I0202 10:38:47.908913 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:38:47 crc kubenswrapper[4782]: I0202 10:38:47.908933 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:38:47 crc kubenswrapper[4782]: I0202 10:38:47.926903 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Feb 02 10:38:48 crc kubenswrapper[4782]: I0202 10:38:48.764852 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 05:43:09.639102995 +0000 UTC Feb 02 10:38:48 crc kubenswrapper[4782]: I0202 10:38:48.909848 4782 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 10:38:48 crc kubenswrapper[4782]: I0202 10:38:48.909971 4782 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 10:38:48 crc kubenswrapper[4782]: I0202 10:38:48.911688 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:38:48 crc kubenswrapper[4782]: I0202 10:38:48.911821 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:38:48 crc kubenswrapper[4782]: I0202 10:38:48.911911 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:38:48 crc kubenswrapper[4782]: I0202 10:38:48.912159 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:38:48 crc kubenswrapper[4782]: I0202 10:38:48.912268 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:38:48 crc kubenswrapper[4782]: I0202 10:38:48.912362 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:38:49 crc kubenswrapper[4782]: I0202 10:38:49.190880 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 10:38:49 crc kubenswrapper[4782]: I0202 10:38:49.579342 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 10:38:49 crc kubenswrapper[4782]: I0202 10:38:49.579568 4782 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 10:38:49 crc kubenswrapper[4782]: I0202 10:38:49.580748 
4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:38:49 crc kubenswrapper[4782]: I0202 10:38:49.580810 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:38:49 crc kubenswrapper[4782]: I0202 10:38:49.580824 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:38:49 crc kubenswrapper[4782]: I0202 10:38:49.766454 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 12:25:44.861442269 +0000 UTC Feb 02 10:38:49 crc kubenswrapper[4782]: I0202 10:38:49.912186 4782 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 10:38:49 crc kubenswrapper[4782]: I0202 10:38:49.913216 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:38:49 crc kubenswrapper[4782]: I0202 10:38:49.913247 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:38:49 crc kubenswrapper[4782]: I0202 10:38:49.913256 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:38:50 crc kubenswrapper[4782]: I0202 10:38:50.767041 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 00:21:21.561627553 +0000 UTC Feb 02 10:38:50 crc kubenswrapper[4782]: E0202 10:38:50.905512 4782 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 02 10:38:51 crc kubenswrapper[4782]: I0202 10:38:51.767835 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 16:22:16.645639333 +0000 UTC Feb 02 10:38:52 crc kubenswrapper[4782]: I0202 10:38:52.768122 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 14:17:40.066838006 +0000 UTC Feb 02 10:38:53 crc kubenswrapper[4782]: I0202 10:38:53.405481 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 10:38:53 crc kubenswrapper[4782]: I0202 10:38:53.405678 4782 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 10:38:53 crc kubenswrapper[4782]: I0202 10:38:53.406982 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:38:53 crc kubenswrapper[4782]: I0202 10:38:53.407021 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:38:53 crc kubenswrapper[4782]: I0202 10:38:53.407037 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:38:53 crc kubenswrapper[4782]: I0202 10:38:53.768940 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 20:12:15.61346427 +0000 UTC Feb 02 10:38:54 crc kubenswrapper[4782]: I0202 10:38:54.195995 4782 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Feb 02 10:38:54 crc kubenswrapper[4782]: I0202 10:38:54.196299 4782 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 10:38:54 crc kubenswrapper[4782]: I0202 10:38:54.197903 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:38:54 crc kubenswrapper[4782]: I0202 10:38:54.197969 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:38:54 crc kubenswrapper[4782]: I0202 10:38:54.197986 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:38:54 crc kubenswrapper[4782]: I0202 10:38:54.770006 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 17:27:22.861592852 +0000 UTC Feb 02 10:38:55 crc kubenswrapper[4782]: I0202 10:38:55.239764 4782 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 02 10:38:55 crc kubenswrapper[4782]: I0202 10:38:55.239847 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Feb 02 10:38:55 crc kubenswrapper[4782]: I0202 10:38:55.252625 4782 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 02 10:38:55 crc kubenswrapper[4782]: I0202 10:38:55.252736 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Feb 02 10:38:55 crc kubenswrapper[4782]: I0202 10:38:55.770418 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 20:13:45.874839686 +0000 UTC Feb 02 10:38:56 crc kubenswrapper[4782]: I0202 10:38:56.770610 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 08:03:55.310696848 +0000 UTC Feb 02 10:38:57 crc kubenswrapper[4782]: I0202 10:38:57.767047 4782 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 02 10:38:57 crc kubenswrapper[4782]: I0202 10:38:57.767520 4782 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 02 10:38:57 crc kubenswrapper[4782]: I0202 10:38:57.771527 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 18:46:22.026396344 +0000 UTC Feb 02 10:38:58 crc kubenswrapper[4782]: I0202 10:38:58.772251 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 04:09:00.918501712 +0000 UTC Feb 02 10:38:59 crc kubenswrapper[4782]: I0202 10:38:59.196928 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 10:38:59 crc kubenswrapper[4782]: I0202 10:38:59.197223 4782 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 10:38:59 crc kubenswrapper[4782]: I0202 10:38:59.197958 4782 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Feb 02 10:38:59 crc kubenswrapper[4782]: I0202 10:38:59.198041 4782 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Feb 02 10:38:59 crc kubenswrapper[4782]: I0202 10:38:59.198466 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:38:59 crc kubenswrapper[4782]: I0202 10:38:59.198510 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:38:59 crc kubenswrapper[4782]: I0202 10:38:59.198521 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:38:59 crc kubenswrapper[4782]: I0202 10:38:59.202346 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 10:38:59 crc kubenswrapper[4782]: I0202 10:38:59.247105 4782 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Feb 02 10:38:59 crc kubenswrapper[4782]: I0202 10:38:59.247167 4782 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Feb 02 10:38:59 crc kubenswrapper[4782]: I0202 10:38:59.772711 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate 
Feb 02 10:38:59 crc kubenswrapper[4782]: I0202 10:38:59.937447 4782 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 02 10:38:59 crc kubenswrapper[4782]: I0202 10:38:59.937832 4782 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body=
Feb 02 10:38:59 crc kubenswrapper[4782]: I0202 10:38:59.937892 4782 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused"
Feb 02 10:38:59 crc kubenswrapper[4782]: I0202 10:38:59.938610 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:38:59 crc kubenswrapper[4782]: I0202 10:38:59.938703 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:38:59 crc kubenswrapper[4782]: I0202 10:38:59.938725 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:39:00 crc kubenswrapper[4782]: E0202 10:39:00.251696 4782 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s"
Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.257046 4782 trace.go:236] Trace[166587467]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (02-Feb-2026 10:38:49.955) (total time: 10301ms):
Feb 02 10:39:00 crc kubenswrapper[4782]: Trace[166587467]: ---"Objects listed" error: 10301ms (10:39:00.256)
Feb 02 10:39:00 crc kubenswrapper[4782]: Trace[166587467]: [10.301939594s] [10.301939594s] END
Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.257080 4782 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Feb 02 10:39:00 crc kubenswrapper[4782]: E0202 10:39:00.259427 4782 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc"
Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.259464 4782 trace.go:236] Trace[2122569030]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (02-Feb-2026 10:38:47.255) (total time: 13003ms):
Feb 02 10:39:00 crc kubenswrapper[4782]: Trace[2122569030]: ---"Objects listed" error: 13003ms (10:39:00.259)
Feb 02 10:39:00 crc kubenswrapper[4782]: Trace[2122569030]: [13.003595325s] [13.003595325s] END
Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.259493 4782 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.259498 4782 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.259579 4782 reconstruct.go:205] "DevicePaths of reconstructed volumes updated"
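
The lease error and the two Reflector traces above tell the same story: the node-lease GET burned its full 10s client timeout while the apiserver was still slow to serve lists (10.3s and 13.0s ListAndWatch). A standalone checker sketch with client-go that issues the same namespaced Lease GET under an explicit deadline, to separate "slow apiserver" from "unreachable" (the kubeconfig path is an assumption; this is not kubelet code):

// Fetch the node's Lease with a 10s deadline, mirroring the retried GET above.
package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubeconfig") // assumed path
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()
	lease, err := cs.CoordinationV1().Leases("kube-node-lease").Get(ctx, "crc", metav1.GetOptions{})
	if err != nil {
		// "context deadline exceeded" here matches the controller.go:145 error above.
		fmt.Println("lease fetch failed:", err)
		return
	}
	fmt.Println("lease renewed at:", lease.Spec.RenewTime)
}
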
Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.261450 4782 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.265791 4782 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146
Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.742524 4782 apiserver.go:52] "Watching apiserver"
Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.772569 4782 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.772847 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 08:45:51.480582118 +0000 UTC
Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.772987 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h"]
Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.773404 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.773437 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Feb 02 10:39:00 crc kubenswrapper[4782]: E0202 10:39:00.773498 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.773807 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 10:39:00 crc kubenswrapper[4782]: E0202 10:39:00.773862 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.773957 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb"
Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.774024 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.773960 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h"
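
The "Error syncing pod, skipping" entries above will repeat until the network plugin writes a config into /etc/kubernetes/cni/net.d/ (the path is quoted verbatim in the error). A trivial Go sketch that polls that directory so the transition is visible from a shell on the node (an assumed helper, not kubelet code):

// Watch for the CNI config whose absence drives NetworkReady=false above.
package main

import (
	"fmt"
	"os"
	"time"
)

func main() {
	const dir = "/etc/kubernetes/cni/net.d" // path taken from the log message
	for {
		entries, err := os.ReadDir(dir)
		if err == nil && len(entries) > 0 {
			fmt.Println("CNI config present:", entries[0].Name())
			return
		}
		fmt.Println("no CNI configuration file yet; NetworkReady stays false")
		time.Sleep(2 * time.Second)
	}
}
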
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 02 10:39:00 crc kubenswrapper[4782]: E0202 10:39:00.774162 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.778851 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.778909 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.778991 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.779225 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.779610 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.779969 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.780911 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.782770 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.785992 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.806907 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.834796 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.846817 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
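
The status_manager.go:875 entries above embed the attempted status patch as a doubly escaped JSON string, which is hard to read in place. A small Go sketch that unmarshals and pretty-prints such a patch; the literal below is abridged to the metadata and first condition from the first entry, purely as a reading aid:

// Pretty-print the status patch quoted (escaped) in the log entries above.
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// Abridged from the .806907 entry: uid and PodReadyToStartContainers condition.
	raw := `{"metadata":{"uid":"37a5e44f-9a88-4405-be8a-b645485e7312"},` +
		`"status":{"conditions":[{"lastTransitionTime":"2026-02-02T10:39:00Z",` +
		`"status":"False","type":"PodReadyToStartContainers"}]}}`
	var patch map[string]any
	if err := json.Unmarshal([]byte(raw), &patch); err != nil {
		panic(err)
	}
	pretty, _ := json.MarshalIndent(patch, "", "  ")
	fmt.Println(string(pretty)) // indented view of the patch the kubelet tried to send
}
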
Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.861102 4782 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.863379 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") "
Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.863459 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.863483 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.863504 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.863580 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.863654 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.863682 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") "
\"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.863721 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.863758 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.863782 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.863803 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.863827 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.863847 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.863870 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.863918 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.863939 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.863962 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.863993 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.864013 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.864033 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.864057 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.864079 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.864130 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.864171 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.864193 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.864217 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.864238 4782 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.864258 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.864280 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.864326 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.864379 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.864400 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.864435 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.864465 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.864497 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.864529 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.864549 4782 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.864572 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.864592 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.864621 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.864658 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.864683 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.864716 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.864751 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.864775 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.864800 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.864822 4782 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.864844 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.864863 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.864885 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.864907 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.864930 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.864952 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.864979 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.865001 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.865047 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 02 
10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.865071 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.865091 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.865110 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.865132 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.865155 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.865178 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.865220 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.865240 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.865260 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.865283 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: 
\"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.865305 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.865328 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.865355 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.865377 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.865397 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.865418 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.865445 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.865477 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.865510 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.865536 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: 
\"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.865564 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.865590 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.865613 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.865660 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.865686 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.865713 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.865738 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.865768 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.865803 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.865832 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.865856 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.865885 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.865910 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.865931 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.865959 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.865984 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.866004 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.866027 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.866049 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.866073 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: 
\"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.866097 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.866119 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.866141 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.866165 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.866192 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.866216 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.866242 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.866264 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.866289 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.866311 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.866333 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.866355 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.866383 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.866417 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.866443 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.866471 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.866498 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.866524 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.866548 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.866633 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.866676 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.866701 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.866722 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.866746 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.866770 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.866795 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.866816 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.866839 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.866863 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.866885 4782 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.866908 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.866931 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.866953 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.866978 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.867001 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.867028 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.867051 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.867077 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.867103 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 02 10:39:00 crc kubenswrapper[4782]: 
I0202 10:39:00.867129 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.867163 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.867188 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.867213 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.867237 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.867261 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.867285 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.867309 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.867337 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.867362 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 02 10:39:00 crc 
kubenswrapper[4782]: I0202 10:39:00.867388 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.867415 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.867439 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.867463 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.867492 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.867534 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.867559 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.867581 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.867607 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.867632 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod 
\"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.867679 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.867704 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.867728 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.867753 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.867780 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.867803 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.867824 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.867846 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.867863 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.867880 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.867899 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.867915 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.867932 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.867952 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.867972 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.867989 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.868006 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.868025 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.868043 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.868060 4782 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.868079 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.868118 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.868139 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.868160 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.868179 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.868200 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.868219 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.868237 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.868256 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.868274 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" 
(UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.868297 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.868323 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.868352 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.868377 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.868402 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.868523 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.868542 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.868559 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.868577 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.868595 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.868668 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.868696 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.868720 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.868741 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.868762 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.868791 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.868815 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.868836 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " 
pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.868858 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.868875 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.868901 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.868920 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.868942 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.868992 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.869786 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.863771 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.863784 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.864022 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.864059 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.864248 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.864548 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.864775 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.865020 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.865125 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.865314 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.865497 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.865593 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.865731 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.865850 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.865872 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.866199 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). 
InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.866257 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.866377 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.870036 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.866391 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.866409 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.866575 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.866797 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.866993 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.867297 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.868300 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.868941 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.869041 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.869260 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.869439 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.869454 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.869625 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.869850 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.869847 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.870587 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.870785 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.870933 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.871097 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.874556 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.874608 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.874953 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.874967 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.875042 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.875211 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.875267 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.875296 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.875725 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.875951 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.876015 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.876055 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.876342 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.876678 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.876710 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.876747 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.877566 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.878174 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.878402 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.878509 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.878569 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.878592 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.878585 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.878853 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.878891 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.878976 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.879630 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.879659 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.879771 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.880244 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.880326 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.880379 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.880389 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.880692 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.881109 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.881211 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.881324 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.881674 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.881686 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.882075 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.882077 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.882129 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.882554 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.882843 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.882963 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.883216 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.883245 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.883282 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.883507 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.883609 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.883827 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.884073 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.884288 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.884446 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.884496 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.884504 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.884727 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.884740 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.885037 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.887172 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). 
InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.887283 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.887610 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.887696 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.887834 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.887899 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.888202 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.888322 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.888537 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.888715 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.888717 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.888809 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.888948 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.889085 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.889228 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.889436 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.889627 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.889727 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.889759 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.889756 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.889787 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.889888 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.889982 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.890089 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.890115 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.890299 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.890381 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.890562 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.890592 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.890623 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.890938 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.891132 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.891260 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.891377 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.891426 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.891432 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.891522 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.891569 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.891795 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.891815 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.891831 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.891862 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.892107 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.892168 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.892003 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.892285 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.892364 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.892594 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.892791 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: E0202 10:39:00.892962 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:39:01.392934729 +0000 UTC m=+21.277127615 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.893074 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.893104 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.893388 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.893495 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.893504 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.893855 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.893924 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.894050 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.894160 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.894273 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.894442 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.894612 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.894622 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.894796 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.895043 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.895071 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.895131 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.895260 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.895333 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.895591 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.895721 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.895797 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.895863 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.895962 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.896252 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.896681 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.896914 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.896975 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.897109 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.897229 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.897371 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.897494 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.897574 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.897724 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.897773 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.897933 4782 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.947454 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.947896 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.950128 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.951474 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.897954 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod 
"5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.898135 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: E0202 10:39:00.898217 4782 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.898419 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.898557 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: E0202 10:39:00.898920 4782 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.946009 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.946136 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.946300 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.946321 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.946445 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.946598 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: E0202 10:39:00.947086 4782 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.955527 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: E0202 10:39:00.955592 4782 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 10:39:00 crc kubenswrapper[4782]: E0202 10:39:00.955617 4782 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 10:39:00 crc kubenswrapper[4782]: E0202 10:39:00.955721 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-02 10:39:01.455700362 +0000 UTC m=+21.339893078 (durationBeforeRetry 500ms). 
Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.956204 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Feb 02 10:39:00 crc kubenswrapper[4782]: E0202 10:39:00.961698 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 10:39:01.461662458 +0000 UTC m=+21.345855244 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Feb 02 10:39:00 crc kubenswrapper[4782]: E0202 10:39:00.968552 4782 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Feb 02 10:39:00 crc kubenswrapper[4782]: E0202 10:39:00.968619 4782 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 02 10:39:00 crc kubenswrapper[4782]: E0202 10:39:00.968662 4782 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.970835 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.971805 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Feb 02 10:39:00 crc kubenswrapper[4782]: E0202 10:39:00.973442 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-02 10:39:01.46890273 +0000 UTC m=+21.353095446 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 02 10:39:00 crc kubenswrapper[4782]: E0202 10:39:00.973494 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 10:39:01.473474018 +0000 UTC m=+21.357666734 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
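[Editor's note] The status patch above fails because the API server cannot reach the pod.network-node-identity.openshift.io webhook on 127.0.0.1:9743 (connection refused): the webhook's own pod is still being created, a bootstrap ordering loop typical right after a node restart. A quick way to reproduce just the connectivity half of that failure, as an illustrative sketch (the address is taken from the log; everything else is ours):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    // Probe the webhook endpoint the API server tried to call. A
    // "connection refused" here matches the status_manager error above:
    // nothing is listening yet because the webhook pod has not started.
    func main() {
        conn, err := net.DialTimeout("tcp", "127.0.0.1:9743", 2*time.Second)
        if err != nil {
            fmt.Println("webhook unreachable:", err)
            return
        }
        conn.Close()
        fmt.Println("webhook endpoint is accepting connections")
    }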
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.973611 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.973660 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.973747 4782 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.973773 4782 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.973788 4782 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.973800 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.973814 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.973826 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.973840 4782 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.973853 4782 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.973865 4782 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.973882 4782 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.973903 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.973925 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.973940 4782 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.973952 4782 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.973963 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.973974 4782 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.973985 4782 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.973997 4782 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.974009 4782 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.974019 4782 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.974030 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.974043 4782 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on 
node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.974053 4782 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.974065 4782 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.974077 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.974088 4782 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.974100 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.974112 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.974124 4782 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.974136 4782 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.974147 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.974160 4782 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.974171 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.974182 4782 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.974195 4782 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: 
\"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.974206 4782 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.974218 4782 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.974229 4782 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.974240 4782 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.974250 4782 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.974261 4782 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.974274 4782 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.974286 4782 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.974299 4782 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.974312 4782 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.974322 4782 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.974332 4782 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.974344 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: 
\"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.974356 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.974367 4782 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.974377 4782 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.974389 4782 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.974400 4782 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.974410 4782 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.974429 4782 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.974442 4782 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.974455 4782 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.974466 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.974477 4782 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.974489 4782 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.974500 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: 
\"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.974512 4782 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.974523 4782 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.974534 4782 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.974544 4782 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.974556 4782 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.974568 4782 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.974580 4782 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.974594 4782 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.974606 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.974616 4782 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.974627 4782 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.974726 4782 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.977001 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: 
\"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.977554 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.975113 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.981735 4782 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.981801 4782 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.981818 4782 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.981834 4782 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.981847 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.981882 4782 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.981896 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.981909 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.981922 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.981933 4782 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.981971 4782 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.981983 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.981996 4782 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.982010 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.982044 4782 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.982057 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.982069 4782 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.982080 4782 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.982094 4782 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.982127 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.982138 4782 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.982150 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.982162 4782 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.982174 4782 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.982205 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.982217 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.982228 4782 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.982240 4782 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.982251 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.982284 4782 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.982296 4782 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.982307 4782 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.982322 4782 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.982355 4782 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.982394 4782 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.982434 4782 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.982448 4782 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.982460 4782 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.982474 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.982514 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.982528 4782 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.982542 4782 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.982554 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.982569 4782 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.982600 4782 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.982613 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.982625 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.982662 4782 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.982677 4782 reconciler_common.go:293] "Volume detached for 
volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.982690 4782 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.982701 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.982710 4782 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.982737 4782 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.982748 4782 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.982757 4782 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.982768 4782 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.982779 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.982835 4782 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.982851 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.982865 4782 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.982899 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.982916 4782 reconciler_common.go:293] "Volume detached for 
volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.982928 4782 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.982941 4782 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.982954 4782 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.982990 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.983002 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.983015 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.983027 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.983059 4782 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.983072 4782 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.983085 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.983097 4782 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.983110 4782 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: 
I0202 10:39:00.983144 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.983159 4782 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.983173 4782 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.983185 4782 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.983219 4782 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.983234 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.983247 4782 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.983260 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.983271 4782 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.983305 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.983319 4782 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.983333 4782 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.983346 4782 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 
10:39:00.983359 4782 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.983372 4782 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.983409 4782 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.983422 4782 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.983436 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.983448 4782 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.983460 4782 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.983472 4782 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.983484 4782 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.983497 4782 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.983510 4782 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.983544 4782 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.983560 4782 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.983577 4782 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.983589 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.983607 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.983619 4782 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.983631 4782 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.983668 4782 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.983681 4782 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.983693 4782 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.983706 4782 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.983717 4782 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.983729 4782 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.983740 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.983777 4782 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.983788 4782 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.983801 4782 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.983813 4782 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.983824 4782 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.983837 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.991840 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.994992 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 10:39:00 crc kubenswrapper[4782]: I0202 10:39:00.996621 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 02 10:39:01 crc kubenswrapper[4782]: E0202 10:39:01.004301 4782 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-conmon-b089d9f482be084f3c2be3bc369e38cef6607d91af0c82b338da5c29883f6057.scope\": RecentStats: unable to find data in memory cache]" Feb 02 10:39:01 crc kubenswrapper[4782]: I0202 10:39:01.006347 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 10:39:01 crc kubenswrapper[4782]: I0202 10:39:01.011086 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:39:01 crc kubenswrapper[4782]: I0202 10:39:01.019433 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 10:39:01 crc kubenswrapper[4782]: I0202 10:39:01.033857 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 10:39:01 crc kubenswrapper[4782]: I0202 10:39:01.046717 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 10:39:01 crc kubenswrapper[4782]: I0202 10:39:01.061806 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 10:39:01 crc kubenswrapper[4782]: I0202 10:39:01.073979 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 10:39:01 crc kubenswrapper[4782]: I0202 10:39:01.085102 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 10:39:01 crc kubenswrapper[4782]: I0202 10:39:01.085207 4782 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:01 crc kubenswrapper[4782]: I0202 10:39:01.085228 4782 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 02 10:39:01 crc kubenswrapper[4782]: I0202 10:39:01.091361 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 02 10:39:01 crc kubenswrapper[4782]: I0202 10:39:01.102773 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 02 10:39:01 crc kubenswrapper[4782]: I0202 10:39:01.105035 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 10:39:01 crc kubenswrapper[4782]: I0202 10:39:01.114760 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 02 10:39:01 crc kubenswrapper[4782]: I0202 10:39:01.125780 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 10:39:01 crc kubenswrapper[4782]: I0202 10:39:01.488957 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:39:01 crc kubenswrapper[4782]: I0202 10:39:01.489046 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:39:01 crc kubenswrapper[4782]: E0202 10:39:01.489075 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:39:02.489054024 +0000 UTC m=+22.373246760 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:39:01 crc kubenswrapper[4782]: I0202 10:39:01.489107 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:39:01 crc kubenswrapper[4782]: E0202 10:39:01.489122 4782 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 10:39:01 crc kubenswrapper[4782]: I0202 10:39:01.489140 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:39:01 crc kubenswrapper[4782]: E0202 10:39:01.489167 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 10:39:02.489155957 +0000 UTC m=+22.373348673 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 10:39:01 crc kubenswrapper[4782]: I0202 10:39:01.489187 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:39:01 crc kubenswrapper[4782]: E0202 10:39:01.489228 4782 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 10:39:01 crc kubenswrapper[4782]: E0202 10:39:01.489261 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 10:39:02.48925163 +0000 UTC m=+22.373444346 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 10:39:01 crc kubenswrapper[4782]: E0202 10:39:01.489272 4782 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 10:39:01 crc kubenswrapper[4782]: E0202 10:39:01.489287 4782 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 10:39:01 crc kubenswrapper[4782]: E0202 10:39:01.489301 4782 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 10:39:01 crc kubenswrapper[4782]: E0202 10:39:01.489317 4782 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 10:39:01 crc kubenswrapper[4782]: E0202 10:39:01.489330 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-02 10:39:02.489321232 +0000 UTC m=+22.373513948 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 10:39:01 crc kubenswrapper[4782]: E0202 10:39:01.489330 4782 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 10:39:01 crc kubenswrapper[4782]: E0202 10:39:01.489345 4782 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 10:39:01 crc kubenswrapper[4782]: E0202 10:39:01.489384 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-02 10:39:02.489374493 +0000 UTC m=+22.373567209 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 10:39:01 crc kubenswrapper[4782]: I0202 10:39:01.773005 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 02:44:53.047483671 +0000 UTC Feb 02 10:39:01 crc kubenswrapper[4782]: I0202 10:39:01.820867 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:39:01 crc kubenswrapper[4782]: E0202 10:39:01.821016 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:39:01 crc kubenswrapper[4782]: I0202 10:39:01.978866 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 02 10:39:01 crc kubenswrapper[4782]: I0202 10:39:01.979852 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 02 10:39:01 crc kubenswrapper[4782]: I0202 10:39:01.982714 4782 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="b089d9f482be084f3c2be3bc369e38cef6607d91af0c82b338da5c29883f6057" exitCode=255 Feb 02 10:39:01 crc kubenswrapper[4782]: I0202 10:39:01.982770 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"b089d9f482be084f3c2be3bc369e38cef6607d91af0c82b338da5c29883f6057"} Feb 02 10:39:01 crc kubenswrapper[4782]: I0202 10:39:01.982845 4782 scope.go:117] "RemoveContainer" containerID="1c05a84703a5d8931496441fab6db1ce52e01b71ffd0ee27cc5fc62a163a428a" Feb 02 10:39:01 crc kubenswrapper[4782]: I0202 10:39:01.984994 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"35e06303ce818ef4b29ebc14c989d7b71257426e71e5aca847c457d6b2a42a2e"} Feb 02 10:39:01 crc kubenswrapper[4782]: I0202 10:39:01.987751 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"7aebafb5717a3d34f83e5f1ba346c51bccea6c08fb6ddc2fadc47d8e1a4df1bc"} Feb 02 10:39:01 crc kubenswrapper[4782]: I0202 10:39:01.987790 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"643eee6982fcc4c02b72fb6daf1a18fc5ae263ebf8ac506d1fa745d6c311b38a"} Feb 02 10:39:01 crc kubenswrapper[4782]: I0202 10:39:01.987806 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"9094eee33ca27e352f9ec60b2b6f21cc1c5ae940ff9c1db454e673c91ea4dea9"} Feb 02 10:39:01 crc kubenswrapper[4782]: I0202 10:39:01.990549 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"9b6dd1dee15020005db2d8655165e09c72a90eaade00a66c0742c0980ccc669e"} Feb 02 10:39:01 crc kubenswrapper[4782]: I0202 10:39:01.990633 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"0017c1161f0c86e08b4713c5155093f0eebe7d88e487f034f8ea0739fac9c056"} Feb 02 10:39:01 crc kubenswrapper[4782]: I0202 10:39:01.997295 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 02 10:39:01 crc kubenswrapper[4782]: I0202 10:39:01.998401 4782 scope.go:117] "RemoveContainer" containerID="b089d9f482be084f3c2be3bc369e38cef6607d91af0c82b338da5c29883f6057" Feb 02 10:39:01 crc kubenswrapper[4782]: E0202 10:39:01.998603 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 02 10:39:02 crc kubenswrapper[4782]: I0202 10:39:02.000251 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 10:39:02 crc kubenswrapper[4782]: I0202 10:39:02.017287 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 10:39:02 crc kubenswrapper[4782]: I0202 10:39:02.033514 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 10:39:02 crc kubenswrapper[4782]: I0202 10:39:02.050904 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 10:39:02 crc kubenswrapper[4782]: I0202 10:39:02.060977 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 10:39:02 crc kubenswrapper[4782]: I0202 10:39:02.072459 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 10:39:02 crc kubenswrapper[4782]: I0202 10:39:02.083824 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 10:39:02 crc kubenswrapper[4782]: I0202 10:39:02.103124 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b6dd1dee15020005db2d8655165e09c72a90eaade00a66c0742c0980ccc669e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 10:39:02 crc kubenswrapper[4782]: I0202 10:39:02.118091 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 10:39:02 crc kubenswrapper[4782]: I0202 10:39:02.131592 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aebafb5717a3d34f83e5f1ba346c51bccea6c08fb6ddc2fadc47d8e1a4df1bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://643eee6982fcc4c02b72fb6daf1a18fc5ae263ebf8ac506d1fa745d6c311b38a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 10:39:02 crc kubenswrapper[4782]: I0202 10:39:02.147780 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfc52d94-656d-4294-b105-0f83d22c9664\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66dc97daee138ae6c8403d57638f6bd3cc1c5a08ce7e1631129017dc7046ebc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9de866cf92cbd982802a4bfa9c7f6b9839c69efb72d01f3f52e65e2355d47ded\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a81a5434992af16888e8ed5ec9555e57dd4485dd6c396816421ad07c181dae8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b089d9f482be084f3c2be3bc369e38cef6607d91af0c82b338da5c29883f6057\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c05a84703a5d8931496441fab6db1ce52e01b71ffd0ee27cc5fc62a163a428a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"message\\\":\\\"W0202 10:38:44.297012 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0202 10:38:44.297318 1 crypto.go:601] Generating new CA for check-endpoints-signer@1770028724 cert, and key in /tmp/serving-cert-2718931427/serving-signer.crt, /tmp/serving-cert-2718931427/serving-signer.key\\\\nI0202 10:38:44.688709 1 observer_polling.go:159] Starting file observer\\\\nW0202 10:38:44.691818 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0202 10:38:44.691936 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:38:44.692850 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2718931427/tls.crt::/tmp/serving-cert-2718931427/tls.key\\\\\\\"\\\\nF0202 10:38:44.956753 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b089d9f482be084f3c2be3bc369e38cef6607d91af0c82b338da5c29883f6057\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 10:39:00.270088 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:39:00.270415 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:39:00.271477 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3666386151/tls.crt::/tmp/serving-cert-3666386151/tls.key\\\\\\\"\\\\nI0202 10:39:00.760414 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:39:00.783275 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:39:00.783307 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:39:00.783330 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:39:00.783335 1 maxinflight.go:120] \\\\\\\"Set denominator for 
mutating requests\\\\\\\" limit=200\\\\nI0202 10:39:00.810370 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:39:00.810474 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810498 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 10:39:00.810539 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:39:00.810558 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:39:00.810577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 10:39:00.810783 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 10:39:00.814783 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26b0186a9067ba0a405e849b58fc2f39e93d443ddf69ebd0d4fd4877f45e805\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 10:39:02 crc kubenswrapper[4782]: I0202 10:39:02.164431 4782 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 10:39:02 crc kubenswrapper[4782]: I0202 10:39:02.178789 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 10:39:02 crc kubenswrapper[4782]: I0202 10:39:02.500913 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:39:02 crc kubenswrapper[4782]: I0202 10:39:02.501003 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:39:02 crc kubenswrapper[4782]: I0202 10:39:02.501030 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:39:02 crc kubenswrapper[4782]: I0202 10:39:02.501051 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:39:02 crc kubenswrapper[4782]: I0202 10:39:02.501071 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: 
\"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:39:02 crc kubenswrapper[4782]: E0202 10:39:02.501127 4782 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 10:39:02 crc kubenswrapper[4782]: E0202 10:39:02.501178 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 10:39:04.501164225 +0000 UTC m=+24.385356941 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 10:39:02 crc kubenswrapper[4782]: E0202 10:39:02.501207 4782 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 10:39:02 crc kubenswrapper[4782]: E0202 10:39:02.501216 4782 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 10:39:02 crc kubenswrapper[4782]: E0202 10:39:02.501220 4782 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 10:39:02 crc kubenswrapper[4782]: E0202 10:39:02.501247 4782 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 10:39:02 crc kubenswrapper[4782]: E0202 10:39:02.501261 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 10:39:04.501252037 +0000 UTC m=+24.385444753 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 10:39:02 crc kubenswrapper[4782]: E0202 10:39:02.501285 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-02 10:39:04.501277428 +0000 UTC m=+24.385470134 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 10:39:02 crc kubenswrapper[4782]: E0202 10:39:02.501296 4782 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 10:39:02 crc kubenswrapper[4782]: E0202 10:39:02.501341 4782 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 10:39:02 crc kubenswrapper[4782]: E0202 10:39:02.501358 4782 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 10:39:02 crc kubenswrapper[4782]: E0202 10:39:02.501425 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-02 10:39:04.501398831 +0000 UTC m=+24.385591597 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 10:39:02 crc kubenswrapper[4782]: E0202 10:39:02.501532 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:39:04.501521225 +0000 UTC m=+24.385713941 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:39:02 crc kubenswrapper[4782]: I0202 10:39:02.773701 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 04:13:27.808265878 +0000 UTC Feb 02 10:39:02 crc kubenswrapper[4782]: I0202 10:39:02.820949 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:39:02 crc kubenswrapper[4782]: E0202 10:39:02.821088 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:39:02 crc kubenswrapper[4782]: I0202 10:39:02.820949 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:39:02 crc kubenswrapper[4782]: E0202 10:39:02.821467 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:39:02 crc kubenswrapper[4782]: I0202 10:39:02.826411 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Feb 02 10:39:02 crc kubenswrapper[4782]: I0202 10:39:02.827058 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Feb 02 10:39:02 crc kubenswrapper[4782]: I0202 10:39:02.828837 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Feb 02 10:39:02 crc kubenswrapper[4782]: I0202 10:39:02.830030 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Feb 02 10:39:02 crc kubenswrapper[4782]: I0202 10:39:02.831314 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Feb 02 10:39:02 crc kubenswrapper[4782]: I0202 10:39:02.832840 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Feb 02 10:39:02 crc kubenswrapper[4782]: I0202 10:39:02.835366 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Feb 02 10:39:02 crc kubenswrapper[4782]: I0202 10:39:02.836237 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Feb 02 10:39:02 crc kubenswrapper[4782]: I0202 10:39:02.837909 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Feb 02 10:39:02 crc kubenswrapper[4782]: I0202 
10:39:02.838841 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Feb 02 10:39:02 crc kubenswrapper[4782]: I0202 10:39:02.840239 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Feb 02 10:39:02 crc kubenswrapper[4782]: I0202 10:39:02.841240 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Feb 02 10:39:02 crc kubenswrapper[4782]: I0202 10:39:02.841885 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Feb 02 10:39:02 crc kubenswrapper[4782]: I0202 10:39:02.842569 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Feb 02 10:39:02 crc kubenswrapper[4782]: I0202 10:39:02.843287 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Feb 02 10:39:02 crc kubenswrapper[4782]: I0202 10:39:02.843944 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Feb 02 10:39:02 crc kubenswrapper[4782]: I0202 10:39:02.844608 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Feb 02 10:39:02 crc kubenswrapper[4782]: I0202 10:39:02.845090 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Feb 02 10:39:02 crc kubenswrapper[4782]: I0202 10:39:02.845808 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Feb 02 10:39:02 crc kubenswrapper[4782]: I0202 10:39:02.846428 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Feb 02 10:39:02 crc kubenswrapper[4782]: I0202 10:39:02.847209 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Feb 02 10:39:02 crc kubenswrapper[4782]: I0202 10:39:02.847882 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Feb 02 10:39:02 crc kubenswrapper[4782]: I0202 10:39:02.848388 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Feb 02 10:39:02 crc kubenswrapper[4782]: I0202 10:39:02.849133 4782 kubelet_volumes.go:163] "Cleaned 
up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Feb 02 10:39:02 crc kubenswrapper[4782]: I0202 10:39:02.850809 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Feb 02 10:39:02 crc kubenswrapper[4782]: I0202 10:39:02.851729 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Feb 02 10:39:02 crc kubenswrapper[4782]: I0202 10:39:02.853123 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Feb 02 10:39:02 crc kubenswrapper[4782]: I0202 10:39:02.853630 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Feb 02 10:39:02 crc kubenswrapper[4782]: I0202 10:39:02.854304 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Feb 02 10:39:02 crc kubenswrapper[4782]: I0202 10:39:02.855273 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Feb 02 10:39:02 crc kubenswrapper[4782]: I0202 10:39:02.855791 4782 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Feb 02 10:39:02 crc kubenswrapper[4782]: I0202 10:39:02.855897 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Feb 02 10:39:02 crc kubenswrapper[4782]: I0202 10:39:02.858716 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Feb 02 10:39:02 crc kubenswrapper[4782]: I0202 10:39:02.859660 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Feb 02 10:39:02 crc kubenswrapper[4782]: I0202 10:39:02.860180 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Feb 02 10:39:02 crc kubenswrapper[4782]: I0202 10:39:02.862938 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Feb 02 10:39:02 crc kubenswrapper[4782]: I0202 10:39:02.863712 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Feb 02 10:39:02 crc kubenswrapper[4782]: I0202 10:39:02.864333 4782 kubelet_volumes.go:163] "Cleaned up 
orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Feb 02 10:39:02 crc kubenswrapper[4782]: I0202 10:39:02.865497 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Feb 02 10:39:02 crc kubenswrapper[4782]: I0202 10:39:02.866808 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Feb 02 10:39:02 crc kubenswrapper[4782]: I0202 10:39:02.867854 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Feb 02 10:39:02 crc kubenswrapper[4782]: I0202 10:39:02.868892 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Feb 02 10:39:02 crc kubenswrapper[4782]: I0202 10:39:02.869677 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Feb 02 10:39:02 crc kubenswrapper[4782]: I0202 10:39:02.870403 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Feb 02 10:39:02 crc kubenswrapper[4782]: I0202 10:39:02.870951 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Feb 02 10:39:02 crc kubenswrapper[4782]: I0202 10:39:02.871604 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Feb 02 10:39:02 crc kubenswrapper[4782]: I0202 10:39:02.872200 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Feb 02 10:39:02 crc kubenswrapper[4782]: I0202 10:39:02.873163 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Feb 02 10:39:02 crc kubenswrapper[4782]: I0202 10:39:02.873766 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Feb 02 10:39:02 crc kubenswrapper[4782]: I0202 10:39:02.874393 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Feb 02 10:39:02 crc kubenswrapper[4782]: I0202 10:39:02.875049 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Feb 02 10:39:02 crc kubenswrapper[4782]: I0202 10:39:02.875754 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Feb 02 10:39:02 crc kubenswrapper[4782]: I0202 10:39:02.876501 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Feb 02 10:39:02 crc kubenswrapper[4782]: I0202 10:39:02.877116 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Feb 02 10:39:02 crc kubenswrapper[4782]: I0202 10:39:02.994914 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 02 10:39:02 crc kubenswrapper[4782]: I0202 10:39:02.996969 4782 scope.go:117] "RemoveContainer" containerID="b089d9f482be084f3c2be3bc369e38cef6607d91af0c82b338da5c29883f6057" Feb 02 10:39:02 crc kubenswrapper[4782]: E0202 10:39:02.997260 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 02 10:39:03 crc kubenswrapper[4782]: I0202 10:39:03.010460 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfc52d94-656d-4294-b105-0f83d22c9664\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66dc97daee138ae6c8403d57638f6bd3cc1c5a08ce7e1631129017dc7046ebc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9de866cf92cbd982802a4bfa9c7f6b9839c69efb72d01f3f52e65e2355d47ded\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a81a5434992af16888e8ed5ec9555e57dd4485dd6c396816421ad07c181dae8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b089d9f482be084f3c2be3bc369e38cef6607d91af0c82b338da5c29883f6057\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b089d9f482be084f3c2be3bc369e38cef6607d91af0c82b338da5c29883f6057\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 10:39:00.270088 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:39:00.270415 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:39:00.271477 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3666386151/tls.crt::/tmp/serving-cert-3666386151/tls.key\\\\\\\"\\\\nI0202 10:39:00.760414 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:39:00.783275 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:39:00.783307 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:39:00.783330 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:39:00.783335 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 10:39:00.810370 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:39:00.810474 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810498 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 10:39:00.810539 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:39:00.810558 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:39:00.810577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 10:39:00.810783 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 10:39:00.814783 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26b0186a9067ba0a405e849b58fc2f39e93d443ddf69ebd0d4fd4877f45e805\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 10:39:03 crc kubenswrapper[4782]: I0202 10:39:03.025498 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 10:39:03 crc kubenswrapper[4782]: I0202 10:39:03.043007 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 10:39:03 crc kubenswrapper[4782]: I0202 10:39:03.056366 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 10:39:03 crc kubenswrapper[4782]: I0202 10:39:03.070206 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b6dd1dee15020005db2d8655165e09c72a90eaade00a66c0742c0980ccc669e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 10:39:03 crc kubenswrapper[4782]: I0202 10:39:03.083448 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 10:39:03 crc kubenswrapper[4782]: I0202 10:39:03.106499 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aebafb5717a3d34f83e5f1ba346c51bccea6c08fb6ddc2fadc47d8e1a4df1bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://643eee6982fcc4c02b72fb6daf1a18fc5ae263ebf8ac506d1fa745d6c311b38a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 10:39:03 crc kubenswrapper[4782]: I0202 10:39:03.774173 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 07:53:49.262639123 +0000 UTC Feb 02 10:39:03 crc kubenswrapper[4782]: I0202 10:39:03.820151 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:39:03 crc kubenswrapper[4782]: E0202 10:39:03.820280 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:39:04 crc kubenswrapper[4782]: I0202 10:39:04.283413 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Feb 02 10:39:04 crc kubenswrapper[4782]: I0202 10:39:04.296108 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Feb 02 10:39:04 crc kubenswrapper[4782]: I0202 10:39:04.298840 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Feb 02 10:39:04 crc kubenswrapper[4782]: I0202 10:39:04.309137 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:04Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:04 crc kubenswrapper[4782]: I0202 10:39:04.328083 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfc52d94-656d-4294-b105-0f83d22c9664\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66dc97daee138ae6c8403d57638f6bd3cc1c5a08ce7e1631129017dc7046ebc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9de866cf92cbd982802a4bfa9c7f6b9839c69efb72d01f3f52e65e2355d47ded\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a81a5434992af16888e8ed5ec9555e57dd4485dd6c396816421ad07c181dae8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b089d9f482be084f3c2be3bc369e38cef6607d91af0c82b338da5c29883f6057\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b089d9f482be084f3c2be3bc369e38cef6607d91af0c82b338da5c29883f6057\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 10:39:00.270088 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:39:00.270415 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:39:00.271477 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3666386151/tls.crt::/tmp/serving-cert-3666386151/tls.key\\\\\\\"\\\\nI0202 10:39:00.760414 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:39:00.783275 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:39:00.783307 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:39:00.783330 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:39:00.783335 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 10:39:00.810370 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:39:00.810474 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810498 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 10:39:00.810539 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:39:00.810558 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:39:00.810577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 10:39:00.810783 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 10:39:00.814783 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26b0186a9067ba0a405e849b58fc2f39e93d443ddf69ebd0d4fd4877f45e805\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:04Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:04 crc kubenswrapper[4782]: I0202 10:39:04.345321 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:04Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:04 crc kubenswrapper[4782]: I0202 10:39:04.364333 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:04Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:04 crc kubenswrapper[4782]: I0202 10:39:04.381945 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b6dd1dee15020005db2d8655165e09c72a90eaade00a66c0742c0980ccc669e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:04Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:04 crc kubenswrapper[4782]: I0202 10:39:04.398218 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:04Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:04 crc kubenswrapper[4782]: I0202 10:39:04.414898 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aebafb5717a3d34f83e5f1ba346c51bccea6c08fb6ddc2fadc47d8e1a4df1bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://643eee6982fcc4c02b72fb6daf1a18fc5ae263ebf8ac506d1fa745d6c311b38a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:04Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:04 crc kubenswrapper[4782]: I0202 10:39:04.430597 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:04Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:04 crc kubenswrapper[4782]: I0202 10:39:04.444634 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:04Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:04 crc kubenswrapper[4782]: I0202 10:39:04.461060 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfc52d94-656d-4294-b105-0f83d22c9664\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66dc97daee138ae6c8403d57638f6bd3cc1c5a08ce7e1631129017dc7046ebc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9de866cf92cbd982802a4bfa9c7f6b9839c69efb72d01f3f52e65e2355d47ded\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a81a5434992af16888e8ed5ec9555e57dd4485dd6c396816421ad07c181dae8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b089d9f482be084f3c2be3bc369e38cef6607d91af0c82b338da5c29883f6057\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b089d9f482be084f3c2be3bc369e38cef6607d91af0c82b338da5c29883f6057\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 10:39:00.270088 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:39:00.270415 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:39:00.271477 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3666386151/tls.crt::/tmp/serving-cert-3666386151/tls.key\\\\\\\"\\\\nI0202 10:39:00.760414 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:39:00.783275 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:39:00.783307 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:39:00.783330 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:39:00.783335 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 10:39:00.810370 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:39:00.810474 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810498 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 10:39:00.810539 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:39:00.810558 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:39:00.810577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 10:39:00.810783 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 10:39:00.814783 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26b0186a9067ba0a405e849b58fc2f39e93d443ddf69ebd0d4fd4877f45e805\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:04Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:04 crc kubenswrapper[4782]: I0202 10:39:04.479694 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b6dd1dee15020005db2d8655165e09c72a90eaade00a66c0742c0980ccc669e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:04Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:04 crc kubenswrapper[4782]: I0202 10:39:04.493384 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:04Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:04 crc kubenswrapper[4782]: I0202 10:39:04.508625 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aebafb5717a3d34f83e5f1ba346c51bccea6c08fb6ddc2fadc47d8e1a4df1bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://643eee6982fcc4c02b72fb6daf1a18fc5ae263ebf8ac506d1fa745d6c311b38a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\
"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:04Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:04 crc kubenswrapper[4782]: I0202 10:39:04.528621 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:39:04 crc kubenswrapper[4782]: I0202 10:39:04.528723 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:39:04 crc kubenswrapper[4782]: I0202 10:39:04.528754 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:39:04 crc kubenswrapper[4782]: I0202 10:39:04.528784 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:39:04 crc kubenswrapper[4782]: I0202 10:39:04.528807 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:39:04 crc kubenswrapper[4782]: E0202 10:39:04.528974 4782 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 10:39:04 crc kubenswrapper[4782]: E0202 10:39:04.528998 4782 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 10:39:04 crc kubenswrapper[4782]: E0202 10:39:04.529011 4782 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 10:39:04 crc kubenswrapper[4782]: E0202 10:39:04.529007 4782 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 10:39:04 crc kubenswrapper[4782]: E0202 10:39:04.529065 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-02 10:39:08.529043947 +0000 UTC m=+28.413236663 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 10:39:04 crc kubenswrapper[4782]: E0202 10:39:04.529109 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 10:39:08.529081828 +0000 UTC m=+28.413274594 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 10:39:04 crc kubenswrapper[4782]: E0202 10:39:04.528974 4782 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 10:39:04 crc kubenswrapper[4782]: E0202 10:39:04.529147 4782 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 10:39:04 crc kubenswrapper[4782]: E0202 10:39:04.529166 4782 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 10:39:04 crc kubenswrapper[4782]: E0202 10:39:04.529234 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-02 10:39:08.529224602 +0000 UTC m=+28.413417408 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 10:39:04 crc kubenswrapper[4782]: E0202 10:39:04.529330 4782 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 10:39:04 crc kubenswrapper[4782]: E0202 10:39:04.529361 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 10:39:08.529352186 +0000 UTC m=+28.413545012 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 10:39:04 crc kubenswrapper[4782]: E0202 10:39:04.529488 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:39:08.529476859 +0000 UTC m=+28.413669645 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:39:04 crc kubenswrapper[4782]: I0202 10:39:04.536711 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59347d2e-80ed-46da-8b2f-87cc48c5b564\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7303c0a57d425d661744c913b418fb647290581f745f973af1507563fa70b860\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10b2c4d81a795c40e715af0855002f75e3018dc14560eb97b551186a48c52744\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://667df661cba8be98b22059905ac9fe87f5d2af093d3184cfead863474441574f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfe76e80657cb875a6dd9b9c977e6120a670fc5
cd1dad2e0346b21a467b00f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58cd2e688da72a74340b25a755164b61039f7615ccda8b24b12b2b4025f89584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:04Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:04 crc kubenswrapper[4782]: I0202 10:39:04.555986 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:04Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:04 crc kubenswrapper[4782]: I0202 10:39:04.771749 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 10:39:04 crc kubenswrapper[4782]: I0202 10:39:04.774309 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 19:15:07.568641017 +0000 UTC Feb 02 10:39:04 crc kubenswrapper[4782]: I0202 10:39:04.776851 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 10:39:04 crc kubenswrapper[4782]: I0202 10:39:04.787445 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Feb 02 10:39:04 crc kubenswrapper[4782]: I0202 10:39:04.789580 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aebafb5717a3d34f83e5f1ba346c51bccea6c08fb6ddc2fadc47d8e1a4df1bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://643eee6982fcc4c02b72fb6daf1a18fc5ae263ebf8ac506d1fa745d6c311b38a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:04Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:04 crc kubenswrapper[4782]: I0202 10:39:04.815080 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59347d2e-80ed-46da-8b2f-87cc48c5b564\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7303c0a57d425d661744c913b418fb647290581f745f973af1507563fa70b860\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10b2c4d81a795c40e715af0855002f75e3018dc14560eb97b551186a48c52744\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://667df661cba8be98b22059905ac9fe87f5d2af093d3184cfead863474441574f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfe76e80657cb875a6dd9b9c977e6120a670fc5
cd1dad2e0346b21a467b00f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58cd2e688da72a74340b25a755164b61039f7615ccda8b24b12b2b4025f89584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:04Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:04 crc kubenswrapper[4782]: I0202 10:39:04.820205 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:39:04 crc kubenswrapper[4782]: I0202 10:39:04.820332 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:39:04 crc kubenswrapper[4782]: E0202 10:39:04.820409 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:39:04 crc kubenswrapper[4782]: E0202 10:39:04.820523 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:39:04 crc kubenswrapper[4782]: I0202 10:39:04.830453 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:04Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:04 crc kubenswrapper[4782]: I0202 10:39:04.845765 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b6dd1dee15020005db2d8655165e09c72a90eaade00a66c0742c0980ccc669e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:04Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:04 crc kubenswrapper[4782]: I0202 10:39:04.860563 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:04Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:04 crc kubenswrapper[4782]: I0202 10:39:04.876731 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:04Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:04 crc kubenswrapper[4782]: I0202 10:39:04.890754 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:04Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:04 crc kubenswrapper[4782]: I0202 10:39:04.907939 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfc52d94-656d-4294-b105-0f83d22c9664\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66dc97daee138ae6c8403d57638f6bd3cc1c5a08ce7e1631129017dc7046ebc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9de866cf92cbd982802a4bfa9c7f6b9839c69efb72d01f3f52e65e2355d47ded\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a81a5434992af16888e8ed5ec9555e57dd4485dd6c396816421ad07c181dae8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b089d9f482be084f3c2be3bc369e38cef6607d91af0c82b338da5c29883f6057\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b089d9f482be084f3c2be3bc369e38cef6607d91af0c82b338da5c29883f6057\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 10:39:00.270088 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:39:00.270415 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:39:00.271477 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3666386151/tls.crt::/tmp/serving-cert-3666386151/tls.key\\\\\\\"\\\\nI0202 10:39:00.760414 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:39:00.783275 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:39:00.783307 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:39:00.783330 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:39:00.783335 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 10:39:00.810370 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:39:00.810474 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810498 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 10:39:00.810539 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:39:00.810558 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:39:00.810577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 10:39:00.810783 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 10:39:00.814783 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26b0186a9067ba0a405e849b58fc2f39e93d443ddf69ebd0d4fd4877f45e805\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:04Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:04 crc kubenswrapper[4782]: I0202 10:39:04.923428 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfc52d94-656d-4294-b105-0f83d22c9664\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66dc97daee138ae6c8403d57638f6bd3cc1c5a08ce7e1631129017dc7046ebc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9de866cf92cbd982802a4bfa9c7f6b9839c69efb72d01f3f52e65e2355d47ded\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a81a5434992af16888e8ed5ec9555e57dd4485dd6c396816421ad07c181dae8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b089d9f482be084f3c2be3bc369e38cef6607d91af0c82b338da5c29883f6057\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b089d9f482be084f3c2be3bc369e38cef6607d91af0c82b338da5c29883f6057\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 10:39:00.270088 1 builder.go:272] unable to get owner 
reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:39:00.270415 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:39:00.271477 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3666386151/tls.crt::/tmp/serving-cert-3666386151/tls.key\\\\\\\"\\\\nI0202 10:39:00.760414 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:39:00.783275 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:39:00.783307 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:39:00.783330 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:39:00.783335 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 10:39:00.810370 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:39:00.810474 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810498 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 10:39:00.810539 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:39:00.810558 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:39:00.810577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 10:39:00.810783 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 10:39:00.814783 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26b0186a9067ba0a405e849b58fc2f39e93d443ddf69ebd0d4fd4877f45e805\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:04Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:04 crc kubenswrapper[4782]: I0202 10:39:04.939593 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"83b81d23-4cbf-48f2-a90d-aa5b4cb3ce45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://638971b143e4defe2f04fba48e554aff3023d52863a0eb6f36c29d394de7eeb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18f8ad22dcd04bba0dd895d584affcd359ac6221153a18ee542dee979b9efe25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3c260f298316c830b06f5e3634cd6c09bd10bbaad77b46e427eb4b039eebb53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://745ed5274a247d61c320043043077f9ecb39fa3734db15aefa07a3bbb6225c26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:04Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:04 crc kubenswrapper[4782]: I0202 10:39:04.955037 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:04Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:04 crc kubenswrapper[4782]: I0202 10:39:04.969103 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:04Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:04 crc kubenswrapper[4782]: I0202 10:39:04.991159 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59347d2e-80ed-46da-8b2f-87cc48c5b564\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7303c0a57d425d661744c913b418fb647290581f745f973af1507563fa70b860\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10b2c4d81a795c40e715af0855002f75e3018dc14560eb97b551186a48c52744\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-m
etrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://667df661cba8be98b22059905ac9fe87f5d2af093d3184cfead863474441574f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfe76e80657cb875a6dd9b9c977e6120a670fc5cd1dad2e0346b21a467b00f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58cd2e688da72a74340b25a755164b61039f7615ccda8b24b12b2b4025f89584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":f
alse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:04Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:05 crc kubenswrapper[4782]: I0202 10:39:05.002397 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"959c7dca46e7a68f4472c3aa2104c49d4e47190f0fab5c7ad2225e27e5c4585a"} Feb 02 10:39:05 crc kubenswrapper[4782]: I0202 10:39:05.009192 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:05Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:05 crc kubenswrapper[4782]: E0202 10:39:05.011111 4782 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"etcd-crc\" already exists" pod="openshift-etcd/etcd-crc" Feb 02 10:39:05 crc kubenswrapper[4782]: I0202 10:39:05.026806 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b6dd1dee15020005db2d8655165e09c72a90eaade00a66c0742c0980ccc669e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:05Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:05 crc kubenswrapper[4782]: I0202 10:39:05.041981 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:05Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:05 crc kubenswrapper[4782]: I0202 10:39:05.058430 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aebafb5717a3d34f83e5f1ba346c51bccea6c08fb6ddc2fadc47d8e1a4df1bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://643eee6982fcc4c02b72fb6daf1a18fc5ae263ebf8ac506d1fa745d6c311b38a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\
"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:05Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:05 crc kubenswrapper[4782]: I0202 10:39:05.083429 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfc52d94-656d-4294-b105-0f83d22c9664\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66dc97daee138ae6c8403d57638f6bd3cc1c5a08ce7e1631129017dc7046ebc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9de866cf92cbd982802a4bfa9c7f6b9839c69efb72d01f3f52e65e2355d47ded\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\
\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a81a5434992af16888e8ed5ec9555e57dd4485dd6c396816421ad07c181dae8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b089d9f482be084f3c2be3bc369e38cef6607d91af0c82b338da5c29883f6057\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b089d9f482be084f3c2be3bc369e38cef6607d91af0c82b338da5c29883f6057\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 10:39:00.270088 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:39:00.270415 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:39:00.271477 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3666386151/tls.crt::/tmp/serving-cert-3666386151/tls.key\\\\\\\"\\\\nI0202 10:39:00.760414 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:39:00.783275 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:39:00.783307 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:39:00.783330 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:39:00.783335 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 10:39:00.810370 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:39:00.810474 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810498 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 10:39:00.810539 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:39:00.810558 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:39:00.810577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 10:39:00.810783 
1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 10:39:00.814783 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26b0186a9067ba0a405e849b58fc2f39e93d443ddf69ebd0d4fd4877f45e805\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:05Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:05 crc kubenswrapper[4782]: I0202 10:39:05.103264 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"83b81d23-4cbf-48f2-a90d-aa5b4cb3ce45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://638971b143e4defe2f04fba48e554aff3023d52863a0eb6f36c29d394de7eeb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18f8ad22dcd04bba0dd895d584affcd359ac6221153a18ee542dee979b9efe25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3c260f298316c830b06f5e3634cd6c09bd10bbaad77b46e427eb4b039eebb53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://745ed5274a247d61c320043043077f9ecb39fa3734db15aefa07a3bbb6225c26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:05Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:05 crc kubenswrapper[4782]: I0202 10:39:05.118185 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:05Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:05 crc kubenswrapper[4782]: I0202 10:39:05.131779 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://959c7dca46e7a68f4472c3aa2104c49d4e47190f0fab5c7ad2225e27e5c4585a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:05Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:05 crc kubenswrapper[4782]: I0202 10:39:05.156960 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59347d2e-80ed-46da-8b2f-87cc48c5b564\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7303c0a57d425d661744c913b418fb647290581f745f973af1507563fa70b860\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10b2c4d81a795c40e715af0855002f75e3018dc14560eb97b551186a48c52744\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://667df661cba8be98b22059905ac9fe87f5d2af093d3184cfead863474441574f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfe76e80657cb875a6dd9b9c977e6120a670fc5
cd1dad2e0346b21a467b00f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58cd2e688da72a74340b25a755164b61039f7615ccda8b24b12b2b4025f89584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:05Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:05 crc kubenswrapper[4782]: I0202 10:39:05.170284 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:05Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:05 crc kubenswrapper[4782]: I0202 10:39:05.183440 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b6dd1dee15020005db2d8655165e09c72a90eaade00a66c0742c0980ccc669e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:05Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:05 crc kubenswrapper[4782]: I0202 10:39:05.217594 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:05Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:05 crc kubenswrapper[4782]: I0202 10:39:05.238183 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aebafb5717a3d34f83e5f1ba346c51bccea6c08fb6ddc2fadc47d8e1a4df1bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://643eee6982fcc4c02b72fb6daf1a18fc5ae263ebf8ac506d1fa745d6c311b38a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:05Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:05 crc kubenswrapper[4782]: I0202 10:39:05.774806 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 15:03:35.906462922 +0000 UTC Feb 02 10:39:05 crc kubenswrapper[4782]: I0202 10:39:05.820438 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:39:05 crc kubenswrapper[4782]: E0202 10:39:05.820628 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:39:06 crc kubenswrapper[4782]: I0202 10:39:06.660467 4782 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 10:39:06 crc kubenswrapper[4782]: I0202 10:39:06.662570 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:06 crc kubenswrapper[4782]: I0202 10:39:06.662780 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:06 crc kubenswrapper[4782]: I0202 10:39:06.662885 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:06 crc kubenswrapper[4782]: I0202 10:39:06.663029 4782 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 02 10:39:06 crc kubenswrapper[4782]: I0202 10:39:06.670852 4782 kubelet_node_status.go:115] "Node was previously registered" node="crc" Feb 02 10:39:06 crc kubenswrapper[4782]: I0202 10:39:06.671356 4782 kubelet_node_status.go:79] "Successfully registered node" node="crc" Feb 02 10:39:06 crc kubenswrapper[4782]: I0202 10:39:06.672608 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:06 crc kubenswrapper[4782]: I0202 10:39:06.672685 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:06 crc kubenswrapper[4782]: I0202 10:39:06.672703 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:06 crc kubenswrapper[4782]: I0202 10:39:06.672725 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:06 crc kubenswrapper[4782]: I0202 10:39:06.672741 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:06Z","lastTransitionTime":"2026-02-02T10:39:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:06 crc kubenswrapper[4782]: E0202 10:39:06.691345 4782 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9f06aea5-54f4-4b11-8fec-22fbe76ec89b\\\",\\\"systemUUID\\\":\\\"b85e9547-662e-4455-bbaa-2d2f2aaad904\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:06Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:06 crc kubenswrapper[4782]: I0202 10:39:06.695350 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:06 crc kubenswrapper[4782]: I0202 10:39:06.695501 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 10:39:06 crc kubenswrapper[4782]: I0202 10:39:06.695596 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:06 crc kubenswrapper[4782]: I0202 10:39:06.695701 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:06 crc kubenswrapper[4782]: I0202 10:39:06.695786 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:06Z","lastTransitionTime":"2026-02-02T10:39:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:06 crc kubenswrapper[4782]: E0202 10:39:06.711779 4782 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9f06aea5-54f4-4b11-8fec-22fbe76ec89b\\\",\\\"systemUUID\\\":\\\"b85e9547-662e-4455-bbaa-2d2f2aaad904\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:06Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:06 crc kubenswrapper[4782]: I0202 10:39:06.715975 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:06 crc kubenswrapper[4782]: I0202 10:39:06.716022 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 10:39:06 crc kubenswrapper[4782]: I0202 10:39:06.716042 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:06 crc kubenswrapper[4782]: I0202 10:39:06.716060 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:06 crc kubenswrapper[4782]: I0202 10:39:06.716071 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:06Z","lastTransitionTime":"2026-02-02T10:39:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:06 crc kubenswrapper[4782]: E0202 10:39:06.731227 4782 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9f06aea5-54f4-4b11-8fec-22fbe76ec89b\\\",\\\"systemUUID\\\":\\\"b85e9547-662e-4455-bbaa-2d2f2aaad904\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:06Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:06 crc kubenswrapper[4782]: I0202 10:39:06.735765 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:06 crc kubenswrapper[4782]: I0202 10:39:06.736045 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 10:39:06 crc kubenswrapper[4782]: I0202 10:39:06.736151 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:06 crc kubenswrapper[4782]: I0202 10:39:06.736236 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:06 crc kubenswrapper[4782]: I0202 10:39:06.736310 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:06Z","lastTransitionTime":"2026-02-02T10:39:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:06 crc kubenswrapper[4782]: E0202 10:39:06.752965 4782 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9f06aea5-54f4-4b11-8fec-22fbe76ec89b\\\",\\\"systemUUID\\\":\\\"b85e9547-662e-4455-bbaa-2d2f2aaad904\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:06Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:06 crc kubenswrapper[4782]: I0202 10:39:06.757841 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:06 crc kubenswrapper[4782]: I0202 10:39:06.758073 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 10:39:06 crc kubenswrapper[4782]: I0202 10:39:06.758175 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:06 crc kubenswrapper[4782]: I0202 10:39:06.758259 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:06 crc kubenswrapper[4782]: I0202 10:39:06.758344 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:06Z","lastTransitionTime":"2026-02-02T10:39:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:06 crc kubenswrapper[4782]: I0202 10:39:06.775117 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 08:14:33.405042196 +0000 UTC Feb 02 10:39:06 crc kubenswrapper[4782]: E0202 10:39:06.775758 4782 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9f06aea5-54f4-4b11-8fec-22fbe76ec89b\\\",\\\"systemUUID\\\":\\\"b85e9547-662e-4455-bbaa-2d2f2aaad904\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:06Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:06 crc kubenswrapper[4782]: E0202 10:39:06.775992 4782 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 02 10:39:06 crc kubenswrapper[4782]: I0202 10:39:06.777357 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 02 10:39:06 crc kubenswrapper[4782]: I0202 10:39:06.777387 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:06 crc kubenswrapper[4782]: I0202 10:39:06.777398 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:06 crc kubenswrapper[4782]: I0202 10:39:06.777414 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:06 crc kubenswrapper[4782]: I0202 10:39:06.777425 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:06Z","lastTransitionTime":"2026-02-02T10:39:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:06 crc kubenswrapper[4782]: I0202 10:39:06.820824 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:39:06 crc kubenswrapper[4782]: I0202 10:39:06.820880 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:39:06 crc kubenswrapper[4782]: E0202 10:39:06.820962 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:39:06 crc kubenswrapper[4782]: E0202 10:39:06.821326 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:39:06 crc kubenswrapper[4782]: I0202 10:39:06.879526 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:06 crc kubenswrapper[4782]: I0202 10:39:06.879566 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:06 crc kubenswrapper[4782]: I0202 10:39:06.879575 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:06 crc kubenswrapper[4782]: I0202 10:39:06.879588 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:06 crc kubenswrapper[4782]: I0202 10:39:06.879598 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:06Z","lastTransitionTime":"2026-02-02T10:39:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:06 crc kubenswrapper[4782]: I0202 10:39:06.982411 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:06 crc kubenswrapper[4782]: I0202 10:39:06.982448 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:06 crc kubenswrapper[4782]: I0202 10:39:06.982460 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:06 crc kubenswrapper[4782]: I0202 10:39:06.982477 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:06 crc kubenswrapper[4782]: I0202 10:39:06.982491 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:06Z","lastTransitionTime":"2026-02-02T10:39:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:07 crc kubenswrapper[4782]: I0202 10:39:07.085890 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:07 crc kubenswrapper[4782]: I0202 10:39:07.085951 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:07 crc kubenswrapper[4782]: I0202 10:39:07.085963 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:07 crc kubenswrapper[4782]: I0202 10:39:07.085984 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:07 crc kubenswrapper[4782]: I0202 10:39:07.085999 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:07Z","lastTransitionTime":"2026-02-02T10:39:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:07 crc kubenswrapper[4782]: I0202 10:39:07.188758 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:07 crc kubenswrapper[4782]: I0202 10:39:07.188826 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:07 crc kubenswrapper[4782]: I0202 10:39:07.188844 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:07 crc kubenswrapper[4782]: I0202 10:39:07.188873 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:07 crc kubenswrapper[4782]: I0202 10:39:07.188890 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:07Z","lastTransitionTime":"2026-02-02T10:39:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:07 crc kubenswrapper[4782]: I0202 10:39:07.291398 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:07 crc kubenswrapper[4782]: I0202 10:39:07.291436 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:07 crc kubenswrapper[4782]: I0202 10:39:07.291446 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:07 crc kubenswrapper[4782]: I0202 10:39:07.291459 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:07 crc kubenswrapper[4782]: I0202 10:39:07.291471 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:07Z","lastTransitionTime":"2026-02-02T10:39:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:07 crc kubenswrapper[4782]: I0202 10:39:07.394003 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:07 crc kubenswrapper[4782]: I0202 10:39:07.394051 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:07 crc kubenswrapper[4782]: I0202 10:39:07.394061 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:07 crc kubenswrapper[4782]: I0202 10:39:07.394075 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:07 crc kubenswrapper[4782]: I0202 10:39:07.394084 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:07Z","lastTransitionTime":"2026-02-02T10:39:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:07 crc kubenswrapper[4782]: I0202 10:39:07.496580 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:07 crc kubenswrapper[4782]: I0202 10:39:07.496633 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:07 crc kubenswrapper[4782]: I0202 10:39:07.496676 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:07 crc kubenswrapper[4782]: I0202 10:39:07.496693 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:07 crc kubenswrapper[4782]: I0202 10:39:07.496708 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:07Z","lastTransitionTime":"2026-02-02T10:39:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:07 crc kubenswrapper[4782]: I0202 10:39:07.598833 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:07 crc kubenswrapper[4782]: I0202 10:39:07.598929 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:07 crc kubenswrapper[4782]: I0202 10:39:07.598942 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:07 crc kubenswrapper[4782]: I0202 10:39:07.598965 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:07 crc kubenswrapper[4782]: I0202 10:39:07.598979 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:07Z","lastTransitionTime":"2026-02-02T10:39:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:07 crc kubenswrapper[4782]: I0202 10:39:07.701790 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:07 crc kubenswrapper[4782]: I0202 10:39:07.701839 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:07 crc kubenswrapper[4782]: I0202 10:39:07.701851 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:07 crc kubenswrapper[4782]: I0202 10:39:07.701874 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:07 crc kubenswrapper[4782]: I0202 10:39:07.701886 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:07Z","lastTransitionTime":"2026-02-02T10:39:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:07 crc kubenswrapper[4782]: I0202 10:39:07.775285 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 10:24:37.57703347 +0000 UTC Feb 02 10:39:07 crc kubenswrapper[4782]: I0202 10:39:07.804144 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:07 crc kubenswrapper[4782]: I0202 10:39:07.804183 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:07 crc kubenswrapper[4782]: I0202 10:39:07.804194 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:07 crc kubenswrapper[4782]: I0202 10:39:07.804210 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:07 crc kubenswrapper[4782]: I0202 10:39:07.804221 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:07Z","lastTransitionTime":"2026-02-02T10:39:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:07 crc kubenswrapper[4782]: I0202 10:39:07.820428 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:39:07 crc kubenswrapper[4782]: E0202 10:39:07.820550 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:39:07 crc kubenswrapper[4782]: I0202 10:39:07.906651 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:07 crc kubenswrapper[4782]: I0202 10:39:07.906691 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:07 crc kubenswrapper[4782]: I0202 10:39:07.906702 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:07 crc kubenswrapper[4782]: I0202 10:39:07.906716 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:07 crc kubenswrapper[4782]: I0202 10:39:07.906727 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:07Z","lastTransitionTime":"2026-02-02T10:39:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:07 crc kubenswrapper[4782]: I0202 10:39:07.926558 4782 csr.go:261] certificate signing request csr-qkjks is approved, waiting to be issued Feb 02 10:39:07 crc kubenswrapper[4782]: I0202 10:39:07.935370 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-fptzv"] Feb 02 10:39:07 crc kubenswrapper[4782]: I0202 10:39:07.935751 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-fptzv" Feb 02 10:39:07 crc kubenswrapper[4782]: I0202 10:39:07.938960 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 02 10:39:07 crc kubenswrapper[4782]: I0202 10:39:07.940268 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 02 10:39:07 crc kubenswrapper[4782]: I0202 10:39:07.953991 4782 csr.go:257] certificate signing request csr-qkjks is issued Feb 02 10:39:07 crc kubenswrapper[4782]: I0202 10:39:07.954770 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aebafb5717a3d34f83e5f1ba346c51bccea6c08fb6ddc2fadc47d8e1a4df1bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://643eee6982fcc4c02b72fb6daf1a18fc5ae263ebf8ac506d1fa745d6c311b38a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:07Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:07 crc kubenswrapper[4782]: I0202 10:39:07.954886 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 02 10:39:07 crc kubenswrapper[4782]: I0202 10:39:07.969695 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fptzv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa0a3c57-fe47-43dd-8905-00df4cae4fb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:07Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:07Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6np9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fptzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:07Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.002608 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59347d2e-80ed-46da-8b2f-87cc48c5b564\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7303c0a57d425d661744c913b418fb647290581f745f973af1507563fa70b860\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10b2c4d81a795c40e715af0855002f75e3018dc14560eb97b551186a48c52744\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://667df661cba8be98b22059905ac9fe87f5d2af093d3184cfead863474441574f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfe76e80657cb875a6dd9b9c977e6120a670fc5
cd1dad2e0346b21a467b00f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58cd2e688da72a74340b25a755164b61039f7615ccda8b24b12b2b4025f89584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:07Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.009678 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.009707 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.009715 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.009728 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.009736 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:08Z","lastTransitionTime":"2026-02-02T10:39:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.044602 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:08Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.063740 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b6dd1dee15020005db2d8655165e09c72a90eaade00a66c0742c0980ccc669e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:08Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.089369 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6np9\" (UniqueName: \"kubernetes.io/projected/fa0a3c57-fe47-43dd-8905-00df4cae4fb8-kube-api-access-w6np9\") pod \"node-resolver-fptzv\" (UID: \"fa0a3c57-fe47-43dd-8905-00df4cae4fb8\") " pod="openshift-dns/node-resolver-fptzv" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.089419 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/fa0a3c57-fe47-43dd-8905-00df4cae4fb8-hosts-file\") pod \"node-resolver-fptzv\" (UID: \"fa0a3c57-fe47-43dd-8905-00df4cae4fb8\") " pod="openshift-dns/node-resolver-fptzv" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.089728 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:08Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.108796 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:08Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.112325 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.112358 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.112375 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.112395 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.112407 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:08Z","lastTransitionTime":"2026-02-02T10:39:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.141724 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://959c7dca46e7a68f4472c3aa2104c49d4e47190f0fab5c7ad2225e27e5c4585a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:08Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.170384 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfc52d94-656d-4294-b105-0f83d22c9664\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66dc97daee138ae6c8403d57638f6bd3cc1c5a08ce7e1631129017dc7046ebc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9de866cf92cbd982802a4bfa9c7f6b9839c69efb72d01f3f52e65e2355d47ded\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a81a5434992af16888e8ed5ec9555e57dd4485dd6c396816421ad07c181dae8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b089d9f482be084f3c2be3bc369e38cef6607d91af0c82b338da5c29883f6057\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b089d9f482be084f3c2be3bc369e38cef6607d91af0c82b338da5c29883f6057\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 10:39:00.270088 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:39:00.270415 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:39:00.271477 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3666386151/tls.crt::/tmp/serving-cert-3666386151/tls.key\\\\\\\"\\\\nI0202 10:39:00.760414 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:39:00.783275 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:39:00.783307 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:39:00.783330 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:39:00.783335 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 10:39:00.810370 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:39:00.810474 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810498 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 10:39:00.810539 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:39:00.810558 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:39:00.810577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 10:39:00.810783 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 10:39:00.814783 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26b0186a9067ba0a405e849b58fc2f39e93d443ddf69ebd0d4fd4877f45e805\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:08Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.187867 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"83b81d23-4cbf-48f2-a90d-aa5b4cb3ce45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://638971b143e4defe2f04fba48e554aff3023d52863a0eb6f36c29d394de7eeb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18f8ad22dcd04bba0dd895d584affcd359ac6221153a18ee542dee979b9efe25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3c260f298316c830b06f5e3634cd6c09bd10bbaad77b46e427eb4b039eebb53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://745ed5274a247d61c320043043077f9ecb39fa3734db15aefa07a3bbb6225c26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:08Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.190308 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w6np9\" (UniqueName: \"kubernetes.io/projected/fa0a3c57-fe47-43dd-8905-00df4cae4fb8-kube-api-access-w6np9\") pod \"node-resolver-fptzv\" (UID: \"fa0a3c57-fe47-43dd-8905-00df4cae4fb8\") " pod="openshift-dns/node-resolver-fptzv" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.190337 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/fa0a3c57-fe47-43dd-8905-00df4cae4fb8-hosts-file\") pod \"node-resolver-fptzv\" (UID: \"fa0a3c57-fe47-43dd-8905-00df4cae4fb8\") " pod="openshift-dns/node-resolver-fptzv" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.190424 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/fa0a3c57-fe47-43dd-8905-00df4cae4fb8-hosts-file\") pod \"node-resolver-fptzv\" (UID: \"fa0a3c57-fe47-43dd-8905-00df4cae4fb8\") " pod="openshift-dns/node-resolver-fptzv" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.212673 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w6np9\" (UniqueName: \"kubernetes.io/projected/fa0a3c57-fe47-43dd-8905-00df4cae4fb8-kube-api-access-w6np9\") pod \"node-resolver-fptzv\" (UID: \"fa0a3c57-fe47-43dd-8905-00df4cae4fb8\") " pod="openshift-dns/node-resolver-fptzv" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.217014 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.217049 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.217059 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.217073 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:08 crc 
kubenswrapper[4782]: I0202 10:39:08.217084 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:08Z","lastTransitionTime":"2026-02-02T10:39:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.248045 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-fptzv"
Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.325839 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.325875 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.325884 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.325898 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.325908 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:08Z","lastTransitionTime":"2026-02-02T10:39:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.428137 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.428172 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.428183 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.428197 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.428208 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:08Z","lastTransitionTime":"2026-02-02T10:39:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.448037 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-fsqgq"]
Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.448274 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-bhdgk"]
Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.448406 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-fsqgq"
Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.448502 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk"
Feb 02 10:39:08 crc kubenswrapper[4782]: W0202 10:39:08.450004 4782 reflector.go:561] object-"openshift-machine-config-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-machine-config-operator": no relationship found between node 'crc' and this object
Feb 02 10:39:08 crc kubenswrapper[4782]: E0202 10:39:08.450039 4782 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-machine-config-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Feb 02 10:39:08 crc kubenswrapper[4782]: W0202 10:39:08.450554 4782 reflector.go:561] object-"openshift-multus"/"default-dockercfg-2q5b6": failed to list *v1.Secret: secrets "default-dockercfg-2q5b6" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-multus": no relationship found between node 'crc' and this object
Feb 02 10:39:08 crc kubenswrapper[4782]: E0202 10:39:08.450580 4782 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"default-dockercfg-2q5b6\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"default-dockercfg-2q5b6\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-multus\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Feb 02 10:39:08 crc kubenswrapper[4782]: W0202 10:39:08.450934 4782 reflector.go:561] object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq": failed to list *v1.Secret: secrets "machine-config-daemon-dockercfg-r5tcq" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-machine-config-operator": no relationship found between node 'crc' and this object
Feb 02 10:39:08 crc kubenswrapper[4782]: E0202 10:39:08.450962 4782 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"machine-config-daemon-dockercfg-r5tcq\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"machine-config-daemon-dockercfg-r5tcq\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-machine-config-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Feb 02 10:39:08 crc kubenswrapper[4782]: W0202 10:39:08.451258 4782 reflector.go:561] object-"openshift-machine-config-operator"/"kube-rbac-proxy": failed to list *v1.ConfigMap: configmaps "kube-rbac-proxy" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-machine-config-operator": no relationship found between node 'crc' and this object
Feb 02 10:39:08 crc kubenswrapper[4782]: E0202 10:39:08.451278 4782 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"kube-rbac-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-rbac-proxy\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-machine-config-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Feb 02 10:39:08 crc kubenswrapper[4782]: W0202 10:39:08.451358 4782 reflector.go:561] object-"openshift-multus"/"cni-copy-resources": failed to list *v1.ConfigMap: configmaps "cni-copy-resources" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-multus": no relationship found between node 'crc' and this object
Feb 02 10:39:08 crc kubenswrapper[4782]: E0202 10:39:08.451374 4782 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"cni-copy-resources\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"cni-copy-resources\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-multus\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Feb 02 10:39:08 crc kubenswrapper[4782]: W0202 10:39:08.451606 4782 reflector.go:561] object-"openshift-multus"/"multus-daemon-config": failed to list *v1.ConfigMap: configmaps "multus-daemon-config" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-multus": no relationship found between node 'crc' and this object
Feb 02 10:39:08 crc kubenswrapper[4782]: E0202 10:39:08.451626 4782 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"multus-daemon-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"multus-daemon-config\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-multus\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Feb 02 10:39:08 crc kubenswrapper[4782]: W0202 10:39:08.451929 4782 reflector.go:561] object-"openshift-multus"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-multus": no relationship found between node 'crc' and this object
Feb 02 10:39:08 crc kubenswrapper[4782]: E0202 10:39:08.451954 4782 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-multus\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.452046 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Feb 02 10:39:08 crc kubenswrapper[4782]: W0202 10:39:08.452951 4782 reflector.go:561] object-"openshift-machine-config-operator"/"proxy-tls": failed to list *v1.Secret: secrets "proxy-tls" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-machine-config-operator": no relationship found between node 'crc' and this object
Feb 02 10:39:08 crc kubenswrapper[4782]: E0202
10:39:08.452979 4782 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"proxy-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"proxy-tls\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-machine-config-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.453723 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.466860 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:08Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.481913 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:08Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.499490 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fsqgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04d9744a-e730-45b4-9f0c-bbb5b02cd311\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7nrfs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fsqgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:08Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.519360 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"83b81d23-4cbf-48f2-a90d-aa5b4cb3ce45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://638971b143e4defe2f04fba48e554aff3023d52863a0eb6f36c29d394de7eeb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18f8ad22dcd04bba0dd895d584affcd359ac6221153a18ee542dee979b9efe25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3c260f298316c830b06f5e3634cd6c09bd10bbaad77b46e427eb4b039eebb53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://745ed5274a247d61c320043043077f9ecb39fa3734db15aefa07a3bbb6225c26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:08Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.530562 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.530819 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.530927 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.531007 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.531073 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:08Z","lastTransitionTime":"2026-02-02T10:39:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.545274 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7919e98f-cc47-4f3c-9c53-6313850ea7b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bhdgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:08Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:08 crc kubenswrapper[4782]: 
I0202 10:39:08.571017 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59347d2e-80ed-46da-8b2f-87cc48c5b564\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7303c0a57d425d661744c913b418fb647290581f745f973af1507563fa70b860\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10b2c4d81a795c40e715af0855002f75e3018dc14560eb97b551186a48c52744\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://667df661cba8be98b22059905ac9fe87f5d2af093d3184cfead863474441574f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"
/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfe76e80657cb875a6dd9b9c977e6120a670fc5cd1dad2e0346b21a467b00f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58cd2e688da72a74340b25a755164b61039f7615ccda8b24b12b2b4025f89584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"start
edAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:08Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.589205 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b6dd1dee15020005db2d8655165e09c72a90eaade00a66c0742c0980ccc669e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:08Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.593280 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.593334 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/04d9744a-e730-45b4-9f0c-bbb5b02cd311-multus-cni-dir\") pod \"multus-fsqgq\" (UID: \"04d9744a-e730-45b4-9f0c-bbb5b02cd311\") " pod="openshift-multus/multus-fsqgq" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.593353 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/04d9744a-e730-45b4-9f0c-bbb5b02cd311-host-run-netns\") pod \"multus-fsqgq\" (UID: \"04d9744a-e730-45b4-9f0c-bbb5b02cd311\") " pod="openshift-multus/multus-fsqgq" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.593368 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/04d9744a-e730-45b4-9f0c-bbb5b02cd311-cni-binary-copy\") pod 
\"multus-fsqgq\" (UID: \"04d9744a-e730-45b4-9f0c-bbb5b02cd311\") " pod="openshift-multus/multus-fsqgq" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.593389 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.593411 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/04d9744a-e730-45b4-9f0c-bbb5b02cd311-host-run-k8s-cni-cncf-io\") pod \"multus-fsqgq\" (UID: \"04d9744a-e730-45b4-9f0c-bbb5b02cd311\") " pod="openshift-multus/multus-fsqgq" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.593432 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/04d9744a-e730-45b4-9f0c-bbb5b02cd311-os-release\") pod \"multus-fsqgq\" (UID: \"04d9744a-e730-45b4-9f0c-bbb5b02cd311\") " pod="openshift-multus/multus-fsqgq" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.593450 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7nrfs\" (UniqueName: \"kubernetes.io/projected/04d9744a-e730-45b4-9f0c-bbb5b02cd311-kube-api-access-7nrfs\") pod \"multus-fsqgq\" (UID: \"04d9744a-e730-45b4-9f0c-bbb5b02cd311\") " pod="openshift-multus/multus-fsqgq" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.593466 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/7919e98f-cc47-4f3c-9c53-6313850ea7b8-rootfs\") pod \"machine-config-daemon-bhdgk\" (UID: \"7919e98f-cc47-4f3c-9c53-6313850ea7b8\") " pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.593483 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/7919e98f-cc47-4f3c-9c53-6313850ea7b8-proxy-tls\") pod \"machine-config-daemon-bhdgk\" (UID: \"7919e98f-cc47-4f3c-9c53-6313850ea7b8\") " pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.593498 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/04d9744a-e730-45b4-9f0c-bbb5b02cd311-system-cni-dir\") pod \"multus-fsqgq\" (UID: \"04d9744a-e730-45b4-9f0c-bbb5b02cd311\") " pod="openshift-multus/multus-fsqgq" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.593515 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/04d9744a-e730-45b4-9f0c-bbb5b02cd311-cnibin\") pod \"multus-fsqgq\" (UID: \"04d9744a-e730-45b4-9f0c-bbb5b02cd311\") " pod="openshift-multus/multus-fsqgq" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.593532 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod 
\"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.593546 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/04d9744a-e730-45b4-9f0c-bbb5b02cd311-host-var-lib-kubelet\") pod \"multus-fsqgq\" (UID: \"04d9744a-e730-45b4-9f0c-bbb5b02cd311\") " pod="openshift-multus/multus-fsqgq" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.593559 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/04d9744a-e730-45b4-9f0c-bbb5b02cd311-hostroot\") pod \"multus-fsqgq\" (UID: \"04d9744a-e730-45b4-9f0c-bbb5b02cd311\") " pod="openshift-multus/multus-fsqgq" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.593574 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/04d9744a-e730-45b4-9f0c-bbb5b02cd311-multus-conf-dir\") pod \"multus-fsqgq\" (UID: \"04d9744a-e730-45b4-9f0c-bbb5b02cd311\") " pod="openshift-multus/multus-fsqgq" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.593588 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/04d9744a-e730-45b4-9f0c-bbb5b02cd311-etc-kubernetes\") pod \"multus-fsqgq\" (UID: \"04d9744a-e730-45b4-9f0c-bbb5b02cd311\") " pod="openshift-multus/multus-fsqgq" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.593605 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.593622 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7919e98f-cc47-4f3c-9c53-6313850ea7b8-mcd-auth-proxy-config\") pod \"machine-config-daemon-bhdgk\" (UID: \"7919e98f-cc47-4f3c-9c53-6313850ea7b8\") " pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.593652 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfn4s\" (UniqueName: \"kubernetes.io/projected/7919e98f-cc47-4f3c-9c53-6313850ea7b8-kube-api-access-sfn4s\") pod \"machine-config-daemon-bhdgk\" (UID: \"7919e98f-cc47-4f3c-9c53-6313850ea7b8\") " pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.593666 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/04d9744a-e730-45b4-9f0c-bbb5b02cd311-host-var-lib-cni-bin\") pod \"multus-fsqgq\" (UID: \"04d9744a-e730-45b4-9f0c-bbb5b02cd311\") " pod="openshift-multus/multus-fsqgq" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.593680 4782 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/04d9744a-e730-45b4-9f0c-bbb5b02cd311-multus-daemon-config\") pod \"multus-fsqgq\" (UID: \"04d9744a-e730-45b4-9f0c-bbb5b02cd311\") " pod="openshift-multus/multus-fsqgq" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.593697 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.593715 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/04d9744a-e730-45b4-9f0c-bbb5b02cd311-multus-socket-dir-parent\") pod \"multus-fsqgq\" (UID: \"04d9744a-e730-45b4-9f0c-bbb5b02cd311\") " pod="openshift-multus/multus-fsqgq" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.593730 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/04d9744a-e730-45b4-9f0c-bbb5b02cd311-host-var-lib-cni-multus\") pod \"multus-fsqgq\" (UID: \"04d9744a-e730-45b4-9f0c-bbb5b02cd311\") " pod="openshift-multus/multus-fsqgq" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.593743 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/04d9744a-e730-45b4-9f0c-bbb5b02cd311-host-run-multus-certs\") pod \"multus-fsqgq\" (UID: \"04d9744a-e730-45b4-9f0c-bbb5b02cd311\") " pod="openshift-multus/multus-fsqgq" Feb 02 10:39:08 crc kubenswrapper[4782]: E0202 10:39:08.593817 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:39:16.593804473 +0000 UTC m=+36.477997189 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:39:08 crc kubenswrapper[4782]: E0202 10:39:08.593931 4782 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 10:39:08 crc kubenswrapper[4782]: E0202 10:39:08.593943 4782 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 10:39:08 crc kubenswrapper[4782]: E0202 10:39:08.593953 4782 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 10:39:08 crc kubenswrapper[4782]: E0202 10:39:08.593979 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-02 10:39:16.593973308 +0000 UTC m=+36.478166024 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 10:39:08 crc kubenswrapper[4782]: E0202 10:39:08.594214 4782 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 10:39:08 crc kubenswrapper[4782]: E0202 10:39:08.594245 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 10:39:16.594237865 +0000 UTC m=+36.478430581 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 10:39:08 crc kubenswrapper[4782]: E0202 10:39:08.594307 4782 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 10:39:08 crc kubenswrapper[4782]: E0202 10:39:08.594322 4782 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 10:39:08 crc kubenswrapper[4782]: E0202 10:39:08.594330 4782 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 10:39:08 crc kubenswrapper[4782]: E0202 10:39:08.594351 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-02 10:39:16.594345508 +0000 UTC m=+36.478538224 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 10:39:08 crc kubenswrapper[4782]: E0202 10:39:08.594411 4782 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 10:39:08 crc kubenswrapper[4782]: E0202 10:39:08.594438 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 10:39:16.594431081 +0000 UTC m=+36.478623797 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.619282 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aebafb5717a3d34f83e5f1ba346c51bccea6c08fb6ddc2fadc47d8e1a4df1bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://643eee6982fcc4c02b72fb6daf1a18fc5ae263ebf8ac506d1fa745d6c311b38a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-02-02T10:39:08Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.633110 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.633140 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.633149 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.633160 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.633169 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:08Z","lastTransitionTime":"2026-02-02T10:39:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.633722 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fptzv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa0a3c57-fe47-43dd-8905-00df4cae4fb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:07Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6np9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fptzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:08Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.651280 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfc52d94-656d-4294-b105-0f83d22c9664\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66dc97daee138ae6c8403d57638f6bd3cc1c5a08ce7e1631129017dc7046ebc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9de866cf92cbd982802a4bfa9c7f6b9839c69efb72d01f3f52e65e2355d47ded\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a81a5434992af16888e8ed5ec9555e57dd4485dd6c396816421ad07c181dae8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b089d9f482be084f3c2be3bc369e38cef6607d91af0c82b338da5c29883f6057\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b089d9f482be084f3c2be3bc369e38cef6607d91af0c82b338da5c29883f6057\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 10:39:00.270088 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:39:00.270415 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:39:00.271477 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3666386151/tls.crt::/tmp/serving-cert-3666386151/tls.key\\\\\\\"\\\\nI0202 10:39:00.760414 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:39:00.783275 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:39:00.783307 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:39:00.783330 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:39:00.783335 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 10:39:00.810370 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:39:00.810474 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810498 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 10:39:00.810539 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:39:00.810558 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:39:00.810577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 10:39:00.810783 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 10:39:00.814783 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26b0186a9067ba0a405e849b58fc2f39e93d443ddf69ebd0d4fd4877f45e805\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:08Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.674220 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:08Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.694427 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/04d9744a-e730-45b4-9f0c-bbb5b02cd311-host-var-lib-cni-multus\") pod \"multus-fsqgq\" (UID: \"04d9744a-e730-45b4-9f0c-bbb5b02cd311\") " pod="openshift-multus/multus-fsqgq" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.694692 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/04d9744a-e730-45b4-9f0c-bbb5b02cd311-host-run-multus-certs\") pod \"multus-fsqgq\" (UID: \"04d9744a-e730-45b4-9f0c-bbb5b02cd311\") " pod="openshift-multus/multus-fsqgq" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.694604 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/04d9744a-e730-45b4-9f0c-bbb5b02cd311-host-var-lib-cni-multus\") pod \"multus-fsqgq\" (UID: \"04d9744a-e730-45b4-9f0c-bbb5b02cd311\") " pod="openshift-multus/multus-fsqgq" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.694774 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/04d9744a-e730-45b4-9f0c-bbb5b02cd311-multus-socket-dir-parent\") pod \"multus-fsqgq\" (UID: \"04d9744a-e730-45b4-9f0c-bbb5b02cd311\") " pod="openshift-multus/multus-fsqgq" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.694858 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/04d9744a-e730-45b4-9f0c-bbb5b02cd311-multus-cni-dir\") pod \"multus-fsqgq\" (UID: \"04d9744a-e730-45b4-9f0c-bbb5b02cd311\") " pod="openshift-multus/multus-fsqgq" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.694864 4782 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/04d9744a-e730-45b4-9f0c-bbb5b02cd311-host-run-multus-certs\") pod \"multus-fsqgq\" (UID: \"04d9744a-e730-45b4-9f0c-bbb5b02cd311\") " pod="openshift-multus/multus-fsqgq" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.694888 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/04d9744a-e730-45b4-9f0c-bbb5b02cd311-host-run-netns\") pod \"multus-fsqgq\" (UID: \"04d9744a-e730-45b4-9f0c-bbb5b02cd311\") " pod="openshift-multus/multus-fsqgq" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.694939 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/04d9744a-e730-45b4-9f0c-bbb5b02cd311-host-run-netns\") pod \"multus-fsqgq\" (UID: \"04d9744a-e730-45b4-9f0c-bbb5b02cd311\") " pod="openshift-multus/multus-fsqgq" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.694956 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/04d9744a-e730-45b4-9f0c-bbb5b02cd311-cni-binary-copy\") pod \"multus-fsqgq\" (UID: \"04d9744a-e730-45b4-9f0c-bbb5b02cd311\") " pod="openshift-multus/multus-fsqgq" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.694975 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/04d9744a-e730-45b4-9f0c-bbb5b02cd311-multus-cni-dir\") pod \"multus-fsqgq\" (UID: \"04d9744a-e730-45b4-9f0c-bbb5b02cd311\") " pod="openshift-multus/multus-fsqgq" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.694992 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/04d9744a-e730-45b4-9f0c-bbb5b02cd311-host-run-k8s-cni-cncf-io\") pod \"multus-fsqgq\" (UID: \"04d9744a-e730-45b4-9f0c-bbb5b02cd311\") " pod="openshift-multus/multus-fsqgq" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.695023 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/04d9744a-e730-45b4-9f0c-bbb5b02cd311-host-run-k8s-cni-cncf-io\") pod \"multus-fsqgq\" (UID: \"04d9744a-e730-45b4-9f0c-bbb5b02cd311\") " pod="openshift-multus/multus-fsqgq" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.695028 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/7919e98f-cc47-4f3c-9c53-6313850ea7b8-rootfs\") pod \"machine-config-daemon-bhdgk\" (UID: \"7919e98f-cc47-4f3c-9c53-6313850ea7b8\") " pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.695053 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/7919e98f-cc47-4f3c-9c53-6313850ea7b8-rootfs\") pod \"machine-config-daemon-bhdgk\" (UID: \"7919e98f-cc47-4f3c-9c53-6313850ea7b8\") " pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.695059 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/04d9744a-e730-45b4-9f0c-bbb5b02cd311-os-release\") pod \"multus-fsqgq\" (UID: 
\"04d9744a-e730-45b4-9f0c-bbb5b02cd311\") " pod="openshift-multus/multus-fsqgq" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.695086 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7nrfs\" (UniqueName: \"kubernetes.io/projected/04d9744a-e730-45b4-9f0c-bbb5b02cd311-kube-api-access-7nrfs\") pod \"multus-fsqgq\" (UID: \"04d9744a-e730-45b4-9f0c-bbb5b02cd311\") " pod="openshift-multus/multus-fsqgq" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.695106 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/04d9744a-e730-45b4-9f0c-bbb5b02cd311-os-release\") pod \"multus-fsqgq\" (UID: \"04d9744a-e730-45b4-9f0c-bbb5b02cd311\") " pod="openshift-multus/multus-fsqgq" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.695116 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/04d9744a-e730-45b4-9f0c-bbb5b02cd311-system-cni-dir\") pod \"multus-fsqgq\" (UID: \"04d9744a-e730-45b4-9f0c-bbb5b02cd311\") " pod="openshift-multus/multus-fsqgq" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.695142 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/04d9744a-e730-45b4-9f0c-bbb5b02cd311-cnibin\") pod \"multus-fsqgq\" (UID: \"04d9744a-e730-45b4-9f0c-bbb5b02cd311\") " pod="openshift-multus/multus-fsqgq" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.695165 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/7919e98f-cc47-4f3c-9c53-6313850ea7b8-proxy-tls\") pod \"machine-config-daemon-bhdgk\" (UID: \"7919e98f-cc47-4f3c-9c53-6313850ea7b8\") " pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.695184 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/04d9744a-e730-45b4-9f0c-bbb5b02cd311-hostroot\") pod \"multus-fsqgq\" (UID: \"04d9744a-e730-45b4-9f0c-bbb5b02cd311\") " pod="openshift-multus/multus-fsqgq" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.695201 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/04d9744a-e730-45b4-9f0c-bbb5b02cd311-multus-conf-dir\") pod \"multus-fsqgq\" (UID: \"04d9744a-e730-45b4-9f0c-bbb5b02cd311\") " pod="openshift-multus/multus-fsqgq" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.695208 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/04d9744a-e730-45b4-9f0c-bbb5b02cd311-cnibin\") pod \"multus-fsqgq\" (UID: \"04d9744a-e730-45b4-9f0c-bbb5b02cd311\") " pod="openshift-multus/multus-fsqgq" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.695219 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/04d9744a-e730-45b4-9f0c-bbb5b02cd311-etc-kubernetes\") pod \"multus-fsqgq\" (UID: \"04d9744a-e730-45b4-9f0c-bbb5b02cd311\") " pod="openshift-multus/multus-fsqgq" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.695232 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: 
\"kubernetes.io/host-path/04d9744a-e730-45b4-9f0c-bbb5b02cd311-system-cni-dir\") pod \"multus-fsqgq\" (UID: \"04d9744a-e730-45b4-9f0c-bbb5b02cd311\") " pod="openshift-multus/multus-fsqgq" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.695241 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/04d9744a-e730-45b4-9f0c-bbb5b02cd311-hostroot\") pod \"multus-fsqgq\" (UID: \"04d9744a-e730-45b4-9f0c-bbb5b02cd311\") " pod="openshift-multus/multus-fsqgq" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.695254 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/04d9744a-e730-45b4-9f0c-bbb5b02cd311-host-var-lib-kubelet\") pod \"multus-fsqgq\" (UID: \"04d9744a-e730-45b4-9f0c-bbb5b02cd311\") " pod="openshift-multus/multus-fsqgq" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.695262 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/04d9744a-e730-45b4-9f0c-bbb5b02cd311-etc-kubernetes\") pod \"multus-fsqgq\" (UID: \"04d9744a-e730-45b4-9f0c-bbb5b02cd311\") " pod="openshift-multus/multus-fsqgq" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.695268 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/04d9744a-e730-45b4-9f0c-bbb5b02cd311-multus-conf-dir\") pod \"multus-fsqgq\" (UID: \"04d9744a-e730-45b4-9f0c-bbb5b02cd311\") " pod="openshift-multus/multus-fsqgq" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.695277 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sfn4s\" (UniqueName: \"kubernetes.io/projected/7919e98f-cc47-4f3c-9c53-6313850ea7b8-kube-api-access-sfn4s\") pod \"machine-config-daemon-bhdgk\" (UID: \"7919e98f-cc47-4f3c-9c53-6313850ea7b8\") " pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.695302 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/04d9744a-e730-45b4-9f0c-bbb5b02cd311-host-var-lib-cni-bin\") pod \"multus-fsqgq\" (UID: \"04d9744a-e730-45b4-9f0c-bbb5b02cd311\") " pod="openshift-multus/multus-fsqgq" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.695323 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/04d9744a-e730-45b4-9f0c-bbb5b02cd311-multus-daemon-config\") pod \"multus-fsqgq\" (UID: \"04d9744a-e730-45b4-9f0c-bbb5b02cd311\") " pod="openshift-multus/multus-fsqgq" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.695284 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/04d9744a-e730-45b4-9f0c-bbb5b02cd311-host-var-lib-kubelet\") pod \"multus-fsqgq\" (UID: \"04d9744a-e730-45b4-9f0c-bbb5b02cd311\") " pod="openshift-multus/multus-fsqgq" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.695355 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7919e98f-cc47-4f3c-9c53-6313850ea7b8-mcd-auth-proxy-config\") pod \"machine-config-daemon-bhdgk\" (UID: \"7919e98f-cc47-4f3c-9c53-6313850ea7b8\") " 
pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.695372 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/04d9744a-e730-45b4-9f0c-bbb5b02cd311-host-var-lib-cni-bin\") pod \"multus-fsqgq\" (UID: \"04d9744a-e730-45b4-9f0c-bbb5b02cd311\") " pod="openshift-multus/multus-fsqgq" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.695873 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/04d9744a-e730-45b4-9f0c-bbb5b02cd311-multus-socket-dir-parent\") pod \"multus-fsqgq\" (UID: \"04d9744a-e730-45b4-9f0c-bbb5b02cd311\") " pod="openshift-multus/multus-fsqgq" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.706132 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://959c7dca46e7a68f4472c3aa2104c49d4e47190f0fab5c7ad2225e27e5c4585a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:08Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.728502 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfc52d94-656d-4294-b105-0f83d22c9664\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66dc97daee138ae6c8403d57638f6bd3cc1c5a08ce7e1631129017dc7046ebc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9de866cf92cbd982802a4bfa9c7f6b9839c69efb72d01f3f52e65e2355d47ded\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a81a5434992af16888e8ed5ec9555e57dd4485dd6c396816421ad07c181dae8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b089d9f482be084f3c2be3bc369e38cef6607d91af0c82b338da5c29883f6057\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b089d9f482be084f3c2be3bc369e38cef6607d91af0c82b338da5c29883f6057\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 10:39:00.270088 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:39:00.270415 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:39:00.271477 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3666386151/tls.crt::/tmp/serving-cert-3666386151/tls.key\\\\\\\"\\\\nI0202 10:39:00.760414 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:39:00.783275 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:39:00.783307 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:39:00.783330 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:39:00.783335 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 10:39:00.810370 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:39:00.810474 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810498 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 10:39:00.810539 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:39:00.810558 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:39:00.810577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 10:39:00.810783 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 10:39:00.814783 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26b0186a9067ba0a405e849b58fc2f39e93d443ddf69ebd0d4fd4877f45e805\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:08Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.735568 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.735799 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.735862 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.735922 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.735988 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:08Z","lastTransitionTime":"2026-02-02T10:39:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.756212 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:08Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.776411 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 01:05:55.285114617 +0000 UTC Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.782701 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://959c7dca46e7a68f4472c3aa2104c49d4e47190f0fab5c7ad2225e27e5c4585a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:08Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.820984 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:08Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.821453 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:39:08 crc kubenswrapper[4782]: E0202 10:39:08.821655 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.822035 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:39:08 crc kubenswrapper[4782]: E0202 10:39:08.822091 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.837897 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.838128 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.838221 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.838308 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.838379 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:08Z","lastTransitionTime":"2026-02-02T10:39:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.867386 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fsqgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04d9744a-e730-45b4-9f0c-bbb5b02cd311\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7nrfs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\
\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fsqgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:08Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.877939 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-8lwfx"] Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.878621 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-8lwfx" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.887276 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-prbrn"] Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.888145 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.889072 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.892449 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.892685 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.895382 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.896103 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.897490 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.897704 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.902012 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.910066 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.911835 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:08Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.940719 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.940757 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.940765 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.940779 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.940789 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:08Z","lastTransitionTime":"2026-02-02T10:39:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.952742 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7919e98f-cc47-4f3c-9c53-6313850ea7b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bhdgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:08Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:08 crc kubenswrapper[4782]: 
I0202 10:39:08.955834 4782 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-02-02 10:34:07 +0000 UTC, rotation deadline is 2026-12-26 05:07:12.499272347 +0000 UTC Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.955983 4782 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7842h28m3.543294181s for next certificate rotation Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.976384 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83b81d23-4cbf-48f2-a90d-aa5b4cb3ce45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://638971b143e4defe2f04fba48e554aff3023d52863a0eb6f36c29d394de7eeb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18f8ad22dcd04bba0dd895d584affcd359ac6221153a18ee542dee979b9efe25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3c260f298316c830b06f5e3634cd6c09bd10bbaad77b46e427eb4b039eebb53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed0828
7faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://745ed5274a247d61c320043043077f9ecb39fa3734db15aefa07a3bbb6225c26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:08Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:08 crc kubenswrapper[4782]: I0202 10:39:08.993901 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b6dd1dee15020005db2d8655165e09c72a90eaade00a66c0742c0980ccc669e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:08Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.002977 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdc79\" (UniqueName: \"kubernetes.io/projected/1edc5703-bb51-4f8a-9b73-68ba48a40ce8-kube-api-access-cdc79\") pod \"multus-additional-cni-plugins-8lwfx\" (UID: \"1edc5703-bb51-4f8a-9b73-68ba48a40ce8\") " pod="openshift-multus/multus-additional-cni-plugins-8lwfx" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.003022 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/2642ee4e-c16a-4e6e-9654-a67666f1bff8-node-log\") pod \"ovnkube-node-prbrn\" (UID: \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\") " pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.003053 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/1edc5703-bb51-4f8a-9b73-68ba48a40ce8-os-release\") pod \"multus-additional-cni-plugins-8lwfx\" (UID: \"1edc5703-bb51-4f8a-9b73-68ba48a40ce8\") " pod="openshift-multus/multus-additional-cni-plugins-8lwfx" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.003079 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" 
(UniqueName: \"kubernetes.io/host-path/1edc5703-bb51-4f8a-9b73-68ba48a40ce8-system-cni-dir\") pod \"multus-additional-cni-plugins-8lwfx\" (UID: \"1edc5703-bb51-4f8a-9b73-68ba48a40ce8\") " pod="openshift-multus/multus-additional-cni-plugins-8lwfx" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.003102 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2642ee4e-c16a-4e6e-9654-a67666f1bff8-host-run-netns\") pod \"ovnkube-node-prbrn\" (UID: \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\") " pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.003123 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2642ee4e-c16a-4e6e-9654-a67666f1bff8-ovnkube-config\") pod \"ovnkube-node-prbrn\" (UID: \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\") " pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.003188 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2642ee4e-c16a-4e6e-9654-a67666f1bff8-var-lib-openvswitch\") pod \"ovnkube-node-prbrn\" (UID: \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\") " pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.003246 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g8flt\" (UniqueName: \"kubernetes.io/projected/2642ee4e-c16a-4e6e-9654-a67666f1bff8-kube-api-access-g8flt\") pod \"ovnkube-node-prbrn\" (UID: \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\") " pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.003313 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/1edc5703-bb51-4f8a-9b73-68ba48a40ce8-tuning-conf-dir\") pod \"multus-additional-cni-plugins-8lwfx\" (UID: \"1edc5703-bb51-4f8a-9b73-68ba48a40ce8\") " pod="openshift-multus/multus-additional-cni-plugins-8lwfx" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.003342 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2642ee4e-c16a-4e6e-9654-a67666f1bff8-ovn-node-metrics-cert\") pod \"ovnkube-node-prbrn\" (UID: \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\") " pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.003370 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/1edc5703-bb51-4f8a-9b73-68ba48a40ce8-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-8lwfx\" (UID: \"1edc5703-bb51-4f8a-9b73-68ba48a40ce8\") " pod="openshift-multus/multus-additional-cni-plugins-8lwfx" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.003410 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/2642ee4e-c16a-4e6e-9654-a67666f1bff8-host-kubelet\") pod \"ovnkube-node-prbrn\" (UID: \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.003445 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2642ee4e-c16a-4e6e-9654-a67666f1bff8-host-slash\") pod \"ovnkube-node-prbrn\" (UID: \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\") " pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.003470 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/2642ee4e-c16a-4e6e-9654-a67666f1bff8-run-systemd\") pod \"ovnkube-node-prbrn\" (UID: \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\") " pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.003491 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2642ee4e-c16a-4e6e-9654-a67666f1bff8-etc-openvswitch\") pod \"ovnkube-node-prbrn\" (UID: \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\") " pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.003536 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/1edc5703-bb51-4f8a-9b73-68ba48a40ce8-cni-binary-copy\") pod \"multus-additional-cni-plugins-8lwfx\" (UID: \"1edc5703-bb51-4f8a-9b73-68ba48a40ce8\") " pod="openshift-multus/multus-additional-cni-plugins-8lwfx" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.003563 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2642ee4e-c16a-4e6e-9654-a67666f1bff8-host-cni-netd\") pod \"ovnkube-node-prbrn\" (UID: \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\") " pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.003627 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2642ee4e-c16a-4e6e-9654-a67666f1bff8-env-overrides\") pod \"ovnkube-node-prbrn\" (UID: \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\") " pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.003671 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/2642ee4e-c16a-4e6e-9654-a67666f1bff8-ovnkube-script-lib\") pod \"ovnkube-node-prbrn\" (UID: \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\") " pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.003696 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2642ee4e-c16a-4e6e-9654-a67666f1bff8-host-run-ovn-kubernetes\") pod \"ovnkube-node-prbrn\" (UID: \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\") " pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.003716 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/2642ee4e-c16a-4e6e-9654-a67666f1bff8-host-cni-bin\") pod \"ovnkube-node-prbrn\" (UID: \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\") " pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.003753 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/1edc5703-bb51-4f8a-9b73-68ba48a40ce8-cnibin\") pod \"multus-additional-cni-plugins-8lwfx\" (UID: \"1edc5703-bb51-4f8a-9b73-68ba48a40ce8\") " pod="openshift-multus/multus-additional-cni-plugins-8lwfx" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.003810 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/2642ee4e-c16a-4e6e-9654-a67666f1bff8-systemd-units\") pod \"ovnkube-node-prbrn\" (UID: \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\") " pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.003844 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2642ee4e-c16a-4e6e-9654-a67666f1bff8-run-openvswitch\") pod \"ovnkube-node-prbrn\" (UID: \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\") " pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.003867 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2642ee4e-c16a-4e6e-9654-a67666f1bff8-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-prbrn\" (UID: \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\") " pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.003915 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/2642ee4e-c16a-4e6e-9654-a67666f1bff8-run-ovn\") pod \"ovnkube-node-prbrn\" (UID: \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\") " pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.003935 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/2642ee4e-c16a-4e6e-9654-a67666f1bff8-log-socket\") pod \"ovnkube-node-prbrn\" (UID: \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\") " pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.012161 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-fptzv" event={"ID":"fa0a3c57-fe47-43dd-8905-00df4cae4fb8","Type":"ContainerStarted","Data":"fb00c5826e8950791c82faf09cb4417f633c04908afcaed816c63b81c05aae48"} Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.012211 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-fptzv" event={"ID":"fa0a3c57-fe47-43dd-8905-00df4cae4fb8","Type":"ContainerStarted","Data":"15426a6ab4d45af37c4415df495b075538111dc47d3de3cd9e5cc2ece82fb0d3"} Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.013465 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aebafb5717a3d34f83e5f1ba346c51bccea6c08fb6ddc2fadc47d8e1a4df1bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://643eee6982fcc4c02b72fb6daf1a18fc5ae263ebf8ac506d1fa745d6c311b38a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:09Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.030042 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fptzv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa0a3c57-fe47-43dd-8905-00df4cae4fb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:07Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:07Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6np9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fptzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:09Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.042820 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.042859 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.042877 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.042893 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.042906 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:09Z","lastTransitionTime":"2026-02-02T10:39:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.050560 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59347d2e-80ed-46da-8b2f-87cc48c5b564\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7303c0a57d425d661744c913b418fb647290581f745f973af1507563fa70b860\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10b2c4d81a795c40e715af0855002f75e3018dc14560eb97b551186a48c52744\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://667df661cba8be98b22059905ac9fe87f5d2af093d3184cfead863474441574f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38
:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfe76e80657cb875a6dd9b9c977e6120a670fc5cd1dad2e0346b21a467b00f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58cd2e688da72a74340b25a755164b61039f7615ccda8b24b12b2b4025f89584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6
ee2f99bc0d554c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:09Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.064806 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfc52d94-656d-4294-b105-0f83d22c9664\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66dc97daee138ae6c8403d57638f6bd3cc1c5a08ce7e1631129017dc7046ebc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9de866cf92cbd982802a4bfa9c7f6b9839c69efb72d01f3f52e65e2355d47ded\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a81a5434992af16888e8ed5ec9555e57dd4485dd6c396816421ad07c181dae8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b089d9f482be084f3c2be3bc369e38cef6607d91af0c82b338da5c29883f6057\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b089d9f482be084f3c2be3bc369e38cef6607d91af0c82b338da5c29883f6057\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 10:39:00.270088 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:39:00.270415 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:39:00.271477 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3666386151/tls.crt::/tmp/serving-cert-3666386151/tls.key\\\\\\\"\\\\nI0202 10:39:00.760414 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:39:00.783275 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:39:00.783307 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:39:00.783330 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:39:00.783335 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 10:39:00.810370 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:39:00.810474 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810498 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 10:39:00.810539 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:39:00.810558 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:39:00.810577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 10:39:00.810783 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 10:39:00.814783 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26b0186a9067ba0a405e849b58fc2f39e93d443ddf69ebd0d4fd4877f45e805\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:09Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.077978 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:09Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.089811 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://959c7dca46e7a68f4472c3aa2104c49d4e47190f0fab5c7ad2225e27e5c4585a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:09Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.104742 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2642ee4e-c16a-4e6e-9654-a67666f1bff8-env-overrides\") pod \"ovnkube-node-prbrn\" (UID: \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\") " pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.104776 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/2642ee4e-c16a-4e6e-9654-a67666f1bff8-ovnkube-script-lib\") pod \"ovnkube-node-prbrn\" (UID: \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\") " pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.104792 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2642ee4e-c16a-4e6e-9654-a67666f1bff8-host-run-ovn-kubernetes\") pod \"ovnkube-node-prbrn\" (UID: \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\") " pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.104808 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2642ee4e-c16a-4e6e-9654-a67666f1bff8-host-cni-bin\") pod \"ovnkube-node-prbrn\" (UID: \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\") " pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.104823 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/1edc5703-bb51-4f8a-9b73-68ba48a40ce8-cnibin\") pod \"multus-additional-cni-plugins-8lwfx\" (UID: \"1edc5703-bb51-4f8a-9b73-68ba48a40ce8\") " pod="openshift-multus/multus-additional-cni-plugins-8lwfx" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.104844 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/2642ee4e-c16a-4e6e-9654-a67666f1bff8-systemd-units\") pod \"ovnkube-node-prbrn\" (UID: \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\") " pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.104857 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2642ee4e-c16a-4e6e-9654-a67666f1bff8-run-openvswitch\") pod \"ovnkube-node-prbrn\" (UID: \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\") " pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.104910 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2642ee4e-c16a-4e6e-9654-a67666f1bff8-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-prbrn\" (UID: \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\") " pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.104972 4782 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/2642ee4e-c16a-4e6e-9654-a67666f1bff8-run-ovn\") pod \"ovnkube-node-prbrn\" (UID: \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\") " pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.104995 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/2642ee4e-c16a-4e6e-9654-a67666f1bff8-log-socket\") pod \"ovnkube-node-prbrn\" (UID: \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\") " pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.105017 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cdc79\" (UniqueName: \"kubernetes.io/projected/1edc5703-bb51-4f8a-9b73-68ba48a40ce8-kube-api-access-cdc79\") pod \"multus-additional-cni-plugins-8lwfx\" (UID: \"1edc5703-bb51-4f8a-9b73-68ba48a40ce8\") " pod="openshift-multus/multus-additional-cni-plugins-8lwfx" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.105036 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/2642ee4e-c16a-4e6e-9654-a67666f1bff8-node-log\") pod \"ovnkube-node-prbrn\" (UID: \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\") " pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.105051 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/1edc5703-bb51-4f8a-9b73-68ba48a40ce8-os-release\") pod \"multus-additional-cni-plugins-8lwfx\" (UID: \"1edc5703-bb51-4f8a-9b73-68ba48a40ce8\") " pod="openshift-multus/multus-additional-cni-plugins-8lwfx" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.105065 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/1edc5703-bb51-4f8a-9b73-68ba48a40ce8-system-cni-dir\") pod \"multus-additional-cni-plugins-8lwfx\" (UID: \"1edc5703-bb51-4f8a-9b73-68ba48a40ce8\") " pod="openshift-multus/multus-additional-cni-plugins-8lwfx" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.105085 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2642ee4e-c16a-4e6e-9654-a67666f1bff8-host-run-netns\") pod \"ovnkube-node-prbrn\" (UID: \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\") " pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.105104 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2642ee4e-c16a-4e6e-9654-a67666f1bff8-ovnkube-config\") pod \"ovnkube-node-prbrn\" (UID: \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\") " pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.105123 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2642ee4e-c16a-4e6e-9654-a67666f1bff8-var-lib-openvswitch\") pod \"ovnkube-node-prbrn\" (UID: \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\") " pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.105157 4782 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-g8flt\" (UniqueName: \"kubernetes.io/projected/2642ee4e-c16a-4e6e-9654-a67666f1bff8-kube-api-access-g8flt\") pod \"ovnkube-node-prbrn\" (UID: \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\") " pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.105189 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/1edc5703-bb51-4f8a-9b73-68ba48a40ce8-tuning-conf-dir\") pod \"multus-additional-cni-plugins-8lwfx\" (UID: \"1edc5703-bb51-4f8a-9b73-68ba48a40ce8\") " pod="openshift-multus/multus-additional-cni-plugins-8lwfx" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.105214 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2642ee4e-c16a-4e6e-9654-a67666f1bff8-ovn-node-metrics-cert\") pod \"ovnkube-node-prbrn\" (UID: \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\") " pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.105242 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/1edc5703-bb51-4f8a-9b73-68ba48a40ce8-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-8lwfx\" (UID: \"1edc5703-bb51-4f8a-9b73-68ba48a40ce8\") " pod="openshift-multus/multus-additional-cni-plugins-8lwfx" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.105261 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/2642ee4e-c16a-4e6e-9654-a67666f1bff8-host-kubelet\") pod \"ovnkube-node-prbrn\" (UID: \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\") " pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.105281 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2642ee4e-c16a-4e6e-9654-a67666f1bff8-host-slash\") pod \"ovnkube-node-prbrn\" (UID: \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\") " pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.105301 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/2642ee4e-c16a-4e6e-9654-a67666f1bff8-run-systemd\") pod \"ovnkube-node-prbrn\" (UID: \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\") " pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.105320 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2642ee4e-c16a-4e6e-9654-a67666f1bff8-etc-openvswitch\") pod \"ovnkube-node-prbrn\" (UID: \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\") " pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.105366 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/1edc5703-bb51-4f8a-9b73-68ba48a40ce8-cni-binary-copy\") pod \"multus-additional-cni-plugins-8lwfx\" (UID: \"1edc5703-bb51-4f8a-9b73-68ba48a40ce8\") " pod="openshift-multus/multus-additional-cni-plugins-8lwfx" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.105387 4782 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2642ee4e-c16a-4e6e-9654-a67666f1bff8-host-cni-netd\") pod \"ovnkube-node-prbrn\" (UID: \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\") " pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.105421 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2642ee4e-c16a-4e6e-9654-a67666f1bff8-env-overrides\") pod \"ovnkube-node-prbrn\" (UID: \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\") " pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.105497 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2642ee4e-c16a-4e6e-9654-a67666f1bff8-host-run-netns\") pod \"ovnkube-node-prbrn\" (UID: \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\") " pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.105523 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2642ee4e-c16a-4e6e-9654-a67666f1bff8-host-run-ovn-kubernetes\") pod \"ovnkube-node-prbrn\" (UID: \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\") " pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.105545 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2642ee4e-c16a-4e6e-9654-a67666f1bff8-host-cni-bin\") pod \"ovnkube-node-prbrn\" (UID: \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\") " pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.105579 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/2642ee4e-c16a-4e6e-9654-a67666f1bff8-log-socket\") pod \"ovnkube-node-prbrn\" (UID: \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\") " pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.105613 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/2642ee4e-c16a-4e6e-9654-a67666f1bff8-run-ovn\") pod \"ovnkube-node-prbrn\" (UID: \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\") " pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.105700 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/1edc5703-bb51-4f8a-9b73-68ba48a40ce8-cnibin\") pod \"multus-additional-cni-plugins-8lwfx\" (UID: \"1edc5703-bb51-4f8a-9b73-68ba48a40ce8\") " pod="openshift-multus/multus-additional-cni-plugins-8lwfx" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.105736 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/2642ee4e-c16a-4e6e-9654-a67666f1bff8-systemd-units\") pod \"ovnkube-node-prbrn\" (UID: \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\") " pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.105768 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/2642ee4e-c16a-4e6e-9654-a67666f1bff8-run-openvswitch\") pod \"ovnkube-node-prbrn\" (UID: \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\") " pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.105804 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2642ee4e-c16a-4e6e-9654-a67666f1bff8-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-prbrn\" (UID: \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\") " pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.105819 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/2642ee4e-c16a-4e6e-9654-a67666f1bff8-node-log\") pod \"ovnkube-node-prbrn\" (UID: \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\") " pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.105833 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/2642ee4e-c16a-4e6e-9654-a67666f1bff8-host-kubelet\") pod \"ovnkube-node-prbrn\" (UID: \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\") " pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.105866 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/1edc5703-bb51-4f8a-9b73-68ba48a40ce8-os-release\") pod \"multus-additional-cni-plugins-8lwfx\" (UID: \"1edc5703-bb51-4f8a-9b73-68ba48a40ce8\") " pod="openshift-multus/multus-additional-cni-plugins-8lwfx" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.105890 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/1edc5703-bb51-4f8a-9b73-68ba48a40ce8-system-cni-dir\") pod \"multus-additional-cni-plugins-8lwfx\" (UID: \"1edc5703-bb51-4f8a-9b73-68ba48a40ce8\") " pod="openshift-multus/multus-additional-cni-plugins-8lwfx" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.105912 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2642ee4e-c16a-4e6e-9654-a67666f1bff8-etc-openvswitch\") pod \"ovnkube-node-prbrn\" (UID: \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\") " pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.105933 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2642ee4e-c16a-4e6e-9654-a67666f1bff8-host-slash\") pod \"ovnkube-node-prbrn\" (UID: \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\") " pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.105954 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/2642ee4e-c16a-4e6e-9654-a67666f1bff8-run-systemd\") pod \"ovnkube-node-prbrn\" (UID: \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\") " pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.105967 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/2642ee4e-c16a-4e6e-9654-a67666f1bff8-ovnkube-script-lib\") pod 
\"ovnkube-node-prbrn\" (UID: \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\") " pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.106416 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2642ee4e-c16a-4e6e-9654-a67666f1bff8-ovnkube-config\") pod \"ovnkube-node-prbrn\" (UID: \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\") " pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.106524 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2642ee4e-c16a-4e6e-9654-a67666f1bff8-var-lib-openvswitch\") pod \"ovnkube-node-prbrn\" (UID: \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\") " pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.106553 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2642ee4e-c16a-4e6e-9654-a67666f1bff8-host-cni-netd\") pod \"ovnkube-node-prbrn\" (UID: \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\") " pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.106916 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/1edc5703-bb51-4f8a-9b73-68ba48a40ce8-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-8lwfx\" (UID: \"1edc5703-bb51-4f8a-9b73-68ba48a40ce8\") " pod="openshift-multus/multus-additional-cni-plugins-8lwfx" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.107260 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/1edc5703-bb51-4f8a-9b73-68ba48a40ce8-tuning-conf-dir\") pod \"multus-additional-cni-plugins-8lwfx\" (UID: \"1edc5703-bb51-4f8a-9b73-68ba48a40ce8\") " pod="openshift-multus/multus-additional-cni-plugins-8lwfx" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.110419 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2642ee4e-c16a-4e6e-9654-a67666f1bff8-ovn-node-metrics-cert\") pod \"ovnkube-node-prbrn\" (UID: \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\") " pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.110670 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2642ee4e-c16a-4e6e-9654-a67666f1bff8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers 
with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name
\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-prbrn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not 
yet valid: current time 2026-02-02T10:39:09Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.125097 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g8flt\" (UniqueName: \"kubernetes.io/projected/2642ee4e-c16a-4e6e-9654-a67666f1bff8-kube-api-access-g8flt\") pod \"ovnkube-node-prbrn\" (UID: \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\") " pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.126161 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:09Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.140912 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:09Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.145054 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.145296 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.145396 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.145486 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.145563 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:09Z","lastTransitionTime":"2026-02-02T10:39:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.155485 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fsqgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04d9744a-e730-45b4-9f0c-bbb5b02cd311\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7nrfs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\
\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fsqgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:09Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.169509 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83b81d23-4cbf-48f2-a90d-aa5b4cb3ce45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://638971b143e4defe2f04fba48e554aff3023d52863a0eb6f36c29d394de7eeb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18f8ad22dcd04bba0dd895d584affcd359ac6221153a18ee542dee979b9efe25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3c260f298316c830b06f5e3634cd6c09bd10bbaad77b46e427eb4b039eebb53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc1
8fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://745ed5274a247d61c320043043077f9ecb39fa3734db15aefa07a3bbb6225c26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:09Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.180143 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7919e98f-cc47-4f3c-9c53-6313850ea7b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bhdgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:09Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.196449 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8lwfx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1edc5703-bb51-4f8a-9b73-68ba48a40ce8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"
/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8lwfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:09Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.217782 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.217853 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59347d2e-80ed-46da-8b2f-87cc48c5b564\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7303c0a57d425d661744c913b418fb647290581f745f973af1507563fa70b860\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10b2c4d81a795c40e715af0855002f75e3018dc14560eb97b551186a48c52744\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://667df661cba8be98b22059905ac9fe87f5d2af093d3184cfead863474441574f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\
\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfe76e80657cb875a6dd9b9c977e6120a670fc5cd1dad2e0346b21a467b00f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58cd2e688da72a74340b25a755164b61039f7615ccda8b24b12b2b4025f89584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d55
4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:09Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.231476 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b6dd1dee15020005db2d8655165e09c72a90eaade00a66c0742c0980ccc669e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:09Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:09 crc kubenswrapper[4782]: W0202 10:39:09.237434 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2642ee4e_c16a_4e6e_9654_a67666f1bff8.slice/crio-db6a9af8a980d743bf0b991e52f1aa50a4a04f4b9f2306a972866beef0456ce6 WatchSource:0}: Error finding container db6a9af8a980d743bf0b991e52f1aa50a4a04f4b9f2306a972866beef0456ce6: Status 404 returned error can't find the container with id db6a9af8a980d743bf0b991e52f1aa50a4a04f4b9f2306a972866beef0456ce6 Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.246750 4782 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.247440 4782 scope.go:117] "RemoveContainer" containerID="b089d9f482be084f3c2be3bc369e38cef6607d91af0c82b338da5c29883f6057" Feb 02 10:39:09 crc kubenswrapper[4782]: E0202 10:39:09.247599 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.248220 4782 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.248255 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.248264 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.248277 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.248286 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:09Z","lastTransitionTime":"2026-02-02T10:39:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.252900 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aebafb5717a3d34f83e5f1ba346c51bccea6c08fb6ddc2fadc47d8e1a4df1bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://643eee6982fcc4c02b72fb6daf1a18fc5ae263ebf8ac506d1fa745d6c311b38a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\
\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:09Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.264688 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fptzv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa0a3c57-fe47-43dd-8905-00df4cae4fb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb00c5826e8950791c82faf09cb4417f633c04908afcaed816c63b81c05aae48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6np9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fptzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:09Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:09 crc kubenswrapper[4782]: 
Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.275784 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq"
Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.351600 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.351654 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.351673 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.351698 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.351712 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:09Z","lastTransitionTime":"2026-02-02T10:39:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.402664 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt"
Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.410064 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.410112 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sfn4s\" (UniqueName: \"kubernetes.io/projected/7919e98f-cc47-4f3c-9c53-6313850ea7b8-kube-api-access-sfn4s\") pod \"machine-config-daemon-bhdgk\" (UID: \"7919e98f-cc47-4f3c-9c53-6313850ea7b8\") " pod="openshift-machine-config-operator/machine-config-daemon-bhdgk"
Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.416027 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/04d9744a-e730-45b4-9f0c-bbb5b02cd311-cni-binary-copy\") pod \"multus-fsqgq\" (UID: \"04d9744a-e730-45b4-9f0c-bbb5b02cd311\") " pod="openshift-multus/multus-fsqgq"
Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.418418 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/1edc5703-bb51-4f8a-9b73-68ba48a40ce8-cni-binary-copy\") pod \"multus-additional-cni-plugins-8lwfx\" (UID: \"1edc5703-bb51-4f8a-9b73-68ba48a40ce8\") " pod="openshift-multus/multus-additional-cni-plugins-8lwfx"
Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.444777 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.446290 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7919e98f-cc47-4f3c-9c53-6313850ea7b8-mcd-auth-proxy-config\") pod \"machine-config-daemon-bhdgk\" (UID: \"7919e98f-cc47-4f3c-9c53-6313850ea7b8\") " pod="openshift-machine-config-operator/machine-config-daemon-bhdgk"
Feb 02 10:39:09 crc kubenswrapper[4782]: I0202
10:39:09.452387 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.452558 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.455159 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cdc79\" (UniqueName: \"kubernetes.io/projected/1edc5703-bb51-4f8a-9b73-68ba48a40ce8-kube-api-access-cdc79\") pod \"multus-additional-cni-plugins-8lwfx\" (UID: \"1edc5703-bb51-4f8a-9b73-68ba48a40ce8\") " pod="openshift-multus/multus-additional-cni-plugins-8lwfx" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.459039 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.459191 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.459252 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.459311 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.459364 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:09Z","lastTransitionTime":"2026-02-02T10:39:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.460921 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7nrfs\" (UniqueName: \"kubernetes.io/projected/04d9744a-e730-45b4-9f0c-bbb5b02cd311-kube-api-access-7nrfs\") pod \"multus-fsqgq\" (UID: \"04d9744a-e730-45b4-9f0c-bbb5b02cd311\") " pod="openshift-multus/multus-fsqgq" Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.492366 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-8lwfx"
Feb 02 10:39:09 crc kubenswrapper[4782]: W0202 10:39:09.502536 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1edc5703_bb51_4f8a_9b73_68ba48a40ce8.slice/crio-856fb117fa4f01a752b2f536215bb78bd2e592e12fc4eee7014aff2201ccfb49 WatchSource:0}: Error finding container 856fb117fa4f01a752b2f536215bb78bd2e592e12fc4eee7014aff2201ccfb49: Status 404 returned error can't find the container with id 856fb117fa4f01a752b2f536215bb78bd2e592e12fc4eee7014aff2201ccfb49
Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.562281 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.562345 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.562361 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.562386 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.562403 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:09Z","lastTransitionTime":"2026-02-02T10:39:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.665256 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.665296 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.665310 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.665327 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.665338 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:09Z","lastTransitionTime":"2026-02-02T10:39:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
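
The NodeNotReady/KubeletNotReady condition repeated above is not a kubelet fault: the container runtime reports NetworkReady=false until a CNI network config appears in /etc/kubernetes/cni/net.d/, and ovn-kubernetes has not written one yet (the ovnkube-node pod is still coming up further down this log). Below is a rough, stdlib-only Go sketch of that directory scan; the candidate extensions are an assumption based on common CNI conventions, and the code mirrors the readiness gate rather than quoting cri-o's actual implementation.

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	// Path quoted in the NodeNotReady message above.
	confDir := "/etc/kubernetes/cni/net.d"
	// Assumed candidate extensions (*.conf, *.conflist, *.json), matching
	// the usual CNI config-dir conventions.
	var configs []string
	for _, pattern := range []string{"*.conf", "*.conflist", "*.json"} {
		matches, err := filepath.Glob(filepath.Join(confDir, pattern))
		if err != nil {
			fmt.Fprintln(os.Stderr, "bad pattern:", err)
			os.Exit(1)
		}
		configs = append(configs, matches...)
	}
	if len(configs) == 0 {
		// This is the state the kubelet keeps reporting above.
		fmt.Printf("NetworkReady=false: no CNI configuration file in %s/\n", confDir)
		return
	}
	fmt.Println("network ready, candidate configs:", configs)
}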
Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.668325 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls"
Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.678851 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/7919e98f-cc47-4f3c-9c53-6313850ea7b8-proxy-tls\") pod \"machine-config-daemon-bhdgk\" (UID: \"7919e98f-cc47-4f3c-9c53-6313850ea7b8\") " pod="openshift-machine-config-operator/machine-config-daemon-bhdgk"
Feb 02 10:39:09 crc kubenswrapper[4782]: E0202 10:39:09.696430 4782 configmap.go:193] Couldn't get configMap openshift-multus/multus-daemon-config: failed to sync configmap cache: timed out waiting for the condition
Feb 02 10:39:09 crc kubenswrapper[4782]: E0202 10:39:09.696592 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/04d9744a-e730-45b4-9f0c-bbb5b02cd311-multus-daemon-config podName:04d9744a-e730-45b4-9f0c-bbb5b02cd311 nodeName:}" failed. No retries permitted until 2026-02-02 10:39:10.196554565 +0000 UTC m=+30.080747281 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "multus-daemon-config" (UniqueName: "kubernetes.io/configmap/04d9744a-e730-45b4-9f0c-bbb5b02cd311-multus-daemon-config") pod "multus-fsqgq" (UID: "04d9744a-e730-45b4-9f0c-bbb5b02cd311") : failed to sync configmap cache: timed out waiting for the condition
Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.704440 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk"
Feb 02 10:39:09 crc kubenswrapper[4782]: W0202 10:39:09.714845 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7919e98f_cc47_4f3c_9c53_6313850ea7b8.slice/crio-5102c5c3f325a5afa89046628d6d266ec1f65686b83dcaa866c25ad93679be52 WatchSource:0}: Error finding container 5102c5c3f325a5afa89046628d6d266ec1f65686b83dcaa866c25ad93679be52: Status 404 returned error can't find the container with id 5102c5c3f325a5afa89046628d6d266ec1f65686b83dcaa866c25ad93679be52
Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.774718 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.774758 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.774770 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.774792 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.774806 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:09Z","lastTransitionTime":"2026-02-02T10:39:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
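
The multus-daemon-config failure just above is a startup race, not a persistent fault: the mount at 10:39:09.696 ran before the kubelet's configmap reflector had synced (the cache only reports populated at 10:39:09.928, below), so the operation was parked for 500ms, and the retry scheduled for 10:39:10.196 should find the cache warm. The kubelet's nestedpendingoperations layer spaces such retries with exponential backoff; here is a toy Go sketch of that schedule, where the initial 500ms and the first retry timestamp come from the log line, while the doubling factor and the cap are assumptions for illustration only.

package main

import (
	"fmt"
	"time"
)

func main() {
	// Failure time taken from the nestedpendingoperations record above;
	// its first retry gate is lastFailure + 500ms = 10:39:10.196554565.
	lastFailure := time.Date(2026, 2, 2, 10, 39, 9, 696554565, time.UTC)
	delay := 500 * time.Millisecond // from "durationBeforeRetry 500ms"
	const maxDelay = 2 * time.Minute // assumed cap for the sketch
	for attempt := 1; attempt <= 5; attempt++ {
		fmt.Printf("attempt %d: no retries permitted until %s (durationBeforeRetry %s)\n",
			attempt,
			lastFailure.Add(delay).Format("2006-01-02 15:04:05.000000000"),
			delay)
		// Assumed doubling on each consecutive failure, capped at maxDelay.
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}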
Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.777579 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 20:02:51.516018478 +0000 UTC
Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.820772 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 10:39:09 crc kubenswrapper[4782]: E0202 10:39:09.820895 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.878037 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.878080 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.878092 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.878108 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.878119 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:09Z","lastTransitionTime":"2026-02-02T10:39:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.928759 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.981811 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.981890 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.981906 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.981928 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:39:09 crc kubenswrapper[4782]: I0202 10:39:09.981942 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:09Z","lastTransitionTime":"2026-02-02T10:39:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.018556 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" event={"ID":"7919e98f-cc47-4f3c-9c53-6313850ea7b8","Type":"ContainerStarted","Data":"550747a1eaba426f3a6b264d3a5227e51b0171729134224fa76ba55d8ad47c49"} Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.018614 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" event={"ID":"7919e98f-cc47-4f3c-9c53-6313850ea7b8","Type":"ContainerStarted","Data":"362bf5c8cbdc4831ca6ceecf9323bad1217467a40b0e8dd491098c1ae4f42810"} Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.018627 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" event={"ID":"7919e98f-cc47-4f3c-9c53-6313850ea7b8","Type":"ContainerStarted","Data":"5102c5c3f325a5afa89046628d6d266ec1f65686b83dcaa866c25ad93679be52"} Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.020207 4782 generic.go:334] "Generic (PLEG): container finished" podID="2642ee4e-c16a-4e6e-9654-a67666f1bff8" containerID="c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343" exitCode=0 Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.020279 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" event={"ID":"2642ee4e-c16a-4e6e-9654-a67666f1bff8","Type":"ContainerDied","Data":"c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343"} Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.020298 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" event={"ID":"2642ee4e-c16a-4e6e-9654-a67666f1bff8","Type":"ContainerStarted","Data":"db6a9af8a980d743bf0b991e52f1aa50a4a04f4b9f2306a972866beef0456ce6"} Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.023118 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-8lwfx" event={"ID":"1edc5703-bb51-4f8a-9b73-68ba48a40ce8","Type":"ContainerStarted","Data":"6e2a2f7b1b3c3236d77b4858e19cacf5554526262f0fc5703460a169d3b26f7d"} Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.023206 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-8lwfx" event={"ID":"1edc5703-bb51-4f8a-9b73-68ba48a40ce8","Type":"ContainerStarted","Data":"856fb117fa4f01a752b2f536215bb78bd2e592e12fc4eee7014aff2201ccfb49"} Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.039902 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready 
status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:10Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.057604 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:10Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.078032 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fsqgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04d9744a-e730-45b4-9f0c-bbb5b02cd311\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7nrfs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fsqgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:10Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.084932 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.084987 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.085002 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.085024 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.085039 4782 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:10Z","lastTransitionTime":"2026-02-02T10:39:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.095625 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83b81d23-4cbf-48f2-a90d-aa5b4cb3ce45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://638971b143e4defe2f04fba48e554aff3023d52863a0eb6f36c29d394de7eeb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18f8ad22dcd04bba0dd895d584affcd359ac6221153a18ee542dee979b9efe25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3c260f298316c830b06f5e3634cd6c09bd10bbaad77b46e427eb4b039eebb53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastS
tate\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://745ed5274a247d61c320043043077f9ecb39fa3734db15aefa07a3bbb6225c26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:10Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.109579 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7919e98f-cc47-4f3c-9c53-6313850ea7b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://550747a1eaba426f3a6b264d3a5227e51b0171729134224fa76ba55d8ad47c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://362bf5c8cbdc4831ca6ceecf9323bad1217467a40b0e8dd491098c1ae4f42810\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bhdgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:10Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.128135 4782 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8lwfx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1edc5703-bb51-4f8a-9b73-68ba48a40ce8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting
\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8lwfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:10Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.149491 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59347d2e-80ed-46da-8b2f-87cc48c5b564\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7303c0a57d425d661744c913b418fb647290581f745f973af1507563fa70b860\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10b2c4d81a795c40e715af0855002f75e3018dc14560eb97b551186a48c52744\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volu
meMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://667df661cba8be98b22059905ac9fe87f5d2af093d3184cfead863474441574f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfe76e80657cb875a6dd9b9c977e6120a670fc5cd1dad2e0346b21a467b00f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58cd2e688da72a74340b25a755164b61039f7615ccda8b24b12b2b4025f89584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:10Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.163172 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b6dd1dee15020005db2d8655165e09c72a90eaade00a66c0742c0980ccc669e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:10Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.175409 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aebafb5717a3d34f83e5f1ba346c51bccea6c08fb6ddc2fadc47d8e1a4df1bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://643eee6982fcc4c02b72fb6daf1a18fc5ae263ebf8ac506d1fa745d6c311b38a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:10Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.187405 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fptzv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa0a3c57-fe47-43dd-8905-00df4cae4fb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb00c5826e8950791c82faf09cb4417f633c04908afcaed816c63b81c05aae48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6np9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fptzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:10Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.187809 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.187888 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.187898 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.187917 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.187928 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:10Z","lastTransitionTime":"2026-02-02T10:39:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.209958 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2642ee4e-c16a-4e6e-9654-a67666f1bff8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d209
9482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\
":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.1
68.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-prbrn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:10Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.215409 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/04d9744a-e730-45b4-9f0c-bbb5b02cd311-multus-daemon-config\") pod \"multus-fsqgq\" (UID: \"04d9744a-e730-45b4-9f0c-bbb5b02cd311\") " pod="openshift-multus/multus-fsqgq" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.216121 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/04d9744a-e730-45b4-9f0c-bbb5b02cd311-multus-daemon-config\") pod \"multus-fsqgq\" (UID: \"04d9744a-e730-45b4-9f0c-bbb5b02cd311\") " pod="openshift-multus/multus-fsqgq" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.225978 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfc52d94-656d-4294-b105-0f83d22c9664\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66dc97daee138ae6c8403d57638f6bd3cc1c5a08ce7e1631129017dc7046ebc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9de866cf92cbd982802a4bfa9c7f6b9839c69efb72d01f3f52e65e2355d47ded\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a81a5434992af16888e8ed5ec9555e57dd4485dd6c396816421ad07c181dae8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b089d9f482be084f3c2be3bc369e38cef6607d91af0c82b338da5c29883f6057\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b089d9f482be084f3c2be3bc369e38cef6607d91af0c82b338da5c29883f6057\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 10:39:00.270088 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:39:00.270415 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:39:00.271477 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3666386151/tls.crt::/tmp/serving-cert-3666386151/tls.key\\\\\\\"\\\\nI0202 10:39:00.760414 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:39:00.783275 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:39:00.783307 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:39:00.783330 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:39:00.783335 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 10:39:00.810370 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:39:00.810474 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810498 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 10:39:00.810539 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:39:00.810558 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:39:00.810577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 10:39:00.810783 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 10:39:00.814783 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26b0186a9067ba0a405e849b58fc2f39e93d443ddf69ebd0d4fd4877f45e805\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:10Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.242247 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:10Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.255267 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://959c7dca46e7a68f4472c3aa2104c49d4e47190f0fab5c7ad2225e27e5c4585a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:10Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.265580 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-fsqgq" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.275338 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfc52d94-656d-4294-b105-0f83d22c9664\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66dc97daee138ae6c8403d57638f6bd3cc1c5a08ce7e1631129017dc7046ebc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9de866cf92cbd982802a4bfa9c7f6b9839c69efb72d01f3f52e65e2355d47ded\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a81a5434992af16888e8ed5ec9
555e57dd4485dd6c396816421ad07c181dae8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b089d9f482be084f3c2be3bc369e38cef6607d91af0c82b338da5c29883f6057\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b089d9f482be084f3c2be3bc369e38cef6607d91af0c82b338da5c29883f6057\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 10:39:00.270088 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:39:00.270415 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:39:00.271477 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3666386151/tls.crt::/tmp/serving-cert-3666386151/tls.key\\\\\\\"\\\\nI0202 10:39:00.760414 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:39:00.783275 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:39:00.783307 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:39:00.783330 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:39:00.783335 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 10:39:00.810370 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:39:00.810474 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810498 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 10:39:00.810539 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:39:00.810558 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:39:00.810577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 10:39:00.810783 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 10:39:00.814783 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26b0186a9067ba0a405e849b58fc2f39e93d443ddf69ebd0d4fd4877f45e805\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:10Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.296538 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.296582 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.296593 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.296607 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 
02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.296617 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:10Z","lastTransitionTime":"2026-02-02T10:39:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.297171 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:10Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.317254 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://959c7dca46e7a68f4472c3aa2104c49d4e47190f0fab5c7ad2225e27e5c4585a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:10Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.348059 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2642ee4e-c16a-4e6e-9654-a67666f1bff8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\
\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-prbrn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:10Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.366517 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:10Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.380908 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:10Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.394160 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fsqgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04d9744a-e730-45b4-9f0c-bbb5b02cd311\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7nrfs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fsqgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:10Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.398671 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.398712 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.398727 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.398743 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.398754 4782 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:10Z","lastTransitionTime":"2026-02-02T10:39:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.408248 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-thvm5"] Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.408655 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-thvm5" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.410046 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83b81d23-4cbf-48f2-a90d-aa5b4cb3ce45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://638971b143e4defe2f04fba48e554aff3023d52863a0eb6f36c29d394de7eeb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18f8ad22dcd04bba0dd895d584affcd359ac6221153a18ee542dee979b9efe25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3c260f298316c830b06f5e3634cd6c09bd10bbaad77b46e427eb4b039eebb5
3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://745ed5274a247d61c320043043077f9ecb39fa3734db15aefa07a3bbb6225c26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:10Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.411399 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.411912 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.411921 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.411968 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.423750 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7919e98f-cc47-4f3c-9c53-6313850ea7b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://550747a1eaba426f3a6b264d3a5227e51b0171729134224fa76ba55d8ad47c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://362bf5c8cbdc4831ca6ceecf9323bad1217467a40b0e8dd491098c1ae4f42810\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bhdgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:10Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.439710 4782 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8lwfx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1edc5703-bb51-4f8a-9b73-68ba48a40ce8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e2a2f7b1b3c3236d77b4858e19cacf5554526262f0fc5703460a169d3b26f7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"l
astState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f
6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8lwfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:10Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.459889 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59347d2e-80ed-46da-8b2f-87cc48c5b564\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7303c0a57d425d661744c913b418fb647290581f745f973af1507563fa70b860\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10b2c4d81a795c40e715af0855002f75e3018dc14560eb97b551186a48c52744\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54
b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://667df661cba8be98b22059905ac9fe87f5d2af093d3184cfead863474441574f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfe76e80657cb875a6dd9b9c977e6120a670fc5cd1dad2e0346b21a467b00f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58cd2e688da72a74340b25a755164b61039f7615ccda8b24b12b2b4025f89584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\
\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:10Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.479040 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b6dd1dee15020005db2d8655165e09c72a90eaade00a66c0742c0980ccc669e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:10Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.494659 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aebafb5717a3d34f83e5f1ba346c51bccea6c08fb6ddc2fadc47d8e1a4df1bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://643eee6982fcc4c02b72fb6daf1a18fc5ae263ebf8ac506d1fa745d6c311b38a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:10Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.501203 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.501250 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.501265 4782 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.501282 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.501293 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:10Z","lastTransitionTime":"2026-02-02T10:39:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.505153 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fptzv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa0a3c57-fe47-43dd-8905-00df4cae4fb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb00c5826e8950791c82faf09cb4417f633c04908afcaed816c63b81c05aae48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6np9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fptzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:10Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.518362 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/70faa63d-a86d-45aa-b6fd-81fa90436da2-host\") pod \"node-ca-thvm5\" (UID: \"70faa63d-a86d-45aa-b6fd-81fa90436da2\") " pod="openshift-image-registry/node-ca-thvm5" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.518513 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/70faa63d-a86d-45aa-b6fd-81fa90436da2-serviceca\") pod \"node-ca-thvm5\" (UID: \"70faa63d-a86d-45aa-b6fd-81fa90436da2\") " pod="openshift-image-registry/node-ca-thvm5" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.518630 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrknp\" (UniqueName: \"kubernetes.io/projected/70faa63d-a86d-45aa-b6fd-81fa90436da2-kube-api-access-vrknp\") pod \"node-ca-thvm5\" (UID: \"70faa63d-a86d-45aa-b6fd-81fa90436da2\") " pod="openshift-image-registry/node-ca-thvm5" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.518566 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfc52d94-656d-4294-b105-0f83d22c9664\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66dc97daee138ae6c8403d57638f6bd3cc1c5a08ce7e1631129017dc7046ebc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9de866cf92cbd982802a4bfa9c7f6b9839c69efb72d01f3f52e65e2355d47ded\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a81a5434992af16888e8ed5ec9555e57dd4485dd6c396816421ad07c181dae8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b089d9f482be084f3c2be3bc369e38cef6607d91af0c82b338da5c29883f6057\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b089d9f482be084f3c2be3bc369e38cef6607d91af0c82b338da5c29883f6057\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 10:39:00.270088 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:39:00.270415 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:39:00.271477 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3666386151/tls.crt::/tmp/serving-cert-3666386151/tls.key\\\\\\\"\\\\nI0202 10:39:00.760414 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:39:00.783275 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:39:00.783307 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:39:00.783330 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:39:00.783335 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 10:39:00.810370 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:39:00.810474 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810498 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 10:39:00.810539 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:39:00.810558 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:39:00.810577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 10:39:00.810783 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 10:39:00.814783 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26b0186a9067ba0a405e849b58fc2f39e93d443ddf69ebd0d4fd4877f45e805\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:10Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.531445 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:10Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.545167 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://959c7dca46e7a68f4472c3aa2104c49d4e47190f0fab5c7ad2225e27e5c4585a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:10Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.563016 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2642ee4e-c16a-4e6e-9654-a67666f1bff8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"
}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrid
es\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\
\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-prbrn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:10Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.581279 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:10Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.604041 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.604079 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.604092 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.604106 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.604115 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:10Z","lastTransitionTime":"2026-02-02T10:39:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.604736 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:10Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.616933 4782 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 02 10:39:10 crc kubenswrapper[4782]: W0202 10:39:10.619015 4782 reflector.go:484] object-"openshift-multus"/"multus-daemon-config": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-multus"/"multus-daemon-config": Unexpected watch close - watch lasted less than a second and no items received Feb 02 10:39:10 crc kubenswrapper[4782]: W0202 10:39:10.619789 4782 reflector.go:484] object-"openshift-machine-config-operator"/"proxy-tls": watch of *v1.Secret ended with: very short watch: object-"openshift-machine-config-operator"/"proxy-tls": Unexpected watch close - watch lasted less than a second and no items received Feb 02 10:39:10 crc kubenswrapper[4782]: W0202 10:39:10.619829 4782 reflector.go:484] object-"openshift-image-registry"/"node-ca-dockercfg-4777p": watch of *v1.Secret ended with: very short watch: object-"openshift-image-registry"/"node-ca-dockercfg-4777p": Unexpected watch close - 
watch lasted less than a second and no items received Feb 02 10:39:10 crc kubenswrapper[4782]: W0202 10:39:10.619853 4782 reflector.go:484] object-"openshift-image-registry"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-image-registry"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Feb 02 10:39:10 crc kubenswrapper[4782]: W0202 10:39:10.619873 4782 reflector.go:484] object-"openshift-image-registry"/"image-registry-certificates": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-image-registry"/"image-registry-certificates": Unexpected watch close - watch lasted less than a second and no items received Feb 02 10:39:10 crc kubenswrapper[4782]: E0202 10:39:10.619891 4782 request.go:1255] Unexpected error when reading response body: read tcp 38.102.83.147:35420->38.102.83.147:6443: use of closed network connection Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.619925 4782 status_manager.go:851] "Failed to get status for pod" podUID="04d9744a-e730-45b4-9f0c-bbb5b02cd311" pod="openshift-multus/multus-fsqgq" err="unexpected error when reading response body. Please retry. Original error: read tcp 38.102.83.147:35420->38.102.83.147:6443: use of closed network connection" Feb 02 10:39:10 crc kubenswrapper[4782]: W0202 10:39:10.620264 4782 reflector.go:484] object-"openshift-image-registry"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-image-registry"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Feb 02 10:39:10 crc kubenswrapper[4782]: E0202 10:39:10.620500 4782 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ovn-kubernetes/events\": read tcp 38.102.83.147:35420->38.102.83.147:6443: use of closed network connection" event="&Event{ObjectMeta:{ovnkube-node-prbrn.189067c54a09181c openshift-ovn-kubernetes 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-ovn-kubernetes,Name:ovnkube-node-prbrn,UID:2642ee4e-c16a-4e6e-9654-a67666f1bff8,APIVersion:v1,ResourceVersion:26753,FieldPath:spec.containers{ovn-acl-logging},},Reason:Started,Message:Started container ovn-acl-logging,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 10:39:10.604933148 +0000 UTC m=+30.489125864,LastTimestamp:2026-02-02 10:39:10.604933148 +0000 UTC m=+30.489125864,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.621833 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vrknp\" (UniqueName: \"kubernetes.io/projected/70faa63d-a86d-45aa-b6fd-81fa90436da2-kube-api-access-vrknp\") pod \"node-ca-thvm5\" (UID: \"70faa63d-a86d-45aa-b6fd-81fa90436da2\") " pod="openshift-image-registry/node-ca-thvm5" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.621861 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/70faa63d-a86d-45aa-b6fd-81fa90436da2-host\") pod \"node-ca-thvm5\" (UID: \"70faa63d-a86d-45aa-b6fd-81fa90436da2\") " pod="openshift-image-registry/node-ca-thvm5" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.621920 4782 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/70faa63d-a86d-45aa-b6fd-81fa90436da2-serviceca\") pod \"node-ca-thvm5\" (UID: \"70faa63d-a86d-45aa-b6fd-81fa90436da2\") " pod="openshift-image-registry/node-ca-thvm5" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.621999 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/70faa63d-a86d-45aa-b6fd-81fa90436da2-host\") pod \"node-ca-thvm5\" (UID: \"70faa63d-a86d-45aa-b6fd-81fa90436da2\") " pod="openshift-image-registry/node-ca-thvm5" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.623262 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/70faa63d-a86d-45aa-b6fd-81fa90436da2-serviceca\") pod \"node-ca-thvm5\" (UID: \"70faa63d-a86d-45aa-b6fd-81fa90436da2\") " pod="openshift-image-registry/node-ca-thvm5" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.684498 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vrknp\" (UniqueName: \"kubernetes.io/projected/70faa63d-a86d-45aa-b6fd-81fa90436da2-kube-api-access-vrknp\") pod \"node-ca-thvm5\" (UID: \"70faa63d-a86d-45aa-b6fd-81fa90436da2\") " pod="openshift-image-registry/node-ca-thvm5" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.690231 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83b81d23-4cbf-48f2-a90d-aa5b4cb3ce45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://638971b143e4defe2f04fba48e554aff3023d52863a0eb6f36c29d394de7eeb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18f8ad22dcd04bba0dd895d584affcd359ac6221153a18ee542dee979b9efe25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3c260f298316c830b06f5e3634cd6c09bd10bbaad77b46e427eb4b039eebb53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://745ed5274a247d61c320043043077f9ecb39fa3734db15aefa07a3bbb6225c26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:10Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.706511 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.706544 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.706553 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.706565 4782 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.706573 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:10Z","lastTransitionTime":"2026-02-02T10:39:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.721026 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-thvm5" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.724557 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7919e98f-cc47-4f3c-9c53-6313850ea7b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://550747a1eaba426f3a6b264d3a5227e51b0171729134224fa76ba55d8ad47c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://362bf5c8cbdc4831ca6ceecf9323bad1217467a40b0e8dd491098c1ae4f42810\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\"
:\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bhdgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:10Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:10 crc kubenswrapper[4782]: W0202 10:39:10.732652 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod70faa63d_a86d_45aa_b6fd_81fa90436da2.slice/crio-7719e3faa3918ee7f2383e9fb08bc3987d87b83dc30c2646f41afbb9821acfae WatchSource:0}: Error finding container 7719e3faa3918ee7f2383e9fb08bc3987d87b83dc30c2646f41afbb9821acfae: Status 404 returned error can't find the container with id 7719e3faa3918ee7f2383e9fb08bc3987d87b83dc30c2646f41afbb9821acfae Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.739923 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8lwfx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1edc5703-bb51-4f8a-9b73-68ba48a40ce8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e2a2f7b1b3c3236d77b4858e19cacf5554526262f0fc5703460a169d3b26f7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"}
,{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8lwfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:10Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.768485 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59347d2e-80ed-46da-8b2f-87cc48c5b564\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7303c0a57d425d661744c913b418fb647290581f745f973af1507563fa70b860\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10b2c4d81a795c40e715af0855002f75e3018dc14560eb97b551186a48c52744\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://667df661cba8be98b22059905ac9fe87f5d2af093d3184cfead863474441574f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfe76e80657cb875a6dd9b9c977e6120a670fc5
cd1dad2e0346b21a467b00f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58cd2e688da72a74340b25a755164b61039f7615ccda8b24b12b2b4025f89584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:10Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.778675 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 03:58:14.871934673 +0000 UTC Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.782319 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b6dd1dee15020005db2d8655165e09c72a90eaade00a66c0742c0980ccc669e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:10Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.802787 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aebafb5717a3d34f83e5f1ba346c51bccea6c08fb6ddc2fadc47d8e1a4df1bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://643eee6982fcc4c02b72fb6daf1a18fc5ae263ebf8ac506d1fa745d6c311b38a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:10Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.808970 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.809004 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.809012 4782 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.809028 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.809036 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:10Z","lastTransitionTime":"2026-02-02T10:39:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.817791 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fptzv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa0a3c57-fe47-43dd-8905-00df4cae4fb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb00c5826e8950791c82faf09cb4417f633c04908afcaed816c63b81c05aae48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6np9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fptzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:10Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.820209 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:39:10 crc kubenswrapper[4782]: E0202 10:39:10.820294 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.820348 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:39:10 crc kubenswrapper[4782]: E0202 10:39:10.820386 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.828753 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-thvm5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70faa63d-a86d-45aa-b6fd-81fa90436da2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrknp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-thvm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:10Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.846618 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:10Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.865616 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:10Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.883571 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fsqgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04d9744a-e730-45b4-9f0c-bbb5b02cd311\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7nrfs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fsqgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:10Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.902625 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"83b81d23-4cbf-48f2-a90d-aa5b4cb3ce45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://638971b143e4defe2f04fba48e554aff3023d52863a0eb6f36c29d394de7eeb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18f8ad22dcd04bba0dd895d584affcd359ac6221153a18ee542dee979b9efe25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3c260f298316c830b06f5e3634cd6c09bd10bbaad77b46e427eb4b039eebb53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://745ed5274a247d61c320043043077f9ecb39fa3734db15aefa07a3bbb6225c26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:10Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.911303 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.911329 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.911338 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.911351 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.911443 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:10Z","lastTransitionTime":"2026-02-02T10:39:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.914066 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7919e98f-cc47-4f3c-9c53-6313850ea7b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://550747a1eaba426f3a6b264d3a5227e51b0171729134224fa76ba55d8ad47c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://362bf5c8cbdc4831ca6ceecf9323bad1217467a40b0e8dd491098c1ae4f42810\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bhdgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:10Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.931998 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8lwfx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1edc5703-bb51-4f8a-9b73-68ba48a40ce8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e2a2f7b1b3c3236d77b4858e19cacf5554526262f0fc5703460a169d3b26f7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servi
ceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8lwfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:10Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.950795 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59347d2e-80ed-46da-8b2f-87cc48c5b564\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7303c0a57d425d661744c913b418fb647290581f745f973af1507563fa70b860\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10b2c4d81a795c40e715af0855002f75e3018d
c14560eb97b551186a48c52744\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://667df661cba8be98b22059905ac9fe87f5d2af093d3184cfead863474441574f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfe76e80657cb875a6dd9b9c977e6120a670fc5cd1dad2e0346b21a467b00f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58cd2e688da72a74340b25a755164b61039f7615ccda8b24b12b2b4025f89584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"image\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:10Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.966979 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b6dd1dee15020005db2d8655165e09c72a90eaade00a66c0742c0980ccc669e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:10Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.983857 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aebafb5717a3d34f83e5f1ba346c51bccea6c08fb6ddc2fadc47d8e1a4df1bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://643eee6982fcc4c02b72fb6daf1a18fc5ae263ebf8ac506d1fa745d6c311b38a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:10Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:10 crc kubenswrapper[4782]: I0202 10:39:10.995589 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fptzv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa0a3c57-fe47-43dd-8905-00df4cae4fb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb00c5826e8950791c82faf09cb4417f633c04908afcaed816c63b81c05aae48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6np9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fptzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:10Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:11 crc kubenswrapper[4782]: I0202 10:39:11.007064 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-thvm5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70faa63d-a86d-45aa-b6fd-81fa90436da2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrknp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-thvm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:11Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:11 crc kubenswrapper[4782]: I0202 10:39:11.015803 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:11 crc kubenswrapper[4782]: I0202 10:39:11.015862 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:11 crc kubenswrapper[4782]: I0202 10:39:11.015876 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:11 crc kubenswrapper[4782]: I0202 10:39:11.015893 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:11 crc kubenswrapper[4782]: I0202 10:39:11.015906 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:11Z","lastTransitionTime":"2026-02-02T10:39:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:11 crc kubenswrapper[4782]: I0202 10:39:11.022562 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfc52d94-656d-4294-b105-0f83d22c9664\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66dc97daee138ae6c8403d57638f6bd3cc1c5a08ce7e1631129017dc7046ebc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9de866cf92cbd982802a4bfa9c7f6b9839c69efb72d01f3f52e65e2355d47ded\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a81a5434992af16888e8ed5ec9555e57dd4485dd6c396816421ad07c181dae8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b089d9f482be084f3c2be3bc369e38cef6607d91af0c82b338da5c29883f6057\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b089d9f482be084f3c2be3bc369e38cef6607d91af0c82b338da5c29883f6057\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 10:39:00.270088 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:39:00.270415 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:39:00.271477 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3666386151/tls.crt::/tmp/serving-cert-3666386151/tls.key\\\\\\\"\\\\nI0202 10:39:00.760414 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:39:00.783275 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:39:00.783307 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:39:00.783330 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:39:00.783335 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 10:39:00.810370 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:39:00.810474 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810498 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 10:39:00.810539 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:39:00.810558 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:39:00.810577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 10:39:00.810783 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 10:39:00.814783 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26b0186a9067ba0a405e849b58fc2f39e93d443ddf69ebd0d4fd4877f45e805\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:11Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:11 crc kubenswrapper[4782]: I0202 10:39:11.026396 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-thvm5" event={"ID":"70faa63d-a86d-45aa-b6fd-81fa90436da2","Type":"ContainerStarted","Data":"bb015f1ff28b0f28114d4c5d3c643fdb9af2c24d6d3c4a3f34c051677c815e3d"} Feb 02 10:39:11 crc kubenswrapper[4782]: I0202 10:39:11.026436 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-thvm5" event={"ID":"70faa63d-a86d-45aa-b6fd-81fa90436da2","Type":"ContainerStarted","Data":"7719e3faa3918ee7f2383e9fb08bc3987d87b83dc30c2646f41afbb9821acfae"} Feb 02 10:39:11 crc kubenswrapper[4782]: I0202 10:39:11.027906 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-fsqgq" event={"ID":"04d9744a-e730-45b4-9f0c-bbb5b02cd311","Type":"ContainerStarted","Data":"9855ee45d6787f239c2099e99544f50b4939dc9037eb5f4cfe36af9a0bed8937"} Feb 02 10:39:11 crc kubenswrapper[4782]: I0202 10:39:11.027964 4782 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-fsqgq" event={"ID":"04d9744a-e730-45b4-9f0c-bbb5b02cd311","Type":"ContainerStarted","Data":"a93dd65fa4a1454836b2ef587d82693a6b41702e637b198b7a7758187c0b626b"} Feb 02 10:39:11 crc kubenswrapper[4782]: I0202 10:39:11.032235 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" event={"ID":"2642ee4e-c16a-4e6e-9654-a67666f1bff8","Type":"ContainerStarted","Data":"371a331afaa5bd7d90cd98ba3363865d54d8cac621ba37a40e51f0cddb4eb02b"} Feb 02 10:39:11 crc kubenswrapper[4782]: I0202 10:39:11.032274 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" event={"ID":"2642ee4e-c16a-4e6e-9654-a67666f1bff8","Type":"ContainerStarted","Data":"7728a9bcd84ff5e2fad4910e321a0461ab043b453efe05ddc36d018dba38315a"} Feb 02 10:39:11 crc kubenswrapper[4782]: I0202 10:39:11.032287 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" event={"ID":"2642ee4e-c16a-4e6e-9654-a67666f1bff8","Type":"ContainerStarted","Data":"540e6587db608dea5be7483bfc3a5c4d44da51ecf3e9bfe12550aa8bf5a74fac"} Feb 02 10:39:11 crc kubenswrapper[4782]: I0202 10:39:11.032295 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" event={"ID":"2642ee4e-c16a-4e6e-9654-a67666f1bff8","Type":"ContainerStarted","Data":"f7645944a876f5f8139960a66e91d19644df23b8bfbdb64c9d14eec7298ac150"} Feb 02 10:39:11 crc kubenswrapper[4782]: I0202 10:39:11.033986 4782 generic.go:334] "Generic (PLEG): container finished" podID="1edc5703-bb51-4f8a-9b73-68ba48a40ce8" containerID="6e2a2f7b1b3c3236d77b4858e19cacf5554526262f0fc5703460a169d3b26f7d" exitCode=0 Feb 02 10:39:11 crc kubenswrapper[4782]: I0202 10:39:11.034010 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-8lwfx" event={"ID":"1edc5703-bb51-4f8a-9b73-68ba48a40ce8","Type":"ContainerDied","Data":"6e2a2f7b1b3c3236d77b4858e19cacf5554526262f0fc5703460a169d3b26f7d"} Feb 02 10:39:11 crc kubenswrapper[4782]: I0202 10:39:11.044967 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod 
was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:11Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:11 crc kubenswrapper[4782]: I0202 10:39:11.058625 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://959c7dca46e7a68f4472c3aa2104c49d4e47190f0fab5c7ad2225e27e5c4585a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:11Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:11 crc kubenswrapper[4782]: I0202 10:39:11.080618 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2642ee4e-c16a-4e6e-9654-a67666f1bff8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/
\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/oc
p-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-prbrn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:11Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:11 crc kubenswrapper[4782]: I0202 10:39:11.092477 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7919e98f-cc47-4f3c-9c53-6313850ea7b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://550747a1eaba426f3a6b264d3a5227e51b0171729134224fa76ba55d8ad47c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://362bf5c8cbdc4831ca6ceecf9323bad1217467a40b0e8dd491098c1ae4f42810\\\",\\\"image
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bhdgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:11Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:11 crc kubenswrapper[4782]: I0202 10:39:11.111988 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8lwfx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1edc5703-bb51-4f8a-9b73-68ba48a40ce8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e2a2f7b1b3c3236d77b4858e19cacf5554526262f0fc5703460a169d3b26f7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e2a2f7b1b3c3236d77b4858e19cacf5554526262f0fc5703460a169d3b26f7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8lwfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:11Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:11 crc 
kubenswrapper[4782]: I0202 10:39:11.118835 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:11 crc kubenswrapper[4782]: I0202 10:39:11.119306 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:11 crc kubenswrapper[4782]: I0202 10:39:11.119333 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:11 crc kubenswrapper[4782]: I0202 10:39:11.119350 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:11 crc kubenswrapper[4782]: I0202 10:39:11.119364 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:11Z","lastTransitionTime":"2026-02-02T10:39:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:11 crc kubenswrapper[4782]: I0202 10:39:11.125939 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83b81d23-4cbf-48f2-a90d-aa5b4cb3ce45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://638971b143e4defe2f04fba48e554aff3023d52863a0eb6f36c29d394de7eeb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18f8ad22dcd04bba0dd895d584affcd359ac6221153a18ee542dee979b9efe25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\
\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3c260f298316c830b06f5e3634cd6c09bd10bbaad77b46e427eb4b039eebb53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://745ed5274a247d61c320043043077f9ecb39fa3734db15aefa07a3bbb6225c26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:11Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:11 crc kubenswrapper[4782]: I0202 10:39:11.159818 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b6dd1dee15020005db2d8655165e09c72a90eaade00a66c0742c0980ccc669e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:11Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:11 crc kubenswrapper[4782]: I0202 10:39:11.202394 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aebafb5717a3d34f83e5f1ba346c51bccea6c08fb6ddc2fadc47d8e1a4df1bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://643eee6982fcc4c02b72fb6daf1a18fc5ae263ebf8ac506d1fa745d6c311b38a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:11Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:11 crc kubenswrapper[4782]: I0202 10:39:11.221261 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:11 crc kubenswrapper[4782]: I0202 10:39:11.221292 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:11 crc kubenswrapper[4782]: I0202 10:39:11.221300 4782 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 02 10:39:11 crc kubenswrapper[4782]: I0202 10:39:11.221312 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:11 crc kubenswrapper[4782]: I0202 10:39:11.221322 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:11Z","lastTransitionTime":"2026-02-02T10:39:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:11 crc kubenswrapper[4782]: I0202 10:39:11.239086 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fptzv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa0a3c57-fe47-43dd-8905-00df4cae4fb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb00c5826e8950791c82faf09cb4417f633c04908afcaed816c63b81c05aae48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6np9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fptzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:11Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:11 crc kubenswrapper[4782]: I0202 10:39:11.279079 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-thvm5" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70faa63d-a86d-45aa-b6fd-81fa90436da2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrknp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-thvm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:11Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:11 crc kubenswrapper[4782]: I0202 10:39:11.323619 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:11 crc kubenswrapper[4782]: I0202 10:39:11.323672 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:11 crc kubenswrapper[4782]: I0202 10:39:11.323682 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:11 crc kubenswrapper[4782]: I0202 10:39:11.323697 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:11 crc kubenswrapper[4782]: I0202 10:39:11.323707 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:11Z","lastTransitionTime":"2026-02-02T10:39:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:11 crc kubenswrapper[4782]: I0202 10:39:11.326392 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59347d2e-80ed-46da-8b2f-87cc48c5b564\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7303c0a57d425d661744c913b418fb647290581f745f973af1507563fa70b860\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10b2c4d81a795c40e715af0855002f75e3018dc14560eb97b551186a48c52744\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://667df661cba8be98b22059905ac9fe87f5d2af093d3184cfead863474441574f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":t
rue,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfe76e80657cb875a6dd9b9c977e6120a670fc5cd1dad2e0346b21a467b00f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58cd2e688da72a74340b25a755164b61039f7615ccda8b24b12b2b4025f89584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"cont
ainerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:11Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:11 crc kubenswrapper[4782]: I0202 10:39:11.361826 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfc52d94-656d-4294-b105-0f83d22c9664\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66dc97daee138ae6c8403d57638f6bd3cc1c5a08ce7e1631129017dc7046ebc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9de866cf92cbd982802a4bfa9c7f6b9839c69efb72d01f3f52e65e2355d47ded\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a81a5434992af16888e8ed5ec9555e57dd4485dd6c396816421ad07c181dae8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b089d9f482be084f3c2be3bc369e38cef6607d91af0c82b338da5c29883f6057\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b089d9f482be084f3c2be3bc369e38cef6607d91af0c82b338da5c29883f6057\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 10:39:00.270088 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:39:00.270415 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:39:00.271477 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3666386151/tls.crt::/tmp/serving-cert-3666386151/tls.key\\\\\\\"\\\\nI0202 10:39:00.760414 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:39:00.783275 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:39:00.783307 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:39:00.783330 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:39:00.783335 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 10:39:00.810370 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:39:00.810474 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810498 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 10:39:00.810539 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:39:00.810558 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:39:00.810577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 10:39:00.810783 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 10:39:00.814783 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26b0186a9067ba0a405e849b58fc2f39e93d443ddf69ebd0d4fd4877f45e805\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:11Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:11 crc kubenswrapper[4782]: I0202 10:39:11.399102 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:11Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:11 crc kubenswrapper[4782]: I0202 10:39:11.425822 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:11 crc kubenswrapper[4782]: I0202 10:39:11.425871 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:11 crc kubenswrapper[4782]: I0202 10:39:11.425880 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:11 crc kubenswrapper[4782]: I0202 10:39:11.425894 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:11 crc kubenswrapper[4782]: I0202 10:39:11.425902 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:11Z","lastTransitionTime":"2026-02-02T10:39:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:11 crc kubenswrapper[4782]: I0202 10:39:11.441010 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://959c7dca46e7a68f4472c3aa2104c49d4e47190f0fab5c7ad2225e27e5c4585a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:11Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:11 crc kubenswrapper[4782]: I0202 10:39:11.486138 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2642ee4e-c16a-4e6e-9654-a67666f1bff8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08
Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-prbrn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:11Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:11 crc kubenswrapper[4782]: I0202 10:39:11.521191 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:11Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:11 crc kubenswrapper[4782]: I0202 10:39:11.527867 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:11 crc kubenswrapper[4782]: I0202 10:39:11.527910 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:11 crc kubenswrapper[4782]: I0202 10:39:11.527920 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:11 crc kubenswrapper[4782]: I0202 10:39:11.527934 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:11 crc kubenswrapper[4782]: I0202 10:39:11.527943 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:11Z","lastTransitionTime":"2026-02-02T10:39:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 02 10:39:11 crc kubenswrapper[4782]: I0202 10:39:11.562020 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fsqgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04d9744a-e730-45b4-9f0c-bbb5b02cd311\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9855ee45d6787f239c2099e99544f50b4939dc9037eb5f4cfe36af9a0bed8937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7nrfs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fsqgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:11Z is after 2025-08-24T17:21:41Z"
Feb 02 10:39:11 crc kubenswrapper[4782]: I0202 10:39:11.602257 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:11Z is after 2025-08-24T17:21:41Z"
Feb 02 10:39:11 crc kubenswrapper[4782]: I0202 10:39:11.610500 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Feb 02 10:39:11 crc kubenswrapper[4782]: I0202 10:39:11.630206 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:39:11 crc kubenswrapper[4782]: I0202 10:39:11.630241 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:39:11 crc kubenswrapper[4782]: I0202 10:39:11.630249 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:39:11 crc kubenswrapper[4782]: I0202 10:39:11.630263 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:39:11 crc kubenswrapper[4782]: I0202 10:39:11.630271 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:11Z","lastTransitionTime":"2026-02-02T10:39:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:39:11 crc kubenswrapper[4782]: I0202 10:39:11.658958 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Feb 02 10:39:11 crc kubenswrapper[4782]: I0202 10:39:11.732491 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:39:11 crc kubenswrapper[4782]: I0202 10:39:11.732525 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:39:11 crc kubenswrapper[4782]: I0202 10:39:11.732533 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:39:11 crc kubenswrapper[4782]: I0202 10:39:11.732547 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:39:11 crc kubenswrapper[4782]: I0202 10:39:11.732555 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:11Z","lastTransitionTime":"2026-02-02T10:39:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:39:11 crc kubenswrapper[4782]: I0202 10:39:11.779278 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 02:48:40.23479858 +0000 UTC
Feb 02 10:39:11 crc kubenswrapper[4782]: I0202 10:39:11.820753 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 10:39:11 crc kubenswrapper[4782]: E0202 10:39:11.820871 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 02 10:39:11 crc kubenswrapper[4782]: I0202 10:39:11.834370 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:39:11 crc kubenswrapper[4782]: I0202 10:39:11.834408 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:39:11 crc kubenswrapper[4782]: I0202 10:39:11.834424 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:39:11 crc kubenswrapper[4782]: I0202 10:39:11.834441 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:39:11 crc kubenswrapper[4782]: I0202 10:39:11.834452 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:11Z","lastTransitionTime":"2026-02-02T10:39:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:39:11 crc kubenswrapper[4782]: I0202 10:39:11.936822 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:39:11 crc kubenswrapper[4782]: I0202 10:39:11.936879 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:39:11 crc kubenswrapper[4782]: I0202 10:39:11.936890 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:39:11 crc kubenswrapper[4782]: I0202 10:39:11.936911 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:39:11 crc kubenswrapper[4782]: I0202 10:39:11.936925 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:11Z","lastTransitionTime":"2026-02-02T10:39:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:39:12 crc kubenswrapper[4782]: I0202 10:39:12.038245 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-8lwfx" event={"ID":"1edc5703-bb51-4f8a-9b73-68ba48a40ce8","Type":"ContainerStarted","Data":"3d41d28339830a5090ce01715a5bd4d985c3081932cb71594b1e928b900dc353"}
Feb 02 10:39:12 crc kubenswrapper[4782]: I0202 10:39:12.038446 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:39:12 crc kubenswrapper[4782]: I0202 10:39:12.038468 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:39:12 crc kubenswrapper[4782]: I0202 10:39:12.038479 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:39:12 crc kubenswrapper[4782]: I0202 10:39:12.038490 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:39:12 crc kubenswrapper[4782]: I0202 10:39:12.038502 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:12Z","lastTransitionTime":"2026-02-02T10:39:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:39:12 crc kubenswrapper[4782]: I0202 10:39:12.041847 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" event={"ID":"2642ee4e-c16a-4e6e-9654-a67666f1bff8","Type":"ContainerStarted","Data":"b6886dd3d4ca2fac733bc3cc105d17aff972828ba91abf5373769f272e5454ee"}
Feb 02 10:39:12 crc kubenswrapper[4782]: I0202 10:39:12.041884 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" event={"ID":"2642ee4e-c16a-4e6e-9654-a67666f1bff8","Type":"ContainerStarted","Data":"189c170f233964d95aebd085e016304a66076971b44f050d41d755305d11743f"}
Feb 02 10:39:12 crc kubenswrapper[4782]: I0202 10:39:12.057828 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2642ee4e-c16a-4e6e-9654-a67666f1bff8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-prbrn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:12Z is after 2025-08-24T17:21:41Z"
Feb 02 10:39:12 crc kubenswrapper[4782]: I0202 10:39:12.074292 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfc52d94-656d-4294-b105-0f83d22c9664\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66dc97daee138ae6c8403d57638f6bd3cc1c5a08ce7e1631129017dc7046ebc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9de866cf92cbd982802a4bfa9c7f6b9839c69efb72d01f3f52e65e2355d47ded\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a81a5434992af16888e8ed5ec9555e57dd4485dd6c396816421ad07c181dae8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b089d9f482be084f3c2be3bc369e38cef6607d91af0c82b338da5c29883f6057\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b089d9f482be084f3c2be3bc369e38cef6607d91af0c82b338da5c29883f6057\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 10:39:00.270088 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:39:00.270415 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:39:00.271477 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3666386151/tls.crt::/tmp/serving-cert-3666386151/tls.key\\\\\\\"\\\\nI0202 10:39:00.760414 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:39:00.783275 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:39:00.783307 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:39:00.783330 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:39:00.783335 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 10:39:00.810370 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:39:00.810474 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810498 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 10:39:00.810539 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:39:00.810558 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:39:00.810577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 10:39:00.810783 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 10:39:00.814783 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26b0186a9067ba0a405e849b58fc2f39e93d443ddf69ebd0d4fd4877f45e805\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:12Z is after 2025-08-24T17:21:41Z"
Feb 02 10:39:12 crc kubenswrapper[4782]: I0202 10:39:12.095289 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:12Z is after 2025-08-24T17:21:41Z"
Feb 02 10:39:12 crc kubenswrapper[4782]: I0202 10:39:12.109257 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://959c7dca46e7a68f4472c3aa2104c49d4e47190f0fab5c7ad2225e27e5c4585a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:12Z is after 2025-08-24T17:21:41Z"
Feb 02 10:39:12 crc kubenswrapper[4782]: I0202 10:39:12.121755 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p"
Feb 02 10:39:12 crc kubenswrapper[4782]: I0202 10:39:12.124032 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:12Z is after 2025-08-24T17:21:41Z"
Feb 02 10:39:12 crc kubenswrapper[4782]: I0202 10:39:12.137031 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:12Z is after 2025-08-24T17:21:41Z"
Feb 02 10:39:12 crc kubenswrapper[4782]: I0202 10:39:12.140275 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:39:12 crc kubenswrapper[4782]: I0202 10:39:12.140307 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:39:12 crc kubenswrapper[4782]: I0202 10:39:12.140318 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:39:12 crc kubenswrapper[4782]: I0202 10:39:12.140333 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:39:12 crc kubenswrapper[4782]: I0202 10:39:12.140343 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:12Z","lastTransitionTime":"2026-02-02T10:39:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:39:12 crc kubenswrapper[4782]: I0202 10:39:12.151056 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fsqgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04d9744a-e730-45b4-9f0c-bbb5b02cd311\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9855ee45d6787f239c2099e99544f50b4939dc9037eb5f4cfe36af9a0bed8937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7nrfs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fsqgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:12Z is after 2025-08-24T17:21:41Z"
Feb 02 10:39:12 crc kubenswrapper[4782]: I0202 10:39:12.159361 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates"
Feb 02 10:39:12 crc kubenswrapper[4782]: I0202 10:39:12.163503 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83b81d23-4cbf-48f2-a90d-aa5b4cb3ce45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://638971b143e4defe2f04fba48e554aff3023d52863a0eb6f36c29d394de7eeb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18f8ad22dcd04bba0dd895d584affcd359ac6221153a18ee542dee979b9efe25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3c260f298316c830b06f5e3634cd6c09bd10bbaad77b46e427eb4b039eebb53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://745ed5274a247d61c320043043077f9ecb39fa3734db15aefa07a3bbb6225c26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:12Z is after 2025-08-24T17:21:41Z"
Feb 02 10:39:12 crc kubenswrapper[4782]: I0202 10:39:12.174194 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7919e98f-cc47-4f3c-9c53-6313850ea7b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://550747a1eaba426f3a6b264d3a5227e51b0171729134224fa76ba55d8ad47c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://362bf5c8cbdc4831ca6ceecf9323bad1217467a40b0e8dd491098c1ae4f42810\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bhdgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:12Z is after 2025-08-24T17:21:41Z"
Feb 02 10:39:12 crc kubenswrapper[4782]: I0202 10:39:12.193299 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8lwfx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1edc5703-bb51-4f8a-9b73-68ba48a40ce8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e2a2f7b1b3c3236d77b4858e19cacf5554526262f0fc5703460a169d3b26f7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e2a2f7b1b3c3236d77b4858e19cacf5554526262f0fc5703460a169d3b26f7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d41d28339830a5090ce01715a5bd4d985c3081932cb71594b1e928b900dc353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\
":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8lwfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:12Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:12 crc kubenswrapper[4782]: I0202 10:39:12.204332 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-thvm5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70faa63d-a86d-45aa-b6fd-81fa90436da2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrknp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-thvm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:12Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:12 crc kubenswrapper[4782]: I0202 10:39:12.223451 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59347d2e-80ed-46da-8b2f-87cc48c5b564\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7303c0a57d425d661744c913b418fb647290581f745f973af1507563fa70b860\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10b2c4d81a795c40e715af0855002f7
5e3018dc14560eb97b551186a48c52744\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://667df661cba8be98b22059905ac9fe87f5d2af093d3184cfead863474441574f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfe76e80657cb875a6dd9b9c977e6120a670fc5cd1dad2e0346b21a467b00f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58cd2e688da72a74340b25a755164b61039f7615ccda8b24b12b2b4025f89584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"image\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:12Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:12 crc kubenswrapper[4782]: I0202 10:39:12.237431 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b6dd1dee15020005db2d8655165e09c72a90eaade00a66c0742c0980ccc669e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:12Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:12 crc kubenswrapper[4782]: I0202 10:39:12.241873 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:12 crc kubenswrapper[4782]: I0202 10:39:12.241909 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:12 crc kubenswrapper[4782]: I0202 10:39:12.241922 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:12 crc kubenswrapper[4782]: I0202 10:39:12.241938 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:12 crc kubenswrapper[4782]: I0202 10:39:12.241950 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:12Z","lastTransitionTime":"2026-02-02T10:39:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:12 crc kubenswrapper[4782]: I0202 10:39:12.250414 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aebafb5717a3d34f83e5f1ba346c51bccea6c08fb6ddc2fadc47d8e1a4df1bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://643eee6982fcc4c02b72fb6daf1a18fc5ae263ebf8ac506d1fa745d6c311b38a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:12Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:12 crc kubenswrapper[4782]: I0202 10:39:12.278307 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fptzv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa0a3c57-fe47-43dd-8905-00df4cae4fb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb00c5826e8950791c82faf09cb4417f633c04908afcaed816c63b81c05aae48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6np9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fptzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:12Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:12 crc kubenswrapper[4782]: I0202 10:39:12.319350 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"83b81d23-4cbf-48f2-a90d-aa5b4cb3ce45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://638971b143e4defe2f04fba48e554aff3023d52863a0eb6f36c29d394de7eeb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18f8ad22dcd04bba0dd895d584affcd359ac6221153a18ee542dee979b9efe25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3c260f298316c830b06f5e3634cd6c09bd10bbaad77b46e427eb4b039eebb53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://745ed5274a247d61c320043043077f9ecb39fa3734db15aefa07a3bbb6225c26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:12Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:12 crc kubenswrapper[4782]: I0202 10:39:12.344176 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:12 crc kubenswrapper[4782]: I0202 10:39:12.344217 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:12 crc kubenswrapper[4782]: I0202 10:39:12.344230 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:12 crc kubenswrapper[4782]: I0202 10:39:12.344245 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:12 crc kubenswrapper[4782]: I0202 10:39:12.344255 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:12Z","lastTransitionTime":"2026-02-02T10:39:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:12 crc kubenswrapper[4782]: I0202 10:39:12.361315 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7919e98f-cc47-4f3c-9c53-6313850ea7b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://550747a1eaba426f3a6b264d3a5227e51b0171729134224fa76ba55d8ad47c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://362bf5c8cbdc4831ca6ceecf9323bad1217467a40b0e8dd491098c1ae4f42810\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bhdgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:12Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:12 crc kubenswrapper[4782]: I0202 10:39:12.437259 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8lwfx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1edc5703-bb51-4f8a-9b73-68ba48a40ce8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e2a2f7b1b3c3236d77b4858e19cacf5554526262f0fc5703460a169d3b26f7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e2a2f7b1b3c3236d77b4858e19cacf5554526262f0fc5703460a169d3b26f7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/hos
t/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d41d28339830a5090ce01715a5bd4d985c3081932cb71594b1e928b900dc353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restart
Count\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8lwfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:12Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:12 crc kubenswrapper[4782]: I0202 10:39:12.441611 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 02 10:39:12 crc kubenswrapper[4782]: I0202 10:39:12.446898 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:12 crc kubenswrapper[4782]: I0202 10:39:12.446940 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:12 crc kubenswrapper[4782]: I0202 10:39:12.446950 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:12 crc kubenswrapper[4782]: I0202 10:39:12.446968 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:12 crc kubenswrapper[4782]: I0202 10:39:12.446978 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:12Z","lastTransitionTime":"2026-02-02T10:39:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:12 crc kubenswrapper[4782]: I0202 10:39:12.476777 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59347d2e-80ed-46da-8b2f-87cc48c5b564\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7303c0a57d425d661744c913b418fb647290581f745f973af1507563fa70b860\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10b2c4d81a795c40e715af0855002f75e3018dc14560eb97b551186a48c52744\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://667df661cba8be98b22059905ac9fe87f5d2af093d3184cfead863474441574f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfe76e80657cb875a6dd9b9c977e6120a670fc5cd1dad2e0346b21a467b00f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58cd2e688da72a74340b25a755164b61039f7615ccda8b24b12b2b4025f89584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-02-02T10:38:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:12Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:12 crc kubenswrapper[4782]: I0202 10:39:12.503818 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b6dd1dee15020005db2d8655165e09c72a90eaade00a66c0742c0980ccc669e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:12Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:12 crc kubenswrapper[4782]: I0202 10:39:12.540532 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aebafb5717a3d34f83e5f1ba346c51bccea6c08fb6ddc2fadc47d8e1a4df1bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://643eee6982fcc4c02b72fb6daf1a18fc5ae263ebf8ac506d1fa745d6c311b38a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:12Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:12 crc kubenswrapper[4782]: I0202 10:39:12.549801 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:12 crc kubenswrapper[4782]: I0202 10:39:12.549836 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:12 crc kubenswrapper[4782]: I0202 10:39:12.549847 4782 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 02 10:39:12 crc kubenswrapper[4782]: I0202 10:39:12.549863 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:12 crc kubenswrapper[4782]: I0202 10:39:12.550014 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:12Z","lastTransitionTime":"2026-02-02T10:39:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:12 crc kubenswrapper[4782]: I0202 10:39:12.578980 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fptzv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa0a3c57-fe47-43dd-8905-00df4cae4fb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb00c5826e8950791c82faf09cb4417f633c04908afcaed816c63b81c05aae48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6np9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fptzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:12Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:12 crc kubenswrapper[4782]: I0202 10:39:12.618266 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-thvm5" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70faa63d-a86d-45aa-b6fd-81fa90436da2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb015f1ff28b0f28114d4c5d3c643fdb9af2c24d6d3c4a3f34c051677c815e3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrknp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-thvm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:12Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:12 crc kubenswrapper[4782]: I0202 10:39:12.652665 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:12 crc kubenswrapper[4782]: I0202 10:39:12.652762 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:12 crc kubenswrapper[4782]: I0202 10:39:12.652775 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:12 crc kubenswrapper[4782]: I0202 10:39:12.652807 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:12 crc kubenswrapper[4782]: I0202 10:39:12.652819 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:12Z","lastTransitionTime":"2026-02-02T10:39:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:12 crc kubenswrapper[4782]: I0202 10:39:12.663440 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfc52d94-656d-4294-b105-0f83d22c9664\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66dc97daee138ae6c8403d57638f6bd3cc1c5a08ce7e1631129017dc7046ebc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9de866cf92cbd982802a4bfa9c7f6b9839c69efb72d01f3f52e65e2355d47ded\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a81a5434992af16888e8ed5ec9555e57dd4485dd6c396816421ad07c181dae8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f
7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b089d9f482be084f3c2be3bc369e38cef6607d91af0c82b338da5c29883f6057\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b089d9f482be084f3c2be3bc369e38cef6607d91af0c82b338da5c29883f6057\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 10:39:00.270088 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:39:00.270415 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:39:00.271477 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3666386151/tls.crt::/tmp/serving-cert-3666386151/tls.key\\\\\\\"\\\\nI0202 10:39:00.760414 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:39:00.783275 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:39:00.783307 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:39:00.783330 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:39:00.783335 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 10:39:00.810370 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:39:00.810474 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810498 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 10:39:00.810539 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:39:00.810558 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:39:00.810577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 10:39:00.810783 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 10:39:00.814783 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26b0186a9067ba0a405e849b58fc2f39e93d443ddf69ebd0d4fd4877f45e805\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:12Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:12 crc kubenswrapper[4782]: I0202 10:39:12.731706 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:12Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:12 crc kubenswrapper[4782]: I0202 10:39:12.745059 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://959c7dca46e7a68f4472c3aa2104c49d4e47190f0fab5c7ad2225e27e5c4585a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:12Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:12 crc kubenswrapper[4782]: I0202 10:39:12.754459 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:12 crc kubenswrapper[4782]: I0202 10:39:12.754501 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:12 crc kubenswrapper[4782]: I0202 10:39:12.754510 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:12 crc kubenswrapper[4782]: I0202 10:39:12.754523 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:12 crc kubenswrapper[4782]: I0202 10:39:12.754531 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:12Z","lastTransitionTime":"2026-02-02T10:39:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:12 crc kubenswrapper[4782]: I0202 10:39:12.780218 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 21:18:42.885328311 +0000 UTC Feb 02 10:39:12 crc kubenswrapper[4782]: I0202 10:39:12.784045 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2642ee4e-c16a-4e6e-9654-a67666f1bff8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-prbrn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:12Z 
is after 2025-08-24T17:21:41Z" Feb 02 10:39:12 crc kubenswrapper[4782]: I0202 10:39:12.820708 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:39:12 crc kubenswrapper[4782]: I0202 10:39:12.820723 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:39:12 crc kubenswrapper[4782]: E0202 10:39:12.820838 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:39:12 crc kubenswrapper[4782]: I0202 10:39:12.820904 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:12Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:12 crc kubenswrapper[4782]: E0202 10:39:12.820936 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:39:12 crc kubenswrapper[4782]: I0202 10:39:12.858792 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:12 crc kubenswrapper[4782]: I0202 10:39:12.858838 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:12 crc kubenswrapper[4782]: I0202 10:39:12.858847 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:12 crc kubenswrapper[4782]: I0202 10:39:12.858862 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:12 crc kubenswrapper[4782]: I0202 10:39:12.858872 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:12Z","lastTransitionTime":"2026-02-02T10:39:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:12 crc kubenswrapper[4782]: I0202 10:39:12.875251 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:12Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:12 crc kubenswrapper[4782]: I0202 10:39:12.901171 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fsqgq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"04d9744a-e730-45b4-9f0c-bbb5b02cd311\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9855ee45d6787f239c2099e99544f50b4939dc9037eb5f4cfe36af9a0bed8937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7nrfs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fsqgq\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:12Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:12 crc kubenswrapper[4782]: I0202 10:39:12.961755 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:12 crc kubenswrapper[4782]: I0202 10:39:12.961789 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:12 crc kubenswrapper[4782]: I0202 10:39:12.961797 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:12 crc kubenswrapper[4782]: I0202 10:39:12.961809 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:12 crc kubenswrapper[4782]: I0202 10:39:12.961818 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:12Z","lastTransitionTime":"2026-02-02T10:39:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:13 crc kubenswrapper[4782]: I0202 10:39:13.046910 4782 generic.go:334] "Generic (PLEG): container finished" podID="1edc5703-bb51-4f8a-9b73-68ba48a40ce8" containerID="3d41d28339830a5090ce01715a5bd4d985c3081932cb71594b1e928b900dc353" exitCode=0 Feb 02 10:39:13 crc kubenswrapper[4782]: I0202 10:39:13.046954 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-8lwfx" event={"ID":"1edc5703-bb51-4f8a-9b73-68ba48a40ce8","Type":"ContainerDied","Data":"3d41d28339830a5090ce01715a5bd4d985c3081932cb71594b1e928b900dc353"} Feb 02 10:39:13 crc kubenswrapper[4782]: I0202 10:39:13.062283 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:13Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:13 crc kubenswrapper[4782]: I0202 10:39:13.063873 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:13 crc kubenswrapper[4782]: I0202 10:39:13.063908 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:13 crc kubenswrapper[4782]: I0202 10:39:13.063917 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:13 crc kubenswrapper[4782]: I0202 10:39:13.063931 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:13 crc kubenswrapper[4782]: I0202 10:39:13.063942 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:13Z","lastTransitionTime":"2026-02-02T10:39:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:13 crc kubenswrapper[4782]: I0202 10:39:13.075544 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fsqgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04d9744a-e730-45b4-9f0c-bbb5b02cd311\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9855ee45d6787f239c2099e99544f50b4939dc9037eb5f4cfe36af9a0bed8937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7nrfs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fsqgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:13Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:13 crc kubenswrapper[4782]: I0202 10:39:13.082765 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 02 10:39:13 crc kubenswrapper[4782]: I0202 10:39:13.089281 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:13Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:13 crc kubenswrapper[4782]: I0202 10:39:13.100825 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7919e98f-cc47-4f3c-9c53-6313850ea7b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://550747a1eaba426f3a6b264d3a5227e51b0171729134224fa76ba55d8ad47c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://362bf5c8cbdc4831ca6ceecf9323bad1217467a40b0e8dd491098c1ae4f42810\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bhdgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:13Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:13 crc kubenswrapper[4782]: I0202 10:39:13.123448 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8lwfx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1edc5703-bb51-4f8a-9b73-68ba48a40ce8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e2a2f7b1b3c3236d77b4858e19cacf5554526262f0fc5703460a169d3b26f7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e2a2f7b1b3c3236d77b4858e19cacf5554526262f0fc5703460a169d3b26f7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d41d28339830a5090ce01715a5bd4d985c3081932cb71594b1e928b900dc353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d41d28339830a5090ce01715a5bd4d985c3081932cb71594b1e928b900dc353\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-
02T10:39:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8lwfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:13Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:13 crc kubenswrapper[4782]: I0202 10:39:13.160362 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83b81d23-4cbf-48f2-a90d-aa5b4cb3ce45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://638971b143e4defe2f04fba48e554aff3023d52863a0eb6f36c29d394de7eeb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18f8ad22dcd04bba0dd895d584affcd359ac6221153a18ee542dee979b9efe25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3c260f298316c830b06f5e3634cd6c09bd10bbaad77b46e427eb4b039eebb53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf
5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://745ed5274a247d61c320043043077f9ecb39fa3734db15aefa07a3bbb6225c26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:13Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:13 crc kubenswrapper[4782]: I0202 10:39:13.165729 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:13 crc kubenswrapper[4782]: I0202 10:39:13.165885 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:13 crc kubenswrapper[4782]: I0202 10:39:13.166006 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:13 crc kubenswrapper[4782]: I0202 10:39:13.166118 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:13 crc kubenswrapper[4782]: I0202 10:39:13.166225 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:13Z","lastTransitionTime":"2026-02-02T10:39:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:13 crc kubenswrapper[4782]: I0202 10:39:13.199905 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b6dd1dee15020005db2d8655165e09c72a90eaade00a66c0742c0980ccc669e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:13Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:13 crc kubenswrapper[4782]: I0202 10:39:13.238593 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aebafb5717a3d34f83e5f1ba346c51bccea6c08fb6ddc2fadc47d8e1a4df1bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://643eee6982fcc4c02b72fb6daf1a18fc5ae263ebf8ac506d1fa745d6c311b38a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:13Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:13 crc kubenswrapper[4782]: I0202 10:39:13.268980 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:13 crc kubenswrapper[4782]: I0202 10:39:13.269009 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:13 crc kubenswrapper[4782]: I0202 10:39:13.269019 4782 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 02 10:39:13 crc kubenswrapper[4782]: I0202 10:39:13.269032 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:13 crc kubenswrapper[4782]: I0202 10:39:13.269041 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:13Z","lastTransitionTime":"2026-02-02T10:39:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:13 crc kubenswrapper[4782]: I0202 10:39:13.277188 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fptzv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa0a3c57-fe47-43dd-8905-00df4cae4fb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb00c5826e8950791c82faf09cb4417f633c04908afcaed816c63b81c05aae48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6np9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fptzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:13Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:13 crc kubenswrapper[4782]: I0202 10:39:13.319568 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-thvm5" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70faa63d-a86d-45aa-b6fd-81fa90436da2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb015f1ff28b0f28114d4c5d3c643fdb9af2c24d6d3c4a3f34c051677c815e3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrknp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-thvm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:13Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:13 crc kubenswrapper[4782]: I0202 10:39:13.365620 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59347d2e-80ed-46da-8b2f-87cc48c5b564\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7303c0a57d425d661744c913b418fb647290581f745f973af1507563fa70b860\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10b2c4d81a795c40e715af0855002f75e3018dc14560eb97b551186a48c52744\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://667df661cba8be98b22059905ac9fe87f5d2af093d3184cfead863474441574f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfe76e80657cb875a6dd9b9c977e6120a670fc5
cd1dad2e0346b21a467b00f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58cd2e688da72a74340b25a755164b61039f7615ccda8b24b12b2b4025f89584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:13Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:13 crc kubenswrapper[4782]: I0202 10:39:13.371614 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:13 crc kubenswrapper[4782]: I0202 10:39:13.371636 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:13 crc kubenswrapper[4782]: I0202 10:39:13.371659 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:13 crc kubenswrapper[4782]: I0202 10:39:13.371671 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:13 crc kubenswrapper[4782]: I0202 10:39:13.371681 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:13Z","lastTransitionTime":"2026-02-02T10:39:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:13 crc kubenswrapper[4782]: I0202 10:39:13.401925 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfc52d94-656d-4294-b105-0f83d22c9664\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66dc97daee138ae6c8403d57638f6bd3cc1c5a08ce7e1631129017dc7046ebc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9de866cf92cbd982802a4bfa9c7f6b9839c69efb72d01f3f52e65e2355d47ded\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a81a5434992af16888e8ed5ec9555e57dd4485dd6c396816421ad07c181dae8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b089d9f482be084f3c2be3bc369e38cef6607d91af0c82b338da5c29883f6057\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b089d9f482be084f3c2be3bc369e38cef6607d91af0c82b338da5c29883f6057\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 10:39:00.270088 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:39:00.270415 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:39:00.271477 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3666386151/tls.crt::/tmp/serving-cert-3666386151/tls.key\\\\\\\"\\\\nI0202 10:39:00.760414 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:39:00.783275 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:39:00.783307 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:39:00.783330 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:39:00.783335 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 10:39:00.810370 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:39:00.810474 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810498 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 10:39:00.810539 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:39:00.810558 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:39:00.810577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 10:39:00.810783 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 10:39:00.814783 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26b0186a9067ba0a405e849b58fc2f39e93d443ddf69ebd0d4fd4877f45e805\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:13Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:13 crc kubenswrapper[4782]: I0202 10:39:13.440068 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:13Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:13 crc kubenswrapper[4782]: I0202 10:39:13.477741 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:13 crc kubenswrapper[4782]: I0202 10:39:13.477769 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:13 crc kubenswrapper[4782]: I0202 10:39:13.477778 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:13 crc kubenswrapper[4782]: I0202 10:39:13.477790 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:13 crc kubenswrapper[4782]: I0202 10:39:13.477800 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:13Z","lastTransitionTime":"2026-02-02T10:39:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:13 crc kubenswrapper[4782]: I0202 10:39:13.483563 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://959c7dca46e7a68f4472c3aa2104c49d4e47190f0fab5c7ad2225e27e5c4585a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:13Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:13 crc kubenswrapper[4782]: I0202 10:39:13.524033 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2642ee4e-c16a-4e6e-9654-a67666f1bff8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08
Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-prbrn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:13Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:13 crc kubenswrapper[4782]: I0202 10:39:13.579796 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:13 crc kubenswrapper[4782]: I0202 10:39:13.579876 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:13 crc kubenswrapper[4782]: I0202 10:39:13.579886 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:13 crc kubenswrapper[4782]: I0202 10:39:13.579901 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:13 crc kubenswrapper[4782]: I0202 10:39:13.579911 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:13Z","lastTransitionTime":"2026-02-02T10:39:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:13 crc kubenswrapper[4782]: I0202 10:39:13.682207 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:13 crc kubenswrapper[4782]: I0202 10:39:13.682247 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:13 crc kubenswrapper[4782]: I0202 10:39:13.682264 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:13 crc kubenswrapper[4782]: I0202 10:39:13.682280 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:13 crc kubenswrapper[4782]: I0202 10:39:13.682291 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:13Z","lastTransitionTime":"2026-02-02T10:39:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 02 10:39:13 crc kubenswrapper[4782]: I0202 10:39:13.780364 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 06:17:17.649233049 +0000 UTC
Feb 02 10:39:13 crc kubenswrapper[4782]: I0202 10:39:13.784009 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:39:13 crc kubenswrapper[4782]: I0202 10:39:13.784040 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:39:13 crc kubenswrapper[4782]: I0202 10:39:13.784050 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:39:13 crc kubenswrapper[4782]: I0202 10:39:13.784063 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:39:13 crc kubenswrapper[4782]: I0202 10:39:13.784075 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:13Z","lastTransitionTime":"2026-02-02T10:39:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:39:13 crc kubenswrapper[4782]: I0202 10:39:13.820410 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 10:39:13 crc kubenswrapper[4782]: E0202 10:39:13.820519 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 02 10:39:13 crc kubenswrapper[4782]: I0202 10:39:13.886384 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:39:13 crc kubenswrapper[4782]: I0202 10:39:13.886414 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:39:13 crc kubenswrapper[4782]: I0202 10:39:13.886428 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:39:13 crc kubenswrapper[4782]: I0202 10:39:13.886442 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:39:13 crc kubenswrapper[4782]: I0202 10:39:13.886453 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:13Z","lastTransitionTime":"2026-02-02T10:39:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
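Every failed status patch in this stretch of the log has the same root cause: the node clock reads 2026-02-02, which is past the NotAfter date (2025-08-24T17:21:41Z) of the serving certificate presented by the pod.network-node-identity.openshift.io webhook at https://127.0.0.1:9743, so the kubelet's TLS handshake is rejected before any patch reaches the webhook. As a minimal illustrative sketch (not part of the log), the Go program below performs the same validity-window comparison that crypto/tls applies during the handshake; the certificate file path is a hypothetical stand-in for wherever the webhook's serving cert is mounted (it could, for example, be captured with openssl s_client -connect 127.0.0.1:9743 -showcerts).

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	// Hypothetical path: the PEM-encoded serving certificate of the
	// webhook listening on https://127.0.0.1:9743.
	data, err := os.ReadFile("/tmp/webhook-serving.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	now := time.Now().UTC()
	// crypto/tls fails the handshake when now falls outside
	// [NotBefore, NotAfter]; that is the "certificate has expired
	// or is not yet valid" error repeated throughout this log.
	switch {
	case now.After(cert.NotAfter):
		fmt.Printf("expired: current time %s is after %s\n",
			now.Format(time.RFC3339), cert.NotAfter.Format(time.RFC3339))
	case now.Before(cert.NotBefore):
		fmt.Printf("not yet valid: current time %s is before %s\n",
			now.Format(time.RFC3339), cert.NotBefore.Format(time.RFC3339))
	default:
		fmt.Println("certificate is within its validity window")
	}
}

Run against the expired webhook certificate, this prints the same comparison the kubelet reports, e.g. "expired: current time 2026-02-02T10:39:13Z is after 2025-08-24T17:21:41Z".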
Feb 02 10:39:13 crc kubenswrapper[4782]: I0202 10:39:13.989196 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:39:13 crc kubenswrapper[4782]: I0202 10:39:13.989226 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:39:13 crc kubenswrapper[4782]: I0202 10:39:13.989236 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:39:13 crc kubenswrapper[4782]: I0202 10:39:13.989250 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:39:13 crc kubenswrapper[4782]: I0202 10:39:13.989261 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:13Z","lastTransitionTime":"2026-02-02T10:39:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:39:14 crc kubenswrapper[4782]: I0202 10:39:14.053724 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" event={"ID":"2642ee4e-c16a-4e6e-9654-a67666f1bff8","Type":"ContainerStarted","Data":"344eacf0d6239238cbf13b94f80d88a36589057a177a0f7a3d5629f7975c02d3"}
Feb 02 10:39:14 crc kubenswrapper[4782]: I0202 10:39:14.055379 4782 generic.go:334] "Generic (PLEG): container finished" podID="1edc5703-bb51-4f8a-9b73-68ba48a40ce8" containerID="615aa0c2b172d7e7ab8a484a6860646491265e3ebb25f57857c45bc591d40335" exitCode=0
Feb 02 10:39:14 crc kubenswrapper[4782]: I0202 10:39:14.055408 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-8lwfx" event={"ID":"1edc5703-bb51-4f8a-9b73-68ba48a40ce8","Type":"ContainerDied","Data":"615aa0c2b172d7e7ab8a484a6860646491265e3ebb25f57857c45bc591d40335"}
Feb 02 10:39:14 crc kubenswrapper[4782]: I0202 10:39:14.072508 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://959c7dca46e7a68f4472c3aa2104c49d4e47190f0fab5c7ad2225e27e5c4585a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:14Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:14 crc kubenswrapper[4782]: I0202 10:39:14.092901 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:14 crc kubenswrapper[4782]: I0202 10:39:14.092936 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:14 crc kubenswrapper[4782]: I0202 10:39:14.092955 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:14 crc kubenswrapper[4782]: I0202 10:39:14.092986 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:14 crc kubenswrapper[4782]: I0202 10:39:14.092996 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:14Z","lastTransitionTime":"2026-02-02T10:39:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:14 crc kubenswrapper[4782]: I0202 10:39:14.097186 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2642ee4e-c16a-4e6e-9654-a67666f1bff8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760b
b3dabc31df2343\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-prbrn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:14Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:14 crc kubenswrapper[4782]: I0202 10:39:14.111324 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfc52d94-656d-4294-b105-0f83d22c9664\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66dc97daee138ae6c8403d57638f6bd3cc1c5a08ce7e1631129017dc7046ebc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9de866cf92cbd982802a4bfa9c7f6b9839c69efb72d01f3f52e65e2355d47ded\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a81a5434992af16888e8ed5ec9555e57dd4485dd6c396816421ad07c181dae8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b089d9f482be084f3c2be3bc369e38cef6607d91af0c82b338da5c29883f6057\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b089d9f482be084f3c2be3bc369e38cef6607d91af0c82b338da5c29883f6057\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 10:39:00.270088 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:39:00.270415 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:39:00.271477 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3666386151/tls.crt::/tmp/serving-cert-3666386151/tls.key\\\\\\\"\\\\nI0202 10:39:00.760414 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:39:00.783275 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:39:00.783307 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:39:00.783330 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:39:00.783335 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 10:39:00.810370 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:39:00.810474 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810498 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 10:39:00.810539 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:39:00.810558 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:39:00.810577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 10:39:00.810783 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 10:39:00.814783 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26b0186a9067ba0a405e849b58fc2f39e93d443ddf69ebd0d4fd4877f45e805\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:14Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:14 crc kubenswrapper[4782]: I0202 10:39:14.122573 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:14Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:14 crc kubenswrapper[4782]: I0202 10:39:14.138377 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:14Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:14 crc kubenswrapper[4782]: I0202 10:39:14.149252 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:14Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:14 crc kubenswrapper[4782]: I0202 10:39:14.162908 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fsqgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04d9744a-e730-45b4-9f0c-bbb5b02cd311\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9855ee45d6787f239c2099e99544f50b4939dc9037eb5f4cfe36af9a0bed8937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7nrfs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fsqgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:14Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:14 crc kubenswrapper[4782]: I0202 10:39:14.178490 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8lwfx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1edc5703-bb51-4f8a-9b73-68ba48a40ce8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e2a2f7b1b3c3236d77b4858e19cacf5554526262f0fc5703460a169d3b26f7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e2a2f7b1b3c3236d77b4858e19cacf5554526262f0fc5703460a169d3b26f7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d41d28339830a5090ce01715a5bd4d985c3081932cb71594b1e928b900dc353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d41d28339830a5090ce01715a5bd4d985c3081932cb71594b1e928b900dc353\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://615aa0c2b172d7e7ab8a484a6860646491265e3ebb25f57857c45bc591d40335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://615aa0c2b172d7e7ab8a484a6860646491265e3ebb25f57857c45bc591d40335\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/
cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8lwfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:14Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:14 crc kubenswrapper[4782]: I0202 10:39:14.191984 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83b81d23-4cbf-48f2-a90d-aa5b4cb3ce45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://638971b143e4defe2f04fba48e554aff3023d52863a0eb6f36c29d394de7eeb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18f8ad22dcd04bba0dd895d584affcd359ac6221153a18ee542dee979b9efe25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-
dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3c260f298316c830b06f5e3634cd6c09bd10bbaad77b46e427eb4b039eebb53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://745ed5274a247d61c320043043077f9ecb39fa3734db15aefa07a3bbb6225c26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:14Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:14 crc kubenswrapper[4782]: I0202 10:39:14.199338 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:14 crc kubenswrapper[4782]: I0202 10:39:14.199369 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:14 crc kubenswrapper[4782]: I0202 10:39:14.199377 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:14 crc kubenswrapper[4782]: I0202 10:39:14.199390 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:14 crc kubenswrapper[4782]: I0202 10:39:14.199399 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:14Z","lastTransitionTime":"2026-02-02T10:39:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:14 crc kubenswrapper[4782]: I0202 10:39:14.205513 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7919e98f-cc47-4f3c-9c53-6313850ea7b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://550747a1eaba426f3a6b264d3a5227e51b0171729134224fa76ba55d8ad47c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://362bf5c8cbdc4831ca6ceecf9323bad1217467a40b0e8dd491098c1ae4f42810\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bhdgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:14Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:14 crc kubenswrapper[4782]: I0202 10:39:14.219462 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fptzv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa0a3c57-fe47-43dd-8905-00df4cae4fb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb00c5826e8950791c82faf09cb4417f633c04908afcaed816c63b81c05aae48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6np9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fptzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:14Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:14 crc kubenswrapper[4782]: I0202 10:39:14.235516 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-thvm5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"70faa63d-a86d-45aa-b6fd-81fa90436da2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb015f1ff28b0f28114d4c5d3c643fdb9af2c24d6d3c4a3f34c051677c815e3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrknp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-thvm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:14Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:14 crc kubenswrapper[4782]: I0202 10:39:14.256810 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59347d2e-80ed-46da-8b2f-87cc48c5b564\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7303c0a57d425d661744c913b418fb647290581f745f973af1507563fa70b860\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10b2c4d81a795c40e715af0855002f75e3018dc14560eb97b551186a48c52744\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://667df661cba8be98b22059905ac9fe87f5d2af093d3184cfead863474441574f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfe76e80657cb875a6dd9b9c977e6120a670fc5
cd1dad2e0346b21a467b00f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58cd2e688da72a74340b25a755164b61039f7615ccda8b24b12b2b4025f89584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:14Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:14 crc kubenswrapper[4782]: I0202 10:39:14.271364 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b6dd1dee15020005db2d8655165e09c72a90eaade00a66c0742c0980ccc669e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:14Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:14 crc kubenswrapper[4782]: I0202 10:39:14.289720 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aebafb5717a3d34f83e5f1ba346c51bccea6c08fb6ddc2fadc47d8e1a4df1bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://643eee6982fcc4c02b72fb6daf1a18fc5ae263ebf8ac506d1fa745d6c311b38a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-02T10:39:14Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:14 crc kubenswrapper[4782]: I0202 10:39:14.302857 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:14 crc kubenswrapper[4782]: I0202 10:39:14.302926 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:14 crc kubenswrapper[4782]: I0202 10:39:14.302958 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:14 crc kubenswrapper[4782]: I0202 10:39:14.302986 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:14 crc kubenswrapper[4782]: I0202 10:39:14.302999 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:14Z","lastTransitionTime":"2026-02-02T10:39:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:14 crc kubenswrapper[4782]: I0202 10:39:14.405383 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:14 crc kubenswrapper[4782]: I0202 10:39:14.405419 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:14 crc kubenswrapper[4782]: I0202 10:39:14.405442 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:14 crc kubenswrapper[4782]: I0202 10:39:14.405455 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:14 crc kubenswrapper[4782]: I0202 10:39:14.405465 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:14Z","lastTransitionTime":"2026-02-02T10:39:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:14 crc kubenswrapper[4782]: I0202 10:39:14.507392 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:14 crc kubenswrapper[4782]: I0202 10:39:14.507433 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:14 crc kubenswrapper[4782]: I0202 10:39:14.507442 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:14 crc kubenswrapper[4782]: I0202 10:39:14.507456 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:14 crc kubenswrapper[4782]: I0202 10:39:14.507465 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:14Z","lastTransitionTime":"2026-02-02T10:39:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:14 crc kubenswrapper[4782]: I0202 10:39:14.609466 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:14 crc kubenswrapper[4782]: I0202 10:39:14.609504 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:14 crc kubenswrapper[4782]: I0202 10:39:14.609517 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:14 crc kubenswrapper[4782]: I0202 10:39:14.609533 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:14 crc kubenswrapper[4782]: I0202 10:39:14.609546 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:14Z","lastTransitionTime":"2026-02-02T10:39:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:14 crc kubenswrapper[4782]: I0202 10:39:14.712009 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:14 crc kubenswrapper[4782]: I0202 10:39:14.712041 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:14 crc kubenswrapper[4782]: I0202 10:39:14.712049 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:14 crc kubenswrapper[4782]: I0202 10:39:14.712062 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:14 crc kubenswrapper[4782]: I0202 10:39:14.712072 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:14Z","lastTransitionTime":"2026-02-02T10:39:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:14 crc kubenswrapper[4782]: I0202 10:39:14.780792 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 15:37:11.15392454 +0000 UTC Feb 02 10:39:14 crc kubenswrapper[4782]: I0202 10:39:14.814657 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:14 crc kubenswrapper[4782]: I0202 10:39:14.814697 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:14 crc kubenswrapper[4782]: I0202 10:39:14.814706 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:14 crc kubenswrapper[4782]: I0202 10:39:14.814723 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:14 crc kubenswrapper[4782]: I0202 10:39:14.814734 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:14Z","lastTransitionTime":"2026-02-02T10:39:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:14 crc kubenswrapper[4782]: I0202 10:39:14.820258 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:39:14 crc kubenswrapper[4782]: I0202 10:39:14.820357 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:39:14 crc kubenswrapper[4782]: E0202 10:39:14.820402 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:39:14 crc kubenswrapper[4782]: E0202 10:39:14.820523 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:39:14 crc kubenswrapper[4782]: I0202 10:39:14.917239 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:14 crc kubenswrapper[4782]: I0202 10:39:14.917273 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:14 crc kubenswrapper[4782]: I0202 10:39:14.917283 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:14 crc kubenswrapper[4782]: I0202 10:39:14.917298 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:14 crc kubenswrapper[4782]: I0202 10:39:14.917307 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:14Z","lastTransitionTime":"2026-02-02T10:39:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:15 crc kubenswrapper[4782]: I0202 10:39:15.019326 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:15 crc kubenswrapper[4782]: I0202 10:39:15.019386 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:15 crc kubenswrapper[4782]: I0202 10:39:15.019396 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:15 crc kubenswrapper[4782]: I0202 10:39:15.019413 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:15 crc kubenswrapper[4782]: I0202 10:39:15.019425 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:15Z","lastTransitionTime":"2026-02-02T10:39:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:15 crc kubenswrapper[4782]: I0202 10:39:15.061248 4782 generic.go:334] "Generic (PLEG): container finished" podID="1edc5703-bb51-4f8a-9b73-68ba48a40ce8" containerID="143776c340b6d3345f6902124169235fcf74289d9e107b8e6bdb5023f47b73df" exitCode=0 Feb 02 10:39:15 crc kubenswrapper[4782]: I0202 10:39:15.061296 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-8lwfx" event={"ID":"1edc5703-bb51-4f8a-9b73-68ba48a40ce8","Type":"ContainerDied","Data":"143776c340b6d3345f6902124169235fcf74289d9e107b8e6bdb5023f47b73df"} Feb 02 10:39:15 crc kubenswrapper[4782]: I0202 10:39:15.075938 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:15Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:15 crc kubenswrapper[4782]: I0202 10:39:15.092055 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:15Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:15 crc kubenswrapper[4782]: I0202 10:39:15.105826 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fsqgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04d9744a-e730-45b4-9f0c-bbb5b02cd311\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9855ee45d6787f239c2099e99544f50b4939dc9037eb5f4cfe36af9a0bed8937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7nrfs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fsqgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:15Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:15 crc kubenswrapper[4782]: I0202 10:39:15.117605 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83b81d23-4cbf-48f2-a90d-aa5b4cb3ce45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://638971b143e4defe2f04fba48e554aff3023d52863a0eb6f36c29d394de7eeb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18f8ad22dcd04bba0dd895d584affcd359ac6221153a18ee542dee979b9efe25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\"
:\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3c260f298316c830b06f5e3634cd6c09bd10bbaad77b46e427eb4b039eebb53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://745ed5274a247d61c320043043077f9ecb39fa3734db15aefa07a3bbb6225c26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:15Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:15 crc kubenswrapper[4782]: I0202 10:39:15.121088 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:15 crc kubenswrapper[4782]: I0202 10:39:15.121126 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:15 crc kubenswrapper[4782]: I0202 10:39:15.121138 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:15 crc kubenswrapper[4782]: I0202 10:39:15.121154 4782 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:15 crc kubenswrapper[4782]: I0202 10:39:15.121165 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:15Z","lastTransitionTime":"2026-02-02T10:39:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:15 crc kubenswrapper[4782]: I0202 10:39:15.128697 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7919e98f-cc47-4f3c-9c53-6313850ea7b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://550747a1eaba426f3a6b264d3a5227e51b0171729134224fa76ba55d8ad47c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://362bf5c8cbdc4831ca6ceecf9323bad1217467a40b0e8dd491098c1ae4f42810\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfn4s\\\",\\\"readOnly
\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bhdgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:15Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:15 crc kubenswrapper[4782]: I0202 10:39:15.143469 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8lwfx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1edc5703-bb51-4f8a-9b73-68ba48a40ce8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e2a2f7b1b3c3236d77b4858e19cacf5554526262f0fc5703460a169d3b26f7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e2a2f7b1b3c3236d77b4858e19cacf5554526262f0fc5703460a169d3b26f7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d41d28339830a5090ce01715a5bd4d985c3081932cb71594b1e928b900dc353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d41d28339830a5090ce01715a5bd4d985c3081932cb71594b1e928b900dc353\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://615aa0c2b172d7e7ab8a484a6860646491265e3ebb25f57857c45bc591d40335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://615aa0c2b172d7e7ab8a484a6860646491265e3ebb25f57857c45bc591d40335\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://143776c340b6d3345f6902124169235fcf74289d9e107b8e6bdb5023f47b73df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://143776c340b6d3345f6902124169235fcf74289d9e107b8e6bdb5023f47b73df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disa
bled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8lwfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:15Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:15 crc kubenswrapper[4782]: I0202 10:39:15.153015 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-thvm5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70faa63d-a86d-45aa-b6fd-81fa90436da2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb015f1ff28b0f28114d4c5d3c643fdb9af2c24d6d3c4a3f34c051677c815e3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrknp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\
"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-thvm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:15Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:15 crc kubenswrapper[4782]: I0202 10:39:15.172012 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59347d2e-80ed-46da-8b2f-87cc48c5b564\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7303c0a57d425d661744c913b418fb647290581f745f973af1507563fa70b860\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10b2c4d81a795c40e715af0855002f75e3018dc14560eb97b551186a48c52744\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://667df661cba8be98b22059905ac9fe87f5d2af093d3184cfead863474441574f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036c
c2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfe76e80657cb875a6dd9b9c977e6120a670fc5cd1dad2e0346b21a467b00f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58cd2e688da72a74340b25a755164b61039f7615ccda8b24b12b2b4025f89584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/o
penshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:15Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:15 crc kubenswrapper[4782]: I0202 10:39:15.185626 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b6dd1dee15020005db2d8655165e09c72a90eaade00a66c0742c0980ccc669e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:15Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:15 crc kubenswrapper[4782]: I0202 10:39:15.199314 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aebafb5717a3d34f83e5f1ba346c51bccea6c08fb6ddc2fadc47d8e1a4df1bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://643eee6982fcc4c02b72fb6daf1a18fc5ae263ebf8ac506d1fa745d6c311b38a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:15Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:15 crc kubenswrapper[4782]: I0202 10:39:15.209886 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fptzv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa0a3c57-fe47-43dd-8905-00df4cae4fb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb00c5826e8950791c82faf09cb4417f633c04908afcaed816c63b81c05aae48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6np9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fptzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:15Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:15 crc kubenswrapper[4782]: I0202 10:39:15.223088 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:15 crc kubenswrapper[4782]: I0202 10:39:15.223124 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:15 crc kubenswrapper[4782]: I0202 10:39:15.223135 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:15 crc kubenswrapper[4782]: I0202 10:39:15.223150 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:15 crc kubenswrapper[4782]: I0202 10:39:15.223161 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:15Z","lastTransitionTime":"2026-02-02T10:39:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:15 crc kubenswrapper[4782]: I0202 10:39:15.237970 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2642ee4e-c16a-4e6e-9654-a67666f1bff8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"
ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",
\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"c
ri-o://c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-prbrn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:15Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:15 crc kubenswrapper[4782]: I0202 10:39:15.252159 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfc52d94-656d-4294-b105-0f83d22c9664\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66dc97daee138ae6c8403d57638f6bd3cc1c5a08ce7e1631129017dc7046ebc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9de866cf92cbd982802a4bfa9c7f6b9839c69efb72d01f3f52e65e2355d47ded\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a81a5434992af16888e8ed5ec9555e57dd4485dd6c396816421ad07c181dae8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b089d9f482be084f3c2be3bc369e38cef6607d91af0c82b338da5c29883f6057\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b089d9f482be084f3c2be3bc369e38cef6607d91af0c82b338da5c29883f6057\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 10:39:00.270088 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:39:00.270415 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:39:00.271477 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3666386151/tls.crt::/tmp/serving-cert-3666386151/tls.key\\\\\\\"\\\\nI0202 10:39:00.760414 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:39:00.783275 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:39:00.783307 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:39:00.783330 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:39:00.783335 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 10:39:00.810370 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:39:00.810474 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810498 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 10:39:00.810539 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:39:00.810558 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:39:00.810577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 10:39:00.810783 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 10:39:00.814783 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26b0186a9067ba0a405e849b58fc2f39e93d443ddf69ebd0d4fd4877f45e805\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:15Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:15 crc kubenswrapper[4782]: I0202 10:39:15.263598 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:15Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:15 crc kubenswrapper[4782]: I0202 10:39:15.276984 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://959c7dca46e7a68f4472c3aa2104c49d4e47190f0fab5c7ad2225e27e5c4585a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:15Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:15 crc kubenswrapper[4782]: I0202 10:39:15.324970 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:15 crc kubenswrapper[4782]: I0202 10:39:15.325023 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:15 crc kubenswrapper[4782]: I0202 10:39:15.325033 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:15 crc kubenswrapper[4782]: I0202 10:39:15.325047 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:15 crc kubenswrapper[4782]: I0202 10:39:15.325056 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:15Z","lastTransitionTime":"2026-02-02T10:39:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:15 crc kubenswrapper[4782]: I0202 10:39:15.426956 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:15 crc kubenswrapper[4782]: I0202 10:39:15.426995 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:15 crc kubenswrapper[4782]: I0202 10:39:15.427003 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:15 crc kubenswrapper[4782]: I0202 10:39:15.427018 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:15 crc kubenswrapper[4782]: I0202 10:39:15.427027 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:15Z","lastTransitionTime":"2026-02-02T10:39:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:15 crc kubenswrapper[4782]: I0202 10:39:15.529025 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:15 crc kubenswrapper[4782]: I0202 10:39:15.529066 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:15 crc kubenswrapper[4782]: I0202 10:39:15.529079 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:15 crc kubenswrapper[4782]: I0202 10:39:15.529096 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:15 crc kubenswrapper[4782]: I0202 10:39:15.529109 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:15Z","lastTransitionTime":"2026-02-02T10:39:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:15 crc kubenswrapper[4782]: I0202 10:39:15.631808 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:15 crc kubenswrapper[4782]: I0202 10:39:15.631858 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:15 crc kubenswrapper[4782]: I0202 10:39:15.631873 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:15 crc kubenswrapper[4782]: I0202 10:39:15.631895 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:15 crc kubenswrapper[4782]: I0202 10:39:15.631914 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:15Z","lastTransitionTime":"2026-02-02T10:39:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:15 crc kubenswrapper[4782]: I0202 10:39:15.733501 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:15 crc kubenswrapper[4782]: I0202 10:39:15.733549 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:15 crc kubenswrapper[4782]: I0202 10:39:15.733561 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:15 crc kubenswrapper[4782]: I0202 10:39:15.733576 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:15 crc kubenswrapper[4782]: I0202 10:39:15.733586 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:15Z","lastTransitionTime":"2026-02-02T10:39:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:15 crc kubenswrapper[4782]: I0202 10:39:15.781112 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 04:49:46.930284221 +0000 UTC Feb 02 10:39:15 crc kubenswrapper[4782]: I0202 10:39:15.820465 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:39:15 crc kubenswrapper[4782]: E0202 10:39:15.820567 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:39:15 crc kubenswrapper[4782]: I0202 10:39:15.836947 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:15 crc kubenswrapper[4782]: I0202 10:39:15.837276 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:15 crc kubenswrapper[4782]: I0202 10:39:15.837287 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:15 crc kubenswrapper[4782]: I0202 10:39:15.837300 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:15 crc kubenswrapper[4782]: I0202 10:39:15.837310 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:15Z","lastTransitionTime":"2026-02-02T10:39:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:15 crc kubenswrapper[4782]: I0202 10:39:15.939960 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:15 crc kubenswrapper[4782]: I0202 10:39:15.940007 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:15 crc kubenswrapper[4782]: I0202 10:39:15.940020 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:15 crc kubenswrapper[4782]: I0202 10:39:15.940038 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:15 crc kubenswrapper[4782]: I0202 10:39:15.940053 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:15Z","lastTransitionTime":"2026-02-02T10:39:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:16 crc kubenswrapper[4782]: I0202 10:39:16.042296 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:16 crc kubenswrapper[4782]: I0202 10:39:16.042366 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:16 crc kubenswrapper[4782]: I0202 10:39:16.042378 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:16 crc kubenswrapper[4782]: I0202 10:39:16.042391 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:16 crc kubenswrapper[4782]: I0202 10:39:16.042405 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:16Z","lastTransitionTime":"2026-02-02T10:39:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:16 crc kubenswrapper[4782]: I0202 10:39:16.067074 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" event={"ID":"2642ee4e-c16a-4e6e-9654-a67666f1bff8","Type":"ContainerStarted","Data":"6d829a651d8db6d5abb04fc40c07ae057cc69d7a352de1d104e039e86f64c0b4"} Feb 02 10:39:16 crc kubenswrapper[4782]: I0202 10:39:16.069404 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-8lwfx" event={"ID":"1edc5703-bb51-4f8a-9b73-68ba48a40ce8","Type":"ContainerStarted","Data":"754a40bbe5b071214dbd49089838b58b26d7586fdd8479a6f8522ef49c03e255"} Feb 02 10:39:16 crc kubenswrapper[4782]: I0202 10:39:16.089992 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59347d2e-80ed-46da-8b2f-87cc48c5b564\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7303c0a57d425d661744c913b418fb647290581f745f973af1507563fa70b860\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10b2c4d81a795c40e715af0855002f75e3018dc14560eb97b551186a48c52744\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://667df661cba8be98b22059905ac9fe87f5d2af093d3184cfead863474441574f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfe76e80657cb875a6dd9b9c977e6120a670fc5
cd1dad2e0346b21a467b00f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58cd2e688da72a74340b25a755164b61039f7615ccda8b24b12b2b4025f89584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:16Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:16 crc kubenswrapper[4782]: I0202 10:39:16.102109 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b6dd1dee15020005db2d8655165e09c72a90eaade00a66c0742c0980ccc669e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:16Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:16 crc kubenswrapper[4782]: I0202 10:39:16.113680 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aebafb5717a3d34f83e5f1ba346c51bccea6c08fb6ddc2fadc47d8e1a4df1bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://643eee6982fcc4c02b72fb6daf1a18fc5ae263ebf8ac506d1fa745d6c311b38a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-02T10:39:16Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:16 crc kubenswrapper[4782]: I0202 10:39:16.123669 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fptzv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa0a3c57-fe47-43dd-8905-00df4cae4fb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb00c5826e8950791c82faf09cb4417f633c04908afcaed816c63b81c05aae48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6np9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fptzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:16Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:16 crc kubenswrapper[4782]: I0202 10:39:16.133948 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-thvm5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"70faa63d-a86d-45aa-b6fd-81fa90436da2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb015f1ff28b0f28114d4c5d3c643fdb9af2c24d6d3c4a3f34c051677c815e3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrknp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-thvm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:16Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:16 crc kubenswrapper[4782]: I0202 10:39:16.144958 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:16 crc kubenswrapper[4782]: I0202 10:39:16.144991 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:16 crc kubenswrapper[4782]: I0202 10:39:16.144999 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:16 crc kubenswrapper[4782]: I0202 10:39:16.145012 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:16 crc kubenswrapper[4782]: I0202 10:39:16.145035 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:16Z","lastTransitionTime":"2026-02-02T10:39:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:16 crc kubenswrapper[4782]: I0202 10:39:16.148437 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfc52d94-656d-4294-b105-0f83d22c9664\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66dc97daee138ae6c8403d57638f6bd3cc1c5a08ce7e1631129017dc7046ebc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9de866cf92cbd982802a4bfa9c7f6b9839c69efb72d01f3f52e65e2355d47ded\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a81a5434992af16888e8ed5ec9555e57dd4485dd6c396816421ad07c181dae8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f
7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b089d9f482be084f3c2be3bc369e38cef6607d91af0c82b338da5c29883f6057\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b089d9f482be084f3c2be3bc369e38cef6607d91af0c82b338da5c29883f6057\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 10:39:00.270088 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:39:00.270415 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:39:00.271477 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3666386151/tls.crt::/tmp/serving-cert-3666386151/tls.key\\\\\\\"\\\\nI0202 10:39:00.760414 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:39:00.783275 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:39:00.783307 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:39:00.783330 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:39:00.783335 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 10:39:00.810370 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:39:00.810474 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810498 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 10:39:00.810539 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:39:00.810558 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:39:00.810577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 10:39:00.810783 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 10:39:00.814783 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26b0186a9067ba0a405e849b58fc2f39e93d443ddf69ebd0d4fd4877f45e805\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:16Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:16 crc kubenswrapper[4782]: I0202 10:39:16.162065 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:16Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:16 crc kubenswrapper[4782]: I0202 10:39:16.176745 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://959c7dca46e7a68f4472c3aa2104c49d4e47190f0fab5c7ad2225e27e5c4585a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:16Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:16 crc kubenswrapper[4782]: I0202 10:39:16.196019 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2642ee4e-c16a-4e6e-9654-a67666f1bff8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"
}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrid
es\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\
\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-prbrn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:16Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:16 crc kubenswrapper[4782]: I0202 10:39:16.209225 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:16Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:16 crc kubenswrapper[4782]: I0202 10:39:16.221609 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:16Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:16 crc kubenswrapper[4782]: I0202 10:39:16.235513 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fsqgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04d9744a-e730-45b4-9f0c-bbb5b02cd311\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9855ee45d6787f239c2099e99544f50b4939dc9037eb5f4cfe36af9a0bed8937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7nrfs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fsqgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:16Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:16 crc kubenswrapper[4782]: I0202 10:39:16.247248 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:16 crc kubenswrapper[4782]: I0202 10:39:16.247289 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:16 crc kubenswrapper[4782]: I0202 10:39:16.247298 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:16 crc kubenswrapper[4782]: I0202 10:39:16.247313 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:16 crc kubenswrapper[4782]: I0202 10:39:16.247322 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:16Z","lastTransitionTime":"2026-02-02T10:39:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:16 crc kubenswrapper[4782]: I0202 10:39:16.249070 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83b81d23-4cbf-48f2-a90d-aa5b4cb3ce45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://638971b143e4defe2f04fba48e554aff3023d52863a0eb6f36c29d394de7eeb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18f8ad22dcd04bba0dd895d584affcd359ac6221153a18ee542dee979b9efe25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3c260f298316c830b06f5e3634cd6c09bd10bbaad77b46e427eb4b039eebb53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://745ed5274a247d61c320043043077f9ecb39fa3734db15aefa07a3bbb6225c26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:16Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:16 crc kubenswrapper[4782]: I0202 10:39:16.262105 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7919e98f-cc47-4f3c-9c53-6313850ea7b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://550747a1eaba426f3a6b264d3a5227e51b0171729134224fa76ba55d8ad47c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://362bf5c8cbdc4831ca6ceecf9323bad1217467a40b0e8dd491098c1ae4f42810\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bhdgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:16Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:16 crc kubenswrapper[4782]: I0202 10:39:16.277986 4782 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8lwfx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1edc5703-bb51-4f8a-9b73-68ba48a40ce8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e2a2f7b1b3c3236d77b4858e19cacf5554526262f0fc5703460a169d3b26f7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e2a2f7b1b3c3236d77b4858e19cacf5554526262f0fc5703460a169d3b26f7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d41d28
339830a5090ce01715a5bd4d985c3081932cb71594b1e928b900dc353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d41d28339830a5090ce01715a5bd4d985c3081932cb71594b1e928b900dc353\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://615aa0c2b172d7e7ab8a484a6860646491265e3ebb25f57857c45bc591d40335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://615aa0c2b172d7e7ab8a484a6860646491265e3ebb25f57857c45bc591d40335\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://143776c340b6d3345f6902124169235fcf74289d9e107b8e6bdb5023f47b73df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://143776c340b6d3345f6902124169235fcf74289d9e107b8e6bdb5023f47b73df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://754a40bbe5b071214dbd49089838b58b26d7586fdd8479a6f8522ef49c03e255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8lwfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:16Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:16 crc kubenswrapper[4782]: I0202 10:39:16.349757 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:16 crc kubenswrapper[4782]: I0202 10:39:16.349818 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:16 crc kubenswrapper[4782]: I0202 10:39:16.349836 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:16 crc kubenswrapper[4782]: I0202 10:39:16.349857 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:16 crc kubenswrapper[4782]: I0202 10:39:16.349875 
4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:16Z","lastTransitionTime":"2026-02-02T10:39:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:16 crc kubenswrapper[4782]: I0202 10:39:16.451765 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:16 crc kubenswrapper[4782]: I0202 10:39:16.451792 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:16 crc kubenswrapper[4782]: I0202 10:39:16.451800 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:16 crc kubenswrapper[4782]: I0202 10:39:16.451813 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:16 crc kubenswrapper[4782]: I0202 10:39:16.451822 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:16Z","lastTransitionTime":"2026-02-02T10:39:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:16 crc kubenswrapper[4782]: I0202 10:39:16.554121 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:16 crc kubenswrapper[4782]: I0202 10:39:16.554158 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:16 crc kubenswrapper[4782]: I0202 10:39:16.554169 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:16 crc kubenswrapper[4782]: I0202 10:39:16.554184 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:16 crc kubenswrapper[4782]: I0202 10:39:16.554196 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:16Z","lastTransitionTime":"2026-02-02T10:39:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:16 crc kubenswrapper[4782]: I0202 10:39:16.656713 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:16 crc kubenswrapper[4782]: I0202 10:39:16.656760 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:16 crc kubenswrapper[4782]: I0202 10:39:16.656771 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:16 crc kubenswrapper[4782]: I0202 10:39:16.656788 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:16 crc kubenswrapper[4782]: I0202 10:39:16.656800 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:16Z","lastTransitionTime":"2026-02-02T10:39:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:16 crc kubenswrapper[4782]: I0202 10:39:16.678073 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:39:16 crc kubenswrapper[4782]: I0202 10:39:16.678201 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:39:16 crc kubenswrapper[4782]: E0202 10:39:16.678226 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:39:32.678202449 +0000 UTC m=+52.562395165 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:39:16 crc kubenswrapper[4782]: I0202 10:39:16.678270 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:39:16 crc kubenswrapper[4782]: I0202 10:39:16.678314 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:39:16 crc kubenswrapper[4782]: E0202 10:39:16.678331 4782 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 10:39:16 crc kubenswrapper[4782]: I0202 10:39:16.678339 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:39:16 crc kubenswrapper[4782]: E0202 10:39:16.678359 4782 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 10:39:16 crc kubenswrapper[4782]: E0202 10:39:16.678374 4782 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 10:39:16 crc kubenswrapper[4782]: E0202 10:39:16.678391 4782 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 10:39:16 crc kubenswrapper[4782]: E0202 10:39:16.678421 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-02 10:39:32.678405154 +0000 UTC m=+52.562597930 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 10:39:16 crc kubenswrapper[4782]: E0202 10:39:16.678439 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 10:39:32.678431495 +0000 UTC m=+52.562624211 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 10:39:16 crc kubenswrapper[4782]: E0202 10:39:16.678453 4782 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 10:39:16 crc kubenswrapper[4782]: E0202 10:39:16.678478 4782 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 10:39:16 crc kubenswrapper[4782]: E0202 10:39:16.678494 4782 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 10:39:16 crc kubenswrapper[4782]: E0202 10:39:16.678502 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 10:39:32.678494997 +0000 UTC m=+52.562687713 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 10:39:16 crc kubenswrapper[4782]: E0202 10:39:16.678505 4782 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 10:39:16 crc kubenswrapper[4782]: E0202 10:39:16.678537 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-02 10:39:32.678528238 +0000 UTC m=+52.562721014 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 10:39:16 crc kubenswrapper[4782]: I0202 10:39:16.759087 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:16 crc kubenswrapper[4782]: I0202 10:39:16.759127 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:16 crc kubenswrapper[4782]: I0202 10:39:16.759136 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:16 crc kubenswrapper[4782]: I0202 10:39:16.759152 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:16 crc kubenswrapper[4782]: I0202 10:39:16.759161 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:16Z","lastTransitionTime":"2026-02-02T10:39:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:16 crc kubenswrapper[4782]: I0202 10:39:16.781537 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 06:08:21.353181638 +0000 UTC Feb 02 10:39:16 crc kubenswrapper[4782]: I0202 10:39:16.820987 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:39:16 crc kubenswrapper[4782]: I0202 10:39:16.821040 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:39:16 crc kubenswrapper[4782]: E0202 10:39:16.821113 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:39:16 crc kubenswrapper[4782]: E0202 10:39:16.821168 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:39:16 crc kubenswrapper[4782]: I0202 10:39:16.861380 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:16 crc kubenswrapper[4782]: I0202 10:39:16.861429 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:16 crc kubenswrapper[4782]: I0202 10:39:16.861439 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:16 crc kubenswrapper[4782]: I0202 10:39:16.861452 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:16 crc kubenswrapper[4782]: I0202 10:39:16.861461 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:16Z","lastTransitionTime":"2026-02-02T10:39:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:16 crc kubenswrapper[4782]: I0202 10:39:16.964034 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:16 crc kubenswrapper[4782]: I0202 10:39:16.964074 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:16 crc kubenswrapper[4782]: I0202 10:39:16.964086 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:16 crc kubenswrapper[4782]: I0202 10:39:16.964100 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:16 crc kubenswrapper[4782]: I0202 10:39:16.964109 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:16Z","lastTransitionTime":"2026-02-02T10:39:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.066633 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.066708 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.066720 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.066737 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.066749 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:17Z","lastTransitionTime":"2026-02-02T10:39:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.071875 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.071936 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.096099 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59347d2e-80ed-46da-8b2f-87cc48c5b564\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7303c0a57d425d661744c913b418fb647290581f745f973af1507563fa70b860\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\
"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10b2c4d81a795c40e715af0855002f75e3018dc14560eb97b551186a48c52744\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://667df661cba8be98b22059905ac9fe87f5d2af093d3184cfead863474441574f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfe76e80657cb875a6dd9b9c977e6120a670fc5cd1dad2e0346b21a467b00f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58cd2e688da72a74340b25a755164b61039f7615ccda8b24b12b2b4025f89584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"conta
inerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:17Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.111965 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.112008 4782 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.112022 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.112044 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.112058 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:17Z","lastTransitionTime":"2026-02-02T10:39:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.113001 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b6dd1dee15020005db2d8655165e09c72a90eaade00a66c0742c0980ccc669e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:17Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.125656 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aebafb5717a3d34f83e5f1ba346c51bccea6c08fb6ddc2fadc47d8e1a4df1bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://643eee6982fcc4c02b72fb6daf1a18fc5ae263ebf8ac506d1fa745d6c311b38a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:17Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:17 crc kubenswrapper[4782]: E0202 10:39:17.127926 4782 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae66
9\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9f06aea5-54f4-4b11-8fec-22fbe76ec89b\\\",\\\"systemUUID\\\":\\\"b85e9547-662e-4455-bbaa-2d2f2aaad904\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:17Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.130976 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.130999 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.131007 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.131020 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.131030 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:17Z","lastTransitionTime":"2026-02-02T10:39:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.139984 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fptzv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa0a3c57-fe47-43dd-8905-00df4cae4fb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb00c5826e8950791c82faf09cb4417f633c04908afcaed816c63b81c05aae48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6np9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fptzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:17Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.140042 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.140946 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" Feb 02 10:39:17 crc kubenswrapper[4782]: E0202 10:39:17.147449 4782 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae66
9\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9f06aea5-54f4-4b11-8fec-22fbe76ec89b\\\",\\\"systemUUID\\\":\\\"b85e9547-662e-4455-bbaa-2d2f2aaad904\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:17Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.150283 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-thvm5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70faa63d-a86d-45aa-b6fd-81fa90436da2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb015f1ff28b0f28114d4c5d3c643fdb9af2c24d6d3c4a3f34c051677c815e3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrknp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs
\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-thvm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:17Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.150840 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.150861 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.150868 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.150880 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.150891 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:17Z","lastTransitionTime":"2026-02-02T10:39:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:17 crc kubenswrapper[4782]: E0202 10:39:17.161406 4782 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9f06aea5-54f4-4b11-8fec-22fbe76ec89b\\\",\\\"systemUUID\\\":\\\"b85e9547-662e-4455-bbaa-2d2f2aaad904\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:17Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.165686 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.165714 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.165723 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.165736 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.165745 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:17Z","lastTransitionTime":"2026-02-02T10:39:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.165849 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfc52d94-656d-4294-b105-0f83d22c9664\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66dc97daee138ae6c8403d57638f6bd3cc1c5a08ce7e1631129017dc7046ebc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9de866cf92cbd982802a4bfa9c7f6b9839c69efb72d01f3f52e65e2355d47ded\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a81a5434992af16888e8ed5ec9555e57dd4485dd6c396816421ad07c181dae8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b089d9f482be084f3c2be3bc369e38cef6607d91af0c82b338da5c29883f6057\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b089d9f482be084f3c2be3bc369e38cef6607d91af0c82b338da5c29883f6057\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 10:39:00.270088 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:39:00.270415 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:39:00.271477 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3666386151/tls.crt::/tmp/serving-cert-3666386151/tls.key\\\\\\\"\\\\nI0202 10:39:00.760414 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:39:00.783275 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:39:00.783307 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:39:00.783330 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:39:00.783335 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 10:39:00.810370 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:39:00.810474 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810498 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 10:39:00.810539 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:39:00.810558 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:39:00.810577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 10:39:00.810783 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 10:39:00.814783 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26b0186a9067ba0a405e849b58fc2f39e93d443ddf69ebd0d4fd4877f45e805\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:17Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:17 crc kubenswrapper[4782]: E0202 10:39:17.176445 4782 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\
"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":45063
7738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9f06aea5-54f4-4b11-8fec-22fbe76ec89b\\\",\\\"systemUUID\\\":\\\"b85e9547-662e-4455-bbaa-2d2f2aaad904\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:17Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.177778 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:17Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.180072 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.180096 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.180104 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.180118 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.180128 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:17Z","lastTransitionTime":"2026-02-02T10:39:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.189491 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://959c7dca46e7a68f4472c3aa2104c49d4e47190f0fab5c7ad2225e27e5c4585a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:17Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:17 crc kubenswrapper[4782]: E0202 10:39:17.191559 4782 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:17Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeByt
es\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9f06aea5-54f4-4b11-8fec-22fbe76ec89b\\\",\\\"systemUUID\\\":\\\"b
85e9547-662e-4455-bbaa-2d2f2aaad904\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:17Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:17 crc kubenswrapper[4782]: E0202 10:39:17.191674 4782 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.193074 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.193107 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.193117 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.193130 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.193139 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:17Z","lastTransitionTime":"2026-02-02T10:39:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.208819 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2642ee4e-c16a-4e6e-9654-a67666f1bff8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7728a9bcd84ff5e2fad4910e321a0461ab043b453efe05ddc36d018dba38315a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://371a331afaa5bd7d90cd98ba3363865d54d8cac621ba37a40e51f0cddb4eb02b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6886dd3d4ca2fac733bc3cc105d17aff972828ba91abf5373769f272e5454ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://189c170f233964d95aebd085e016304a66076971b44f050d41d755305d11743f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://540e6587db608dea5be7483bfc3a5c4d44da51ecf3e9bfe12550aa8bf5a74fac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7645944a876f5f8139960a66e91d19644df23b8bfbdb64c9d14eec7298ac150\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d829a651d8db6d5abb04fc40c07ae057cc69d7a
352de1d104e039e86f64c0b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://344eacf0d6239238cbf13b94f80d88a36589057a177a0f7a3d5629f7975c02d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccou
nt\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-prbrn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:17Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.222045 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:17Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.233361 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:17Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.245263 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fsqgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04d9744a-e730-45b4-9f0c-bbb5b02cd311\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9855ee45d6787f239c2099e99544f50b4939dc9037eb5f4cfe36af9a0bed8937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7nrfs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fsqgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:17Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.257072 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83b81d23-4cbf-48f2-a90d-aa5b4cb3ce45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://638971b143e4defe2f04fba48e554aff3023d52863a0eb6f36c29d394de7eeb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18f8ad22dcd04bba0dd895d584affcd359ac6221153a18ee542dee979b9efe25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\"
:\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3c260f298316c830b06f5e3634cd6c09bd10bbaad77b46e427eb4b039eebb53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://745ed5274a247d61c320043043077f9ecb39fa3734db15aefa07a3bbb6225c26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:17Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.270169 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7919e98f-cc47-4f3c-9c53-6313850ea7b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://550747a1eaba426f3a6b264d3a5227e51b0171729134224fa76ba55d8ad47c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://362bf5c8cbdc4831ca6ceecf9323bad1217467a40b0e8dd491098c1ae4f42810\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bhdgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:17Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.283536 4782 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8lwfx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1edc5703-bb51-4f8a-9b73-68ba48a40ce8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e2a2f7b1b3c3236d77b4858e19cacf5554526262f0fc5703460a169d3b26f7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e2a2f7b1b3c3236d77b4858e19cacf5554526262f0fc5703460a169d3b26f7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d41d28
339830a5090ce01715a5bd4d985c3081932cb71594b1e928b900dc353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d41d28339830a5090ce01715a5bd4d985c3081932cb71594b1e928b900dc353\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://615aa0c2b172d7e7ab8a484a6860646491265e3ebb25f57857c45bc591d40335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://615aa0c2b172d7e7ab8a484a6860646491265e3ebb25f57857c45bc591d40335\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://143776c340b6d3345f6902124169235fcf74289d9e107b8e6bdb5023f47b73df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://143776c340b6d3345f6902124169235fcf74289d9e107b8e6bdb5023f47b73df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://754a40bbe5b071214dbd49089838b58b26d7586fdd8479a6f8522ef49c03e255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8lwfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:17Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.295263 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.295302 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.295313 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.295329 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.295340 
4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:17Z","lastTransitionTime":"2026-02-02T10:39:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.371507 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aebafb5717a3d34f83e5f1ba346c51bccea6c08fb6ddc2fadc47d8e1a4df1bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://643eee6982fcc4c02b72fb6daf1a18fc5ae263ebf8ac506d1fa745d6c311b38a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:17Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.384440 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fptzv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa0a3c57-fe47-43dd-8905-00df4cae4fb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb00c5826e8950791c82faf09cb4417f633c04908afcaed816c63b81c05aae48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6np9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fptzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:17Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.398146 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.398217 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.398236 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.398256 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.398268 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:17Z","lastTransitionTime":"2026-02-02T10:39:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.405537 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-thvm5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70faa63d-a86d-45aa-b6fd-81fa90436da2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb015f1ff28b0f28114d4c5d3c643fdb9af2c24d6d3c4a3f34c051677c815e3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrknp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-thvm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:17Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.448527 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59347d2e-80ed-46da-8b2f-87cc48c5b564\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7303c0a57d425d661744c913b418fb647290581f745f973af1507563fa70b860\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10b2c4d81a795c40e715af0855002f75e3018dc14560eb97b551186a48c52744\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://667df661cba8be98b22059905ac9fe87f5d2af093d3184cfead863474441574f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfe76e80657cb875a6dd9b9c977e6120a670fc5
cd1dad2e0346b21a467b00f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58cd2e688da72a74340b25a755164b61039f7615ccda8b24b12b2b4025f89584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:17Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.486836 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b6dd1dee15020005db2d8655165e09c72a90eaade00a66c0742c0980ccc669e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:17Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.501143 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.501184 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.501193 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.501208 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.501222 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:17Z","lastTransitionTime":"2026-02-02T10:39:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.508073 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:17Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.524682 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://959c7dca46e7a68f4472c3aa2104c49d4e47190f0fab5c7ad2225e27e5c4585a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:17Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.544925 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2642ee4e-c16a-4e6e-9654-a67666f1bff8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7728a9bcd84ff5e2fad4910e321a0461ab043b453efe05ddc36d018dba38315a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://371a331afaa5bd7d90cd98ba3363865d54d8cac621ba37a40e51f0cddb4eb02b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6886dd3d4ca2fac733bc3cc105d17aff972828ba91abf5373769f272e5454ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\
"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://189c170f233964d95aebd085e016304a66076971b44f050d41d755305d11743f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://540e6587db608dea5be7483bfc3a5c4d44da51ecf3e9bfe12550aa8bf5a74fac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7645944a876f5f8139960a66e91d19644df23b8bfbdb64c9d14eec7298ac150\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started
\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d829a651d8db6d5abb04fc40c07ae057cc69d7a352de1d104e039e86f64c0b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadO
nly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://344eacf0d6239238cbf13b94f80d88a36589057a177a0f7a3d5629f7975c02d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-prbrn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:17Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.558932 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfc52d94-656d-4294-b105-0f83d22c9664\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66dc97daee138ae6c8403d57638f6bd3cc1c5a08ce7e1631129017dc7046ebc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9de866cf92cbd982802a4bfa9c7f6b9839c69efb72d01f3f52e65e2355d47ded\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a81a5434992af16888e8ed5ec9555e57dd4485dd6c396816421ad07c181dae8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b089d9f482be084f3c2be3bc369e38cef6607d91af0c82b338da5c29883f6057\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b089d9f482be084f3c2be3bc369e38cef6607d91af0c82b338da5c29883f6057\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 10:39:00.270088 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:39:00.270415 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:39:00.271477 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3666386151/tls.crt::/tmp/serving-cert-3666386151/tls.key\\\\\\\"\\\\nI0202 10:39:00.760414 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:39:00.783275 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:39:00.783307 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:39:00.783330 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:39:00.783335 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 10:39:00.810370 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:39:00.810474 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810498 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 10:39:00.810539 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:39:00.810558 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:39:00.810577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 10:39:00.810783 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 10:39:00.814783 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26b0186a9067ba0a405e849b58fc2f39e93d443ddf69ebd0d4fd4877f45e805\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:17Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.603502 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.603710 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.603720 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.603733 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.603743 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:17Z","lastTransitionTime":"2026-02-02T10:39:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.706346 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.706686 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.706794 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.706911 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.707036 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:17Z","lastTransitionTime":"2026-02-02T10:39:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.782406 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 01:33:41.271003033 +0000 UTC Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.810311 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.810509 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.810607 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.810709 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.810805 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:17Z","lastTransitionTime":"2026-02-02T10:39:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.820620 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:39:17 crc kubenswrapper[4782]: E0202 10:39:17.820744 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.913609 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.913680 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.913697 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.913712 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:17 crc kubenswrapper[4782]: I0202 10:39:17.913722 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:17Z","lastTransitionTime":"2026-02-02T10:39:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:18 crc kubenswrapper[4782]: I0202 10:39:18.016407 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:18 crc kubenswrapper[4782]: I0202 10:39:18.016803 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:18 crc kubenswrapper[4782]: I0202 10:39:18.016869 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:18 crc kubenswrapper[4782]: I0202 10:39:18.016936 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:18 crc kubenswrapper[4782]: I0202 10:39:18.016998 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:18Z","lastTransitionTime":"2026-02-02T10:39:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 02 10:39:18 crc kubenswrapper[4782]: I0202 10:39:18.073621 4782 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 02 10:39:18 crc kubenswrapper[4782]: I0202 10:39:18.119113 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:39:18 crc kubenswrapper[4782]: I0202 10:39:18.119301 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:39:18 crc kubenswrapper[4782]: I0202 10:39:18.119358 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:39:18 crc kubenswrapper[4782]: I0202 10:39:18.119418 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:39:18 crc kubenswrapper[4782]: I0202 10:39:18.119497 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:18Z","lastTransitionTime":"2026-02-02T10:39:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:39:18 crc kubenswrapper[4782]: I0202 10:39:18.222292 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:39:18 crc kubenswrapper[4782]: I0202 10:39:18.222351 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:39:18 crc kubenswrapper[4782]: I0202 10:39:18.222369 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:39:18 crc kubenswrapper[4782]: I0202 10:39:18.222394 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:39:18 crc kubenswrapper[4782]: I0202 10:39:18.222419 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:18Z","lastTransitionTime":"2026-02-02T10:39:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:39:18 crc kubenswrapper[4782]: I0202 10:39:18.325186 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:39:18 crc kubenswrapper[4782]: I0202 10:39:18.325246 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:39:18 crc kubenswrapper[4782]: I0202 10:39:18.325262 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:39:18 crc kubenswrapper[4782]: I0202 10:39:18.325284 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:39:18 crc kubenswrapper[4782]: I0202 10:39:18.325304 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:18Z","lastTransitionTime":"2026-02-02T10:39:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:39:18 crc kubenswrapper[4782]: I0202 10:39:18.428093 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:39:18 crc kubenswrapper[4782]: I0202 10:39:18.428156 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:39:18 crc kubenswrapper[4782]: I0202 10:39:18.428171 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:39:18 crc kubenswrapper[4782]: I0202 10:39:18.428192 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:39:18 crc kubenswrapper[4782]: I0202 10:39:18.428207 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:18Z","lastTransitionTime":"2026-02-02T10:39:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:39:18 crc kubenswrapper[4782]: I0202 10:39:18.530896 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:39:18 crc kubenswrapper[4782]: I0202 10:39:18.531204 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:39:18 crc kubenswrapper[4782]: I0202 10:39:18.531365 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:39:18 crc kubenswrapper[4782]: I0202 10:39:18.531521 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:39:18 crc kubenswrapper[4782]: I0202 10:39:18.531685 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:18Z","lastTransitionTime":"2026-02-02T10:39:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:39:18 crc kubenswrapper[4782]: I0202 10:39:18.634285 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:39:18 crc kubenswrapper[4782]: I0202 10:39:18.634335 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:39:18 crc kubenswrapper[4782]: I0202 10:39:18.634344 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:39:18 crc kubenswrapper[4782]: I0202 10:39:18.634360 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:39:18 crc kubenswrapper[4782]: I0202 10:39:18.634370 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:18Z","lastTransitionTime":"2026-02-02T10:39:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:39:18 crc kubenswrapper[4782]: I0202 10:39:18.736954 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:39:18 crc kubenswrapper[4782]: I0202 10:39:18.736993 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:39:18 crc kubenswrapper[4782]: I0202 10:39:18.737004 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:39:18 crc kubenswrapper[4782]: I0202 10:39:18.737018 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:39:18 crc kubenswrapper[4782]: I0202 10:39:18.737029 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:18Z","lastTransitionTime":"2026-02-02T10:39:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:39:18 crc kubenswrapper[4782]: I0202 10:39:18.783766 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 05:06:37.58175399 +0000 UTC
Feb 02 10:39:18 crc kubenswrapper[4782]: I0202 10:39:18.820458 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 10:39:18 crc kubenswrapper[4782]: I0202 10:39:18.820511 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 10:39:18 crc kubenswrapper[4782]: E0202 10:39:18.820575 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 02 10:39:18 crc kubenswrapper[4782]: E0202 10:39:18.820628 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 02 10:39:18 crc kubenswrapper[4782]: I0202 10:39:18.839350 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:39:18 crc kubenswrapper[4782]: I0202 10:39:18.839382 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:39:18 crc kubenswrapper[4782]: I0202 10:39:18.839583 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:39:18 crc kubenswrapper[4782]: I0202 10:39:18.839595 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:39:18 crc kubenswrapper[4782]: I0202 10:39:18.839604 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:18Z","lastTransitionTime":"2026-02-02T10:39:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:39:18 crc kubenswrapper[4782]: I0202 10:39:18.941585 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:39:18 crc kubenswrapper[4782]: I0202 10:39:18.941616 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:39:18 crc kubenswrapper[4782]: I0202 10:39:18.941624 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:39:18 crc kubenswrapper[4782]: I0202 10:39:18.941651 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:39:18 crc kubenswrapper[4782]: I0202 10:39:18.941662 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:18Z","lastTransitionTime":"2026-02-02T10:39:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:19 crc kubenswrapper[4782]: I0202 10:39:19.043371 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:19 crc kubenswrapper[4782]: I0202 10:39:19.043410 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:19 crc kubenswrapper[4782]: I0202 10:39:19.043419 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:19 crc kubenswrapper[4782]: I0202 10:39:19.043432 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:19 crc kubenswrapper[4782]: I0202 10:39:19.043441 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:19Z","lastTransitionTime":"2026-02-02T10:39:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:19 crc kubenswrapper[4782]: I0202 10:39:19.081850 4782 generic.go:334] "Generic (PLEG): container finished" podID="1edc5703-bb51-4f8a-9b73-68ba48a40ce8" containerID="754a40bbe5b071214dbd49089838b58b26d7586fdd8479a6f8522ef49c03e255" exitCode=0 Feb 02 10:39:19 crc kubenswrapper[4782]: I0202 10:39:19.081937 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-8lwfx" event={"ID":"1edc5703-bb51-4f8a-9b73-68ba48a40ce8","Type":"ContainerDied","Data":"754a40bbe5b071214dbd49089838b58b26d7586fdd8479a6f8522ef49c03e255"} Feb 02 10:39:19 crc kubenswrapper[4782]: I0202 10:39:19.082020 4782 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 02 10:39:19 crc kubenswrapper[4782]: I0202 10:39:19.096527 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:19Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:19 crc kubenswrapper[4782]: I0202 10:39:19.110222 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:19Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:19 crc kubenswrapper[4782]: I0202 10:39:19.124279 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fsqgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04d9744a-e730-45b4-9f0c-bbb5b02cd311\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9855ee45d6787f239c2099e99544f50b4939dc9037eb5f4cfe36af9a0bed8937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7nrfs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fsqgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:19Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:19 crc kubenswrapper[4782]: I0202 10:39:19.135854 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83b81d23-4cbf-48f2-a90d-aa5b4cb3ce45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://638971b143e4defe2f04fba48e554aff3023d52863a0eb6f36c29d394de7eeb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18f8ad22dcd04bba0dd895d584affcd359ac6221153a18ee542dee979b9efe25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\"
:\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3c260f298316c830b06f5e3634cd6c09bd10bbaad77b46e427eb4b039eebb53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://745ed5274a247d61c320043043077f9ecb39fa3734db15aefa07a3bbb6225c26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:19Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:19 crc kubenswrapper[4782]: I0202 10:39:19.145488 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:19 crc kubenswrapper[4782]: I0202 10:39:19.145541 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:19 crc kubenswrapper[4782]: I0202 10:39:19.145555 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:19 crc kubenswrapper[4782]: I0202 10:39:19.145570 4782 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:19 crc kubenswrapper[4782]: I0202 10:39:19.145601 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:19Z","lastTransitionTime":"2026-02-02T10:39:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:19 crc kubenswrapper[4782]: I0202 10:39:19.147634 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7919e98f-cc47-4f3c-9c53-6313850ea7b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://550747a1eaba426f3a6b264d3a5227e51b0171729134224fa76ba55d8ad47c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://362bf5c8cbdc4831ca6ceecf9323bad1217467a40b0e8dd491098c1ae4f42810\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfn4s\\\",\\\"readOnly
\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bhdgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:19Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:19 crc kubenswrapper[4782]: I0202 10:39:19.162624 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8lwfx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1edc5703-bb51-4f8a-9b73-68ba48a40ce8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e2a2f7b1b3c3236d77b4858e19cacf5554526262f0fc5703460a169d3b26f7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e2a2f7b1b3c3236d77b4858e19cacf5554526262f0fc5703460a169d3b26f7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d41d28339830a5090ce01715a5bd4d985c3081932cb71594b1e928b900dc353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d41d28339830a5090ce01715a5bd4d985c3081932cb71594b1e928b900dc353\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://615aa0c2b172d7e7ab8a484a6860646491265e3ebb25f57857c45bc591d40335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://615aa0c2b172d7e7ab8a484a6860646491265e3ebb25f57857c45bc591d40335\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://143776c340b6d3345f6902124169235fcf74289d9e107b8e6bdb5023f47b73df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://143776c340b6d3345f6902124169235fcf74289d9e107b8e6bdb5023f47b73df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://754a40bbe5b071214dbd49089838b58b26d7586fdd8479a6f8522ef49c03e255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://754a40bbe5b071214dbd49089838b58b26d7586fdd8479a6f8522ef49c03e255\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8lwfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:19Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:19 crc kubenswrapper[4782]: I0202 10:39:19.182436 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59347d2e-80ed-46da-8b2f-87cc48c5b564\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7303c0a57d425d661744c913b418fb647290581f745f973af1507563fa70b860\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10b2c4d81a795c40e715af0855002f75e3018dc14560eb97b551186a48c52744\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://667df661cba8be98b22059905ac9fe87f5d2af093d3184cfead863474441574f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfe76e80657cb875a6dd9b9c977e6120a670fc5
cd1dad2e0346b21a467b00f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58cd2e688da72a74340b25a755164b61039f7615ccda8b24b12b2b4025f89584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:19Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:19 crc kubenswrapper[4782]: I0202 10:39:19.196240 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b6dd1dee15020005db2d8655165e09c72a90eaade00a66c0742c0980ccc669e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:19Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:19 crc kubenswrapper[4782]: I0202 10:39:19.208969 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aebafb5717a3d34f83e5f1ba346c51bccea6c08fb6ddc2fadc47d8e1a4df1bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://643eee6982fcc4c02b72fb6daf1a18fc5ae263ebf8ac506d1fa745d6c311b38a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-02T10:39:19Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:19 crc kubenswrapper[4782]: I0202 10:39:19.219410 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fptzv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa0a3c57-fe47-43dd-8905-00df4cae4fb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb00c5826e8950791c82faf09cb4417f633c04908afcaed816c63b81c05aae48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6np9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fptzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:19Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:19 crc kubenswrapper[4782]: I0202 10:39:19.231798 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-thvm5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"70faa63d-a86d-45aa-b6fd-81fa90436da2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb015f1ff28b0f28114d4c5d3c643fdb9af2c24d6d3c4a3f34c051677c815e3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrknp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-thvm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:19Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:19 crc kubenswrapper[4782]: I0202 10:39:19.247394 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:19 crc kubenswrapper[4782]: I0202 10:39:19.247461 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:19 crc kubenswrapper[4782]: I0202 10:39:19.247470 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:19 crc kubenswrapper[4782]: I0202 10:39:19.247483 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:19 crc kubenswrapper[4782]: I0202 10:39:19.247492 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:19Z","lastTransitionTime":"2026-02-02T10:39:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:19 crc kubenswrapper[4782]: I0202 10:39:19.251166 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfc52d94-656d-4294-b105-0f83d22c9664\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66dc97daee138ae6c8403d57638f6bd3cc1c5a08ce7e1631129017dc7046ebc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9de866cf92cbd982802a4bfa9c7f6b9839c69efb72d01f3f52e65e2355d47ded\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a81a5434992af16888e8ed5ec9555e57dd4485dd6c396816421ad07c181dae8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f
7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b089d9f482be084f3c2be3bc369e38cef6607d91af0c82b338da5c29883f6057\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b089d9f482be084f3c2be3bc369e38cef6607d91af0c82b338da5c29883f6057\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 10:39:00.270088 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:39:00.270415 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:39:00.271477 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3666386151/tls.crt::/tmp/serving-cert-3666386151/tls.key\\\\\\\"\\\\nI0202 10:39:00.760414 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:39:00.783275 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:39:00.783307 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:39:00.783330 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:39:00.783335 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 10:39:00.810370 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:39:00.810474 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810498 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 10:39:00.810539 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:39:00.810558 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:39:00.810577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 10:39:00.810783 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 10:39:00.814783 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26b0186a9067ba0a405e849b58fc2f39e93d443ddf69ebd0d4fd4877f45e805\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:19Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:19 crc kubenswrapper[4782]: I0202 10:39:19.263031 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:19Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:19 crc kubenswrapper[4782]: I0202 10:39:19.275382 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://959c7dca46e7a68f4472c3aa2104c49d4e47190f0fab5c7ad2225e27e5c4585a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:19Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:19 crc kubenswrapper[4782]: I0202 10:39:19.292310 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2642ee4e-c16a-4e6e-9654-a67666f1bff8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7728a9bcd84ff5e2fad4910e321a0461ab043b453efe05ddc36d018dba38315a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://371a331afaa5bd7d90cd98ba3363865d54d8cac621ba37a40e51f0cddb4eb02b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics
-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6886dd3d4ca2fac733bc3cc105d17aff972828ba91abf5373769f272e5454ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://189c170f233964d95aebd085e016304a66076971b44f050d41d755305d11743f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://540e6587db608dea5be7483bfc3a5c4d44da51ecf3e9bfe12550aa8bf5a74fac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabl
ed\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7645944a876f5f8139960a66e91d19644df23b8bfbdb64c9d14eec7298ac150\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d829a651d8db6d5abb04fc40c07ae057cc69d7a352de1d104e039e86f64c0b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"na
me\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://344eacf0d6239238cbf13b94f80d88a36589057a177a0f7a3d5629f7975c02d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-prbrn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:19Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:19 crc kubenswrapper[4782]: I0202 10:39:19.350668 4782 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:19 crc kubenswrapper[4782]: I0202 10:39:19.350698 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:19 crc kubenswrapper[4782]: I0202 10:39:19.350707 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:19 crc kubenswrapper[4782]: I0202 10:39:19.350720 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:19 crc kubenswrapper[4782]: I0202 10:39:19.350729 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:19Z","lastTransitionTime":"2026-02-02T10:39:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:19 crc kubenswrapper[4782]: I0202 10:39:19.453078 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:19 crc kubenswrapper[4782]: I0202 10:39:19.453120 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:19 crc kubenswrapper[4782]: I0202 10:39:19.453131 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:19 crc kubenswrapper[4782]: I0202 10:39:19.453144 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:19 crc kubenswrapper[4782]: I0202 10:39:19.453153 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:19Z","lastTransitionTime":"2026-02-02T10:39:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:19 crc kubenswrapper[4782]: I0202 10:39:19.555919 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:19 crc kubenswrapper[4782]: I0202 10:39:19.555954 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:19 crc kubenswrapper[4782]: I0202 10:39:19.555963 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:19 crc kubenswrapper[4782]: I0202 10:39:19.555978 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:19 crc kubenswrapper[4782]: I0202 10:39:19.555987 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:19Z","lastTransitionTime":"2026-02-02T10:39:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:19 crc kubenswrapper[4782]: I0202 10:39:19.659023 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:19 crc kubenswrapper[4782]: I0202 10:39:19.659067 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:19 crc kubenswrapper[4782]: I0202 10:39:19.659078 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:19 crc kubenswrapper[4782]: I0202 10:39:19.659094 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:19 crc kubenswrapper[4782]: I0202 10:39:19.659106 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:19Z","lastTransitionTime":"2026-02-02T10:39:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:19 crc kubenswrapper[4782]: I0202 10:39:19.761522 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:19 crc kubenswrapper[4782]: I0202 10:39:19.761569 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:19 crc kubenswrapper[4782]: I0202 10:39:19.761578 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:19 crc kubenswrapper[4782]: I0202 10:39:19.761591 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:19 crc kubenswrapper[4782]: I0202 10:39:19.761601 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:19Z","lastTransitionTime":"2026-02-02T10:39:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:19 crc kubenswrapper[4782]: I0202 10:39:19.783923 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 01:15:05.083429233 +0000 UTC Feb 02 10:39:19 crc kubenswrapper[4782]: I0202 10:39:19.820512 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:39:19 crc kubenswrapper[4782]: E0202 10:39:19.820709 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:39:19 crc kubenswrapper[4782]: I0202 10:39:19.864085 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:19 crc kubenswrapper[4782]: I0202 10:39:19.864121 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:19 crc kubenswrapper[4782]: I0202 10:39:19.864129 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:19 crc kubenswrapper[4782]: I0202 10:39:19.864141 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:19 crc kubenswrapper[4782]: I0202 10:39:19.864152 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:19Z","lastTransitionTime":"2026-02-02T10:39:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:19 crc kubenswrapper[4782]: I0202 10:39:19.966271 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:19 crc kubenswrapper[4782]: I0202 10:39:19.966305 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:19 crc kubenswrapper[4782]: I0202 10:39:19.966314 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:19 crc kubenswrapper[4782]: I0202 10:39:19.966327 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:19 crc kubenswrapper[4782]: I0202 10:39:19.966336 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:19Z","lastTransitionTime":"2026-02-02T10:39:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:20 crc kubenswrapper[4782]: I0202 10:39:20.068766 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:20 crc kubenswrapper[4782]: I0202 10:39:20.069109 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:20 crc kubenswrapper[4782]: I0202 10:39:20.069123 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:20 crc kubenswrapper[4782]: I0202 10:39:20.069138 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:20 crc kubenswrapper[4782]: I0202 10:39:20.069149 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:20Z","lastTransitionTime":"2026-02-02T10:39:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:20 crc kubenswrapper[4782]: I0202 10:39:20.088221 4782 generic.go:334] "Generic (PLEG): container finished" podID="1edc5703-bb51-4f8a-9b73-68ba48a40ce8" containerID="1e2e1801b719e8d922f05eab190ff758e85d0d05f4e26f8b8a9d5812ac5ffe6d" exitCode=0 Feb 02 10:39:20 crc kubenswrapper[4782]: I0202 10:39:20.088275 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-8lwfx" event={"ID":"1edc5703-bb51-4f8a-9b73-68ba48a40ce8","Type":"ContainerDied","Data":"1e2e1801b719e8d922f05eab190ff758e85d0d05f4e26f8b8a9d5812ac5ffe6d"} Feb 02 10:39:20 crc kubenswrapper[4782]: I0202 10:39:20.108977 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59347d2e-80ed-46da-8b2f-87cc48c5b564\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7303c0a57d425d661744c913b418fb647290581f745f973af1507563fa70b860\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10b2c4d81a795c40e715af0855002f75e3018dc14560eb97b551186a48c52744\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://667df661cba8be98b22059905ac9fe87f5d2af093d3184cfead863474441574f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfe76e80657cb875a6dd9b9c977e6120a670fc5
cd1dad2e0346b21a467b00f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58cd2e688da72a74340b25a755164b61039f7615ccda8b24b12b2b4025f89584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:20Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:20 crc kubenswrapper[4782]: I0202 10:39:20.125588 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b6dd1dee15020005db2d8655165e09c72a90eaade00a66c0742c0980ccc669e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:20Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:20 crc kubenswrapper[4782]: I0202 10:39:20.138514 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aebafb5717a3d34f83e5f1ba346c51bccea6c08fb6ddc2fadc47d8e1a4df1bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://643eee6982fcc4c02b72fb6daf1a18fc5ae263ebf8ac506d1fa745d6c311b38a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-02T10:39:20Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:20 crc kubenswrapper[4782]: I0202 10:39:20.150380 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fptzv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa0a3c57-fe47-43dd-8905-00df4cae4fb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb00c5826e8950791c82faf09cb4417f633c04908afcaed816c63b81c05aae48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6np9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fptzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:20Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:20 crc kubenswrapper[4782]: I0202 10:39:20.160323 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-thvm5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"70faa63d-a86d-45aa-b6fd-81fa90436da2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb015f1ff28b0f28114d4c5d3c643fdb9af2c24d6d3c4a3f34c051677c815e3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrknp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-thvm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:20Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:20 crc kubenswrapper[4782]: I0202 10:39:20.171464 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:20 crc kubenswrapper[4782]: I0202 10:39:20.171516 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:20 crc kubenswrapper[4782]: I0202 10:39:20.171528 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:20 crc kubenswrapper[4782]: I0202 10:39:20.171544 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:20 crc kubenswrapper[4782]: I0202 10:39:20.171555 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:20Z","lastTransitionTime":"2026-02-02T10:39:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:20 crc kubenswrapper[4782]: I0202 10:39:20.175133 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfc52d94-656d-4294-b105-0f83d22c9664\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66dc97daee138ae6c8403d57638f6bd3cc1c5a08ce7e1631129017dc7046ebc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9de866cf92cbd982802a4bfa9c7f6b9839c69efb72d01f3f52e65e2355d47ded\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a81a5434992af16888e8ed5ec9555e57dd4485dd6c396816421ad07c181dae8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f
7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b089d9f482be084f3c2be3bc369e38cef6607d91af0c82b338da5c29883f6057\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b089d9f482be084f3c2be3bc369e38cef6607d91af0c82b338da5c29883f6057\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 10:39:00.270088 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:39:00.270415 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:39:00.271477 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3666386151/tls.crt::/tmp/serving-cert-3666386151/tls.key\\\\\\\"\\\\nI0202 10:39:00.760414 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:39:00.783275 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:39:00.783307 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:39:00.783330 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:39:00.783335 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 10:39:00.810370 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:39:00.810474 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810498 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 10:39:00.810539 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:39:00.810558 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:39:00.810577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 10:39:00.810783 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 10:39:00.814783 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26b0186a9067ba0a405e849b58fc2f39e93d443ddf69ebd0d4fd4877f45e805\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:20Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:20 crc kubenswrapper[4782]: I0202 10:39:20.187567 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:20Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:20 crc kubenswrapper[4782]: I0202 10:39:20.199169 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://959c7dca46e7a68f4472c3aa2104c49d4e47190f0fab5c7ad2225e27e5c4585a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:20Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:20 crc kubenswrapper[4782]: I0202 10:39:20.219543 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2642ee4e-c16a-4e6e-9654-a67666f1bff8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7728a9bcd84ff5e2fad4910e321a0461ab043b453efe05ddc36d018dba38315a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://371a331afaa5bd7d90cd98ba3363865d54d8cac621ba37a40e51f0cddb4eb02b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics
-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6886dd3d4ca2fac733bc3cc105d17aff972828ba91abf5373769f272e5454ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://189c170f233964d95aebd085e016304a66076971b44f050d41d755305d11743f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://540e6587db608dea5be7483bfc3a5c4d44da51ecf3e9bfe12550aa8bf5a74fac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabl
ed\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7645944a876f5f8139960a66e91d19644df23b8bfbdb64c9d14eec7298ac150\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d829a651d8db6d5abb04fc40c07ae057cc69d7a352de1d104e039e86f64c0b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"na
me\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://344eacf0d6239238cbf13b94f80d88a36589057a177a0f7a3d5629f7975c02d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-prbrn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:20Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:20 crc kubenswrapper[4782]: I0202 10:39:20.233441 4782 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:20Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:20 crc kubenswrapper[4782]: I0202 10:39:20.246672 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:20Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:20 crc kubenswrapper[4782]: I0202 10:39:20.261876 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fsqgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04d9744a-e730-45b4-9f0c-bbb5b02cd311\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9855ee45d6787f239c2099e99544f50b4939dc9037eb5f4cfe36af9a0bed8937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\
\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7nrfs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fsqgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:20Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:20 crc kubenswrapper[4782]: I0202 10:39:20.273614 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:20 crc kubenswrapper[4782]: I0202 10:39:20.273806 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:20 crc kubenswrapper[4782]: I0202 10:39:20.273819 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:20 crc kubenswrapper[4782]: I0202 10:39:20.273833 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:20 crc kubenswrapper[4782]: I0202 10:39:20.273842 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:20Z","lastTransitionTime":"2026-02-02T10:39:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:20 crc kubenswrapper[4782]: I0202 10:39:20.277142 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83b81d23-4cbf-48f2-a90d-aa5b4cb3ce45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://638971b143e4defe2f04fba48e554aff3023d52863a0eb6f36c29d394de7eeb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18f8ad22dcd04bba0dd895d584affcd359ac6221153a18ee542dee979b9efe25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3c260f298316c830b06f5e3634cd6c09bd10bbaad77b46e427eb4b039eebb53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://745ed5274a247d61c320043043077f9ecb39fa3734db15aefa07a3bbb6225c26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:20Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:20 crc kubenswrapper[4782]: I0202 10:39:20.288076 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7919e98f-cc47-4f3c-9c53-6313850ea7b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://550747a1eaba426f3a6b264d3a5227e51b0171729134224fa76ba55d8ad47c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://362bf5c8cbdc4831ca6ceecf9323bad1217467a40b0e8dd491098c1ae4f42810\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bhdgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:20Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:20 crc kubenswrapper[4782]: I0202 10:39:20.304980 4782 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8lwfx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1edc5703-bb51-4f8a-9b73-68ba48a40ce8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e2a2f7b1b3c3236d77b4858e19cacf5554526262f0fc5703460a169d3b26f7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e2a2f7b1b3c3236d77b4858e19cacf5554526262f0fc5703460a169d3b26f7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d41d28339830a5090ce01715a5bd4d985c3081932cb71594b1e928b900dc353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a168
8df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d41d28339830a5090ce01715a5bd4d985c3081932cb71594b1e928b900dc353\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://615aa0c2b172d7e7ab8a484a6860646491265e3ebb25f57857c45bc591d40335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://615aa0c2b172d7e7ab8a484a6860646491265e3ebb25f57857c45bc591d40335\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://143776c340b6d3345f6902124169235fcf74289d9e107b8e6bdb5023f47b73df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://143776c340b6d3345f6902124169235fcf74289d9e107b8e6bdb5023f47b73df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"
/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://754a40bbe5b071214dbd49089838b58b26d7586fdd8479a6f8522ef49c03e255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://754a40bbe5b071214dbd49089838b58b26d7586fdd8479a6f8522ef49c03e255\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e2e1801b719e8d922f05eab190ff758e85d0d05f4e26f8b8a9d5812ac5ffe6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e2e1801b719e8d922f05eab190ff758e85d0d05f4e26f8b8a9d5812ac5ffe6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8lwfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:20Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:20 crc kubenswrapper[4782]: I0202 10:39:20.376160 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:20 crc kubenswrapper[4782]: I0202 10:39:20.376190 4782 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:39:20 crc kubenswrapper[4782]: I0202 10:39:20.376199 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:39:20 crc kubenswrapper[4782]: I0202 10:39:20.376212 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:39:20 crc kubenswrapper[4782]: I0202 10:39:20.376222 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:20Z","lastTransitionTime":"2026-02-02T10:39:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:39:20 crc kubenswrapper[4782]: I0202 10:39:20.478137 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:39:20 crc kubenswrapper[4782]: I0202 10:39:20.478187 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:39:20 crc kubenswrapper[4782]: I0202 10:39:20.478203 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:39:20 crc kubenswrapper[4782]: I0202 10:39:20.478218 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:39:20 crc kubenswrapper[4782]: I0202 10:39:20.478227 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:20Z","lastTransitionTime":"2026-02-02T10:39:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:39:20 crc kubenswrapper[4782]: I0202 10:39:20.580052 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:39:20 crc kubenswrapper[4782]: I0202 10:39:20.580297 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:39:20 crc kubenswrapper[4782]: I0202 10:39:20.580390 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:39:20 crc kubenswrapper[4782]: I0202 10:39:20.580482 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:39:20 crc kubenswrapper[4782]: I0202 10:39:20.580617 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:20Z","lastTransitionTime":"2026-02-02T10:39:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:39:20 crc kubenswrapper[4782]: I0202 10:39:20.686687 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:39:20 crc kubenswrapper[4782]: I0202 10:39:20.686727 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:39:20 crc kubenswrapper[4782]: I0202 10:39:20.686738 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:39:20 crc kubenswrapper[4782]: I0202 10:39:20.686774 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:39:20 crc kubenswrapper[4782]: I0202 10:39:20.686786 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:20Z","lastTransitionTime":"2026-02-02T10:39:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:39:20 crc kubenswrapper[4782]: I0202 10:39:20.784307 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 05:06:26.357072927 +0000 UTC
Feb 02 10:39:20 crc kubenswrapper[4782]: I0202 10:39:20.789106 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:39:20 crc kubenswrapper[4782]: I0202 10:39:20.789140 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:39:20 crc kubenswrapper[4782]: I0202 10:39:20.789151 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:39:20 crc kubenswrapper[4782]: I0202 10:39:20.789166 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:39:20 crc kubenswrapper[4782]: I0202 10:39:20.789178 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:20Z","lastTransitionTime":"2026-02-02T10:39:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:39:20 crc kubenswrapper[4782]: I0202 10:39:20.820537 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 10:39:20 crc kubenswrapper[4782]: I0202 10:39:20.820675 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 10:39:20 crc kubenswrapper[4782]: E0202 10:39:20.820763 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:39:20 crc kubenswrapper[4782]: E0202 10:39:20.820952 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:39:20 crc kubenswrapper[4782]: I0202 10:39:20.834526 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:20Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:20 crc kubenswrapper[4782]: I0202 10:39:20.845585 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:20Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:20 crc kubenswrapper[4782]: I0202 10:39:20.861376 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fsqgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04d9744a-e730-45b4-9f0c-bbb5b02cd311\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9855ee45d6787f239c2099e99544f50b4939dc9037eb5f4cfe36af9a0bed8937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7nrfs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fsqgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:20Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:20 crc kubenswrapper[4782]: I0202 10:39:20.878277 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83b81d23-4cbf-48f2-a90d-aa5b4cb3ce45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://638971b143e4defe2f04fba48e554aff3023d52863a0eb6f36c29d394de7eeb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18f8ad22dcd04bba0dd895d584affcd359ac6221153a18ee542dee979b9efe25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\"
:\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3c260f298316c830b06f5e3634cd6c09bd10bbaad77b46e427eb4b039eebb53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://745ed5274a247d61c320043043077f9ecb39fa3734db15aefa07a3bbb6225c26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:20Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:20 crc kubenswrapper[4782]: I0202 10:39:20.892534 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:20 crc kubenswrapper[4782]: I0202 10:39:20.892591 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:20 crc kubenswrapper[4782]: I0202 10:39:20.892607 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:20 crc kubenswrapper[4782]: I0202 10:39:20.892630 4782 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:20 crc kubenswrapper[4782]: I0202 10:39:20.892665 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:20Z","lastTransitionTime":"2026-02-02T10:39:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:20 crc kubenswrapper[4782]: I0202 10:39:20.893947 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7919e98f-cc47-4f3c-9c53-6313850ea7b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://550747a1eaba426f3a6b264d3a5227e51b0171729134224fa76ba55d8ad47c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://362bf5c8cbdc4831ca6ceecf9323bad1217467a40b0e8dd491098c1ae4f42810\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfn4s\\\",\\\"readOnly
\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bhdgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:20Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:20 crc kubenswrapper[4782]: I0202 10:39:20.910602 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8lwfx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1edc5703-bb51-4f8a-9b73-68ba48a40ce8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e2a2f7b1b3c3236d77b4858e19cacf5554526262f0fc5703460a169d3b26f7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e2a2f7b1b3c3236d77b4858e19cacf5554526262f0fc5703460a169d3b26f7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d41d28339830a5090ce01715a5bd4d985c3081932cb71594b1e928b900dc353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d41d28339830a5090ce01715a5bd4d985c3081932cb71594b1e928b900dc353\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://615aa0c2b172d7e7ab8a484a6860646491265e3ebb25f57857c45bc591d40335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://615aa0c2b172d7e7ab8a484a6860646491265e3ebb25f57857c45bc591d40335\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://143776c340b6d3345f6902124169235fcf74289d9e107b8e6bdb5023f47b73df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://143776c340b6d3345f6902124169235fcf74289d9e107b8e6bdb5023f47b73df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://754a40bbe5b071214dbd49089838b58b26d7586fdd8479a6f8522ef49c03e255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://754a40bbe5b071214dbd49089838b58b26d7586fdd8479a6f8522ef49c03e255\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e2e1801b719e8d922f05eab190ff758e85d0d05f4e26f8b8a9d5812ac5ffe6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e2e1801b719e8d922f05eab190ff758e85d0d05f4e26f8b8a9d5812ac5ffe6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8lwfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:20Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:20 crc kubenswrapper[4782]: I0202 10:39:20.931157 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59347d2e-80ed-46da-8b2f-87cc48c5b564\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7303c0a57d425d661744c913b418fb647290581f745f973af1507563fa70b860\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10b2c4d81a795c40e715af0855002f75e3018dc14560eb97b551186a48c52744\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://667df661cba8be98b22059905ac9fe87f5d2af093d3184cfead863474441574f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfe76e80657cb875a6dd9b9c977e6120a670fc5
cd1dad2e0346b21a467b00f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58cd2e688da72a74340b25a755164b61039f7615ccda8b24b12b2b4025f89584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:20Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:20 crc kubenswrapper[4782]: I0202 10:39:20.946846 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b6dd1dee15020005db2d8655165e09c72a90eaade00a66c0742c0980ccc669e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:20Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:20 crc kubenswrapper[4782]: I0202 10:39:20.966197 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aebafb5717a3d34f83e5f1ba346c51bccea6c08fb6ddc2fadc47d8e1a4df1bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://643eee6982fcc4c02b72fb6daf1a18fc5ae263ebf8ac506d1fa745d6c311b38a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-02T10:39:20Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:20 crc kubenswrapper[4782]: I0202 10:39:20.985513 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fptzv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa0a3c57-fe47-43dd-8905-00df4cae4fb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb00c5826e8950791c82faf09cb4417f633c04908afcaed816c63b81c05aae48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6np9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fptzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:20Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:20 crc kubenswrapper[4782]: I0202 10:39:20.995447 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:20 crc kubenswrapper[4782]: I0202 10:39:20.995491 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:20 crc kubenswrapper[4782]: I0202 10:39:20.995507 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:20 crc kubenswrapper[4782]: I0202 10:39:20.995530 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:20 crc kubenswrapper[4782]: I0202 10:39:20.995548 4782 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:20Z","lastTransitionTime":"2026-02-02T10:39:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:20 crc kubenswrapper[4782]: I0202 10:39:20.997452 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-thvm5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70faa63d-a86d-45aa-b6fd-81fa90436da2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb015f1ff28b0f28114d4c5d3c643fdb9af2c24d6d3c4a3f34c051677c815e3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrknp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-thvm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:20Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.014001 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfc52d94-656d-4294-b105-0f83d22c9664\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66dc97daee138ae6c8403d57638f6bd3cc1c5a08ce7e1631129017dc7046ebc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9de866cf92cbd982802a4bfa9c7f6b9839c69efb72d01f3f52e65e2355d47ded\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a81a5434992af16888e8ed5ec9555e57dd4485dd6c396816421ad07c181dae8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b089d9f482be084f3c2be3bc369e38cef6607d91af0c82b338da5c29883f6057\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b089d9f482be084f3c2be3bc369e38cef6607d91af0c82b338da5c29883f6057\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 10:39:00.270088 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:39:00.270415 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:39:00.271477 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3666386151/tls.crt::/tmp/serving-cert-3666386151/tls.key\\\\\\\"\\\\nI0202 10:39:00.760414 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:39:00.783275 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:39:00.783307 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:39:00.783330 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:39:00.783335 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 10:39:00.810370 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:39:00.810474 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810498 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 10:39:00.810539 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:39:00.810558 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:39:00.810577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 10:39:00.810783 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 10:39:00.814783 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26b0186a9067ba0a405e849b58fc2f39e93d443ddf69ebd0d4fd4877f45e805\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:21Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.030252 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:21Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.044495 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://959c7dca46e7a68f4472c3aa2104c49d4e47190f0fab5c7ad2225e27e5c4585a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:21Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.065536 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2642ee4e-c16a-4e6e-9654-a67666f1bff8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7728a9bcd84ff5e2fad4910e321a0461ab043b453efe05ddc36d018dba38315a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://371a331afaa5bd7d90cd98ba3363865d54d8cac621ba37a40e51f0cddb4eb02b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics
-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6886dd3d4ca2fac733bc3cc105d17aff972828ba91abf5373769f272e5454ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://189c170f233964d95aebd085e016304a66076971b44f050d41d755305d11743f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://540e6587db608dea5be7483bfc3a5c4d44da51ecf3e9bfe12550aa8bf5a74fac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabl
ed\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7645944a876f5f8139960a66e91d19644df23b8bfbdb64c9d14eec7298ac150\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d829a651d8db6d5abb04fc40c07ae057cc69d7a352de1d104e039e86f64c0b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"na
me\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://344eacf0d6239238cbf13b94f80d88a36589057a177a0f7a3d5629f7975c02d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-prbrn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:21Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.096340 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-multus/multus-additional-cni-plugins-8lwfx" event={"ID":"1edc5703-bb51-4f8a-9b73-68ba48a40ce8","Type":"ContainerStarted","Data":"2765f9fa77bc99e4983b0d6883a7156c960f2dce2c80845cd1e0810199c50eac"} Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.099353 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.099428 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.099449 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.099476 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.099501 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:21Z","lastTransitionTime":"2026-02-02T10:39:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.110778 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83b81d23-4cbf-48f2-a90d-aa5b4cb3ce45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://638971b143e4defe2f04fba48e554aff3023d52863a0eb6f36c29d394de7eeb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18f8ad22dcd04bba0dd895d584affcd359ac6221153a18ee542dee979b9efe25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\
",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3c260f298316c830b06f5e3634cd6c09bd10bbaad77b46e427eb4b039eebb53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://745ed5274a247d61c320043043077f9ecb39fa3734db15aefa07a3bbb6225c26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:21Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.121587 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7919e98f-cc47-4f3c-9c53-6313850ea7b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://550747a1eaba426f3a6b264d3a5227e51b0171729134224fa76ba55d8ad47c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://362bf5c8cbdc4831ca6ceecf9323bad1217467a40b0e8dd491098c1ae4f42810\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bhdgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:21Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.140809 4782 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8lwfx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1edc5703-bb51-4f8a-9b73-68ba48a40ce8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2765f9fa77bc99e4983b0d6883a7156c960f2dce2c80845cd1e0810199c50eac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e2a2f7b1b3c3236d77b4858e19cacf5554526262f0fc5703460a169d3b26f7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e2a2f7b1b3c3236d77b4858e19cacf5554526262f0fc5703460a169d3b26f7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d41d28339830a5090ce01715a5bd4d985c3081932cb71594b1e928b900dc353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d41d28339830a5090ce01715a5bd4d985c3081932cb71594b1e928b900dc353\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://615aa0c2b172d7e7ab8a484a6860646491265e3ebb25f57857c45bc591d40335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://615aa0c2b172d7e7ab8a484a6860646491265e3ebb25f57857c45bc591d40335\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://143776c340b6d3345f6902124169235fcf74289d9e107b8e6bdb5023f47b73df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://143776c340b6d3345f6902124169235fcf74289d9e107b8e6bdb5023f47b73df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://754a40bbe5b071214dbd49089838b58b26d7586fdd8479a6f8522ef49c03e255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://754a40bbe5b071214dbd49089838b58b26d7586fdd8479a6f8522ef49c03e255\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e2e1801b719e8d922f05eab190ff758e85d0d05f4e26f8b8a9d5812ac5ffe6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e2e1801b719e8d922f05eab190ff758e85d0d05f4e26f8b8a9d5812ac5ffe6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8lwfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:21Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.161664 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59347d2e-80ed-46da-8b2f-87cc48c5b564\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7303c0a57d425d661744c913b418fb647290581f745f973af1507563fa70b860\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10b2c4d81a795c40e715af0855002f75e3018dc14560eb97b551186a48c52744\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://667df661cba8be98b22059905ac9fe87f5d2af093d3184cfead863474441574f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfe76e80657cb875a6dd9b9c977e6120a670fc5
cd1dad2e0346b21a467b00f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58cd2e688da72a74340b25a755164b61039f7615ccda8b24b12b2b4025f89584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:21Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.184161 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b6dd1dee15020005db2d8655165e09c72a90eaade00a66c0742c0980ccc669e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:21Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.197840 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aebafb5717a3d34f83e5f1ba346c51bccea6c08fb6ddc2fadc47d8e1a4df1bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://643eee6982fcc4c02b72fb6daf1a18fc5ae263ebf8ac506d1fa745d6c311b38a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-02T10:39:21Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.201663 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.201830 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.201893 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.201975 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.202031 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:21Z","lastTransitionTime":"2026-02-02T10:39:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.208823 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fptzv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa0a3c57-fe47-43dd-8905-00df4cae4fb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb00c5826e8950791c82faf09cb4417f633c04908afcaed816c63b81c05aae48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6np9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:07Z\\\"}}\" for pod 
\"openshift-dns\"/\"node-resolver-fptzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:21Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.219438 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-thvm5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70faa63d-a86d-45aa-b6fd-81fa90436da2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb015f1ff28b0f28114d4c5d3c643fdb9af2c24d6d3c4a3f34c051677c815e3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrknp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-thvm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:21Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.237799 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfc52d94-656d-4294-b105-0f83d22c9664\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66dc97daee138ae6c8403d57638f6bd3cc1c5a08ce7e1631129017dc7046ebc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9de866cf92cbd982802a4bfa9c7f6b9839c69efb72d01f3f52e65e2355d47ded\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a81a5434992af16888e8ed5ec9555e57dd4485dd6c396816421ad07c181dae8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b089d9f482be084f3c2be3bc369e38cef6607d91af0c82b338da5c29883f6057\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b089d9f482be084f3c2be3bc369e38cef6607d91af0c82b338da5c29883f6057\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 10:39:00.270088 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:39:00.270415 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:39:00.271477 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3666386151/tls.crt::/tmp/serving-cert-3666386151/tls.key\\\\\\\"\\\\nI0202 10:39:00.760414 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:39:00.783275 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:39:00.783307 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:39:00.783330 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:39:00.783335 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 10:39:00.810370 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:39:00.810474 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810498 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 10:39:00.810539 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:39:00.810558 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:39:00.810577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 10:39:00.810783 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 10:39:00.814783 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26b0186a9067ba0a405e849b58fc2f39e93d443ddf69ebd0d4fd4877f45e805\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:21Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.251576 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:21Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.263805 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://959c7dca46e7a68f4472c3aa2104c49d4e47190f0fab5c7ad2225e27e5c4585a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:21Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.281505 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2642ee4e-c16a-4e6e-9654-a67666f1bff8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7728a9bcd84ff5e2fad4910e321a0461ab043b453efe05ddc36d018dba38315a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://371a331afaa5bd7d90cd98ba3363865d54d8cac621ba37a40e51f0cddb4eb02b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics
-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6886dd3d4ca2fac733bc3cc105d17aff972828ba91abf5373769f272e5454ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://189c170f233964d95aebd085e016304a66076971b44f050d41d755305d11743f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://540e6587db608dea5be7483bfc3a5c4d44da51ecf3e9bfe12550aa8bf5a74fac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabl
ed\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7645944a876f5f8139960a66e91d19644df23b8bfbdb64c9d14eec7298ac150\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d829a651d8db6d5abb04fc40c07ae057cc69d7a352de1d104e039e86f64c0b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"na
me\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://344eacf0d6239238cbf13b94f80d88a36589057a177a0f7a3d5629f7975c02d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-prbrn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:21Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.299448 4782 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:21Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.304226 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.304256 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.304265 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.304278 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.304287 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:21Z","lastTransitionTime":"2026-02-02T10:39:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.314269 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:21Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.332509 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fsqgq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"04d9744a-e730-45b4-9f0c-bbb5b02cd311\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9855ee45d6787f239c2099e99544f50b4939dc9037eb5f4cfe36af9a0bed8937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7nrfs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fsqgq\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:21Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.406535 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.406575 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.406587 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.406603 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.406615 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:21Z","lastTransitionTime":"2026-02-02T10:39:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.462072 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x49wn"] Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.462821 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x49wn" Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.466363 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.466659 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.481845 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:21Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.494515 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:21Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.509368 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.509604 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.509760 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.509869 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.509932 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:21Z","lastTransitionTime":"2026-02-02T10:39:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.510237 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fsqgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04d9744a-e730-45b4-9f0c-bbb5b02cd311\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9855ee45d6787f239c2099e99544f50b4939dc9037eb5f4cfe36af9a0bed8937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7nrfs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fsqgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:21Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.524331 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x49wn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"324c55ff-8d31-4452-bb4e-2a57fbdb23c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxkkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxkkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-x49wn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:21Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.537814 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"83b81d23-4cbf-48f2-a90d-aa5b4cb3ce45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://638971b143e4defe2f04fba48e554aff3023d52863a0eb6f36c29d394de7eeb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18f8ad22dcd04bba0dd895d584affcd359ac6221153a18ee542dee979b9efe25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3c260f298316c830b06f5e3634cd6c09bd10bbaad77b46e427eb4b039eebb53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://745ed5274a247d61c320043043077f9ecb39fa3734db15aefa07a3bbb6225c26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:21Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.551660 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7919e98f-cc47-4f3c-9c53-6313850ea7b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://550747a1eaba426f3a6b264d3a5227e51b0171729134224fa76ba55d8ad47c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://362bf5c8cbdc4831ca
6ceecf9323bad1217467a40b0e8dd491098c1ae4f42810\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bhdgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:21Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.565968 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8lwfx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1edc5703-bb51-4f8a-9b73-68ba48a40ce8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2765f9fa77bc99e4983b0d6883a7156c960f2dce2c80845cd1e0810199c50eac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e2a2f7b1b3c3236d77b4858e19cacf5554526262f0fc5703460a169d3b26f7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e2a2f7b1b3c3236d77b4858e19cacf5554526262f0fc5703460a169d3b26f7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d41d28339830a5090ce01715a5bd4d985c3081932cb71594b1e928b900dc353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d41d28339830a5090ce01715a5bd4d985c3081932cb71594b1e928b900dc353\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://615aa0c2b172d7e7ab8a484a6860646491265e3ebb25f57857c45bc591d40335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://615aa0c2b172d7e7ab8a484a6860646491265e3ebb25f57857c45bc591d40335\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\
":\\\"2026-02-02T10:39:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://143776c340b6d3345f6902124169235fcf74289d9e107b8e6bdb5023f47b73df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://143776c340b6d3345f6902124169235fcf74289d9e107b8e6bdb5023f47b73df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://754a40bbe5b071214dbd49089838b58b26d7586fdd8479a6f8522ef49c03e255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://754a40bbe5b071214dbd49089838b58b26d7586fdd8479a6f8522ef49c03e255\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e2e1801b719e8d922f05eab190ff758e85d0d05f4e26f8b8a9d5812ac5ffe6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e2e1801b719e8d922f05eab190ff758e85d0d05f4e26f8b8a9d5812ac5ffe6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8lwfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:21Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.576309 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-thvm5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70faa63d-a86d-45aa-b6fd-81fa90436da2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb015f1ff28b0f28114d4c5d3c643fdb9af2c24d6d3c4a3f34c051677c815e3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrknp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",
\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-thvm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:21Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.595471 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59347d2e-80ed-46da-8b2f-87cc48c5b564\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7303c0a57d425d661744c913b418fb647290581f745f973af1507563fa70b860\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10b2c4d81a795c40e715af0855002f75e3018dc14560eb97b551186a48c52744\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://667df661cba8be98b22059905ac9fe87f5d2af093d3184cfead863474441574f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa372326
9019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfe76e80657cb875a6dd9b9c977e6120a670fc5cd1dad2e0346b21a467b00f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58cd2e688da72a74340b25a755164b61039f7615ccda8b24b12b2b4025f89584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\
",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:21Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.609822 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b6dd1dee15020005db2d8655165e09c72a90eaade00a66c0742c0980ccc669e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:21Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.612020 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.612051 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.612062 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.612078 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.612089 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:21Z","lastTransitionTime":"2026-02-02T10:39:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.622247 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aebafb5717a3d34f83e5f1ba346c51bccea6c08fb6ddc2fadc47d8e1a4df1bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://643eee6982fcc4c02b72fb6daf1a18fc5ae263ebf8ac506d1fa745d6c311b38a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:21Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.628332 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/324c55ff-8d31-4452-bb4e-2a57fbdb23c7-env-overrides\") pod \"ovnkube-control-plane-749d76644c-x49wn\" (UID: \"324c55ff-8d31-4452-bb4e-2a57fbdb23c7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x49wn" Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.628373 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxkkz\" (UniqueName: \"kubernetes.io/projected/324c55ff-8d31-4452-bb4e-2a57fbdb23c7-kube-api-access-gxkkz\") pod \"ovnkube-control-plane-749d76644c-x49wn\" (UID: \"324c55ff-8d31-4452-bb4e-2a57fbdb23c7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x49wn" Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.628407 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/324c55ff-8d31-4452-bb4e-2a57fbdb23c7-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-x49wn\" (UID: \"324c55ff-8d31-4452-bb4e-2a57fbdb23c7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x49wn" Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.628530 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/324c55ff-8d31-4452-bb4e-2a57fbdb23c7-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-x49wn\" (UID: \"324c55ff-8d31-4452-bb4e-2a57fbdb23c7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x49wn" Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.633282 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fptzv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa0a3c57-fe47-43dd-8905-00df4cae4fb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb00c5826e8950791c82faf09cb4417f633c04908afcaed816c63b81c05aae48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6np9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fptzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:21Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.651545 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2642ee4e-c16a-4e6e-9654-a67666f1bff8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7728a9bcd84ff5e2fad4910e321a0461ab043b453efe05ddc36d018dba38315a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://371a331afaa5bd7d90cd98ba3363865d54d8cac621ba37a40e51f0cddb4eb02b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6886dd3d4ca2fac733bc3cc105d17aff972828ba91abf5373769f272e5454ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://189c170f233964d95aebd085e016304a66076971b44f050d41d755305d11743f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://540e6587db608dea5be7483bfc3a5c4d44da51ecf3e9bfe12550aa8bf5a74fac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7645944a876f5f8139960a66e91d19644df23b8bfbdb64c9d14eec7298ac150\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d829a651d8db6d5abb04fc40c07ae057cc69d7a352de1d104e039e86f64c0b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://344eacf0d6239238cbf13b94f80d88a36589057a177a0f7a3d5629f7975c02d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPat
h\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-prbrn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:21Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.665406 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfc52d94-656d-4294-b105-0f83d22c9664\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66dc97daee138ae6c8403d57638f6bd3cc1c5a08ce7e1631129017dc7046ebc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9de866cf92cbd982802a4bfa9c7f6b9839c69efb72d01f3f52e65e2355d47ded\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a81a5434992af16888e8ed5ec9555e57dd4485dd6c396816421ad07c181dae8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b089d9f482be084f3c2be3bc369e38cef6607d91af0c82b338da5c29883f6057\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b089d9f482be084f3c2be3bc369e38cef6607d91af0c82b338da5c29883f6057\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 10:39:00.270088 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:39:00.270415 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:39:00.271477 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3666386151/tls.crt::/tmp/serving-cert-3666386151/tls.key\\\\\\\"\\\\nI0202 10:39:00.760414 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:39:00.783275 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:39:00.783307 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:39:00.783330 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:39:00.783335 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 10:39:00.810370 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:39:00.810474 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810498 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 10:39:00.810539 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:39:00.810558 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:39:00.810577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 10:39:00.810783 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 10:39:00.814783 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26b0186a9067ba0a405e849b58fc2f39e93d443ddf69ebd0d4fd4877f45e805\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:21Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.679035 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:21Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.693258 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://959c7dca46e7a68f4472c3aa2104c49d4e47190f0fab5c7ad2225e27e5c4585a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:21Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.714784 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.714829 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.714841 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.714857 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.714868 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:21Z","lastTransitionTime":"2026-02-02T10:39:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.729520 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/324c55ff-8d31-4452-bb4e-2a57fbdb23c7-env-overrides\") pod \"ovnkube-control-plane-749d76644c-x49wn\" (UID: \"324c55ff-8d31-4452-bb4e-2a57fbdb23c7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x49wn" Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.729564 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gxkkz\" (UniqueName: \"kubernetes.io/projected/324c55ff-8d31-4452-bb4e-2a57fbdb23c7-kube-api-access-gxkkz\") pod \"ovnkube-control-plane-749d76644c-x49wn\" (UID: \"324c55ff-8d31-4452-bb4e-2a57fbdb23c7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x49wn" Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.729592 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/324c55ff-8d31-4452-bb4e-2a57fbdb23c7-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-x49wn\" (UID: \"324c55ff-8d31-4452-bb4e-2a57fbdb23c7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x49wn" Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.729620 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/324c55ff-8d31-4452-bb4e-2a57fbdb23c7-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-x49wn\" (UID: \"324c55ff-8d31-4452-bb4e-2a57fbdb23c7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x49wn" Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.730432 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/324c55ff-8d31-4452-bb4e-2a57fbdb23c7-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-x49wn\" (UID: 
\"324c55ff-8d31-4452-bb4e-2a57fbdb23c7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x49wn" Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.730504 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/324c55ff-8d31-4452-bb4e-2a57fbdb23c7-env-overrides\") pod \"ovnkube-control-plane-749d76644c-x49wn\" (UID: \"324c55ff-8d31-4452-bb4e-2a57fbdb23c7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x49wn" Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.735514 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/324c55ff-8d31-4452-bb4e-2a57fbdb23c7-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-x49wn\" (UID: \"324c55ff-8d31-4452-bb4e-2a57fbdb23c7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x49wn" Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.746432 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gxkkz\" (UniqueName: \"kubernetes.io/projected/324c55ff-8d31-4452-bb4e-2a57fbdb23c7-kube-api-access-gxkkz\") pod \"ovnkube-control-plane-749d76644c-x49wn\" (UID: \"324c55ff-8d31-4452-bb4e-2a57fbdb23c7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x49wn" Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.776573 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x49wn" Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.784602 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 19:42:07.037881664 +0000 UTC Feb 02 10:39:21 crc kubenswrapper[4782]: W0202 10:39:21.789885 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod324c55ff_8d31_4452_bb4e_2a57fbdb23c7.slice/crio-13ec16eab39975cdd9277cfd4d1eb4842e67856b1a2064d0d05ac49ff82a2177 WatchSource:0}: Error finding container 13ec16eab39975cdd9277cfd4d1eb4842e67856b1a2064d0d05ac49ff82a2177: Status 404 returned error can't find the container with id 13ec16eab39975cdd9277cfd4d1eb4842e67856b1a2064d0d05ac49ff82a2177 Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.821787 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:39:21 crc kubenswrapper[4782]: E0202 10:39:21.822164 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.826057 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.826407 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.826527 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.827865 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.828001 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:21Z","lastTransitionTime":"2026-02-02T10:39:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.930333 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.930364 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.930373 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.930386 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:21 crc kubenswrapper[4782]: I0202 10:39:21.930397 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:21Z","lastTransitionTime":"2026-02-02T10:39:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:22 crc kubenswrapper[4782]: I0202 10:39:22.032849 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:22 crc kubenswrapper[4782]: I0202 10:39:22.033846 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:22 crc kubenswrapper[4782]: I0202 10:39:22.034031 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:22 crc kubenswrapper[4782]: I0202 10:39:22.034245 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:22 crc kubenswrapper[4782]: I0202 10:39:22.034414 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:22Z","lastTransitionTime":"2026-02-02T10:39:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:22 crc kubenswrapper[4782]: I0202 10:39:22.101158 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x49wn" event={"ID":"324c55ff-8d31-4452-bb4e-2a57fbdb23c7","Type":"ContainerStarted","Data":"13ec16eab39975cdd9277cfd4d1eb4842e67856b1a2064d0d05ac49ff82a2177"} Feb 02 10:39:22 crc kubenswrapper[4782]: I0202 10:39:22.137591 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:22 crc kubenswrapper[4782]: I0202 10:39:22.137737 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:22 crc kubenswrapper[4782]: I0202 10:39:22.137763 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:22 crc kubenswrapper[4782]: I0202 10:39:22.137803 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:22 crc kubenswrapper[4782]: I0202 10:39:22.137829 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:22Z","lastTransitionTime":"2026-02-02T10:39:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:22 crc kubenswrapper[4782]: I0202 10:39:22.244609 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:22 crc kubenswrapper[4782]: I0202 10:39:22.244692 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:22 crc kubenswrapper[4782]: I0202 10:39:22.244708 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:22 crc kubenswrapper[4782]: I0202 10:39:22.244728 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:22 crc kubenswrapper[4782]: I0202 10:39:22.244740 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:22Z","lastTransitionTime":"2026-02-02T10:39:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:22 crc kubenswrapper[4782]: I0202 10:39:22.346926 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:22 crc kubenswrapper[4782]: I0202 10:39:22.346959 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:22 crc kubenswrapper[4782]: I0202 10:39:22.346966 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:22 crc kubenswrapper[4782]: I0202 10:39:22.346980 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:22 crc kubenswrapper[4782]: I0202 10:39:22.346989 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:22Z","lastTransitionTime":"2026-02-02T10:39:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:22 crc kubenswrapper[4782]: I0202 10:39:22.449441 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:22 crc kubenswrapper[4782]: I0202 10:39:22.449490 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:22 crc kubenswrapper[4782]: I0202 10:39:22.449504 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:22 crc kubenswrapper[4782]: I0202 10:39:22.449521 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:22 crc kubenswrapper[4782]: I0202 10:39:22.449533 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:22Z","lastTransitionTime":"2026-02-02T10:39:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:22 crc kubenswrapper[4782]: I0202 10:39:22.537707 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-tv4xc"] Feb 02 10:39:22 crc kubenswrapper[4782]: I0202 10:39:22.538520 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tv4xc" Feb 02 10:39:22 crc kubenswrapper[4782]: E0202 10:39:22.538621 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tv4xc" podUID="4e23db96-3af7-4c29-b00f-5920a9431f01" Feb 02 10:39:22 crc kubenswrapper[4782]: I0202 10:39:22.552128 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:22 crc kubenswrapper[4782]: I0202 10:39:22.552159 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:22 crc kubenswrapper[4782]: I0202 10:39:22.552168 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:22 crc kubenswrapper[4782]: I0202 10:39:22.552181 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:22 crc kubenswrapper[4782]: I0202 10:39:22.552190 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:22Z","lastTransitionTime":"2026-02-02T10:39:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:22 crc kubenswrapper[4782]: I0202 10:39:22.582746 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfc52d94-656d-4294-b105-0f83d22c9664\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66dc97daee138ae6c8403d57638f6bd3cc1c5a08ce7e1631129017dc7046ebc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9de866cf92cbd982802a4bfa9c7f6b9839c69efb72d01f3f52e65e2355d47ded\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a81a5434992af16888e8ed5ec9555e57dd4485dd6c396816421ad07c181dae8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b089d9f482be084f3c2be3bc369e38cef6607d91af0c82b338da5c29883f6057\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b089d9f482be084f3c2be3bc369e38cef6607d91af0c82b338da5c29883f6057\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 10:39:00.270088 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:39:00.270415 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:39:00.271477 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3666386151/tls.crt::/tmp/serving-cert-3666386151/tls.key\\\\\\\"\\\\nI0202 10:39:00.760414 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:39:00.783275 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:39:00.783307 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:39:00.783330 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:39:00.783335 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 10:39:00.810370 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:39:00.810474 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810498 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 10:39:00.810539 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:39:00.810558 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:39:00.810577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 10:39:00.810783 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 10:39:00.814783 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26b0186a9067ba0a405e849b58fc2f39e93d443ddf69ebd0d4fd4877f45e805\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:22Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:22 crc kubenswrapper[4782]: I0202 10:39:22.631262 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:22Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:22 crc kubenswrapper[4782]: I0202 10:39:22.638880 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4e23db96-3af7-4c29-b00f-5920a9431f01-metrics-certs\") pod \"network-metrics-daemon-tv4xc\" (UID: \"4e23db96-3af7-4c29-b00f-5920a9431f01\") " pod="openshift-multus/network-metrics-daemon-tv4xc" Feb 02 10:39:22 crc kubenswrapper[4782]: I0202 10:39:22.638915 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gpm5q\" (UniqueName: \"kubernetes.io/projected/4e23db96-3af7-4c29-b00f-5920a9431f01-kube-api-access-gpm5q\") pod \"network-metrics-daemon-tv4xc\" (UID: \"4e23db96-3af7-4c29-b00f-5920a9431f01\") " pod="openshift-multus/network-metrics-daemon-tv4xc" Feb 02 10:39:22 crc kubenswrapper[4782]: I0202 10:39:22.643539 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://959c7dca46e7a68f4472c3aa2104c49d4e47190f0fab5c7ad2225e27e5c4585a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:22Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:22 crc kubenswrapper[4782]: I0202 10:39:22.654203 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:22 crc kubenswrapper[4782]: I0202 10:39:22.655393 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:22 crc kubenswrapper[4782]: I0202 10:39:22.655407 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:22 crc kubenswrapper[4782]: I0202 10:39:22.655423 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:22 crc kubenswrapper[4782]: I0202 10:39:22.655449 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:22Z","lastTransitionTime":"2026-02-02T10:39:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:22 crc kubenswrapper[4782]: I0202 10:39:22.664328 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2642ee4e-c16a-4e6e-9654-a67666f1bff8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7728a9bcd84ff5e2fad4910e321a0461ab043b453efe05ddc36d018dba38315a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://371a331afaa5bd7d90cd98ba3363865d54d8cac621ba37a40e51f0cddb4eb02b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://b6886dd3d4ca2fac733bc3cc105d17aff972828ba91abf5373769f272e5454ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://189c170f233964d95aebd085e016304a66076971b44f050d41d755305d11743f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://540e6587db608dea5be7483bfc3a5c4d44da51ecf3e9bfe12550aa8bf5a74fac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7645944a876f5f8139960a66e91d19644df23b8bfbdb64c9d14eec7298ac150\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d829a651d8db6d5abb04fc40c07ae057cc69d7a352de1d104e039e86f64c0b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\
"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://344eacf0d6239238cbf13b94f80d88a36589057a177a0f7a3d5629f7975c02d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-prbrn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:22Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:22 crc kubenswrapper[4782]: I0202 10:39:22.678376 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:22Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:22 crc kubenswrapper[4782]: I0202 10:39:22.691613 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:22Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:22 crc kubenswrapper[4782]: I0202 10:39:22.704231 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fsqgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04d9744a-e730-45b4-9f0c-bbb5b02cd311\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9855ee45d6787f239c2099e99544f50b4939dc9037eb5f4cfe36af9a0bed8937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\
\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7nrfs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fsqgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:22Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:22 crc kubenswrapper[4782]: I0202 10:39:22.719086 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"83b81d23-4cbf-48f2-a90d-aa5b4cb3ce45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://638971b143e4defe2f04fba48e554aff3023d52863a0eb6f36c29d394de7eeb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18f8ad22dcd04bba0dd895d584affcd359ac6221153a18ee542dee979b9efe25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3c260f298316c830b06f5e3634cd6c09bd10bbaad77b46e427eb4b039eebb53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://745ed5274a247d61c320043043077f9ecb39fa3734db15aefa07a3bbb6225c26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:22Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:22 crc kubenswrapper[4782]: I0202 10:39:22.732145 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7919e98f-cc47-4f3c-9c53-6313850ea7b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://550747a1eaba426f3a6b264d3a5227e51b0171729134224fa76ba55d8ad47c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://362bf5c8cbdc4831ca
6ceecf9323bad1217467a40b0e8dd491098c1ae4f42810\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bhdgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:22Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:22 crc kubenswrapper[4782]: I0202 10:39:22.740246 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4e23db96-3af7-4c29-b00f-5920a9431f01-metrics-certs\") pod \"network-metrics-daemon-tv4xc\" (UID: \"4e23db96-3af7-4c29-b00f-5920a9431f01\") " pod="openshift-multus/network-metrics-daemon-tv4xc" Feb 02 10:39:22 crc kubenswrapper[4782]: I0202 10:39:22.740288 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gpm5q\" (UniqueName: \"kubernetes.io/projected/4e23db96-3af7-4c29-b00f-5920a9431f01-kube-api-access-gpm5q\") pod \"network-metrics-daemon-tv4xc\" (UID: \"4e23db96-3af7-4c29-b00f-5920a9431f01\") " pod="openshift-multus/network-metrics-daemon-tv4xc" Feb 02 10:39:22 crc kubenswrapper[4782]: E0202 10:39:22.740473 4782 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 10:39:22 crc kubenswrapper[4782]: E0202 10:39:22.740570 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4e23db96-3af7-4c29-b00f-5920a9431f01-metrics-certs podName:4e23db96-3af7-4c29-b00f-5920a9431f01 nodeName:}" failed. No retries permitted until 2026-02-02 10:39:23.240549411 +0000 UTC m=+43.124742217 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4e23db96-3af7-4c29-b00f-5920a9431f01-metrics-certs") pod "network-metrics-daemon-tv4xc" (UID: "4e23db96-3af7-4c29-b00f-5920a9431f01") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 10:39:22 crc kubenswrapper[4782]: I0202 10:39:22.749315 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8lwfx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1edc5703-bb51-4f8a-9b73-68ba48a40ce8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2765f9fa77bc99e4983b0d6883a7156c960f2dce2c80845cd1e0810199c50eac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e2a2f7b1b3c3236d77b4858e19cacf5554526262f0fc5703460a169d3b26f7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e2a2f7b1b3c3236d77b4858e19cacf5554526262f0fc5703460a169d3b26f7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d41d28339830a5090ce01715a5bd4d985c3081932cb71594b1e928b900dc353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d41d28339830a5090ce01715a5bd4d985c3081932cb71594b1e928b900dc353\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://615aa0c2b172d7e7ab8a484a6860646491265e3ebb25f57857c45bc591d40335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://615aa0c2b172d7e7ab8a484a6860646491265e3ebb25f57857c45bc591d40335\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://143776c340b6d3345f6902124169235fcf74289d9e107b8e6bdb5023f47b73df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://143776c340b6d3345f6902124169235fcf74289d9e107b8e6bdb5023f47b73df
\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://754a40bbe5b071214dbd49089838b58b26d7586fdd8479a6f8522ef49c03e255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://754a40bbe5b071214dbd49089838b58b26d7586fdd8479a6f8522ef49c03e255\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e2e1801b719e8d922f05eab190ff758e85d0d05f4e26f8b8a9d5812ac5ffe6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e2e1801b719e8d922f05eab190ff758e85d0d05f4e26f8b8a9d5812ac5ffe6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8lwfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:22Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:22 crc kubenswrapper[4782]: I0202 10:39:22.758113 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:22 crc kubenswrapper[4782]: I0202 10:39:22.758168 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:22 crc kubenswrapper[4782]: I0202 10:39:22.758180 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:22 crc kubenswrapper[4782]: I0202 10:39:22.758193 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:22 crc kubenswrapper[4782]: I0202 10:39:22.758202 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:22Z","lastTransitionTime":"2026-02-02T10:39:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:22 crc kubenswrapper[4782]: I0202 10:39:22.761676 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gpm5q\" (UniqueName: \"kubernetes.io/projected/4e23db96-3af7-4c29-b00f-5920a9431f01-kube-api-access-gpm5q\") pod \"network-metrics-daemon-tv4xc\" (UID: \"4e23db96-3af7-4c29-b00f-5920a9431f01\") " pod="openshift-multus/network-metrics-daemon-tv4xc" Feb 02 10:39:22 crc kubenswrapper[4782]: I0202 10:39:22.762776 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x49wn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"324c55ff-8d31-4452-bb4e-2a57fbdb23c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxkkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxkkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-x49wn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:22Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:22 crc kubenswrapper[4782]: I0202 10:39:22.775044 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-tv4xc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e23db96-3af7-4c29-b00f-5920a9431f01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpm5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpm5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:22Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-tv4xc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:22Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:22 crc kubenswrapper[4782]: I0202 10:39:22.785068 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 10:41:32.462083146 +0000 UTC Feb 02 10:39:22 crc kubenswrapper[4782]: I0202 10:39:22.792397 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59347d2e-80ed-46da-8b2f-87cc48c5b564\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7303c0a57d425d661744c913b418fb647290581f745f973af1507563fa70b860\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10b2c4d81a795c40e715af0855002f75e3018dc14560eb97b551186a48c52744\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://667df661cba8be98b22059905ac9fe87f5d2af093d3184cfead863474441574f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfe76e80657cb875a6dd9b9c977e6120a670fc5
cd1dad2e0346b21a467b00f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58cd2e688da72a74340b25a755164b61039f7615ccda8b24b12b2b4025f89584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:22Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:22 crc kubenswrapper[4782]: I0202 10:39:22.807365 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b6dd1dee15020005db2d8655165e09c72a90eaade00a66c0742c0980ccc669e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:22Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:22 crc kubenswrapper[4782]: I0202 10:39:22.820085 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:39:22 crc kubenswrapper[4782]: I0202 10:39:22.820147 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:39:22 crc kubenswrapper[4782]: E0202 10:39:22.820197 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:39:22 crc kubenswrapper[4782]: E0202 10:39:22.820256 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:39:22 crc kubenswrapper[4782]: I0202 10:39:22.821821 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aebafb5717a3d34f83e5f1ba346c51bccea6c08fb6ddc2fadc47d8e1a4df1bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://643eee6982fcc4c02b72fb6daf1a18fc5ae263ebf8ac506d1fa745d6c311b38a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:22Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:22 crc kubenswrapper[4782]: I0202 10:39:22.842087 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fptzv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa0a3c57-fe47-43dd-8905-00df4cae4fb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb00c5826e8950791c82faf09cb4417f633c04908afcaed816c63b81c05aae48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6np9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fptzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:22Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:22 crc kubenswrapper[4782]: I0202 10:39:22.853686 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-thvm5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"70faa63d-a86d-45aa-b6fd-81fa90436da2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb015f1ff28b0f28114d4c5d3c643fdb9af2c24d6d3c4a3f34c051677c815e3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrknp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-thvm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:22Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:22 crc kubenswrapper[4782]: I0202 10:39:22.860546 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:22 crc kubenswrapper[4782]: I0202 10:39:22.860592 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:22 crc kubenswrapper[4782]: I0202 10:39:22.860603 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:22 crc kubenswrapper[4782]: I0202 10:39:22.860620 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:22 crc kubenswrapper[4782]: I0202 10:39:22.860632 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:22Z","lastTransitionTime":"2026-02-02T10:39:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:39:22 crc kubenswrapper[4782]: I0202 10:39:22.962741 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:39:22 crc kubenswrapper[4782]: I0202 10:39:22.962774 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:39:22 crc kubenswrapper[4782]: I0202 10:39:22.962783 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:39:22 crc kubenswrapper[4782]: I0202 10:39:22.962795 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:39:22 crc kubenswrapper[4782]: I0202 10:39:22.962804 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:22Z","lastTransitionTime":"2026-02-02T10:39:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:39:23 crc kubenswrapper[4782]: I0202 10:39:23.065450 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:39:23 crc kubenswrapper[4782]: I0202 10:39:23.065494 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:39:23 crc kubenswrapper[4782]: I0202 10:39:23.065506 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:39:23 crc kubenswrapper[4782]: I0202 10:39:23.065523 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:39:23 crc kubenswrapper[4782]: I0202 10:39:23.065535 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:23Z","lastTransitionTime":"2026-02-02T10:39:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Feb 02 10:39:23 crc kubenswrapper[4782]: I0202 10:39:23.105976 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-prbrn_2642ee4e-c16a-4e6e-9654-a67666f1bff8/ovnkube-controller/0.log" Feb 02 10:39:23 crc kubenswrapper[4782]: I0202 10:39:23.108948 4782 generic.go:334] "Generic (PLEG): container finished" podID="2642ee4e-c16a-4e6e-9654-a67666f1bff8" containerID="6d829a651d8db6d5abb04fc40c07ae057cc69d7a352de1d104e039e86f64c0b4" exitCode=1 Feb 02 10:39:23 crc kubenswrapper[4782]: I0202 10:39:23.109203 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" event={"ID":"2642ee4e-c16a-4e6e-9654-a67666f1bff8","Type":"ContainerDied","Data":"6d829a651d8db6d5abb04fc40c07ae057cc69d7a352de1d104e039e86f64c0b4"} Feb 02 10:39:23 crc kubenswrapper[4782]: I0202 10:39:23.110357 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x49wn" event={"ID":"324c55ff-8d31-4452-bb4e-2a57fbdb23c7","Type":"ContainerStarted","Data":"025c5d7b0067cd9bfd8f87926e7ec57759b83410b2be1bfddc02029f4c8e5f86"} Feb 02 10:39:23 crc kubenswrapper[4782]: I0202 10:39:23.111214 4782 scope.go:117] "RemoveContainer" containerID="6d829a651d8db6d5abb04fc40c07ae057cc69d7a352de1d104e039e86f64c0b4" Feb 02 10:39:23 crc kubenswrapper[4782]: I0202 10:39:23.125492 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83b81d23-4cbf-48f2-a90d-aa5b4cb3ce45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://638971b143e4defe2f04fba48e554aff3023d52863a0eb6f36c29d394de7eeb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18f8ad22dcd04bba0dd895d584affcd359ac6221153a18ee542dee979b9efe25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3c260f298316c830b06f5e3634cd6c09bd10bbaad77b46e427eb4b039eebb53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://745ed5274a247d61c320043043077f9ecb39fa3734db15aefa07a3bbb6225c26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:23Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:23 crc kubenswrapper[4782]: I0202 10:39:23.140365 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7919e98f-cc47-4f3c-9c53-6313850ea7b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://550747a1eaba426f3a6b264d3a5227e51b0171729134224fa76ba55d8ad47c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://362bf5c8cbdc4831ca6ceecf9323bad1217467a40b0e8dd491098c1ae4f42810\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bhdgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:23Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:23 crc kubenswrapper[4782]: I0202 10:39:23.156573 4782 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8lwfx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1edc5703-bb51-4f8a-9b73-68ba48a40ce8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2765f9fa77bc99e4983b0d6883a7156c960f2dce2c80845cd1e0810199c50eac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e2a2f7b1b3c3236d77b4858e19cacf5554526262f0fc5703460a169d3b26f7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e2a2f7b1b3c3236d77b4858e19cacf5554526262f0fc5703460a169d3b26f7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d41d28339830a5090ce01715a5bd4d985c3081932cb71594b1e928b900dc353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d41d28339830a5090ce01715a5bd4d985c3081932cb71594b1e928b900dc353\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://615aa0c2b172d7e7ab8a484a6860646491265e3ebb25f57857c45bc591d40335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://615aa0c2b172d7e7ab8a484a6860646491265e3ebb25f57857c45bc591d40335\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://143776c340b6d3345f6902124169235fcf74289d9e107b8e6bdb5023f47b73df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://143776c340b6d3345f6902124169235fcf74289d9e107b8e6bdb5023f47b73df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://754a40bbe5b071214dbd49089838b58b26d7586fdd8479a6f8522ef49c03e255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://754a40bbe5b071214dbd49089838b58b26d7586fdd8479a6f8522ef49c03e255\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e2e1801b719e8d922f05eab190ff758e85d0d05f4e26f8b8a9d5812ac5ffe6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e2e1801b719e8d922f05eab190ff758e85d0d05f4e26f8b8a9d5812ac5ffe6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8lwfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:23Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:23 crc kubenswrapper[4782]: I0202 10:39:23.167914 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:23 crc kubenswrapper[4782]: I0202 10:39:23.167957 4782 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:23 crc kubenswrapper[4782]: I0202 10:39:23.167967 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:23 crc kubenswrapper[4782]: I0202 10:39:23.167983 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:23 crc kubenswrapper[4782]: I0202 10:39:23.167993 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:23Z","lastTransitionTime":"2026-02-02T10:39:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:23 crc kubenswrapper[4782]: I0202 10:39:23.170861 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x49wn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"324c55ff-8d31-4452-bb4e-2a57fbdb23c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxkkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxkkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-x49wn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:23Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:23 crc kubenswrapper[4782]: I0202 10:39:23.183481 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-tv4xc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e23db96-3af7-4c29-b00f-5920a9431f01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpm5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpm5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:22Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-tv4xc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:23Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:23 crc kubenswrapper[4782]: I0202 10:39:23.208155 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59347d2e-80ed-46da-8b2f-87cc48c5b564\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7303c0a57d425d661744c913b418fb647290581f745f973af1507563fa70b860\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10b2c4d81a795c40e715af0855002f75e3018dc14560eb97b551186a48c52744\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://667df661cba8be98b22059905ac9fe87f5d2af093d3184cfead863474441574f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfe76e80657cb875a6dd9b9c977e6120a670fc5
cd1dad2e0346b21a467b00f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58cd2e688da72a74340b25a755164b61039f7615ccda8b24b12b2b4025f89584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:23Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:23 crc kubenswrapper[4782]: I0202 10:39:23.221831 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b6dd1dee15020005db2d8655165e09c72a90eaade00a66c0742c0980ccc669e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:23Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:23 crc kubenswrapper[4782]: I0202 10:39:23.233430 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aebafb5717a3d34f83e5f1ba346c51bccea6c08fb6ddc2fadc47d8e1a4df1bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://643eee6982fcc4c02b72fb6daf1a18fc5ae263ebf8ac506d1fa745d6c311b38a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-02T10:39:23Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:23 crc kubenswrapper[4782]: I0202 10:39:23.242622 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fptzv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa0a3c57-fe47-43dd-8905-00df4cae4fb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb00c5826e8950791c82faf09cb4417f633c04908afcaed816c63b81c05aae48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6np9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fptzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:23Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:23 crc kubenswrapper[4782]: I0202 10:39:23.245145 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4e23db96-3af7-4c29-b00f-5920a9431f01-metrics-certs\") pod \"network-metrics-daemon-tv4xc\" (UID: \"4e23db96-3af7-4c29-b00f-5920a9431f01\") " pod="openshift-multus/network-metrics-daemon-tv4xc" Feb 02 10:39:23 crc kubenswrapper[4782]: E0202 10:39:23.245261 4782 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 10:39:23 crc kubenswrapper[4782]: E0202 10:39:23.245309 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4e23db96-3af7-4c29-b00f-5920a9431f01-metrics-certs podName:4e23db96-3af7-4c29-b00f-5920a9431f01 
nodeName:}" failed. No retries permitted until 2026-02-02 10:39:24.245293445 +0000 UTC m=+44.129486161 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4e23db96-3af7-4c29-b00f-5920a9431f01-metrics-certs") pod "network-metrics-daemon-tv4xc" (UID: "4e23db96-3af7-4c29-b00f-5920a9431f01") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 10:39:23 crc kubenswrapper[4782]: I0202 10:39:23.254717 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-thvm5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70faa63d-a86d-45aa-b6fd-81fa90436da2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb015f1ff28b0f28114d4c5d3c643fdb9af2c24d6d3c4a3f34c051677c815e3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrknp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-thvm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:23Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:23 crc kubenswrapper[4782]: I0202 10:39:23.268108 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfc52d94-656d-4294-b105-0f83d22c9664\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66dc97daee138ae6c8403d57638f6bd3cc1c5a08ce7e1631129017dc7046ebc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9de866cf92cbd982802a4bfa9c7f6b9839c69efb72d01f3f52e65e2355d47ded\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a81a5434992af16888e8ed5ec9555e57dd4485dd6c396816421ad07c181dae8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b089d9f482be084f3c2be3bc369e38cef6607d91af0c82b338da5c29883f6057\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b089d9f482be084f3c2be3bc369e38cef6607d91af0c82b338da5c29883f6057\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 10:39:00.270088 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:39:00.270415 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:39:00.271477 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3666386151/tls.crt::/tmp/serving-cert-3666386151/tls.key\\\\\\\"\\\\nI0202 10:39:00.760414 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:39:00.783275 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:39:00.783307 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:39:00.783330 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:39:00.783335 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 10:39:00.810370 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:39:00.810474 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810498 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 10:39:00.810539 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:39:00.810558 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:39:00.810577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 10:39:00.810783 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 10:39:00.814783 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26b0186a9067ba0a405e849b58fc2f39e93d443ddf69ebd0d4fd4877f45e805\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:23Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:23 crc kubenswrapper[4782]: I0202 10:39:23.269812 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:23 crc kubenswrapper[4782]: I0202 10:39:23.269845 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:23 crc kubenswrapper[4782]: I0202 10:39:23.269856 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:23 crc kubenswrapper[4782]: I0202 10:39:23.269874 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:23 crc kubenswrapper[4782]: I0202 10:39:23.269886 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:23Z","lastTransitionTime":"2026-02-02T10:39:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:23 crc kubenswrapper[4782]: I0202 10:39:23.279053 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:23Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:23 crc kubenswrapper[4782]: I0202 10:39:23.288674 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://959c7dca46e7a68f4472c3aa2104c49d4e47190f0fab5c7ad2225e27e5c4585a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:23Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:23 crc kubenswrapper[4782]: I0202 10:39:23.304478 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2642ee4e-c16a-4e6e-9654-a67666f1bff8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7728a9bcd84ff5e2fad4910e321a0461ab043b453efe05ddc36d018dba38315a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://371a331afaa5bd7d90cd98ba3363865d54d8cac621ba37a40e51f0cddb4eb02b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6886dd3d4ca2fac733bc3cc105d17aff972828ba91abf5373769f272e5454ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://189c170f233964d95aebd085e016304a66076971b44f050d41d755305d11743f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://540e6587db608dea5be7483bfc3a5c4d44da51ecf3e9bfe12550aa8bf5a74fac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7645944a876f5f8139960a66e91d19644df23b8bfbdb64c9d14eec7298ac150\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d829a651d8db6d5abb04fc40c07ae057cc69d7a
352de1d104e039e86f64c0b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d829a651d8db6d5abb04fc40c07ae057cc69d7a352de1d104e039e86f64c0b4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"message\\\":\\\"rom k8s.io/client-go/informers/factory.go:160\\\\nI0202 10:39:22.248476 5945 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 10:39:22.248620 5945 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0202 10:39:22.249325 5945 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0202 10:39:22.249385 5945 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0202 10:39:22.249394 5945 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0202 10:39:22.249407 5945 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0202 10:39:22.249422 5945 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0202 10:39:22.249443 5945 factory.go:656] Stopping watch factory\\\\nI0202 10:39:22.249459 5945 ovnkube.go:599] Stopped ovnkube\\\\nI0202 10:39:22.249490 5945 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0202 10:39:22.249499 5945 handler.go:208] Removed *v1.Node event handler 2\\\\nI0202 10:39:22.249504 5945 handler.go:208] Removed *v1.Node event handler 7\\\\nI0202 10:39:22.249510 5945 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0202 
10:39:2\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://344eacf0d6239238cbf13b94f80d88a36589057a177a0f7a3d5629f7975c02d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0
d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-prbrn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:23Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:23 crc kubenswrapper[4782]: I0202 10:39:23.317834 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:23Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:23 crc kubenswrapper[4782]: I0202 10:39:23.330659 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:23Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:23 crc kubenswrapper[4782]: I0202 10:39:23.344900 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fsqgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04d9744a-e730-45b4-9f0c-bbb5b02cd311\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9855ee45d6787f239c2099e99544f50b4939dc9037eb5f4cfe36af9a0bed8937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7nrfs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fsqgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:23Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:23 crc kubenswrapper[4782]: I0202 10:39:23.372732 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:23 crc kubenswrapper[4782]: I0202 10:39:23.372766 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:23 crc kubenswrapper[4782]: I0202 10:39:23.372780 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:23 crc kubenswrapper[4782]: I0202 10:39:23.372796 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:23 crc kubenswrapper[4782]: I0202 10:39:23.372808 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:23Z","lastTransitionTime":"2026-02-02T10:39:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:23 crc kubenswrapper[4782]: I0202 10:39:23.475155 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:23 crc kubenswrapper[4782]: I0202 10:39:23.475490 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:23 crc kubenswrapper[4782]: I0202 10:39:23.475574 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:23 crc kubenswrapper[4782]: I0202 10:39:23.475658 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:23 crc kubenswrapper[4782]: I0202 10:39:23.475732 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:23Z","lastTransitionTime":"2026-02-02T10:39:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:23 crc kubenswrapper[4782]: I0202 10:39:23.581215 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:23 crc kubenswrapper[4782]: I0202 10:39:23.581291 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:23 crc kubenswrapper[4782]: I0202 10:39:23.581303 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:23 crc kubenswrapper[4782]: I0202 10:39:23.581320 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:23 crc kubenswrapper[4782]: I0202 10:39:23.581337 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:23Z","lastTransitionTime":"2026-02-02T10:39:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:23 crc kubenswrapper[4782]: I0202 10:39:23.684055 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:23 crc kubenswrapper[4782]: I0202 10:39:23.684102 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:23 crc kubenswrapper[4782]: I0202 10:39:23.684114 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:23 crc kubenswrapper[4782]: I0202 10:39:23.684130 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:23 crc kubenswrapper[4782]: I0202 10:39:23.684142 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:23Z","lastTransitionTime":"2026-02-02T10:39:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:23 crc kubenswrapper[4782]: I0202 10:39:23.785183 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 12:18:18.714988067 +0000 UTC Feb 02 10:39:23 crc kubenswrapper[4782]: I0202 10:39:23.786673 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:23 crc kubenswrapper[4782]: I0202 10:39:23.786702 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:23 crc kubenswrapper[4782]: I0202 10:39:23.786714 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:23 crc kubenswrapper[4782]: I0202 10:39:23.786728 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:23 crc kubenswrapper[4782]: I0202 10:39:23.786738 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:23Z","lastTransitionTime":"2026-02-02T10:39:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:23 crc kubenswrapper[4782]: I0202 10:39:23.820662 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:39:23 crc kubenswrapper[4782]: E0202 10:39:23.821049 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:39:23 crc kubenswrapper[4782]: I0202 10:39:23.821184 4782 scope.go:117] "RemoveContainer" containerID="b089d9f482be084f3c2be3bc369e38cef6607d91af0c82b338da5c29883f6057" Feb 02 10:39:23 crc kubenswrapper[4782]: I0202 10:39:23.889925 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:23 crc kubenswrapper[4782]: I0202 10:39:23.889989 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:23 crc kubenswrapper[4782]: I0202 10:39:23.890002 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:23 crc kubenswrapper[4782]: I0202 10:39:23.890023 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:23 crc kubenswrapper[4782]: I0202 10:39:23.890033 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:23Z","lastTransitionTime":"2026-02-02T10:39:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:23 crc kubenswrapper[4782]: I0202 10:39:23.993629 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:23 crc kubenswrapper[4782]: I0202 10:39:23.993684 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:23 crc kubenswrapper[4782]: I0202 10:39:23.993696 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:23 crc kubenswrapper[4782]: I0202 10:39:23.993712 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:23 crc kubenswrapper[4782]: I0202 10:39:23.993722 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:23Z","lastTransitionTime":"2026-02-02T10:39:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:24 crc kubenswrapper[4782]: I0202 10:39:24.096826 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:24 crc kubenswrapper[4782]: I0202 10:39:24.096893 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:24 crc kubenswrapper[4782]: I0202 10:39:24.096909 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:24 crc kubenswrapper[4782]: I0202 10:39:24.096929 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:24 crc kubenswrapper[4782]: I0202 10:39:24.096939 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:24Z","lastTransitionTime":"2026-02-02T10:39:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:24 crc kubenswrapper[4782]: I0202 10:39:24.115053 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x49wn" event={"ID":"324c55ff-8d31-4452-bb4e-2a57fbdb23c7","Type":"ContainerStarted","Data":"1e8896767e0b6039745c672852e48a5fceb954162cac8a06257129bcc84efff3"} Feb 02 10:39:24 crc kubenswrapper[4782]: I0202 10:39:24.116796 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-prbrn_2642ee4e-c16a-4e6e-9654-a67666f1bff8/ovnkube-controller/0.log" Feb 02 10:39:24 crc kubenswrapper[4782]: I0202 10:39:24.119336 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" event={"ID":"2642ee4e-c16a-4e6e-9654-a67666f1bff8","Type":"ContainerStarted","Data":"648f8ec38e8c54dd9feeec43b13f9ae38917d67d8be850ecfb2bcbd51b68a592"} Feb 02 10:39:24 crc kubenswrapper[4782]: I0202 10:39:24.119510 4782 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 02 10:39:24 crc kubenswrapper[4782]: I0202 10:39:24.122127 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 02 10:39:24 crc kubenswrapper[4782]: I0202 10:39:24.124165 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"cd62da31b65707d98011292c190f6f44ab2e60bd1339f47cc289d0b445425b60"} Feb 02 10:39:24 crc kubenswrapper[4782]: I0202 10:39:24.137754 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59347d2e-80ed-46da-8b2f-87cc48c5b564\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7303c0a57d425d661744c913b418fb647290581f745f973af1507563fa70b860\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10b2c4d81a795c40e715af0855002f75e3018dc14560eb97b551186a48c52744\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://667df661cba8be98b22059905ac9fe87f5d2af093d3184cfead863474441574f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfe76e80657cb875a6dd9b9c977e6120a670fc5
cd1dad2e0346b21a467b00f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58cd2e688da72a74340b25a755164b61039f7615ccda8b24b12b2b4025f89584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:24Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:24 crc kubenswrapper[4782]: I0202 10:39:24.151532 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b6dd1dee15020005db2d8655165e09c72a90eaade00a66c0742c0980ccc669e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:24Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:24 crc kubenswrapper[4782]: I0202 10:39:24.162933 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aebafb5717a3d34f83e5f1ba346c51bccea6c08fb6ddc2fadc47d8e1a4df1bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://643eee6982fcc4c02b72fb6daf1a18fc5ae263ebf8ac506d1fa745d6c311b38a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-02T10:39:24Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:24 crc kubenswrapper[4782]: I0202 10:39:24.173873 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fptzv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa0a3c57-fe47-43dd-8905-00df4cae4fb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb00c5826e8950791c82faf09cb4417f633c04908afcaed816c63b81c05aae48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6np9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fptzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:24Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:24 crc kubenswrapper[4782]: I0202 10:39:24.182985 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-thvm5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"70faa63d-a86d-45aa-b6fd-81fa90436da2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb015f1ff28b0f28114d4c5d3c643fdb9af2c24d6d3c4a3f34c051677c815e3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrknp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-thvm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:24Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:24 crc kubenswrapper[4782]: I0202 10:39:24.195391 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfc52d94-656d-4294-b105-0f83d22c9664\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66dc97daee138ae6c8403d57638f6bd3cc1c5a08ce7e1631129017dc7046ebc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9de866cf92cbd982802a4bfa9c7f6b9839c69efb72d01f3f52e65e2355d47ded\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a81a5434992af16888e8ed5ec9555e57dd4485dd6c396816421ad07c181dae8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b089d9f482be084f3c2be3bc369e38cef6607d91af0c82b338da5c29883f6057\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b089d9f482be084f3c2be3bc369e38cef6607d91af0c82b338da5c29883f6057\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 10:39:00.270088 1 builder.go:272] unable to get owner 
reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:39:00.270415 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:39:00.271477 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3666386151/tls.crt::/tmp/serving-cert-3666386151/tls.key\\\\\\\"\\\\nI0202 10:39:00.760414 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:39:00.783275 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:39:00.783307 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:39:00.783330 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:39:00.783335 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 10:39:00.810370 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:39:00.810474 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810498 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 10:39:00.810539 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:39:00.810558 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:39:00.810577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 10:39:00.810783 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 10:39:00.814783 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26b0186a9067ba0a405e849b58fc2f39e93d443ddf69ebd0d4fd4877f45e805\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:24Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:24 crc kubenswrapper[4782]: I0202 10:39:24.199297 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:24 crc kubenswrapper[4782]: I0202 10:39:24.199364 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:24 crc kubenswrapper[4782]: I0202 10:39:24.199377 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:24 crc kubenswrapper[4782]: I0202 10:39:24.199399 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:24 crc kubenswrapper[4782]: I0202 10:39:24.199411 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:24Z","lastTransitionTime":"2026-02-02T10:39:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:24 crc kubenswrapper[4782]: I0202 10:39:24.208095 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:24Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:24 crc kubenswrapper[4782]: I0202 10:39:24.218994 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://959c7dca46e7a68f4472c3aa2104c49d4e47190f0fab5c7ad2225e27e5c4585a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:24Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:24 crc kubenswrapper[4782]: I0202 10:39:24.237050 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2642ee4e-c16a-4e6e-9654-a67666f1bff8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7728a9bcd84ff5e2fad4910e321a0461ab043b453efe05ddc36d018dba38315a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://371a331afaa5bd7d90cd98ba3363865d54d8cac621ba37a40e51f0cddb4eb02b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6886dd3d4ca2fac733bc3cc105d17aff972828ba91abf5373769f272e5454ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://189c170f233964d95aebd085e016304a66076971b44f050d41d755305d11743f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://540e6587db608dea5be7483bfc3a5c4d44da51ecf3e9bfe12550aa8bf5a74fac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7645944a876f5f8139960a66e91d19644df23b8bfbdb64c9d14eec7298ac150\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d829a651d8db6d5abb04fc40c07ae057cc69d7a
352de1d104e039e86f64c0b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d829a651d8db6d5abb04fc40c07ae057cc69d7a352de1d104e039e86f64c0b4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"message\\\":\\\"rom k8s.io/client-go/informers/factory.go:160\\\\nI0202 10:39:22.248476 5945 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 10:39:22.248620 5945 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0202 10:39:22.249325 5945 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0202 10:39:22.249385 5945 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0202 10:39:22.249394 5945 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0202 10:39:22.249407 5945 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0202 10:39:22.249422 5945 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0202 10:39:22.249443 5945 factory.go:656] Stopping watch factory\\\\nI0202 10:39:22.249459 5945 ovnkube.go:599] Stopped ovnkube\\\\nI0202 10:39:22.249490 5945 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0202 10:39:22.249499 5945 handler.go:208] Removed *v1.Node event handler 2\\\\nI0202 10:39:22.249504 5945 handler.go:208] Removed *v1.Node event handler 7\\\\nI0202 10:39:22.249510 5945 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0202 
10:39:2\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://344eacf0d6239238cbf13b94f80d88a36589057a177a0f7a3d5629f7975c02d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0
d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-prbrn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:24Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:24 crc kubenswrapper[4782]: I0202 10:39:24.249188 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:24Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:24 crc kubenswrapper[4782]: I0202 10:39:24.258497 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4e23db96-3af7-4c29-b00f-5920a9431f01-metrics-certs\") pod \"network-metrics-daemon-tv4xc\" (UID: \"4e23db96-3af7-4c29-b00f-5920a9431f01\") " pod="openshift-multus/network-metrics-daemon-tv4xc" Feb 02 10:39:24 crc kubenswrapper[4782]: E0202 10:39:24.258600 4782 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 10:39:24 crc kubenswrapper[4782]: E0202 10:39:24.258666 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4e23db96-3af7-4c29-b00f-5920a9431f01-metrics-certs podName:4e23db96-3af7-4c29-b00f-5920a9431f01 nodeName:}" failed. No retries permitted until 2026-02-02 10:39:26.25863497 +0000 UTC m=+46.142827686 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4e23db96-3af7-4c29-b00f-5920a9431f01-metrics-certs") pod "network-metrics-daemon-tv4xc" (UID: "4e23db96-3af7-4c29-b00f-5920a9431f01") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 10:39:24 crc kubenswrapper[4782]: I0202 10:39:24.261255 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:24Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:24 crc kubenswrapper[4782]: I0202 10:39:24.272836 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fsqgq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"04d9744a-e730-45b4-9f0c-bbb5b02cd311\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9855ee45d6787f239c2099e99544f50b4939dc9037eb5f4cfe36af9a0bed8937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7nrfs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fsqgq\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:24Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:24 crc kubenswrapper[4782]: I0202 10:39:24.281512 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-tv4xc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e23db96-3af7-4c29-b00f-5920a9431f01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpm5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpm5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:22Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-tv4xc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:24Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:24 crc 
kubenswrapper[4782]: I0202 10:39:24.293619 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83b81d23-4cbf-48f2-a90d-aa5b4cb3ce45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://638971b143e4defe2f04fba48e554aff3023d52863a0eb6f36c29d394de7eeb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18f8ad22dcd04bba0dd895d584affcd359ac6221153a18ee542dee979b9efe25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3c260f298316c830b06f5e3634cd6c09bd10bbaad77b46e427eb4b039eebb53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\
":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://745ed5274a247d61c320043043077f9ecb39fa3734db15aefa07a3bbb6225c26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:24Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:24 crc kubenswrapper[4782]: I0202 10:39:24.301581 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:24 crc kubenswrapper[4782]: I0202 10:39:24.301622 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:24 crc kubenswrapper[4782]: I0202 10:39:24.301634 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:24 crc kubenswrapper[4782]: I0202 10:39:24.301701 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:24 crc kubenswrapper[4782]: I0202 10:39:24.301716 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:24Z","lastTransitionTime":"2026-02-02T10:39:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:24 crc kubenswrapper[4782]: I0202 10:39:24.307135 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7919e98f-cc47-4f3c-9c53-6313850ea7b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://550747a1eaba426f3a6b264d3a5227e51b0171729134224fa76ba55d8ad47c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://362bf5c8cbdc4831ca6ceecf9323bad1217467a40b0e8dd491098c1ae4f42810\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bhdgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:24Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:24 crc kubenswrapper[4782]: I0202 10:39:24.320418 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8lwfx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1edc5703-bb51-4f8a-9b73-68ba48a40ce8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2765f9fa77bc99e4983b0d6883a7156c960f2dce2c80845cd1e0810199c50eac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e2a2f7b1b3c3236d77b4858e19cacf5554526262f0fc5703460a169d3b26f7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e2a2f7b1b3c3236d77b4858e19cacf5554526262f0fc5703460a169d3b26f7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d41d28339830a5090ce01715a5bd4d985c3081932cb71594b1e928b900dc353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d41d28339830a5090ce01715a5bd4d985c3081932cb71594b1e928b900dc353\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://615aa0c2b172d7e7ab8a484a6860646491265e3ebb25f57857c45bc591d40335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://615aa0c2b172d7e7ab8a484a6860646491265e3ebb25f57857c45bc591d40335\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://143776c340b6d3345f6902124169235fcf74289d9e107b8e6bdb5023f47b73df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://143776c340b6d3345f6902124169235fcf74289d9e107b8e6bdb5023f47b73df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:14Z\\\",\\\"reason\\\":\\\"Completed
\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://754a40bbe5b071214dbd49089838b58b26d7586fdd8479a6f8522ef49c03e255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://754a40bbe5b071214dbd49089838b58b26d7586fdd8479a6f8522ef49c03e255\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e2e1801b719e8d922f05eab190ff758e85d0d05f4e26f8b8a9d5812ac5ffe6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e2e1801b719e8d922f05eab190ff758e85d0d05f4e26f8b8a9d5812ac5ffe6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8lwfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:24Z is after 
2025-08-24T17:21:41Z" Feb 02 10:39:24 crc kubenswrapper[4782]: I0202 10:39:24.331206 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x49wn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"324c55ff-8d31-4452-bb4e-2a57fbdb23c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://025c5d7b0067cd9bfd8f87926e7ec57759b83410b2be1bfddc02029f4c8e5f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxkkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e8896767e0b6039745c672852e48a5fceb954162cac8a06257129bcc84efff3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxkkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-x49wn\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:24Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:24 crc kubenswrapper[4782]: I0202 10:39:24.352043 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59347d2e-80ed-46da-8b2f-87cc48c5b564\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7303c0a57d425d661744c913b418fb647290581f745f973af1507563fa70b860\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10b2c4d81a795c40e715af0855002f75e3018dc14560eb97b551186a48c52744\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://667df661cba8be98b22059905ac9fe87f5d2af093d3184cfead863474441574f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfe76e80657cb875a6dd9b9c977e6120a670fc5cd1dad2e0346b21a467b00f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58cd2e688da72a74340b25a755164b61039f7615ccda8b24b12b2b4025f89584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",
\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:24Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:24 crc kubenswrapper[4782]: I0202 10:39:24.364786 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b6dd1dee15020005db2d8655165e09c72a90eaade00a66c0742c0980ccc669e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:24Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:24 crc kubenswrapper[4782]: I0202 10:39:24.377331 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aebafb5717a3d34f83e5f1ba346c51bccea6c08fb6ddc2fadc47d8e1a4df1bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://643eee6982fcc4c02b72fb6daf1a18fc5ae263ebf8ac506d1fa745d6c311b38a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:24Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:24 crc kubenswrapper[4782]: I0202 10:39:24.386757 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fptzv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa0a3c57-fe47-43dd-8905-00df4cae4fb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb00c5826e8950791c82faf09cb4417f633c04908afcaed816c63b81c05aae48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6np9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fptzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:24Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:24 crc kubenswrapper[4782]: I0202 10:39:24.396207 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-thvm5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"70faa63d-a86d-45aa-b6fd-81fa90436da2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb015f1ff28b0f28114d4c5d3c643fdb9af2c24d6d3c4a3f34c051677c815e3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrknp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-thvm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:24Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:24 crc kubenswrapper[4782]: I0202 10:39:24.403700 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:24 crc kubenswrapper[4782]: I0202 10:39:24.403729 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:24 crc kubenswrapper[4782]: I0202 10:39:24.403738 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:24 crc kubenswrapper[4782]: I0202 10:39:24.403750 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:24 crc kubenswrapper[4782]: I0202 10:39:24.403759 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:24Z","lastTransitionTime":"2026-02-02T10:39:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:24 crc kubenswrapper[4782]: I0202 10:39:24.408825 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfc52d94-656d-4294-b105-0f83d22c9664\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66dc97daee138ae6c8403d57638f6bd3cc1c5a08ce7e1631129017dc7046ebc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9de866cf92cbd982802a4bfa9c7f6b9839c69efb72d01f3f52e65e2355d47ded\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a81a5434992af16888e8ed5ec9555e57dd4485dd6c396816421ad07c181dae8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f
7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b089d9f482be084f3c2be3bc369e38cef6607d91af0c82b338da5c29883f6057\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b089d9f482be084f3c2be3bc369e38cef6607d91af0c82b338da5c29883f6057\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 10:39:00.270088 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:39:00.270415 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:39:00.271477 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3666386151/tls.crt::/tmp/serving-cert-3666386151/tls.key\\\\\\\"\\\\nI0202 10:39:00.760414 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:39:00.783275 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:39:00.783307 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:39:00.783330 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:39:00.783335 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 10:39:00.810370 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:39:00.810474 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810498 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 10:39:00.810539 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:39:00.810558 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:39:00.810577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 10:39:00.810783 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 10:39:00.814783 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26b0186a9067ba0a405e849b58fc2f39e93d443ddf69ebd0d4fd4877f45e805\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:24Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:24 crc kubenswrapper[4782]: I0202 10:39:24.419305 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:24Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:24 crc kubenswrapper[4782]: I0202 10:39:24.429879 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://959c7dca46e7a68f4472c3aa2104c49d4e47190f0fab5c7ad2225e27e5c4585a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:24Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:24 crc kubenswrapper[4782]: I0202 10:39:24.449493 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2642ee4e-c16a-4e6e-9654-a67666f1bff8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7728a9bcd84ff5e2fad4910e321a0461ab043b453efe05ddc36d018dba38315a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://371a331afaa5bd7d90cd98ba3363865d54d8cac621ba37a40e51f0cddb4eb02b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics
-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6886dd3d4ca2fac733bc3cc105d17aff972828ba91abf5373769f272e5454ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://189c170f233964d95aebd085e016304a66076971b44f050d41d755305d11743f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://540e6587db608dea5be7483bfc3a5c4d44da51ecf3e9bfe12550aa8bf5a74fac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabl
ed\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7645944a876f5f8139960a66e91d19644df23b8bfbdb64c9d14eec7298ac150\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://648f8ec38e8c54dd9feeec43b13f9ae38917d67d8be850ecfb2bcbd51b68a592\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d829a651d8db6d5abb04fc40c07ae057cc69d7a352de1d104e039e86f64c0b4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"message\\\":\\\"rom k8s.io/client-go/informers/factory.go:160\\\\nI0202 10:39:22.248476 5945 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 10:39:22.248620 5945 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0202 10:39:22.249325 5945 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0202 10:39:22.249385 5945 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0202 10:39:22.249394 5945 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0202 10:39:22.249407 5945 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0202 10:39:22.249422 5945 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0202 10:39:22.249443 5945 factory.go:656] Stopping watch factory\\\\nI0202 10:39:22.249459 5945 ovnkube.go:599] Stopped ovnkube\\\\nI0202 10:39:22.249490 5945 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0202 10:39:22.249499 5945 handler.go:208] Removed *v1.Node event handler 2\\\\nI0202 10:39:22.249504 5945 handler.go:208] Removed *v1.Node event handler 7\\\\nI0202 10:39:22.249510 5945 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0202 
10:39:2\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:16Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://344eacf0d6239238cbf13b94f80d88a36589057a177a0f7a3d5629f7975c02d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\
\"containerID\\\":\\\"cri-o://c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-prbrn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:24Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:24 crc kubenswrapper[4782]: I0202 10:39:24.467803 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:24Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:24 crc kubenswrapper[4782]: I0202 10:39:24.481129 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:24Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:24 crc kubenswrapper[4782]: I0202 10:39:24.497681 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fsqgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04d9744a-e730-45b4-9f0c-bbb5b02cd311\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9855ee45d6787f239c2099e99544f50b4939dc9037eb5f4cfe36af9a0bed8937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7nrfs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fsqgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:24Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:24 crc kubenswrapper[4782]: I0202 10:39:24.506035 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:24 crc kubenswrapper[4782]: I0202 10:39:24.506066 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:24 crc kubenswrapper[4782]: I0202 10:39:24.506075 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:24 crc kubenswrapper[4782]: I0202 10:39:24.506089 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:24 crc kubenswrapper[4782]: I0202 10:39:24.506099 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:24Z","lastTransitionTime":"2026-02-02T10:39:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:24 crc kubenswrapper[4782]: I0202 10:39:24.510757 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-tv4xc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e23db96-3af7-4c29-b00f-5920a9431f01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpm5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpm5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:22Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-tv4xc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:24Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:24 crc kubenswrapper[4782]: I0202 10:39:24.527345 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"83b81d23-4cbf-48f2-a90d-aa5b4cb3ce45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://638971b143e4defe2f04fba48e554aff3023d52863a0eb6f36c29d394de7eeb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18f8ad22dcd04bba0dd895d584affcd359ac6221153a18ee542dee979b9efe25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3c260f298316c830b06f5e3634cd6c09bd10bbaad77b46e427eb4b039eebb53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://745ed5274a247d61c320043043077f9ecb39fa3734db15aefa07a3bbb6225c26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:24Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:24 crc kubenswrapper[4782]: I0202 10:39:24.538062 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7919e98f-cc47-4f3c-9c53-6313850ea7b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://550747a1eaba426f3a6b264d3a5227e51b0171729134224fa76ba55d8ad47c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://362bf5c8cbdc4831ca
6ceecf9323bad1217467a40b0e8dd491098c1ae4f42810\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bhdgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:24Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:24 crc kubenswrapper[4782]: I0202 10:39:24.552056 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8lwfx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1edc5703-bb51-4f8a-9b73-68ba48a40ce8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2765f9fa77bc99e4983b0d6883a7156c960f2dce2c80845cd1e0810199c50eac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e2a2f7b1b3c3236d77b4858e19cacf5554526262f0fc5703460a169d3b26f7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e2a2f7b1b3c3236d77b4858e19cacf5554526262f0fc5703460a169d3b26f7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d41d28339830a5090ce01715a5bd4d985c3081932cb71594b1e928b900dc353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d41d28339830a5090ce01715a5bd4d985c3081932cb71594b1e928b900dc353\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://615aa0c2b172d7e7ab8a484a6860646491265e3ebb25f57857c45bc591d40335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://615aa0c2b172d7e7ab8a484a6860646491265e3ebb25f57857c45bc591d40335\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\
":\\\"2026-02-02T10:39:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://143776c340b6d3345f6902124169235fcf74289d9e107b8e6bdb5023f47b73df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://143776c340b6d3345f6902124169235fcf74289d9e107b8e6bdb5023f47b73df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://754a40bbe5b071214dbd49089838b58b26d7586fdd8479a6f8522ef49c03e255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://754a40bbe5b071214dbd49089838b58b26d7586fdd8479a6f8522ef49c03e255\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e2e1801b719e8d922f05eab190ff758e85d0d05f4e26f8b8a9d5812ac5ffe6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e2e1801b719e8d922f05eab190ff758e85d0d05f4e26f8b8a9d5812ac5ffe6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8lwfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:24Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:24 crc kubenswrapper[4782]: I0202 10:39:24.606777 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x49wn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"324c55ff-8d31-4452-bb4e-2a57fbdb23c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://025c5d7b0067cd9bfd8f87926e7ec57759b83410b2be1bfddc02029f4c8e5f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxkkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e8896767e0b6039745c672852e48a5fceb954162cac8a06257129bcc84efff3\\\",\\
\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxkkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-x49wn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:24Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:24 crc kubenswrapper[4782]: I0202 10:39:24.607994 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:24 crc kubenswrapper[4782]: I0202 10:39:24.608031 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:24 crc kubenswrapper[4782]: I0202 10:39:24.608044 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:24 crc kubenswrapper[4782]: I0202 10:39:24.608061 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:24 crc kubenswrapper[4782]: I0202 10:39:24.608074 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:24Z","lastTransitionTime":"2026-02-02T10:39:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:24 crc kubenswrapper[4782]: I0202 10:39:24.710297 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:24 crc kubenswrapper[4782]: I0202 10:39:24.710334 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:24 crc kubenswrapper[4782]: I0202 10:39:24.710342 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:24 crc kubenswrapper[4782]: I0202 10:39:24.710355 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:24 crc kubenswrapper[4782]: I0202 10:39:24.710366 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:24Z","lastTransitionTime":"2026-02-02T10:39:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:24 crc kubenswrapper[4782]: I0202 10:39:24.785682 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 04:11:29.534160788 +0000 UTC Feb 02 10:39:24 crc kubenswrapper[4782]: I0202 10:39:24.812996 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:24 crc kubenswrapper[4782]: I0202 10:39:24.813033 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:24 crc kubenswrapper[4782]: I0202 10:39:24.813043 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:24 crc kubenswrapper[4782]: I0202 10:39:24.813057 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:24 crc kubenswrapper[4782]: I0202 10:39:24.813066 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:24Z","lastTransitionTime":"2026-02-02T10:39:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:24 crc kubenswrapper[4782]: I0202 10:39:24.820349 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:39:24 crc kubenswrapper[4782]: I0202 10:39:24.820357 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:39:24 crc kubenswrapper[4782]: E0202 10:39:24.820455 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:39:24 crc kubenswrapper[4782]: I0202 10:39:24.820485 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tv4xc" Feb 02 10:39:24 crc kubenswrapper[4782]: E0202 10:39:24.820544 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:39:24 crc kubenswrapper[4782]: E0202 10:39:24.820731 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tv4xc" podUID="4e23db96-3af7-4c29-b00f-5920a9431f01" Feb 02 10:39:24 crc kubenswrapper[4782]: I0202 10:39:24.915020 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:24 crc kubenswrapper[4782]: I0202 10:39:24.915064 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:24 crc kubenswrapper[4782]: I0202 10:39:24.915078 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:24 crc kubenswrapper[4782]: I0202 10:39:24.915094 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:24 crc kubenswrapper[4782]: I0202 10:39:24.915107 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:24Z","lastTransitionTime":"2026-02-02T10:39:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:25 crc kubenswrapper[4782]: I0202 10:39:25.017306 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:25 crc kubenswrapper[4782]: I0202 10:39:25.017343 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:25 crc kubenswrapper[4782]: I0202 10:39:25.017353 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:25 crc kubenswrapper[4782]: I0202 10:39:25.017370 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:25 crc kubenswrapper[4782]: I0202 10:39:25.017383 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:25Z","lastTransitionTime":"2026-02-02T10:39:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:25 crc kubenswrapper[4782]: I0202 10:39:25.119790 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:25 crc kubenswrapper[4782]: I0202 10:39:25.119837 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:25 crc kubenswrapper[4782]: I0202 10:39:25.119851 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:25 crc kubenswrapper[4782]: I0202 10:39:25.119867 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:25 crc kubenswrapper[4782]: I0202 10:39:25.119877 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:25Z","lastTransitionTime":"2026-02-02T10:39:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:25 crc kubenswrapper[4782]: I0202 10:39:25.150615 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59347d2e-80ed-46da-8b2f-87cc48c5b564\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7303c0a57d425d661744c913b418fb647290581f745f973af1507563fa70b860\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10b2c4d81a795c40e715af0855002f75e3018dc14560eb97b551186a48c52744\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://667df661cba8be98b22059905ac9fe87f5d2af093d3184cfead863474441574f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfe76e80657cb875a6dd9b9c977e6120a670fc5cd1dad2e0346b21a467b00f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58cd2e688da72a74340b25a755164b61039f7615ccda8b24b12b2b4025f89584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-02-02T10:38:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:25Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:25 crc kubenswrapper[4782]: I0202 10:39:25.168792 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b6dd1dee15020005db2d8655165e09c72a90eaade00a66c0742c0980ccc669e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:25Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:25 crc kubenswrapper[4782]: I0202 10:39:25.185693 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aebafb5717a3d34f83e5f1ba346c51bccea6c08fb6ddc2fadc47d8e1a4df1bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://643eee6982fcc4c02b72fb6daf1a18fc5ae263ebf8ac506d1fa745d6c311b38a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:25Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:25 crc kubenswrapper[4782]: I0202 10:39:25.198100 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fptzv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa0a3c57-fe47-43dd-8905-00df4cae4fb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb00c5826e8950791c82faf09cb4417f633c04908afcaed816c63b81c05aae48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6np9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fptzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:25Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:25 crc kubenswrapper[4782]: I0202 10:39:25.207917 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-thvm5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"70faa63d-a86d-45aa-b6fd-81fa90436da2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb015f1ff28b0f28114d4c5d3c643fdb9af2c24d6d3c4a3f34c051677c815e3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrknp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-thvm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:25Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:25 crc kubenswrapper[4782]: I0202 10:39:25.222423 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:25 crc kubenswrapper[4782]: I0202 10:39:25.222473 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:25 crc kubenswrapper[4782]: I0202 10:39:25.222483 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:25 crc kubenswrapper[4782]: I0202 10:39:25.222502 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:25 crc kubenswrapper[4782]: I0202 10:39:25.222511 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:25Z","lastTransitionTime":"2026-02-02T10:39:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:25 crc kubenswrapper[4782]: I0202 10:39:25.222880 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfc52d94-656d-4294-b105-0f83d22c9664\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66dc97daee138ae6c8403d57638f6bd3cc1c5a08ce7e1631129017dc7046ebc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9de866cf92cbd982802a4bfa9c7f6b9839c69efb72d01f3f52e65e2355d47ded\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a81a5434992af16888e8ed5ec9555e57dd4485dd6c396816421ad07c181dae8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f
7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd62da31b65707d98011292c190f6f44ab2e60bd1339f47cc289d0b445425b60\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b089d9f482be084f3c2be3bc369e38cef6607d91af0c82b338da5c29883f6057\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 10:39:00.270088 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:39:00.270415 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:39:00.271477 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3666386151/tls.crt::/tmp/serving-cert-3666386151/tls.key\\\\\\\"\\\\nI0202 10:39:00.760414 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:39:00.783275 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:39:00.783307 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:39:00.783330 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:39:00.783335 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 10:39:00.810370 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:39:00.810474 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810498 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 10:39:00.810539 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:39:00.810558 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:39:00.810577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 10:39:00.810783 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 10:39:00.814783 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26b0186a9067ba0a405e849b58fc2f39e93d443ddf69ebd0d4fd4877f45e805\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:25Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:25 crc kubenswrapper[4782]: I0202 10:39:25.235954 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:25Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:25 crc kubenswrapper[4782]: I0202 10:39:25.247088 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://959c7dca46e7a68f4472c3aa2104c49d4e47190f0fab5c7ad2225e27e5c4585a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:25Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:25 crc kubenswrapper[4782]: I0202 10:39:25.265016 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2642ee4e-c16a-4e6e-9654-a67666f1bff8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7728a9bcd84ff5e2fad4910e321a0461ab043b453efe05ddc36d018dba38315a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://371a331afaa5bd7d90cd98ba3363865d54d8cac621ba37a40e51f0cddb4eb02b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6886dd3d4ca2fac733bc3cc105d17aff972828ba91abf5373769f272e5454ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://189c170f233964d95aebd085e016304a66076971b44f050d41d755305d11743f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://540e6587db608dea5be7483bfc3a5c4d44da51ecf3e9bfe12550aa8bf5a74fac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7645944a876f5f8139960a66e91d19644df23b8bfbdb64c9d14eec7298ac150\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://648f8ec38e8c54dd9feeec43b13f9ae38917d67d
8be850ecfb2bcbd51b68a592\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d829a651d8db6d5abb04fc40c07ae057cc69d7a352de1d104e039e86f64c0b4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"message\\\":\\\"rom k8s.io/client-go/informers/factory.go:160\\\\nI0202 10:39:22.248476 5945 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 10:39:22.248620 5945 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0202 10:39:22.249325 5945 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0202 10:39:22.249385 5945 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0202 10:39:22.249394 5945 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0202 10:39:22.249407 5945 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0202 10:39:22.249422 5945 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0202 10:39:22.249443 5945 factory.go:656] Stopping watch factory\\\\nI0202 10:39:22.249459 5945 ovnkube.go:599] Stopped ovnkube\\\\nI0202 10:39:22.249490 5945 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0202 10:39:22.249499 5945 handler.go:208] Removed *v1.Node event handler 2\\\\nI0202 10:39:22.249504 5945 handler.go:208] Removed *v1.Node event handler 7\\\\nI0202 10:39:22.249510 5945 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0202 
10:39:2\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:16Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://344eacf0d6239238cbf13b94f80d88a36589057a177a0f7a3d5629f7975c02d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\
\"containerID\\\":\\\"cri-o://c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-prbrn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:25Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:25 crc kubenswrapper[4782]: I0202 10:39:25.278892 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:25Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:25 crc kubenswrapper[4782]: I0202 10:39:25.292343 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:25Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:25 crc kubenswrapper[4782]: I0202 10:39:25.307788 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fsqgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04d9744a-e730-45b4-9f0c-bbb5b02cd311\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9855ee45d6787f239c2099e99544f50b4939dc9037eb5f4cfe36af9a0bed8937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7nrfs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fsqgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:25Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:25 crc kubenswrapper[4782]: I0202 10:39:25.320452 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83b81d23-4cbf-48f2-a90d-aa5b4cb3ce45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://638971b143e4defe2f04fba48e554aff3023d52863a0eb6f36c29d394de7eeb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18f8ad22dcd04bba0dd895d584affcd359ac6221153a18ee542dee979b9efe25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\"
:\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3c260f298316c830b06f5e3634cd6c09bd10bbaad77b46e427eb4b039eebb53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://745ed5274a247d61c320043043077f9ecb39fa3734db15aefa07a3bbb6225c26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:25Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:25 crc kubenswrapper[4782]: I0202 10:39:25.324266 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:25 crc kubenswrapper[4782]: I0202 10:39:25.324301 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:25 crc kubenswrapper[4782]: I0202 10:39:25.324337 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:25 crc kubenswrapper[4782]: I0202 10:39:25.324353 4782 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:39:25 crc kubenswrapper[4782]: I0202 10:39:25.324365 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:25Z","lastTransitionTime":"2026-02-02T10:39:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:39:25 crc kubenswrapper[4782]: I0202 10:39:25.332936 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7919e98f-cc47-4f3c-9c53-6313850ea7b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://550747a1eaba426f3a6b264d3a5227e51b0171729134224fa76ba55d8ad47c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://362bf5c8cbdc4831ca6ceecf9323bad1217467a40b0e8dd491098c1ae4f42810\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfn4s\\\",\\\"readOnly
\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bhdgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:25Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:25 crc kubenswrapper[4782]: I0202 10:39:25.352075 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8lwfx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1edc5703-bb51-4f8a-9b73-68ba48a40ce8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2765f9fa77bc99e4983b0d6883a7156c960f2dce2c80845cd1e0810199c50eac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e2a2f7b1b3c3236d77b4858e19cacf5554526262f0fc5703460a169d3b26f7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e2a2f7b1b3c3236d77b4858e19cacf5554526262f0fc5703460a169d3b26f7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"r
eason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d41d28339830a5090ce01715a5bd4d985c3081932cb71594b1e928b900dc353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d41d28339830a5090ce01715a5bd4d985c3081932cb71594b1e928b900dc353\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://615aa0c2b172d7e7ab8a484a6860646491265e3ebb25f57857c45bc591d40335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://615aa0c2b172d7e7ab8a484a6860646491265e3ebb25f57857c45bc591d40335\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://143776c340b6d3345f6902124169235fcf74289d9e107b8e6bdb5023f47b73df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eead
e0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://143776c340b6d3345f6902124169235fcf74289d9e107b8e6bdb5023f47b73df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://754a40bbe5b071214dbd49089838b58b26d7586fdd8479a6f8522ef49c03e255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://754a40bbe5b071214dbd49089838b58b26d7586fdd8479a6f8522ef49c03e255\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e2e1801b719e8d922f05eab190ff758e85d0d05f4e26f8b8a9d5812ac5ffe6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e2e1801b719e8d922f05eab190ff758e85d0d05f4e26f8b8a9d5812ac5ffe6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\
\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8lwfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:25Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:25 crc kubenswrapper[4782]: I0202 10:39:25.362994 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x49wn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"324c55ff-8d31-4452-bb4e-2a57fbdb23c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://025c5d7b0067cd9bfd8f87926e7ec57759b83410b2be1bfddc02029f4c8e5f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxkkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e8896767e0b6039745c672852e48a5fceb954162cac8a06257129bcc84efff3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"
kube-api-access-gxkkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-x49wn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:25Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:25 crc kubenswrapper[4782]: I0202 10:39:25.375129 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-tv4xc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e23db96-3af7-4c29-b00f-5920a9431f01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpm5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpm5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:22Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-tv4xc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:25Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:25 crc kubenswrapper[4782]: I0202 10:39:25.426815 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:25 crc kubenswrapper[4782]: I0202 10:39:25.426847 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:25 crc kubenswrapper[4782]: I0202 10:39:25.426857 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:25 crc kubenswrapper[4782]: I0202 10:39:25.426869 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:25 crc kubenswrapper[4782]: I0202 10:39:25.426878 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:25Z","lastTransitionTime":"2026-02-02T10:39:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:25 crc kubenswrapper[4782]: I0202 10:39:25.528593 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:25 crc kubenswrapper[4782]: I0202 10:39:25.528662 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:25 crc kubenswrapper[4782]: I0202 10:39:25.528674 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:25 crc kubenswrapper[4782]: I0202 10:39:25.528689 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:25 crc kubenswrapper[4782]: I0202 10:39:25.528701 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:25Z","lastTransitionTime":"2026-02-02T10:39:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:25 crc kubenswrapper[4782]: I0202 10:39:25.631240 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:25 crc kubenswrapper[4782]: I0202 10:39:25.631278 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:25 crc kubenswrapper[4782]: I0202 10:39:25.631287 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:25 crc kubenswrapper[4782]: I0202 10:39:25.631300 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:25 crc kubenswrapper[4782]: I0202 10:39:25.631310 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:25Z","lastTransitionTime":"2026-02-02T10:39:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:25 crc kubenswrapper[4782]: I0202 10:39:25.733528 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:25 crc kubenswrapper[4782]: I0202 10:39:25.733561 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:25 crc kubenswrapper[4782]: I0202 10:39:25.733570 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:25 crc kubenswrapper[4782]: I0202 10:39:25.733583 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:25 crc kubenswrapper[4782]: I0202 10:39:25.733593 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:25Z","lastTransitionTime":"2026-02-02T10:39:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:25 crc kubenswrapper[4782]: I0202 10:39:25.786303 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 16:04:46.937516271 +0000 UTC Feb 02 10:39:25 crc kubenswrapper[4782]: I0202 10:39:25.820791 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:39:25 crc kubenswrapper[4782]: E0202 10:39:25.820952 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:39:25 crc kubenswrapper[4782]: I0202 10:39:25.835871 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:25 crc kubenswrapper[4782]: I0202 10:39:25.835901 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:25 crc kubenswrapper[4782]: I0202 10:39:25.835912 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:25 crc kubenswrapper[4782]: I0202 10:39:25.835934 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:25 crc kubenswrapper[4782]: I0202 10:39:25.835946 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:25Z","lastTransitionTime":"2026-02-02T10:39:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:25 crc kubenswrapper[4782]: I0202 10:39:25.937790 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:25 crc kubenswrapper[4782]: I0202 10:39:25.937827 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:25 crc kubenswrapper[4782]: I0202 10:39:25.937836 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:25 crc kubenswrapper[4782]: I0202 10:39:25.937849 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:25 crc kubenswrapper[4782]: I0202 10:39:25.937859 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:25Z","lastTransitionTime":"2026-02-02T10:39:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:26 crc kubenswrapper[4782]: I0202 10:39:26.040309 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:26 crc kubenswrapper[4782]: I0202 10:39:26.040349 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:26 crc kubenswrapper[4782]: I0202 10:39:26.040359 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:26 crc kubenswrapper[4782]: I0202 10:39:26.040376 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:26 crc kubenswrapper[4782]: I0202 10:39:26.040387 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:26Z","lastTransitionTime":"2026-02-02T10:39:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:26 crc kubenswrapper[4782]: I0202 10:39:26.132085 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-prbrn_2642ee4e-c16a-4e6e-9654-a67666f1bff8/ovnkube-controller/1.log" Feb 02 10:39:26 crc kubenswrapper[4782]: I0202 10:39:26.132916 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-prbrn_2642ee4e-c16a-4e6e-9654-a67666f1bff8/ovnkube-controller/0.log" Feb 02 10:39:26 crc kubenswrapper[4782]: I0202 10:39:26.135252 4782 generic.go:334] "Generic (PLEG): container finished" podID="2642ee4e-c16a-4e6e-9654-a67666f1bff8" containerID="648f8ec38e8c54dd9feeec43b13f9ae38917d67d8be850ecfb2bcbd51b68a592" exitCode=1 Feb 02 10:39:26 crc kubenswrapper[4782]: I0202 10:39:26.135296 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" event={"ID":"2642ee4e-c16a-4e6e-9654-a67666f1bff8","Type":"ContainerDied","Data":"648f8ec38e8c54dd9feeec43b13f9ae38917d67d8be850ecfb2bcbd51b68a592"} Feb 02 10:39:26 crc kubenswrapper[4782]: I0202 10:39:26.135337 4782 scope.go:117] "RemoveContainer" containerID="6d829a651d8db6d5abb04fc40c07ae057cc69d7a352de1d104e039e86f64c0b4" Feb 02 10:39:26 crc kubenswrapper[4782]: I0202 10:39:26.136123 4782 scope.go:117] "RemoveContainer" containerID="648f8ec38e8c54dd9feeec43b13f9ae38917d67d8be850ecfb2bcbd51b68a592" Feb 02 10:39:26 crc kubenswrapper[4782]: E0202 10:39:26.136332 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-prbrn_openshift-ovn-kubernetes(2642ee4e-c16a-4e6e-9654-a67666f1bff8)\"" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" podUID="2642ee4e-c16a-4e6e-9654-a67666f1bff8" Feb 02 10:39:26 crc kubenswrapper[4782]: I0202 10:39:26.145476 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:26 crc kubenswrapper[4782]: I0202 10:39:26.145523 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:26 crc kubenswrapper[4782]: I0202 10:39:26.145535 4782 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 02 10:39:26 crc kubenswrapper[4782]: I0202 10:39:26.145551 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:26 crc kubenswrapper[4782]: I0202 10:39:26.145562 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:26Z","lastTransitionTime":"2026-02-02T10:39:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:26 crc kubenswrapper[4782]: I0202 10:39:26.150728 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://959c7dca46e7a68f4472c3aa2104c49d4e47190f0fab5c7ad2225e27e5c4585a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:26Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:26 crc kubenswrapper[4782]: I0202 10:39:26.167418 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2642ee4e-c16a-4e6e-9654-a67666f1bff8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7728a9bcd84ff5e2fad4910e321a0461ab043b453efe05ddc36d018dba38315a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://371a331afaa5bd7d90cd98ba3363865d54d8cac621ba37a40e51f0cddb4eb02b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6886dd3d4ca2fac733bc3cc105d17aff972828ba91abf5373769f272e5454ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://189c170f233964d95aebd085e016304a66076971b44f050d41d755305d11743f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://540e6587db608dea5be7483bfc3a5c4d44da51ecf3e9bfe12550aa8bf5a74fac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7645944a876f5f8139960a66e91d19644df23b8bfbdb64c9d14eec7298ac150\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://648f8ec38e8c54dd9feeec43b13f9ae38917d67d8be850ecfb2bcbd51b68a592\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d829a651d8db6d5abb04fc40c07ae057cc69d7a352de1d104e039e86f64c0b4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"message\\\":\\\"rom k8s.io/client-go/informers/factory.go:160\\\\nI0202 10:39:22.248476 5945 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 10:39:22.248620 5945 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0202 10:39:22.249325 5945 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0202 10:39:22.249385 5945 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0202 10:39:22.249394 5945 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0202 10:39:22.249407 5945 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0202 10:39:22.249422 5945 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0202 10:39:22.249443 5945 factory.go:656] Stopping watch factory\\\\nI0202 10:39:22.249459 5945 ovnkube.go:599] Stopped ovnkube\\\\nI0202 10:39:22.249490 5945 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0202 10:39:22.249499 5945 handler.go:208] Removed *v1.Node event handler 2\\\\nI0202 10:39:22.249504 5945 handler.go:208] Removed *v1.Node event handler 7\\\\nI0202 10:39:22.249510 5945 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0202 10:39:2\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:16Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://648f8ec38e8c54dd9feeec43b13f9ae38917d67d8be850ecfb2bcbd51b68a592\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T10:39:25Z\\\",\\\"message\\\":\\\"35118-cf1b-4568-bd31-2f50308bf69d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0202 10:39:24.995439 6179 
model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:73135118-cf1b-4568-bd31-2f50308bf69d}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0202 10:39:24.994996 6179 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-additional-cni-plugins-8lwfx in node crc\\\\nI0202 10:39:24.995506 6179 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-additional-cni-plugins-8lwfx after 0 failed attempt(s)\\\\nF0202 10:39:24.994899 6179 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webho\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://344eacf0d6239238cbf13b94f80d88a36589057a177a0f7a3d5629f7975c02d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-prbrn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:26Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:26 crc kubenswrapper[4782]: I0202 10:39:26.181836 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfc52d94-656d-4294-b105-0f83d22c9664\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66dc97daee138ae6c8403d57638f6bd3cc1c5a08ce7e1631129017dc7046ebc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9de866cf92cbd982802a4bfa9c7f6b9839c69efb72d01f3f52e65e2355d47ded\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a81a5434992af16888e8ed5ec9555e57dd4485dd6c396816421ad07c181dae8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd62da31b65707d98011292c190f6f44ab2e60bd1339f47cc289d0b445425b60\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://b089d9f482be084f3c2be3bc369e38cef6607d91af0c82b338da5c29883f6057\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 10:39:00.270088 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:39:00.270415 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:39:00.271477 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3666386151/tls.crt::/tmp/serving-cert-3666386151/tls.key\\\\\\\"\\\\nI0202 10:39:00.760414 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:39:00.783275 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:39:00.783307 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:39:00.783330 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:39:00.783335 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 10:39:00.810370 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:39:00.810474 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810498 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 10:39:00.810539 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:39:00.810558 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:39:00.810577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 10:39:00.810783 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 10:39:00.814783 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26b0186a9067ba0a405e849b58fc2f39e93d443ddf69ebd0d4fd4877f45e805\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:26Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:26 crc kubenswrapper[4782]: I0202 10:39:26.196118 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:26Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:26 crc kubenswrapper[4782]: I0202 10:39:26.210171 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:26Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:26 crc kubenswrapper[4782]: I0202 10:39:26.221863 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:26Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:26 crc kubenswrapper[4782]: I0202 10:39:26.237528 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fsqgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04d9744a-e730-45b4-9f0c-bbb5b02cd311\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9855ee45d6787f239c2099e99544f50b4939dc9037eb5f4cfe36af9a0bed8937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7nrfs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fsqgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:26Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:26 crc kubenswrapper[4782]: I0202 10:39:26.247922 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:26 crc kubenswrapper[4782]: I0202 10:39:26.247963 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:26 crc kubenswrapper[4782]: I0202 10:39:26.247977 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:26 crc kubenswrapper[4782]: I0202 10:39:26.247994 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:26 crc kubenswrapper[4782]: I0202 10:39:26.248007 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:26Z","lastTransitionTime":"2026-02-02T10:39:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:26 crc kubenswrapper[4782]: I0202 10:39:26.251335 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8lwfx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1edc5703-bb51-4f8a-9b73-68ba48a40ce8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2765f9fa77bc99e4983b0d6883a7156c960f2dce2c80845cd1e0810199c50eac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e2a2f7b1b3c3236d77b4858e19cacf5554526262f0fc5703460a169d3b26f7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e2a2f7b1b3c3236d77b4858e19cacf5554526262f0fc5703460a169d3b26f7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d41d28339830a5090ce01715a5bd4d985c3081932cb71594b1e928b900dc353\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d41d28339830a5090ce01715a5bd4d985c3081932cb71594b1e928b900dc353\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://615aa0c2b172d7e7ab8a484a6860646491265e3ebb25f57857c45bc591d40335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://615aa0c2b172d7e7ab8a484a6860646491265e3ebb25f57857c45bc591d40335\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://143776c340b6d3345f6902124169235fcf74289d9e107b8e6bdb5023f47b73df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://143776c340b6d3345f6902124169235fcf74289d9e107b8e6bdb5023f47b73df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://754a40bbe5b071214dbd49089838b58b26d7586fdd8479a6f8522ef49c03e255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://754a40bbe5b071214dbd49089838b58b26d7586fdd8479a6f8522ef49c03e255\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e2e1801b719e8d922f05eab190ff758e85d0d05f4e26f8b8a9d5812ac5ffe6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e2e1801b719e8d922f05eab190ff758e85d0d05f4e26f8b8a9d5812ac5ffe6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8lwfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:26Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:26 crc kubenswrapper[4782]: I0202 10:39:26.261066 4782 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x49wn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"324c55ff-8d31-4452-bb4e-2a57fbdb23c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://025c5d7b0067cd9bfd8f87926e7ec57759b83410b2be1bfddc02029f4c8e5f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxkkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e8896767e0b6039745c672852e48a5fceb954162cac8a06257129bcc84efff3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxkkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-x49wn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-02T10:39:26Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:26 crc kubenswrapper[4782]: I0202 10:39:26.270044 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-tv4xc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e23db96-3af7-4c29-b00f-5920a9431f01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpm5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpm5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:22Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-tv4xc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:26Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:26 crc kubenswrapper[4782]: I0202 10:39:26.280613 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/4e23db96-3af7-4c29-b00f-5920a9431f01-metrics-certs\") pod \"network-metrics-daemon-tv4xc\" (UID: \"4e23db96-3af7-4c29-b00f-5920a9431f01\") " pod="openshift-multus/network-metrics-daemon-tv4xc" Feb 02 10:39:26 crc kubenswrapper[4782]: E0202 10:39:26.280804 4782 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 10:39:26 crc kubenswrapper[4782]: E0202 10:39:26.280871 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4e23db96-3af7-4c29-b00f-5920a9431f01-metrics-certs podName:4e23db96-3af7-4c29-b00f-5920a9431f01 nodeName:}" failed. No retries permitted until 2026-02-02 10:39:30.280854534 +0000 UTC m=+50.165047350 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4e23db96-3af7-4c29-b00f-5920a9431f01-metrics-certs") pod "network-metrics-daemon-tv4xc" (UID: "4e23db96-3af7-4c29-b00f-5920a9431f01") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 10:39:26 crc kubenswrapper[4782]: I0202 10:39:26.282802 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83b81d23-4cbf-48f2-a90d-aa5b4cb3ce45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://638971b143e4defe2f04fba48e554aff3023d52863a0eb6f36c29d394de7eeb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18f8ad22dcd04bba0dd895d584affcd359ac6221153a18ee542dee979b9efe25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\
":{\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3c260f298316c830b06f5e3634cd6c09bd10bbaad77b46e427eb4b039eebb53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://745ed5274a247d61c320043043077f9ecb39fa3734db15aefa07a3bbb6225c26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:26Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:26 crc kubenswrapper[4782]: I0202 10:39:26.292865 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7919e98f-cc47-4f3c-9c53-6313850ea7b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://550747a1eaba426f3a6b264d3a5227e51b0171729134224fa76ba55d8ad47c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://362bf5c8cbdc4831ca6ceecf9323bad1217467a40b0e8dd491098c1ae4f42810\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bhdgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:26Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:26 crc kubenswrapper[4782]: I0202 10:39:26.302700 4782 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-dns/node-resolver-fptzv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa0a3c57-fe47-43dd-8905-00df4cae4fb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb00c5826e8950791c82faf09cb4417f633c04908afcaed816c63b81c05aae48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6np9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fptzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:26Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:26 crc kubenswrapper[4782]: I0202 10:39:26.313033 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-thvm5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"70faa63d-a86d-45aa-b6fd-81fa90436da2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb015f1ff28b0f28114d4c5d3c643fdb9af2c24d6d3c4a3f34c051677c815e3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrknp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-thvm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:26Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:26 crc kubenswrapper[4782]: I0202 10:39:26.335324 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59347d2e-80ed-46da-8b2f-87cc48c5b564\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7303c0a57d425d661744c913b418fb647290581f745f973af1507563fa70b860\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10b2c4d81a795c40e715af0855002f75e3018dc14560eb97b551186a48c52744\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://667df661cba8be98b22059905ac9fe87f5d2af093d3184cfead863474441574f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfe76e80657cb875a6dd9b9c977e6120a670fc5
cd1dad2e0346b21a467b00f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58cd2e688da72a74340b25a755164b61039f7615ccda8b24b12b2b4025f89584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:26Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:26 crc kubenswrapper[4782]: I0202 10:39:26.347731 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b6dd1dee15020005db2d8655165e09c72a90eaade00a66c0742c0980ccc669e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:26Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:26 crc kubenswrapper[4782]: I0202 10:39:26.350249 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:26 crc kubenswrapper[4782]: I0202 10:39:26.350298 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:26 crc kubenswrapper[4782]: I0202 10:39:26.350311 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:26 crc kubenswrapper[4782]: I0202 10:39:26.350327 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:26 crc kubenswrapper[4782]: I0202 10:39:26.350338 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:26Z","lastTransitionTime":"2026-02-02T10:39:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:26 crc kubenswrapper[4782]: I0202 10:39:26.360194 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aebafb5717a3d34f83e5f1ba346c51bccea6c08fb6ddc2fadc47d8e1a4df1bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://643eee6982fcc4c02b72fb6daf1a18fc5ae263ebf8ac506d1fa745d6c311b38a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d7732574532
65a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:26Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:26 crc kubenswrapper[4782]: I0202 10:39:26.452590 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:26 crc kubenswrapper[4782]: I0202 10:39:26.452630 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:26 crc kubenswrapper[4782]: I0202 10:39:26.452658 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:26 crc kubenswrapper[4782]: I0202 10:39:26.452672 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:26 crc kubenswrapper[4782]: I0202 10:39:26.452682 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:26Z","lastTransitionTime":"2026-02-02T10:39:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:26 crc kubenswrapper[4782]: I0202 10:39:26.554709 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:26 crc kubenswrapper[4782]: I0202 10:39:26.554746 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:26 crc kubenswrapper[4782]: I0202 10:39:26.554758 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:26 crc kubenswrapper[4782]: I0202 10:39:26.554771 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:26 crc kubenswrapper[4782]: I0202 10:39:26.554781 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:26Z","lastTransitionTime":"2026-02-02T10:39:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:26 crc kubenswrapper[4782]: I0202 10:39:26.657200 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:26 crc kubenswrapper[4782]: I0202 10:39:26.657231 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:26 crc kubenswrapper[4782]: I0202 10:39:26.657239 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:26 crc kubenswrapper[4782]: I0202 10:39:26.657251 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:26 crc kubenswrapper[4782]: I0202 10:39:26.657260 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:26Z","lastTransitionTime":"2026-02-02T10:39:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:26 crc kubenswrapper[4782]: I0202 10:39:26.759385 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:26 crc kubenswrapper[4782]: I0202 10:39:26.759429 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:26 crc kubenswrapper[4782]: I0202 10:39:26.759440 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:26 crc kubenswrapper[4782]: I0202 10:39:26.759456 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:26 crc kubenswrapper[4782]: I0202 10:39:26.759466 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:26Z","lastTransitionTime":"2026-02-02T10:39:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:26 crc kubenswrapper[4782]: I0202 10:39:26.786870 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 01:57:00.268116208 +0000 UTC Feb 02 10:39:26 crc kubenswrapper[4782]: I0202 10:39:26.820230 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tv4xc" Feb 02 10:39:26 crc kubenswrapper[4782]: I0202 10:39:26.820275 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:39:26 crc kubenswrapper[4782]: I0202 10:39:26.820365 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:39:26 crc kubenswrapper[4782]: E0202 10:39:26.820382 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tv4xc" podUID="4e23db96-3af7-4c29-b00f-5920a9431f01" Feb 02 10:39:26 crc kubenswrapper[4782]: E0202 10:39:26.820521 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:39:26 crc kubenswrapper[4782]: E0202 10:39:26.820587 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:39:26 crc kubenswrapper[4782]: I0202 10:39:26.861676 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:26 crc kubenswrapper[4782]: I0202 10:39:26.861718 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:26 crc kubenswrapper[4782]: I0202 10:39:26.861727 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:26 crc kubenswrapper[4782]: I0202 10:39:26.861745 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:26 crc kubenswrapper[4782]: I0202 10:39:26.861755 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:26Z","lastTransitionTime":"2026-02-02T10:39:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:26 crc kubenswrapper[4782]: I0202 10:39:26.964357 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:26 crc kubenswrapper[4782]: I0202 10:39:26.964391 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:26 crc kubenswrapper[4782]: I0202 10:39:26.964399 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:26 crc kubenswrapper[4782]: I0202 10:39:26.964414 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:26 crc kubenswrapper[4782]: I0202 10:39:26.964424 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:26Z","lastTransitionTime":"2026-02-02T10:39:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:27 crc kubenswrapper[4782]: I0202 10:39:27.066767 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:27 crc kubenswrapper[4782]: I0202 10:39:27.066800 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:27 crc kubenswrapper[4782]: I0202 10:39:27.066810 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:27 crc kubenswrapper[4782]: I0202 10:39:27.066822 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:27 crc kubenswrapper[4782]: I0202 10:39:27.066831 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:27Z","lastTransitionTime":"2026-02-02T10:39:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:27 crc kubenswrapper[4782]: I0202 10:39:27.138899 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-prbrn_2642ee4e-c16a-4e6e-9654-a67666f1bff8/ovnkube-controller/1.log" Feb 02 10:39:27 crc kubenswrapper[4782]: I0202 10:39:27.169844 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:27 crc kubenswrapper[4782]: I0202 10:39:27.169877 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:27 crc kubenswrapper[4782]: I0202 10:39:27.169888 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:27 crc kubenswrapper[4782]: I0202 10:39:27.169903 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:27 crc kubenswrapper[4782]: I0202 10:39:27.169914 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:27Z","lastTransitionTime":"2026-02-02T10:39:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:27 crc kubenswrapper[4782]: I0202 10:39:27.272711 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:27 crc kubenswrapper[4782]: I0202 10:39:27.272775 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:27 crc kubenswrapper[4782]: I0202 10:39:27.272787 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:27 crc kubenswrapper[4782]: I0202 10:39:27.272827 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:27 crc kubenswrapper[4782]: I0202 10:39:27.272840 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:27Z","lastTransitionTime":"2026-02-02T10:39:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:27 crc kubenswrapper[4782]: I0202 10:39:27.375619 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:27 crc kubenswrapper[4782]: I0202 10:39:27.375687 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:27 crc kubenswrapper[4782]: I0202 10:39:27.375706 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:27 crc kubenswrapper[4782]: I0202 10:39:27.375726 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:27 crc kubenswrapper[4782]: I0202 10:39:27.375737 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:27Z","lastTransitionTime":"2026-02-02T10:39:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:27 crc kubenswrapper[4782]: I0202 10:39:27.478670 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:27 crc kubenswrapper[4782]: I0202 10:39:27.478715 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:27 crc kubenswrapper[4782]: I0202 10:39:27.478727 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:27 crc kubenswrapper[4782]: I0202 10:39:27.478742 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:27 crc kubenswrapper[4782]: I0202 10:39:27.478753 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:27Z","lastTransitionTime":"2026-02-02T10:39:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:27 crc kubenswrapper[4782]: I0202 10:39:27.593299 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:27 crc kubenswrapper[4782]: I0202 10:39:27.593338 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:27 crc kubenswrapper[4782]: I0202 10:39:27.593349 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:27 crc kubenswrapper[4782]: I0202 10:39:27.593363 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:27 crc kubenswrapper[4782]: I0202 10:39:27.593372 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:27Z","lastTransitionTime":"2026-02-02T10:39:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:27 crc kubenswrapper[4782]: I0202 10:39:27.594314 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:27 crc kubenswrapper[4782]: I0202 10:39:27.594336 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:27 crc kubenswrapper[4782]: I0202 10:39:27.594345 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:27 crc kubenswrapper[4782]: I0202 10:39:27.594357 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:27 crc kubenswrapper[4782]: I0202 10:39:27.594366 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:27Z","lastTransitionTime":"2026-02-02T10:39:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:27 crc kubenswrapper[4782]: E0202 10:39:27.628820 4782 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9f06aea5-54f4-4b11-8fec-22fbe76ec89b\\\",\\\"systemUUID\\\":\\\"b85e9547-662e-4455-bbaa-2d2f2aaad904\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:27Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:27 crc kubenswrapper[4782]: I0202 10:39:27.634825 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:27 crc kubenswrapper[4782]: I0202 10:39:27.634881 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 10:39:27 crc kubenswrapper[4782]: I0202 10:39:27.634895 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:27 crc kubenswrapper[4782]: I0202 10:39:27.634916 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:27 crc kubenswrapper[4782]: I0202 10:39:27.634932 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:27Z","lastTransitionTime":"2026-02-02T10:39:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:27 crc kubenswrapper[4782]: E0202 10:39:27.658975 4782 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9f06aea5-54f4-4b11-8fec-22fbe76ec89b\\\",\\\"systemUUID\\\":\\\"b85e9547-662e-4455-bbaa-2d2f2aaad904\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:27Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:27 crc kubenswrapper[4782]: I0202 10:39:27.666348 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:27 crc kubenswrapper[4782]: I0202 10:39:27.666396 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 10:39:27 crc kubenswrapper[4782]: I0202 10:39:27.666408 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:27 crc kubenswrapper[4782]: I0202 10:39:27.666430 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:27 crc kubenswrapper[4782]: I0202 10:39:27.666445 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:27Z","lastTransitionTime":"2026-02-02T10:39:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:27 crc kubenswrapper[4782]: E0202 10:39:27.684673 4782 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9f06aea5-54f4-4b11-8fec-22fbe76ec89b\\\",\\\"systemUUID\\\":\\\"b85e9547-662e-4455-bbaa-2d2f2aaad904\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:27Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:27 crc kubenswrapper[4782]: I0202 10:39:27.693742 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:27 crc kubenswrapper[4782]: I0202 10:39:27.693808 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 10:39:27 crc kubenswrapper[4782]: I0202 10:39:27.693823 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:27 crc kubenswrapper[4782]: I0202 10:39:27.693843 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:27 crc kubenswrapper[4782]: I0202 10:39:27.693857 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:27Z","lastTransitionTime":"2026-02-02T10:39:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:27 crc kubenswrapper[4782]: E0202 10:39:27.710883 4782 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9f06aea5-54f4-4b11-8fec-22fbe76ec89b\\\",\\\"systemUUID\\\":\\\"b85e9547-662e-4455-bbaa-2d2f2aaad904\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:27Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:27 crc kubenswrapper[4782]: I0202 10:39:27.715390 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:27 crc kubenswrapper[4782]: I0202 10:39:27.715479 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 10:39:27 crc kubenswrapper[4782]: I0202 10:39:27.715496 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:27 crc kubenswrapper[4782]: I0202 10:39:27.715516 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:27 crc kubenswrapper[4782]: I0202 10:39:27.715530 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:27Z","lastTransitionTime":"2026-02-02T10:39:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:27 crc kubenswrapper[4782]: E0202 10:39:27.731778 4782 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9f06aea5-54f4-4b11-8fec-22fbe76ec89b\\\",\\\"systemUUID\\\":\\\"b85e9547-662e-4455-bbaa-2d2f2aaad904\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:27Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:27 crc kubenswrapper[4782]: E0202 10:39:27.732045 4782 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 02 10:39:27 crc kubenswrapper[4782]: I0202 10:39:27.737212 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 02 10:39:27 crc kubenswrapper[4782]: I0202 10:39:27.737257 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:27 crc kubenswrapper[4782]: I0202 10:39:27.737268 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:27 crc kubenswrapper[4782]: I0202 10:39:27.737299 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:27 crc kubenswrapper[4782]: I0202 10:39:27.737318 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:27Z","lastTransitionTime":"2026-02-02T10:39:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:27 crc kubenswrapper[4782]: I0202 10:39:27.787116 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 10:59:22.24361391 +0000 UTC Feb 02 10:39:27 crc kubenswrapper[4782]: I0202 10:39:27.820566 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:39:27 crc kubenswrapper[4782]: E0202 10:39:27.820791 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:39:27 crc kubenswrapper[4782]: I0202 10:39:27.840359 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:27 crc kubenswrapper[4782]: I0202 10:39:27.840436 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:27 crc kubenswrapper[4782]: I0202 10:39:27.840449 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:27 crc kubenswrapper[4782]: I0202 10:39:27.840471 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:27 crc kubenswrapper[4782]: I0202 10:39:27.840483 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:27Z","lastTransitionTime":"2026-02-02T10:39:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:27 crc kubenswrapper[4782]: I0202 10:39:27.946347 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:27 crc kubenswrapper[4782]: I0202 10:39:27.946420 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:27 crc kubenswrapper[4782]: I0202 10:39:27.946438 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:27 crc kubenswrapper[4782]: I0202 10:39:27.946462 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:27 crc kubenswrapper[4782]: I0202 10:39:27.946479 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:27Z","lastTransitionTime":"2026-02-02T10:39:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:28 crc kubenswrapper[4782]: I0202 10:39:28.050277 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:28 crc kubenswrapper[4782]: I0202 10:39:28.050346 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:28 crc kubenswrapper[4782]: I0202 10:39:28.050359 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:28 crc kubenswrapper[4782]: I0202 10:39:28.050380 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:28 crc kubenswrapper[4782]: I0202 10:39:28.050393 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:28Z","lastTransitionTime":"2026-02-02T10:39:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:28 crc kubenswrapper[4782]: I0202 10:39:28.152659 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:28 crc kubenswrapper[4782]: I0202 10:39:28.152695 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:28 crc kubenswrapper[4782]: I0202 10:39:28.152704 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:28 crc kubenswrapper[4782]: I0202 10:39:28.152719 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:28 crc kubenswrapper[4782]: I0202 10:39:28.152728 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:28Z","lastTransitionTime":"2026-02-02T10:39:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:28 crc kubenswrapper[4782]: I0202 10:39:28.255696 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:28 crc kubenswrapper[4782]: I0202 10:39:28.255768 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:28 crc kubenswrapper[4782]: I0202 10:39:28.255780 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:28 crc kubenswrapper[4782]: I0202 10:39:28.255801 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:28 crc kubenswrapper[4782]: I0202 10:39:28.255814 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:28Z","lastTransitionTime":"2026-02-02T10:39:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:28 crc kubenswrapper[4782]: I0202 10:39:28.358484 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:28 crc kubenswrapper[4782]: I0202 10:39:28.358944 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:28 crc kubenswrapper[4782]: I0202 10:39:28.359080 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:28 crc kubenswrapper[4782]: I0202 10:39:28.359263 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:28 crc kubenswrapper[4782]: I0202 10:39:28.359458 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:28Z","lastTransitionTime":"2026-02-02T10:39:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:28 crc kubenswrapper[4782]: I0202 10:39:28.462876 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:28 crc kubenswrapper[4782]: I0202 10:39:28.462920 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:28 crc kubenswrapper[4782]: I0202 10:39:28.462931 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:28 crc kubenswrapper[4782]: I0202 10:39:28.462946 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:28 crc kubenswrapper[4782]: I0202 10:39:28.462958 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:28Z","lastTransitionTime":"2026-02-02T10:39:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:28 crc kubenswrapper[4782]: I0202 10:39:28.566094 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:28 crc kubenswrapper[4782]: I0202 10:39:28.566151 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:28 crc kubenswrapper[4782]: I0202 10:39:28.566162 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:28 crc kubenswrapper[4782]: I0202 10:39:28.566187 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:28 crc kubenswrapper[4782]: I0202 10:39:28.566200 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:28Z","lastTransitionTime":"2026-02-02T10:39:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:28 crc kubenswrapper[4782]: I0202 10:39:28.670074 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:28 crc kubenswrapper[4782]: I0202 10:39:28.670132 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:28 crc kubenswrapper[4782]: I0202 10:39:28.670146 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:28 crc kubenswrapper[4782]: I0202 10:39:28.670167 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:28 crc kubenswrapper[4782]: I0202 10:39:28.670182 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:28Z","lastTransitionTime":"2026-02-02T10:39:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:28 crc kubenswrapper[4782]: I0202 10:39:28.773779 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:28 crc kubenswrapper[4782]: I0202 10:39:28.774202 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:28 crc kubenswrapper[4782]: I0202 10:39:28.774286 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:28 crc kubenswrapper[4782]: I0202 10:39:28.774378 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:28 crc kubenswrapper[4782]: I0202 10:39:28.774460 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:28Z","lastTransitionTime":"2026-02-02T10:39:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:28 crc kubenswrapper[4782]: I0202 10:39:28.788276 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 09:26:17.480699684 +0000 UTC Feb 02 10:39:28 crc kubenswrapper[4782]: I0202 10:39:28.821018 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tv4xc" Feb 02 10:39:28 crc kubenswrapper[4782]: I0202 10:39:28.821018 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:39:28 crc kubenswrapper[4782]: I0202 10:39:28.821074 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:39:28 crc kubenswrapper[4782]: E0202 10:39:28.821820 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tv4xc" podUID="4e23db96-3af7-4c29-b00f-5920a9431f01" Feb 02 10:39:28 crc kubenswrapper[4782]: E0202 10:39:28.821901 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:39:28 crc kubenswrapper[4782]: E0202 10:39:28.822008 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:39:28 crc kubenswrapper[4782]: I0202 10:39:28.878186 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:28 crc kubenswrapper[4782]: I0202 10:39:28.878252 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:28 crc kubenswrapper[4782]: I0202 10:39:28.878264 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:28 crc kubenswrapper[4782]: I0202 10:39:28.878285 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:28 crc kubenswrapper[4782]: I0202 10:39:28.878299 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:28Z","lastTransitionTime":"2026-02-02T10:39:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:28 crc kubenswrapper[4782]: I0202 10:39:28.981922 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:28 crc kubenswrapper[4782]: I0202 10:39:28.981979 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:28 crc kubenswrapper[4782]: I0202 10:39:28.981990 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:28 crc kubenswrapper[4782]: I0202 10:39:28.982010 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:28 crc kubenswrapper[4782]: I0202 10:39:28.982023 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:28Z","lastTransitionTime":"2026-02-02T10:39:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:29 crc kubenswrapper[4782]: I0202 10:39:29.085509 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:29 crc kubenswrapper[4782]: I0202 10:39:29.085594 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:29 crc kubenswrapper[4782]: I0202 10:39:29.085613 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:29 crc kubenswrapper[4782]: I0202 10:39:29.085679 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:29 crc kubenswrapper[4782]: I0202 10:39:29.085699 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:29Z","lastTransitionTime":"2026-02-02T10:39:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:29 crc kubenswrapper[4782]: I0202 10:39:29.188779 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:29 crc kubenswrapper[4782]: I0202 10:39:29.188826 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:29 crc kubenswrapper[4782]: I0202 10:39:29.188844 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:29 crc kubenswrapper[4782]: I0202 10:39:29.188865 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:29 crc kubenswrapper[4782]: I0202 10:39:29.188878 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:29Z","lastTransitionTime":"2026-02-02T10:39:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:29 crc kubenswrapper[4782]: I0202 10:39:29.291532 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:29 crc kubenswrapper[4782]: I0202 10:39:29.291906 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:29 crc kubenswrapper[4782]: I0202 10:39:29.291916 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:29 crc kubenswrapper[4782]: I0202 10:39:29.291933 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:29 crc kubenswrapper[4782]: I0202 10:39:29.291950 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:29Z","lastTransitionTime":"2026-02-02T10:39:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:29 crc kubenswrapper[4782]: I0202 10:39:29.395282 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:29 crc kubenswrapper[4782]: I0202 10:39:29.395339 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:29 crc kubenswrapper[4782]: I0202 10:39:29.395351 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:29 crc kubenswrapper[4782]: I0202 10:39:29.395364 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:29 crc kubenswrapper[4782]: I0202 10:39:29.395373 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:29Z","lastTransitionTime":"2026-02-02T10:39:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:29 crc kubenswrapper[4782]: I0202 10:39:29.497596 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:29 crc kubenswrapper[4782]: I0202 10:39:29.497686 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:29 crc kubenswrapper[4782]: I0202 10:39:29.497702 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:29 crc kubenswrapper[4782]: I0202 10:39:29.497727 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:29 crc kubenswrapper[4782]: I0202 10:39:29.497741 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:29Z","lastTransitionTime":"2026-02-02T10:39:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:29 crc kubenswrapper[4782]: I0202 10:39:29.600397 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:29 crc kubenswrapper[4782]: I0202 10:39:29.600435 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:29 crc kubenswrapper[4782]: I0202 10:39:29.600445 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:29 crc kubenswrapper[4782]: I0202 10:39:29.600458 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:29 crc kubenswrapper[4782]: I0202 10:39:29.600468 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:29Z","lastTransitionTime":"2026-02-02T10:39:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:29 crc kubenswrapper[4782]: I0202 10:39:29.703859 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:29 crc kubenswrapper[4782]: I0202 10:39:29.703929 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:29 crc kubenswrapper[4782]: I0202 10:39:29.703941 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:29 crc kubenswrapper[4782]: I0202 10:39:29.703965 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:29 crc kubenswrapper[4782]: I0202 10:39:29.703979 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:29Z","lastTransitionTime":"2026-02-02T10:39:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:29 crc kubenswrapper[4782]: I0202 10:39:29.788759 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 14:30:06.360336849 +0000 UTC Feb 02 10:39:29 crc kubenswrapper[4782]: I0202 10:39:29.807263 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:29 crc kubenswrapper[4782]: I0202 10:39:29.807329 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:29 crc kubenswrapper[4782]: I0202 10:39:29.807342 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:29 crc kubenswrapper[4782]: I0202 10:39:29.807358 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:29 crc kubenswrapper[4782]: I0202 10:39:29.807370 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:29Z","lastTransitionTime":"2026-02-02T10:39:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:29 crc kubenswrapper[4782]: I0202 10:39:29.834129 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:39:29 crc kubenswrapper[4782]: E0202 10:39:29.834276 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:39:29 crc kubenswrapper[4782]: I0202 10:39:29.909907 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:29 crc kubenswrapper[4782]: I0202 10:39:29.910198 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:29 crc kubenswrapper[4782]: I0202 10:39:29.910299 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:29 crc kubenswrapper[4782]: I0202 10:39:29.910428 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:29 crc kubenswrapper[4782]: I0202 10:39:29.910535 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:29Z","lastTransitionTime":"2026-02-02T10:39:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:30 crc kubenswrapper[4782]: I0202 10:39:30.013326 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:30 crc kubenswrapper[4782]: I0202 10:39:30.013364 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:30 crc kubenswrapper[4782]: I0202 10:39:30.013374 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:30 crc kubenswrapper[4782]: I0202 10:39:30.013387 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:30 crc kubenswrapper[4782]: I0202 10:39:30.013396 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:30Z","lastTransitionTime":"2026-02-02T10:39:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:30 crc kubenswrapper[4782]: I0202 10:39:30.116447 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:30 crc kubenswrapper[4782]: I0202 10:39:30.116490 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:30 crc kubenswrapper[4782]: I0202 10:39:30.116499 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:30 crc kubenswrapper[4782]: I0202 10:39:30.116513 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:30 crc kubenswrapper[4782]: I0202 10:39:30.116523 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:30Z","lastTransitionTime":"2026-02-02T10:39:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:30 crc kubenswrapper[4782]: I0202 10:39:30.218745 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:30 crc kubenswrapper[4782]: I0202 10:39:30.218775 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:30 crc kubenswrapper[4782]: I0202 10:39:30.218784 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:30 crc kubenswrapper[4782]: I0202 10:39:30.218799 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:30 crc kubenswrapper[4782]: I0202 10:39:30.218810 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:30Z","lastTransitionTime":"2026-02-02T10:39:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:30 crc kubenswrapper[4782]: I0202 10:39:30.320621 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:30 crc kubenswrapper[4782]: I0202 10:39:30.320677 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:30 crc kubenswrapper[4782]: I0202 10:39:30.320685 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:30 crc kubenswrapper[4782]: I0202 10:39:30.320699 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:30 crc kubenswrapper[4782]: I0202 10:39:30.320709 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:30Z","lastTransitionTime":"2026-02-02T10:39:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:30 crc kubenswrapper[4782]: I0202 10:39:30.339103 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4e23db96-3af7-4c29-b00f-5920a9431f01-metrics-certs\") pod \"network-metrics-daemon-tv4xc\" (UID: \"4e23db96-3af7-4c29-b00f-5920a9431f01\") " pod="openshift-multus/network-metrics-daemon-tv4xc" Feb 02 10:39:30 crc kubenswrapper[4782]: E0202 10:39:30.339252 4782 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 10:39:30 crc kubenswrapper[4782]: E0202 10:39:30.339352 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4e23db96-3af7-4c29-b00f-5920a9431f01-metrics-certs podName:4e23db96-3af7-4c29-b00f-5920a9431f01 nodeName:}" failed. No retries permitted until 2026-02-02 10:39:38.339326795 +0000 UTC m=+58.223519581 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4e23db96-3af7-4c29-b00f-5920a9431f01-metrics-certs") pod "network-metrics-daemon-tv4xc" (UID: "4e23db96-3af7-4c29-b00f-5920a9431f01") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 10:39:30 crc kubenswrapper[4782]: I0202 10:39:30.423262 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:30 crc kubenswrapper[4782]: I0202 10:39:30.423291 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:30 crc kubenswrapper[4782]: I0202 10:39:30.423299 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:30 crc kubenswrapper[4782]: I0202 10:39:30.423312 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:30 crc kubenswrapper[4782]: I0202 10:39:30.423321 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:30Z","lastTransitionTime":"2026-02-02T10:39:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:30 crc kubenswrapper[4782]: I0202 10:39:30.525086 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:30 crc kubenswrapper[4782]: I0202 10:39:30.525118 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:30 crc kubenswrapper[4782]: I0202 10:39:30.525128 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:30 crc kubenswrapper[4782]: I0202 10:39:30.525146 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:30 crc kubenswrapper[4782]: I0202 10:39:30.525156 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:30Z","lastTransitionTime":"2026-02-02T10:39:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:30 crc kubenswrapper[4782]: I0202 10:39:30.627053 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:30 crc kubenswrapper[4782]: I0202 10:39:30.627102 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:30 crc kubenswrapper[4782]: I0202 10:39:30.627113 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:30 crc kubenswrapper[4782]: I0202 10:39:30.627131 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:30 crc kubenswrapper[4782]: I0202 10:39:30.627143 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:30Z","lastTransitionTime":"2026-02-02T10:39:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:30 crc kubenswrapper[4782]: I0202 10:39:30.729347 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:30 crc kubenswrapper[4782]: I0202 10:39:30.729395 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:30 crc kubenswrapper[4782]: I0202 10:39:30.729407 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:30 crc kubenswrapper[4782]: I0202 10:39:30.729423 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:30 crc kubenswrapper[4782]: I0202 10:39:30.729434 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:30Z","lastTransitionTime":"2026-02-02T10:39:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:30 crc kubenswrapper[4782]: I0202 10:39:30.789487 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 07:39:24.813887658 +0000 UTC Feb 02 10:39:30 crc kubenswrapper[4782]: I0202 10:39:30.794778 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" Feb 02 10:39:30 crc kubenswrapper[4782]: I0202 10:39:30.795762 4782 scope.go:117] "RemoveContainer" containerID="648f8ec38e8c54dd9feeec43b13f9ae38917d67d8be850ecfb2bcbd51b68a592" Feb 02 10:39:30 crc kubenswrapper[4782]: E0202 10:39:30.795931 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-prbrn_openshift-ovn-kubernetes(2642ee4e-c16a-4e6e-9654-a67666f1bff8)\"" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" podUID="2642ee4e-c16a-4e6e-9654-a67666f1bff8" Feb 02 10:39:30 crc kubenswrapper[4782]: I0202 10:39:30.806558 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fptzv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa0a3c57-fe47-43dd-8905-00df4cae4fb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb00c5826e8950791c82faf09cb4417f633c04908afcaed816c63b81c05aae48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6np9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fptzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:30Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:30 crc kubenswrapper[4782]: I0202 10:39:30.816316 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-thvm5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70faa63d-a86d-45aa-b6fd-81fa90436da2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb015f1ff28b0f28114d4c5d3c643fdb9af2c24d6d3c4a3f34c051677c815e3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrknp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-thvm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:30Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:30 crc kubenswrapper[4782]: I0202 10:39:30.820592 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tv4xc" Feb 02 10:39:30 crc kubenswrapper[4782]: I0202 10:39:30.820592 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:39:30 crc kubenswrapper[4782]: E0202 10:39:30.820702 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tv4xc" podUID="4e23db96-3af7-4c29-b00f-5920a9431f01" Feb 02 10:39:30 crc kubenswrapper[4782]: E0202 10:39:30.820745 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:39:30 crc kubenswrapper[4782]: I0202 10:39:30.820608 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:39:30 crc kubenswrapper[4782]: E0202 10:39:30.821059 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:39:30 crc kubenswrapper[4782]: I0202 10:39:30.831744 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:30 crc kubenswrapper[4782]: I0202 10:39:30.831770 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:30 crc kubenswrapper[4782]: I0202 10:39:30.831780 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:30 crc kubenswrapper[4782]: I0202 10:39:30.831792 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:30 crc kubenswrapper[4782]: I0202 10:39:30.831801 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:30Z","lastTransitionTime":"2026-02-02T10:39:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:30 crc kubenswrapper[4782]: I0202 10:39:30.839140 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59347d2e-80ed-46da-8b2f-87cc48c5b564\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7303c0a57d425d661744c913b418fb647290581f745f973af1507563fa70b860\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10b2c4d81a795c40e715af0855002f75e3018dc14560eb97b551186a48c52744\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://667df661cba8be98b22059905ac9fe87f5d2af093d3184cfead863474441574f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfe76e80657cb875a6dd9b9c977e6120a670fc5cd1dad2e0346b21a467b00f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58cd2e688da72a74340b25a755164b61039f7615ccda8b24b12b2b4025f89584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-02-02T10:38:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:30Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:30 crc kubenswrapper[4782]: I0202 10:39:30.853080 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b6dd1dee15020005db2d8655165e09c72a90eaade00a66c0742c0980ccc669e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:30Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:30 crc kubenswrapper[4782]: I0202 10:39:30.867276 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aebafb5717a3d34f83e5f1ba346c51bccea6c08fb6ddc2fadc47d8e1a4df1bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://643eee6982fcc4c02b72fb6daf1a18fc5ae263ebf8ac506d1fa745d6c311b38a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:30Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:30 crc kubenswrapper[4782]: I0202 10:39:30.878630 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://959c7dca46e7a68f4472c3aa2104c49d4e47190f0fab5c7ad2225e27e5c4585a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:30Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:30 crc kubenswrapper[4782]: I0202 10:39:30.896046 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2642ee4e-c16a-4e6e-9654-a67666f1bff8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7728a9bcd84ff5e2fad4910e321a0461ab043b453efe05ddc36d018dba38315a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://371a331afaa5bd7d90cd98ba3363865d54d8cac621ba37a40e51f0cddb4eb02b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6886dd3d4ca2fac733bc3cc105d17aff972828ba91abf5373769f272e5454ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://189c170f233964d95aebd085e016304a66076971b44f050d41d755305d11743f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://540e6587db608dea5be7483bfc3a5c4d44da51ecf3e9bfe12550aa8bf5a74fac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7645944a876f5f8139960a66e91d19644df23b8bfbdb64c9d14eec7298ac150\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://648f8ec38e8c54dd9feeec43b13f9ae38917d67d
8be850ecfb2bcbd51b68a592\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://648f8ec38e8c54dd9feeec43b13f9ae38917d67d8be850ecfb2bcbd51b68a592\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T10:39:25Z\\\",\\\"message\\\":\\\"35118-cf1b-4568-bd31-2f50308bf69d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0202 10:39:24.995439 6179 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:73135118-cf1b-4568-bd31-2f50308bf69d}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0202 10:39:24.994996 6179 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-additional-cni-plugins-8lwfx in node crc\\\\nI0202 10:39:24.995506 6179 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-additional-cni-plugins-8lwfx after 0 failed attempt(s)\\\\nF0202 10:39:24.994899 6179 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webho\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-prbrn_openshift-ovn-kubernetes(2642ee4e-c16a-4e6e-9654-a67666f1bff8)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://344eacf0d6239238cbf13b94f80d88a36589057a177a0f7a3d5629f7975c02d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-prbrn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:30Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:30 crc kubenswrapper[4782]: I0202 10:39:30.909069 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfc52d94-656d-4294-b105-0f83d22c9664\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66dc97daee138ae6c8403d57638f6bd3cc1c5a08ce7e1631129017dc7046ebc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9de866cf92cbd982802a4bfa9c7f6b9839c69efb72d01f3f52e65e2355d47ded\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a81a5434992af16888e8ed5ec9555e57dd4485dd6c396816421ad07c181dae8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd62da31b65707d98011292c190f6f44ab2e60bd1339f47cc289d0b445425b60\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b089d9f482be084f3c2be3bc369e38cef6607d91af0c82b338da5c29883f6057\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 10:39:00.270088 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:39:00.270415 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:39:00.271477 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3666386151/tls.crt::/tmp/serving-cert-3666386151/tls.key\\\\\\\"\\\\nI0202 10:39:00.760414 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:39:00.783275 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:39:00.783307 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:39:00.783330 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:39:00.783335 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 10:39:00.810370 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:39:00.810474 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810498 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 10:39:00.810539 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:39:00.810558 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:39:00.810577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 10:39:00.810783 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 10:39:00.814783 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26b0186a9067ba0a405e849b58fc2f39e93d443ddf69ebd0d4fd4877f45e805\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:30Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:30 crc kubenswrapper[4782]: I0202 10:39:30.921519 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:30Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:30 crc kubenswrapper[4782]: I0202 10:39:30.933131 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:30Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:30 crc kubenswrapper[4782]: I0202 10:39:30.934151 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:30 crc kubenswrapper[4782]: I0202 10:39:30.934201 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:30 crc kubenswrapper[4782]: I0202 10:39:30.934212 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:30 crc kubenswrapper[4782]: I0202 10:39:30.934223 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:30 crc kubenswrapper[4782]: I0202 10:39:30.934232 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:30Z","lastTransitionTime":"2026-02-02T10:39:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:30 crc kubenswrapper[4782]: I0202 10:39:30.945207 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:30Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:30 crc kubenswrapper[4782]: I0202 10:39:30.959083 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fsqgq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"04d9744a-e730-45b4-9f0c-bbb5b02cd311\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9855ee45d6787f239c2099e99544f50b4939dc9037eb5f4cfe36af9a0bed8937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7nrfs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fsqgq\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:30Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:30 crc kubenswrapper[4782]: I0202 10:39:30.983445 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8lwfx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1edc5703-bb51-4f8a-9b73-68ba48a40ce8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2765f9fa77bc99e4983b0d6883a7156c960f2dce2c80845cd1e0810199c50eac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e2a2f7b1b3c3236d77b4858e19cacf5554526262f0fc5703460a169d3b26f7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e2a2f7b1b3c3236d77b4858e19cacf5554526262f0fc5703460a169d3b26f7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d41d28339830a5090ce01715a5bd4d985c3081932cb71594b1e928b900dc353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d41d28339830a5090ce01715a5bd4d985c3081932cb71594b1e928b900dc353\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://615aa0c2b172d7e7ab8a484a6860646491265e3ebb25f57857c45bc591d40335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://615aa0c2b172d7e7ab8a484a6860646491265e3ebb25f57857c45bc591d40335\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://143776c340b6d3345f6902124169235fcf74289d9e107b8e6bdb5023f47b73df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://143776c340b6d3345f6902124169235fcf74289d9e107b8e6bdb5023f47b73df\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2026-02-02T10:39:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://754a40bbe5b071214dbd49089838b58b26d7586fdd8479a6f8522ef49c03e255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://754a40bbe5b071214dbd49089838b58b26d7586fdd8479a6f8522ef49c03e255\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e2e1801b719e8d922f05eab190ff758e85d0d05f4e26f8b8a9d5812ac5ffe6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e2e1801b719e8d922f05eab190ff758e85d0d05f4e26f8b8a9d5812ac5ffe6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8lwfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-02T10:39:30Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:30 crc kubenswrapper[4782]: I0202 10:39:30.996794 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x49wn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"324c55ff-8d31-4452-bb4e-2a57fbdb23c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://025c5d7b0067cd9bfd8f87926e7ec57759b83410b2be1bfddc02029f4c8e5f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxkkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e8896767e0b6039745c672852e48a5fceb954162cac8a06257129bcc84efff3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxkkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-x49wn\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:30Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:31 crc kubenswrapper[4782]: I0202 10:39:31.006960 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-tv4xc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e23db96-3af7-4c29-b00f-5920a9431f01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpm5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpm5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:22Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-tv4xc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:31Z is 
after 2025-08-24T17:21:41Z" Feb 02 10:39:31 crc kubenswrapper[4782]: I0202 10:39:31.016995 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83b81d23-4cbf-48f2-a90d-aa5b4cb3ce45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://638971b143e4defe2f04fba48e554aff3023d52863a0eb6f36c29d394de7eeb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18f8ad22dcd04bba0dd895d584affcd359ac6221153a18ee542dee979b9efe25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3c260f298316c830b06f5e3634cd6c09bd10bbaad77b46e427eb4b039eebb53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"
/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://745ed5274a247d61c320043043077f9ecb39fa3734db15aefa07a3bbb6225c26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:31Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:31 crc kubenswrapper[4782]: I0202 10:39:31.027946 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7919e98f-cc47-4f3c-9c53-6313850ea7b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://550747a1eaba426f3a6b264d3a5227e51b0171729134224fa76ba55d8ad47c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://362bf5c8cbdc4831ca6ceecf9323bad1217467a40b0e8dd491098c1ae4f42810\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bhdgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:31Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:31 crc kubenswrapper[4782]: I0202 10:39:31.036723 4782 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:31 crc kubenswrapper[4782]: I0202 10:39:31.036765 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:31 crc kubenswrapper[4782]: I0202 10:39:31.036776 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:31 crc kubenswrapper[4782]: I0202 10:39:31.036788 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:31 crc kubenswrapper[4782]: I0202 10:39:31.036796 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:31Z","lastTransitionTime":"2026-02-02T10:39:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:31 crc kubenswrapper[4782]: I0202 10:39:31.038827 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:31Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:31 crc kubenswrapper[4782]: I0202 10:39:31.050523 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fsqgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04d9744a-e730-45b4-9f0c-bbb5b02cd311\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9855ee45d6787f239c2099e99544f50b4939dc9037eb5f4cfe36af9a0bed8937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7nrfs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fsqgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:31Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:31 crc kubenswrapper[4782]: I0202 10:39:31.061500 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:31Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:31 crc kubenswrapper[4782]: I0202 10:39:31.071332 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7919e98f-cc47-4f3c-9c53-6313850ea7b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://550747a1eaba426f3a6b264d3a5227e51b0171729134224fa76ba55d8ad47c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://362bf5c8cbdc4831ca6ceecf9323bad1217467a40b0e8dd491098c1ae4f42810\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bhdgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:31Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:31 crc kubenswrapper[4782]: I0202 10:39:31.084504 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8lwfx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1edc5703-bb51-4f8a-9b73-68ba48a40ce8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2765f9fa77bc99e4983b0d6883a7156c960f2dce2c80845cd1e0810199c50eac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e2a2f7b1b3c3236d77b4858e19cacf5554526262f0fc5703460a169d3b26f7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\
"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e2a2f7b1b3c3236d77b4858e19cacf5554526262f0fc5703460a169d3b26f7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d41d28339830a5090ce01715a5bd4d985c3081932cb71594b1e928b900dc353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d41d28339830a5090ce01715a5bd4d985c3081932cb71594b1e928b900dc353\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://615aa0c2b172d7e7ab8a484a6860646491265e3ebb25f57857c45bc591d40335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://615aa0c2b172d7e7ab8a484a6860646491265e3ebb25f57857c45bc591d40335\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount
\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://143776c340b6d3345f6902124169235fcf74289d9e107b8e6bdb5023f47b73df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://143776c340b6d3345f6902124169235fcf74289d9e107b8e6bdb5023f47b73df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://754a40bbe5b071214dbd49089838b58b26d7586fdd8479a6f8522ef49c03e255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://754a40bbe5b071214dbd49089838b58b26d7586fdd8479a6f8522ef49c03e255\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e2e1801b719e8d922f05eab190ff758e85d0d05f4e26f8b8a9d5812ac5ffe6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e2e1801b719e8d922f05eab190ff758e85d0d05f4e26f8b8a9d5812ac5ffe6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\
"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8lwfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:31Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:31 crc kubenswrapper[4782]: I0202 10:39:31.095184 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x49wn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"324c55ff-8d31-4452-bb4e-2a57fbdb23c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://025c5d7b0067cd9bfd8f87926e7ec57759b83410b2be1bfddc02029f4c8e5f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxkkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e8896767e0b6039745c672852e48a5fceb954162cac8a06257129bcc84efff3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state
\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxkkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-x49wn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:31Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:31 crc kubenswrapper[4782]: I0202 10:39:31.106528 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-tv4xc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e23db96-3af7-4c29-b00f-5920a9431f01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpm5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpm5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:22Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-tv4xc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:31Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:31 crc kubenswrapper[4782]: I0202 10:39:31.126203 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"83b81d23-4cbf-48f2-a90d-aa5b4cb3ce45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://638971b143e4defe2f04fba48e554aff3023d52863a0eb6f36c29d394de7eeb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18f8ad22dcd04bba0dd895d584affcd359ac6221153a18ee542dee979b9efe25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3c260f298316c830b06f5e3634cd6c09bd10bbaad77b46e427eb4b039eebb53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://745ed5274a247d61c320043043077f9ecb39fa3734db15aefa07a3bbb6225c26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:31Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:31 crc kubenswrapper[4782]: I0202 10:39:31.138715 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:31 crc kubenswrapper[4782]: I0202 10:39:31.138751 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:31 crc kubenswrapper[4782]: I0202 10:39:31.138761 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:31 crc kubenswrapper[4782]: I0202 10:39:31.138774 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:31 crc kubenswrapper[4782]: I0202 10:39:31.138783 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:31Z","lastTransitionTime":"2026-02-02T10:39:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:31 crc kubenswrapper[4782]: I0202 10:39:31.139079 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b6dd1dee15020005db2d8655165e09c72a90eaade00a66c0742c0980ccc669e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:31Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:31 crc kubenswrapper[4782]: I0202 10:39:31.149173 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aebafb5717a3d34f83e5f1ba346c51bccea6c08fb6ddc2fadc47d8e1a4df1bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://643eee6982fcc4c02b72fb6daf1a18fc5ae263ebf8ac506d1fa745d6c311b38a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:31Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:31 crc kubenswrapper[4782]: I0202 10:39:31.160057 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fptzv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa0a3c57-fe47-43dd-8905-00df4cae4fb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb00c5826e8950791c82faf09cb4417f633c04908afcaed816c63b81c05aae48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6np9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fptzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:31Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:31 crc kubenswrapper[4782]: I0202 10:39:31.169686 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-thvm5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"70faa63d-a86d-45aa-b6fd-81fa90436da2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb015f1ff28b0f28114d4c5d3c643fdb9af2c24d6d3c4a3f34c051677c815e3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrknp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-thvm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:31Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:31 crc kubenswrapper[4782]: I0202 10:39:31.190535 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59347d2e-80ed-46da-8b2f-87cc48c5b564\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7303c0a57d425d661744c913b418fb647290581f745f973af1507563fa70b860\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10b2c4d81a795c40e715af0855002f75e3018dc14560eb97b551186a48c52744\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://667df661cba8be98b22059905ac9fe87f5d2af093d3184cfead863474441574f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfe76e80657cb875a6dd9b9c977e6120a670fc5
cd1dad2e0346b21a467b00f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58cd2e688da72a74340b25a755164b61039f7615ccda8b24b12b2b4025f89584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:31Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:31 crc kubenswrapper[4782]: I0202 10:39:31.203828 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfc52d94-656d-4294-b105-0f83d22c9664\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66dc97daee138ae6c8403d57638f6bd3cc1c5a08ce7e1631129017dc7046ebc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9de866cf92cbd982802a4bfa9c7f6b9839c69efb72d01f3f52e65e2355d47ded\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a81a5434992af16888e8ed5ec9555e57dd4485dd6c396816421ad07c181dae8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd62da31b65707d98011292c190f6f44ab2e60bd1339f47cc289d0b445425b60\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b089d9f482be084f3c2be3bc369e38cef6607d91af0c82b338da5c29883f6057\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 10:39:00.270088 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:39:00.270415 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:39:00.271477 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3666386151/tls.crt::/tmp/serving-cert-3666386151/tls.key\\\\\\\"\\\\nI0202 10:39:00.760414 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:39:00.783275 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:39:00.783307 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:39:00.783330 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:39:00.783335 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 10:39:00.810370 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:39:00.810474 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810498 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 10:39:00.810539 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:39:00.810558 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:39:00.810577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 10:39:00.810783 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 10:39:00.814783 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26b0186a9067ba0a405e849b58fc2f39e93d443ddf69ebd0d4fd4877f45e805\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:31Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:31 crc kubenswrapper[4782]: I0202 10:39:31.215666 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:31Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:31 crc kubenswrapper[4782]: I0202 10:39:31.226717 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://959c7dca46e7a68f4472c3aa2104c49d4e47190f0fab5c7ad2225e27e5c4585a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:31Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:31 crc kubenswrapper[4782]: I0202 10:39:31.241331 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:31 crc kubenswrapper[4782]: I0202 10:39:31.241363 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:31 crc kubenswrapper[4782]: I0202 10:39:31.241373 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:31 crc kubenswrapper[4782]: I0202 10:39:31.241389 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:31 crc kubenswrapper[4782]: I0202 10:39:31.241399 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:31Z","lastTransitionTime":"2026-02-02T10:39:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:31 crc kubenswrapper[4782]: I0202 10:39:31.244227 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2642ee4e-c16a-4e6e-9654-a67666f1bff8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7728a9bcd84ff5e2fad4910e321a0461ab043b453efe05ddc36d018dba38315a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://371a331afaa5bd7d90cd98ba3363865d54d8cac621ba37a40e51f0cddb4eb02b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://b6886dd3d4ca2fac733bc3cc105d17aff972828ba91abf5373769f272e5454ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://189c170f233964d95aebd085e016304a66076971b44f050d41d755305d11743f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://540e6587db608dea5be7483bfc3a5c4d44da51ecf3e9bfe12550aa8bf5a74fac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7645944a876f5f8139960a66e91d19644df23b8bfbdb64c9d14eec7298ac150\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://648f8ec38e8c54dd9feeec43b13f9ae38917d67d8be850ecfb2bcbd51b68a592\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://648f8ec38e8c54dd9feeec43b13f9ae38917d67d8be850ecfb2bcbd51b68a592\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T10:39:25Z\\\",\\\"message\\\":\\\"35118-cf1b-4568-bd31-2f50308bf69d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0202 10:39:24.995439 6179 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:73135118-cf1b-4568-bd31-2f50308bf69d}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0202 10:39:24.994996 6179 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-additional-cni-plugins-8lwfx in node crc\\\\nI0202 10:39:24.995506 6179 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-additional-cni-plugins-8lwfx after 0 failed attempt(s)\\\\nF0202 10:39:24.994899 6179 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webho\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 
10s restarting failed container=ovnkube-controller pod=ovnkube-node-prbrn_openshift-ovn-kubernetes(2642ee4e-c16a-4e6e-9654-a67666f1bff8)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://344eacf0d6239238cbf13b94f80d88a36589057a177a0f7a3d5629f7975c02d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df23
43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-prbrn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:31Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:31 crc kubenswrapper[4782]: I0202 10:39:31.343785 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:31 crc kubenswrapper[4782]: I0202 10:39:31.343827 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:31 crc kubenswrapper[4782]: I0202 10:39:31.343838 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:31 crc kubenswrapper[4782]: I0202 10:39:31.343856 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:31 crc kubenswrapper[4782]: I0202 10:39:31.343866 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:31Z","lastTransitionTime":"2026-02-02T10:39:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:31 crc kubenswrapper[4782]: I0202 10:39:31.379633 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 02 10:39:31 crc kubenswrapper[4782]: I0202 10:39:31.390379 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Feb 02 10:39:31 crc kubenswrapper[4782]: I0202 10:39:31.400342 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2642ee4e-c16a-4e6e-9654-a67666f1bff8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7728a9bcd84ff5e2fad4910e321a0461ab043b453efe05ddc36d018dba38315a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://371a331afaa5bd7d90cd98ba3363865d54d8cac621ba37a40e51f0cddb4eb02b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-l
ib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6886dd3d4ca2fac733bc3cc105d17aff972828ba91abf5373769f272e5454ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://189c170f233964d95aebd085e016304a66076971b44f050d41d755305d11743f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://540e6587db608dea5be7483bfc3a5c4d44da51ecf3e9bfe12550aa8bf5a74fac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"nam
e\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7645944a876f5f8139960a66e91d19644df23b8bfbdb64c9d14eec7298ac150\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://648f8ec38e8c54dd9feeec43b13f9ae38917d67d8be850ecfb2bcbd51b68a592\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://648f8ec38e8c54dd9feeec43b13f9ae38917d67d8be850ecfb2bcbd51b68a592\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T10:39:25Z\\\",\\\"message\\\":\\\"35118-cf1b-4568-bd31-2f50308bf69d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0202 10:39:24.995439 6179 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:73135118-cf1b-4568-bd31-2f50308bf69d}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0202 10:39:24.994996 6179 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-additional-cni-plugins-8lwfx in node crc\\\\nI0202 10:39:24.995506 6179 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-additional-cni-plugins-8lwfx after 0 failed attempt(s)\\\\nF0202 10:39:24.994899 6179 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed 
to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webho\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-prbrn_openshift-ovn-kubernetes(2642ee4e-c16a-4e6e-9654-a67666f1bff8)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://344eacf0d6239238cbf13b94f80d88a36589057a177a0f7a3d5629f7975c02d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run
/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-prbrn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:31Z is after 2025-08-24T17:21:41Z"
Feb 02 10:39:31 crc kubenswrapper[4782]: I0202 10:39:31.412751 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfc52d94-656d-4294-b105-0f83d22c9664\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66dc97daee138ae6c8403d57638f6bd3cc1c5a08ce7e1631129017dc7046ebc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9de866cf92cbd982802a4bfa9c7f6b9839c69efb72d01f3f52e65e2355d47ded\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a81a5434992af16888e8ed5ec9555e57dd4485dd6c396816421ad07c181dae8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd62da31b65707d98011292c190f6f44ab2e60bd1339f47cc289d0b445425b60\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b089d9f482be084f3c2be3bc369e38cef6607d91af0c82b338da5c29883f6057\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 10:39:00.270088 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:39:00.270415 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:39:00.271477 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3666386151/tls.crt::/tmp/serving-cert-3666386151/tls.key\\\\\\\"\\\\nI0202 10:39:00.760414 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:39:00.783275 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:39:00.783307 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:39:00.783330 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:39:00.783335 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 10:39:00.810370 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:39:00.810474 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810498 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 10:39:00.810539 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:39:00.810558 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:39:00.810577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 10:39:00.810783 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 10:39:00.814783 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26b0186a9067ba0a405e849b58fc2f39e93d443ddf69ebd0d4fd4877f45e805\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:31Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:31 crc kubenswrapper[4782]: I0202 10:39:31.422866 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:31Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:31 crc kubenswrapper[4782]: I0202 10:39:31.432679 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://959c7dca46e7a68f4472c3aa2104c49d4e47190f0fab5c7ad2225e27e5c4585a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:31Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:31 crc kubenswrapper[4782]: I0202 10:39:31.444513 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:31Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:31 crc kubenswrapper[4782]: I0202 10:39:31.445936 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:31 crc kubenswrapper[4782]: I0202 10:39:31.445969 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:31 crc kubenswrapper[4782]: I0202 10:39:31.445979 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:31 crc kubenswrapper[4782]: I0202 10:39:31.445996 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:31 crc kubenswrapper[4782]: I0202 10:39:31.446009 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:31Z","lastTransitionTime":"2026-02-02T10:39:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:31 crc kubenswrapper[4782]: I0202 10:39:31.458708 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:31Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:31 crc kubenswrapper[4782]: I0202 10:39:31.472318 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fsqgq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"04d9744a-e730-45b4-9f0c-bbb5b02cd311\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9855ee45d6787f239c2099e99544f50b4939dc9037eb5f4cfe36af9a0bed8937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7nrfs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fsqgq\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:31Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:31 crc kubenswrapper[4782]: I0202 10:39:31.485419 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x49wn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"324c55ff-8d31-4452-bb4e-2a57fbdb23c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://025c5d7b0067cd9bfd8f87926e7ec57759b83410b2be1bfddc02029f4c8e5f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxkkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e8896767e0b6039745c672852e48a5fceb954162cac8a06257129bcc84efff3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxkkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-x49wn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:31Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:31 crc kubenswrapper[4782]: I0202 10:39:31.495322 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-tv4xc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e23db96-3af7-4c29-b00f-5920a9431f01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpm5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpm5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:22Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-tv4xc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:31Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:31 crc kubenswrapper[4782]: I0202 10:39:31.508990 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83b81d23-4cbf-48f2-a90d-aa5b4cb3ce45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://638971b143e4defe2f04fba48e554aff3023d52863a0eb6f36c29d394de7eeb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18f8ad22dcd04bba0dd895d584affcd359ac6221153a18ee542dee979b9efe25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3c260f298316c830b06f5e3634cd6c09bd10bbaad77b46e427eb4b039eebb53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026
-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://745ed5274a247d61c320043043077f9ecb39fa3734db15aefa07a3bbb6225c26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:31Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:31 crc kubenswrapper[4782]: I0202 10:39:31.520964 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7919e98f-cc47-4f3c-9c53-6313850ea7b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://550747a1eaba426f3a6b264d3a5227e51b0171729134224fa76ba55d8ad47c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://362bf5c8cbdc4831ca6ceecf9323bad1217467a40b0e8dd491098c1ae4f42810\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bhdgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:31Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:31 crc kubenswrapper[4782]: I0202 10:39:31.534759 4782 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8lwfx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1edc5703-bb51-4f8a-9b73-68ba48a40ce8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2765f9fa77bc99e4983b0d6883a7156c960f2dce2c80845cd1e0810199c50eac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e2a2f7b1b3c3236d77b4858e19cacf5554526262f0fc5703460a169d3b26f7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e2a2f7b1b3c3236d77b4858e19cacf5554526262f0fc5703460a169d3b26f7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d41d28339830a5090ce01715a5bd4d985c3081932cb71594b1e928b900dc353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d41d28339830a5090ce01715a5bd4d985c3081932cb71594b1e928b900dc353\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://615aa0c2b172d7e7ab8a484a6860646491265e3ebb25f57857c45bc591d40335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://615aa0c2b172d7e7ab8a484a6860646491265e3ebb25f57857c45bc591d40335\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://143776c340b6d3345f6902124169235fcf74289d9e107b8e6bdb5023f47b73df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://143776c340b6d3345f6902124169235fcf74289d9e107b8e6bdb5023f47b73df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://754a40bbe5b071214dbd49089838b58b26d7586fdd8479a6f8522ef49c03e255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://754a40bbe5b071214dbd49089838b58b26d7586fdd8479a6f8522ef49c03e255\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e2e1801b719e8d922f05eab190ff758e85d0d05f4e26f8b8a9d5812ac5ffe6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e2e1801b719e8d922f05eab190ff758e85d0d05f4e26f8b8a9d5812ac5ffe6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8lwfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:31Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:31 crc kubenswrapper[4782]: I0202 10:39:31.544279 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-thvm5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"70faa63d-a86d-45aa-b6fd-81fa90436da2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb015f1ff28b0f28114d4c5d3c643fdb9af2c24d6d3c4a3f34c051677c815e3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrknp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-thvm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:31Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:31 crc kubenswrapper[4782]: I0202 10:39:31.547941 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:31 crc kubenswrapper[4782]: I0202 10:39:31.547971 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:31 crc kubenswrapper[4782]: I0202 10:39:31.547983 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:31 crc kubenswrapper[4782]: I0202 10:39:31.547998 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:31 crc kubenswrapper[4782]: I0202 10:39:31.548009 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:31Z","lastTransitionTime":"2026-02-02T10:39:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:31 crc kubenswrapper[4782]: I0202 10:39:31.566671 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59347d2e-80ed-46da-8b2f-87cc48c5b564\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7303c0a57d425d661744c913b418fb647290581f745f973af1507563fa70b860\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10b2c4d81a795c40e715af0855002f75e3018dc14560eb97b551186a48c52744\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://667df661cba8be98b22059905ac9fe87f5d2af093d3184cfead863474441574f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":
0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfe76e80657cb875a6dd9b9c977e6120a670fc5cd1dad2e0346b21a467b00f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58cd2e688da72a74340b25a755164b61039f7615ccda8b24b12b2b4025f89584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termi
nated\\\":{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:31Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:31 crc kubenswrapper[4782]: I0202 10:39:31.578902 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b6dd1dee15020005db2d8655165e09c72a90eaade00a66c0742c0980ccc669e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:31Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:31 crc kubenswrapper[4782]: I0202 10:39:31.589392 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aebafb5717a3d34f83e5f1ba346c51bccea6c08fb6ddc2fadc47d8e1a4df1bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://643eee6982fcc4c02b72fb6daf1a18fc5ae263ebf8ac506d1fa745d6c311b38a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:31Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:31 crc kubenswrapper[4782]: I0202 10:39:31.598958 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fptzv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa0a3c57-fe47-43dd-8905-00df4cae4fb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb00c5826e8950791c82faf09cb4417f633c04908afcaed816c63b81c05aae48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6np9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fptzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:31Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:31 crc kubenswrapper[4782]: I0202 10:39:31.650156 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:31 crc kubenswrapper[4782]: I0202 10:39:31.650195 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:31 crc kubenswrapper[4782]: I0202 10:39:31.650207 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:31 crc kubenswrapper[4782]: I0202 10:39:31.650222 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:31 crc kubenswrapper[4782]: I0202 10:39:31.650232 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:31Z","lastTransitionTime":"2026-02-02T10:39:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:31 crc kubenswrapper[4782]: I0202 10:39:31.753943 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:31 crc kubenswrapper[4782]: I0202 10:39:31.754003 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:31 crc kubenswrapper[4782]: I0202 10:39:31.754080 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:31 crc kubenswrapper[4782]: I0202 10:39:31.754101 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:31 crc kubenswrapper[4782]: I0202 10:39:31.754118 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:31Z","lastTransitionTime":"2026-02-02T10:39:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:31 crc kubenswrapper[4782]: I0202 10:39:31.790132 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 05:39:26.043592754 +0000 UTC Feb 02 10:39:31 crc kubenswrapper[4782]: I0202 10:39:31.820676 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:39:31 crc kubenswrapper[4782]: E0202 10:39:31.820790 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:39:31 crc kubenswrapper[4782]: I0202 10:39:31.855908 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:31 crc kubenswrapper[4782]: I0202 10:39:31.855964 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:31 crc kubenswrapper[4782]: I0202 10:39:31.855980 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:31 crc kubenswrapper[4782]: I0202 10:39:31.856001 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:31 crc kubenswrapper[4782]: I0202 10:39:31.856015 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:31Z","lastTransitionTime":"2026-02-02T10:39:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:31 crc kubenswrapper[4782]: I0202 10:39:31.958568 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:31 crc kubenswrapper[4782]: I0202 10:39:31.958600 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:31 crc kubenswrapper[4782]: I0202 10:39:31.958611 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:31 crc kubenswrapper[4782]: I0202 10:39:31.958625 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:31 crc kubenswrapper[4782]: I0202 10:39:31.958635 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:31Z","lastTransitionTime":"2026-02-02T10:39:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:32 crc kubenswrapper[4782]: I0202 10:39:32.001128 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 10:39:32 crc kubenswrapper[4782]: I0202 10:39:32.060614 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:32 crc kubenswrapper[4782]: I0202 10:39:32.060664 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:32 crc kubenswrapper[4782]: I0202 10:39:32.060673 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:32 crc kubenswrapper[4782]: I0202 10:39:32.060687 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:32 crc kubenswrapper[4782]: I0202 10:39:32.060696 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:32Z","lastTransitionTime":"2026-02-02T10:39:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:32 crc kubenswrapper[4782]: I0202 10:39:32.163116 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:32 crc kubenswrapper[4782]: I0202 10:39:32.163148 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:32 crc kubenswrapper[4782]: I0202 10:39:32.163156 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:32 crc kubenswrapper[4782]: I0202 10:39:32.163169 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:32 crc kubenswrapper[4782]: I0202 10:39:32.163177 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:32Z","lastTransitionTime":"2026-02-02T10:39:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:32 crc kubenswrapper[4782]: I0202 10:39:32.265573 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:32 crc kubenswrapper[4782]: I0202 10:39:32.265905 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:32 crc kubenswrapper[4782]: I0202 10:39:32.266189 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:32 crc kubenswrapper[4782]: I0202 10:39:32.266319 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:32 crc kubenswrapper[4782]: I0202 10:39:32.266431 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:32Z","lastTransitionTime":"2026-02-02T10:39:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:32 crc kubenswrapper[4782]: I0202 10:39:32.368800 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:32 crc kubenswrapper[4782]: I0202 10:39:32.368864 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:32 crc kubenswrapper[4782]: I0202 10:39:32.368888 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:32 crc kubenswrapper[4782]: I0202 10:39:32.368916 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:32 crc kubenswrapper[4782]: I0202 10:39:32.368938 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:32Z","lastTransitionTime":"2026-02-02T10:39:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:32 crc kubenswrapper[4782]: I0202 10:39:32.470663 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:32 crc kubenswrapper[4782]: I0202 10:39:32.470699 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:32 crc kubenswrapper[4782]: I0202 10:39:32.470707 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:32 crc kubenswrapper[4782]: I0202 10:39:32.470720 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:32 crc kubenswrapper[4782]: I0202 10:39:32.470729 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:32Z","lastTransitionTime":"2026-02-02T10:39:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:32 crc kubenswrapper[4782]: I0202 10:39:32.573721 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:32 crc kubenswrapper[4782]: I0202 10:39:32.573770 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:32 crc kubenswrapper[4782]: I0202 10:39:32.573783 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:32 crc kubenswrapper[4782]: I0202 10:39:32.573801 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:32 crc kubenswrapper[4782]: I0202 10:39:32.573814 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:32Z","lastTransitionTime":"2026-02-02T10:39:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:32 crc kubenswrapper[4782]: I0202 10:39:32.676585 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:32 crc kubenswrapper[4782]: I0202 10:39:32.676871 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:32 crc kubenswrapper[4782]: I0202 10:39:32.676939 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:32 crc kubenswrapper[4782]: I0202 10:39:32.676999 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:32 crc kubenswrapper[4782]: I0202 10:39:32.677053 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:32Z","lastTransitionTime":"2026-02-02T10:39:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:32 crc kubenswrapper[4782]: I0202 10:39:32.762844 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:39:32 crc kubenswrapper[4782]: I0202 10:39:32.762928 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:39:32 crc kubenswrapper[4782]: I0202 10:39:32.762957 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:39:32 crc kubenswrapper[4782]: I0202 10:39:32.762981 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:39:32 crc kubenswrapper[4782]: I0202 10:39:32.763001 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:39:32 crc kubenswrapper[4782]: E0202 10:39:32.763049 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:40:04.763022319 +0000 UTC m=+84.647215025 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:39:32 crc kubenswrapper[4782]: E0202 10:39:32.763092 4782 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 10:39:32 crc kubenswrapper[4782]: E0202 10:39:32.763133 4782 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 10:39:32 crc kubenswrapper[4782]: E0202 10:39:32.763139 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 10:40:04.763123812 +0000 UTC m=+84.647316518 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 10:39:32 crc kubenswrapper[4782]: E0202 10:39:32.763171 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 10:40:04.763164273 +0000 UTC m=+84.647356989 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 10:39:32 crc kubenswrapper[4782]: E0202 10:39:32.763257 4782 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 10:39:32 crc kubenswrapper[4782]: E0202 10:39:32.763270 4782 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 10:39:32 crc kubenswrapper[4782]: E0202 10:39:32.763280 4782 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 10:39:32 crc kubenswrapper[4782]: E0202 10:39:32.763302 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. 
No retries permitted until 2026-02-02 10:40:04.763296167 +0000 UTC m=+84.647488883 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 02 10:39:32 crc kubenswrapper[4782]: E0202 10:39:32.763347 4782 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Feb 02 10:39:32 crc kubenswrapper[4782]: E0202 10:39:32.763356 4782 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 02 10:39:32 crc kubenswrapper[4782]: E0202 10:39:32.763363 4782 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 02 10:39:32 crc kubenswrapper[4782]: E0202 10:39:32.763381 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-02 10:40:04.763375579 +0000 UTC m=+84.647568295 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 02 10:39:32 crc kubenswrapper[4782]: I0202 10:39:32.778991 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:39:32 crc kubenswrapper[4782]: I0202 10:39:32.779023 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:39:32 crc kubenswrapper[4782]: I0202 10:39:32.779031 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:39:32 crc kubenswrapper[4782]: I0202 10:39:32.779044 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
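The MountVolume.SetUp failures above are requeued with durationBeforeRetry 32s rather than retried immediately, which is consistent with a per-operation exponential backoff. A minimal sketch of that idea in Go (the kubelet's own language); the 500 ms floor, the doubling factor, and the 2 m cap are assumptions for illustration, not values read from the kubelet source:

// Illustrative only: exponential backoff in the spirit of the
// "durationBeforeRetry 32s" entries above. All constants are assumed.
package main

import (
	"fmt"
	"time"
)

const (
	initialBackoff = 500 * time.Millisecond // assumed starting delay
	maxBackoff     = 2 * time.Minute        // assumed cap
)

// nextBackoff returns the delay to wait after the given number of
// consecutive failures of the same volume operation.
func nextBackoff(failures int) time.Duration {
	d := initialBackoff
	for i := 0; i < failures; i++ {
		d *= 2 // each failure doubles the delay
		if d >= maxBackoff {
			return maxBackoff
		}
	}
	return d
}

func main() {
	for f := 0; f <= 7; f++ {
		fmt.Printf("failure %d -> retry in %v\n", f, nextBackoff(f))
	}
}

Six doublings of the assumed 500 ms floor give exactly 32 s, matching the delay recorded above; per the "No retries permitted until" entries, the volume manager simply refuses to retry the operation before that deadline passes.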
Feb 02 10:39:32 crc kubenswrapper[4782]: I0202 10:39:32.779052 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:32Z","lastTransitionTime":"2026-02-02T10:39:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:39:32 crc kubenswrapper[4782]: I0202 10:39:32.791119 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 01:00:15.446431563 +0000 UTC
Feb 02 10:39:32 crc kubenswrapper[4782]: I0202 10:39:32.820581 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 10:39:32 crc kubenswrapper[4782]: I0202 10:39:32.820619 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 10:39:32 crc kubenswrapper[4782]: I0202 10:39:32.820581 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tv4xc"
Feb 02 10:39:32 crc kubenswrapper[4782]: E0202 10:39:32.820700 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 02 10:39:32 crc kubenswrapper[4782]: E0202 10:39:32.820769 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tv4xc" podUID="4e23db96-3af7-4c29-b00f-5920a9431f01"
Feb 02 10:39:32 crc kubenswrapper[4782]: E0202 10:39:32.820815 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 02 10:39:32 crc kubenswrapper[4782]: I0202 10:39:32.881672 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:39:32 crc kubenswrapper[4782]: I0202 10:39:32.881722 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:39:32 crc kubenswrapper[4782]: I0202 10:39:32.881731 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:39:32 crc kubenswrapper[4782]: I0202 10:39:32.881749 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
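Every NotReady heartbeat in this stretch carries the same root cause: no CNI configuration file in /etc/kubernetes/cni/net.d/, so the runtime reports NetworkReady=false and new pod sandboxes cannot be created. A quick way to reproduce the check from the node is to scan that directory for network configs; the extension list (.conf, .conflist, .json) follows common CNI convention and is an assumption of this sketch, not the exact loader logic:

// Illustrative only: look for CNI network configs the way the error above
// implies the runtime does. The extension list is an assumed convention.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func cniConfigs(dir string) ([]string, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	var found []string
	for _, e := range entries {
		if e.IsDir() {
			continue
		}
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			found = append(found, filepath.Join(dir, e.Name()))
		}
	}
	return found, nil
}

func main() {
	dir := "/etc/kubernetes/cni/net.d" // the directory named in the log
	configs, err := cniConfigs(dir)
	if err != nil || len(configs) == 0 {
		// The state this log reports: the network plugin has not written
		// its config yet, so the node stays NotReady.
		fmt.Println("no CNI configuration file in", dir)
		return
	}
	fmt.Println("CNI configs:", configs)
}

Once the network provider (OVN-Kubernetes here, judging by the ovnkube-identity entries earlier in the log) writes its config into that directory, the runtime flips NetworkReady to true and these repeated heartbeats stop.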
Feb 02 10:39:32 crc kubenswrapper[4782]: I0202 10:39:32.881760 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:32Z","lastTransitionTime":"2026-02-02T10:39:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:39:32 crc kubenswrapper[4782]: I0202 10:39:32.984839 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:39:32 crc kubenswrapper[4782]: I0202 10:39:32.984875 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:39:32 crc kubenswrapper[4782]: I0202 10:39:32.984883 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:39:32 crc kubenswrapper[4782]: I0202 10:39:32.984897 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:39:32 crc kubenswrapper[4782]: I0202 10:39:32.984907 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:32Z","lastTransitionTime":"2026-02-02T10:39:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:39:33 crc kubenswrapper[4782]: I0202 10:39:33.087730 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:39:33 crc kubenswrapper[4782]: I0202 10:39:33.087764 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:39:33 crc kubenswrapper[4782]: I0202 10:39:33.087774 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:39:33 crc kubenswrapper[4782]: I0202 10:39:33.087795 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:39:33 crc kubenswrapper[4782]: I0202 10:39:33.087820 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:33Z","lastTransitionTime":"2026-02-02T10:39:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:39:33 crc kubenswrapper[4782]: I0202 10:39:33.189796 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:39:33 crc kubenswrapper[4782]: I0202 10:39:33.189830 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:39:33 crc kubenswrapper[4782]: I0202 10:39:33.189846 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:39:33 crc kubenswrapper[4782]: I0202 10:39:33.189878 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:39:33 crc kubenswrapper[4782]: I0202 10:39:33.189891 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:33Z","lastTransitionTime":"2026-02-02T10:39:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Feb 02 10:39:33 crc kubenswrapper[4782]: I0202 10:39:33.292567 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:33 crc kubenswrapper[4782]: I0202 10:39:33.292629 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:33 crc kubenswrapper[4782]: I0202 10:39:33.292669 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:33 crc kubenswrapper[4782]: I0202 10:39:33.292687 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:33 crc kubenswrapper[4782]: I0202 10:39:33.292698 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:33Z","lastTransitionTime":"2026-02-02T10:39:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:33 crc kubenswrapper[4782]: I0202 10:39:33.395081 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:33 crc kubenswrapper[4782]: I0202 10:39:33.395114 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:33 crc kubenswrapper[4782]: I0202 10:39:33.395126 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:33 crc kubenswrapper[4782]: I0202 10:39:33.395140 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:33 crc kubenswrapper[4782]: I0202 10:39:33.395151 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:33Z","lastTransitionTime":"2026-02-02T10:39:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:33 crc kubenswrapper[4782]: I0202 10:39:33.497313 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:33 crc kubenswrapper[4782]: I0202 10:39:33.497351 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:33 crc kubenswrapper[4782]: I0202 10:39:33.497361 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:33 crc kubenswrapper[4782]: I0202 10:39:33.497374 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:33 crc kubenswrapper[4782]: I0202 10:39:33.497383 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:33Z","lastTransitionTime":"2026-02-02T10:39:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:33 crc kubenswrapper[4782]: I0202 10:39:33.599839 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:33 crc kubenswrapper[4782]: I0202 10:39:33.599870 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:33 crc kubenswrapper[4782]: I0202 10:39:33.599879 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:33 crc kubenswrapper[4782]: I0202 10:39:33.599893 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:33 crc kubenswrapper[4782]: I0202 10:39:33.599903 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:33Z","lastTransitionTime":"2026-02-02T10:39:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:33 crc kubenswrapper[4782]: I0202 10:39:33.701859 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:33 crc kubenswrapper[4782]: I0202 10:39:33.701902 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:33 crc kubenswrapper[4782]: I0202 10:39:33.701914 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:33 crc kubenswrapper[4782]: I0202 10:39:33.701930 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:33 crc kubenswrapper[4782]: I0202 10:39:33.701942 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:33Z","lastTransitionTime":"2026-02-02T10:39:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 02 10:39:33 crc kubenswrapper[4782]: I0202 10:39:33.791298 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 17:22:55.988282642 +0000 UTC
Feb 02 10:39:33 crc kubenswrapper[4782]: I0202 10:39:33.804329 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:39:33 crc kubenswrapper[4782]: I0202 10:39:33.804378 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:39:33 crc kubenswrapper[4782]: I0202 10:39:33.804390 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:39:33 crc kubenswrapper[4782]: I0202 10:39:33.804409 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:39:33 crc kubenswrapper[4782]: I0202 10:39:33.804427 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:33Z","lastTransitionTime":"2026-02-02T10:39:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:39:33 crc kubenswrapper[4782]: I0202 10:39:33.820677 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 10:39:33 crc kubenswrapper[4782]: E0202 10:39:33.820804 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 02 10:39:33 crc kubenswrapper[4782]: I0202 10:39:33.906323 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:39:33 crc kubenswrapper[4782]: I0202 10:39:33.906367 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:39:33 crc kubenswrapper[4782]: I0202 10:39:33.906379 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:39:33 crc kubenswrapper[4782]: I0202 10:39:33.906395 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
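The certificate_manager.go entries repeat about once per second with the same expiration (2026-02-24 05:53:03 UTC) but a different rotation deadline on every line, and each computed deadline already lies in the past relative to the log clock (2026-02-02), so the manager keeps scheduling an immediate rotation attempt. client-go's certificate manager is understood to draw a randomized deadline at roughly 70-90% of the certificate's lifetime; the sketch below illustrates why the deadline jumps around, with the fraction range and the one-year lifetime both assumptions of the sketch:

// Illustrative only: a jittered rotation deadline recomputed on each call,
// mimicking the changing "rotation deadline" lines above. The 70-90% window
// and the one-year lifetime are assumptions, not values from client-go.
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// nextRotationDeadline picks a random point in roughly [70%, 90%) of the
// certificate's validity window.
func nextRotationDeadline(notBefore, notAfter time.Time) time.Time {
	total := notAfter.Sub(notBefore)
	jitter := 0.7 + 0.2*rand.Float64()
	return notBefore.Add(time.Duration(float64(total) * jitter))
}

func main() {
	notAfter, _ := time.Parse(time.RFC3339, "2026-02-24T05:53:03Z") // expiration from the log
	notBefore := notAfter.Add(-365 * 24 * time.Hour)                // assumed one-year lifetime
	for i := 0; i < 3; i++ {
		// A different deadline on every call, as in the log above.
		fmt.Println("rotation deadline:", nextRotationDeadline(notBefore, notAfter))
	}
}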
Feb 02 10:39:33 crc kubenswrapper[4782]: I0202 10:39:33.906406 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:33Z","lastTransitionTime":"2026-02-02T10:39:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:39:34 crc kubenswrapper[4782]: I0202 10:39:34.008764 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:39:34 crc kubenswrapper[4782]: I0202 10:39:34.008797 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:39:34 crc kubenswrapper[4782]: I0202 10:39:34.008808 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:39:34 crc kubenswrapper[4782]: I0202 10:39:34.008835 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:39:34 crc kubenswrapper[4782]: I0202 10:39:34.008845 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:34Z","lastTransitionTime":"2026-02-02T10:39:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:39:34 crc kubenswrapper[4782]: I0202 10:39:34.111220 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:39:34 crc kubenswrapper[4782]: I0202 10:39:34.111254 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:39:34 crc kubenswrapper[4782]: I0202 10:39:34.111266 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:39:34 crc kubenswrapper[4782]: I0202 10:39:34.111289 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:39:34 crc kubenswrapper[4782]: I0202 10:39:34.111300 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:34Z","lastTransitionTime":"2026-02-02T10:39:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:39:34 crc kubenswrapper[4782]: I0202 10:39:34.213687 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:39:34 crc kubenswrapper[4782]: I0202 10:39:34.213770 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:39:34 crc kubenswrapper[4782]: I0202 10:39:34.213780 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:39:34 crc kubenswrapper[4782]: I0202 10:39:34.213800 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:39:34 crc kubenswrapper[4782]: I0202 10:39:34.213810 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:34Z","lastTransitionTime":"2026-02-02T10:39:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Feb 02 10:39:34 crc kubenswrapper[4782]: I0202 10:39:34.316860 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:34 crc kubenswrapper[4782]: I0202 10:39:34.316887 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:34 crc kubenswrapper[4782]: I0202 10:39:34.316896 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:34 crc kubenswrapper[4782]: I0202 10:39:34.316908 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:34 crc kubenswrapper[4782]: I0202 10:39:34.316917 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:34Z","lastTransitionTime":"2026-02-02T10:39:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:34 crc kubenswrapper[4782]: I0202 10:39:34.418555 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:34 crc kubenswrapper[4782]: I0202 10:39:34.418877 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:34 crc kubenswrapper[4782]: I0202 10:39:34.418986 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:34 crc kubenswrapper[4782]: I0202 10:39:34.419076 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:34 crc kubenswrapper[4782]: I0202 10:39:34.419168 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:34Z","lastTransitionTime":"2026-02-02T10:39:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:34 crc kubenswrapper[4782]: I0202 10:39:34.521020 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:34 crc kubenswrapper[4782]: I0202 10:39:34.521056 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:34 crc kubenswrapper[4782]: I0202 10:39:34.521069 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:34 crc kubenswrapper[4782]: I0202 10:39:34.521085 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:34 crc kubenswrapper[4782]: I0202 10:39:34.521096 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:34Z","lastTransitionTime":"2026-02-02T10:39:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:34 crc kubenswrapper[4782]: I0202 10:39:34.623448 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:34 crc kubenswrapper[4782]: I0202 10:39:34.623487 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:34 crc kubenswrapper[4782]: I0202 10:39:34.623497 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:34 crc kubenswrapper[4782]: I0202 10:39:34.623512 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:34 crc kubenswrapper[4782]: I0202 10:39:34.623523 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:34Z","lastTransitionTime":"2026-02-02T10:39:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:34 crc kubenswrapper[4782]: I0202 10:39:34.726039 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:34 crc kubenswrapper[4782]: I0202 10:39:34.726081 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:34 crc kubenswrapper[4782]: I0202 10:39:34.726106 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:34 crc kubenswrapper[4782]: I0202 10:39:34.726121 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:34 crc kubenswrapper[4782]: I0202 10:39:34.726130 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:34Z","lastTransitionTime":"2026-02-02T10:39:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:34 crc kubenswrapper[4782]: I0202 10:39:34.791840 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 09:37:30.298712448 +0000 UTC Feb 02 10:39:34 crc kubenswrapper[4782]: I0202 10:39:34.820449 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tv4xc" Feb 02 10:39:34 crc kubenswrapper[4782]: I0202 10:39:34.820455 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:39:34 crc kubenswrapper[4782]: E0202 10:39:34.820867 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-tv4xc" podUID="4e23db96-3af7-4c29-b00f-5920a9431f01" Feb 02 10:39:34 crc kubenswrapper[4782]: I0202 10:39:34.820513 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:39:34 crc kubenswrapper[4782]: E0202 10:39:34.821202 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:39:34 crc kubenswrapper[4782]: E0202 10:39:34.821053 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:39:34 crc kubenswrapper[4782]: I0202 10:39:34.828386 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:34 crc kubenswrapper[4782]: I0202 10:39:34.828415 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:34 crc kubenswrapper[4782]: I0202 10:39:34.828423 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:34 crc kubenswrapper[4782]: I0202 10:39:34.828434 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:34 crc kubenswrapper[4782]: I0202 10:39:34.828444 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:34Z","lastTransitionTime":"2026-02-02T10:39:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:34 crc kubenswrapper[4782]: I0202 10:39:34.931159 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:34 crc kubenswrapper[4782]: I0202 10:39:34.931207 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:34 crc kubenswrapper[4782]: I0202 10:39:34.931220 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:34 crc kubenswrapper[4782]: I0202 10:39:34.931237 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:34 crc kubenswrapper[4782]: I0202 10:39:34.931250 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:34Z","lastTransitionTime":"2026-02-02T10:39:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:35 crc kubenswrapper[4782]: I0202 10:39:35.033852 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:35 crc kubenswrapper[4782]: I0202 10:39:35.033892 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:35 crc kubenswrapper[4782]: I0202 10:39:35.033903 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:35 crc kubenswrapper[4782]: I0202 10:39:35.033922 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:35 crc kubenswrapper[4782]: I0202 10:39:35.033931 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:35Z","lastTransitionTime":"2026-02-02T10:39:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:35 crc kubenswrapper[4782]: I0202 10:39:35.135904 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:35 crc kubenswrapper[4782]: I0202 10:39:35.135946 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:35 crc kubenswrapper[4782]: I0202 10:39:35.135964 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:35 crc kubenswrapper[4782]: I0202 10:39:35.135982 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:35 crc kubenswrapper[4782]: I0202 10:39:35.135995 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:35Z","lastTransitionTime":"2026-02-02T10:39:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:35 crc kubenswrapper[4782]: I0202 10:39:35.792595 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 01:55:18.787932906 +0000 UTC Feb 02 10:39:35 crc kubenswrapper[4782]: I0202 10:39:35.820606 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:39:35 crc kubenswrapper[4782]: E0202 10:39:35.820792 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:39:35 crc kubenswrapper[4782]: I0202 10:39:35.854031 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:35 crc kubenswrapper[4782]: I0202 10:39:35.854112 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:35 crc kubenswrapper[4782]: I0202 10:39:35.854132 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:35 crc kubenswrapper[4782]: I0202 10:39:35.854157 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:35 crc kubenswrapper[4782]: I0202 10:39:35.854173 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:35Z","lastTransitionTime":"2026-02-02T10:39:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:35 crc kubenswrapper[4782]: I0202 10:39:35.957014 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:35 crc kubenswrapper[4782]: I0202 10:39:35.957063 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:35 crc kubenswrapper[4782]: I0202 10:39:35.957075 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:35 crc kubenswrapper[4782]: I0202 10:39:35.957094 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:35 crc kubenswrapper[4782]: I0202 10:39:35.957107 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:35Z","lastTransitionTime":"2026-02-02T10:39:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:36 crc kubenswrapper[4782]: I0202 10:39:36.684867 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:36 crc kubenswrapper[4782]: I0202 10:39:36.684915 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:36 crc kubenswrapper[4782]: I0202 10:39:36.684926 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:36 crc kubenswrapper[4782]: I0202 10:39:36.684945 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:36 crc kubenswrapper[4782]: I0202 10:39:36.684957 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:36Z","lastTransitionTime":"2026-02-02T10:39:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:36 crc kubenswrapper[4782]: I0202 10:39:36.788519 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:36 crc kubenswrapper[4782]: I0202 10:39:36.788577 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:36 crc kubenswrapper[4782]: I0202 10:39:36.788590 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:36 crc kubenswrapper[4782]: I0202 10:39:36.788608 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:36 crc kubenswrapper[4782]: I0202 10:39:36.788620 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:36Z","lastTransitionTime":"2026-02-02T10:39:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:36 crc kubenswrapper[4782]: I0202 10:39:36.792793 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 20:41:57.526518919 +0000 UTC Feb 02 10:39:36 crc kubenswrapper[4782]: I0202 10:39:36.820830 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:39:36 crc kubenswrapper[4782]: I0202 10:39:36.820850 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tv4xc" Feb 02 10:39:36 crc kubenswrapper[4782]: I0202 10:39:36.820897 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:39:36 crc kubenswrapper[4782]: E0202 10:39:36.820967 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:39:36 crc kubenswrapper[4782]: E0202 10:39:36.821063 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tv4xc" podUID="4e23db96-3af7-4c29-b00f-5920a9431f01" Feb 02 10:39:36 crc kubenswrapper[4782]: E0202 10:39:36.821184 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:39:36 crc kubenswrapper[4782]: I0202 10:39:36.891888 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:36 crc kubenswrapper[4782]: I0202 10:39:36.891931 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:36 crc kubenswrapper[4782]: I0202 10:39:36.891941 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:36 crc kubenswrapper[4782]: I0202 10:39:36.891959 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:36 crc kubenswrapper[4782]: I0202 10:39:36.891974 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:36Z","lastTransitionTime":"2026-02-02T10:39:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:37 crc kubenswrapper[4782]: I0202 10:39:37.793198 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 19:38:21.279790788 +0000 UTC Feb 02 10:39:37 crc kubenswrapper[4782]: I0202 10:39:37.819078 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:37 crc kubenswrapper[4782]: I0202 10:39:37.819114 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:37 crc kubenswrapper[4782]: I0202 10:39:37.819122 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:37 crc kubenswrapper[4782]: I0202 10:39:37.819137 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:37 crc kubenswrapper[4782]: I0202 10:39:37.819148 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:37Z","lastTransitionTime":"2026-02-02T10:39:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:37 crc kubenswrapper[4782]: I0202 10:39:37.820196 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:39:37 crc kubenswrapper[4782]: E0202 10:39:37.820291 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:39:37 crc kubenswrapper[4782]: I0202 10:39:37.921309 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:37 crc kubenswrapper[4782]: I0202 10:39:37.921350 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:37 crc kubenswrapper[4782]: I0202 10:39:37.921358 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:37 crc kubenswrapper[4782]: I0202 10:39:37.921372 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:37 crc kubenswrapper[4782]: I0202 10:39:37.921380 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:37Z","lastTransitionTime":"2026-02-02T10:39:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:38 crc kubenswrapper[4782]: I0202 10:39:38.015346 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:38 crc kubenswrapper[4782]: I0202 10:39:38.015406 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:38 crc kubenswrapper[4782]: I0202 10:39:38.015417 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:38 crc kubenswrapper[4782]: I0202 10:39:38.015440 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:38 crc kubenswrapper[4782]: I0202 10:39:38.015455 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:38Z","lastTransitionTime":"2026-02-02T10:39:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:38 crc kubenswrapper[4782]: E0202 10:39:38.029591 4782 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9f06aea5-54f4-4b11-8fec-22fbe76ec89b\\\",\\\"systemUUID\\\":\\\"b85e9547-662e-4455-bbaa-2d2f2aaad904\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:38Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:38 crc kubenswrapper[4782]: I0202 10:39:38.033371 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:38 crc kubenswrapper[4782]: I0202 10:39:38.033400 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 10:39:38 crc kubenswrapper[4782]: I0202 10:39:38.033413 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:38 crc kubenswrapper[4782]: I0202 10:39:38.033451 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:38 crc kubenswrapper[4782]: I0202 10:39:38.033462 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:38Z","lastTransitionTime":"2026-02-02T10:39:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:38 crc kubenswrapper[4782]: E0202 10:39:38.044382 4782 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9f06aea5-54f4-4b11-8fec-22fbe76ec89b\\\",\\\"systemUUID\\\":\\\"b85e9547-662e-4455-bbaa-2d2f2aaad904\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:38Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:38 crc kubenswrapper[4782]: I0202 10:39:38.047853 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:38 crc kubenswrapper[4782]: I0202 10:39:38.047880 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 10:39:38 crc kubenswrapper[4782]: I0202 10:39:38.047894 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:38 crc kubenswrapper[4782]: I0202 10:39:38.047907 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:38 crc kubenswrapper[4782]: I0202 10:39:38.047916 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:38Z","lastTransitionTime":"2026-02-02T10:39:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:38 crc kubenswrapper[4782]: E0202 10:39:38.060455 4782 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9f06aea5-54f4-4b11-8fec-22fbe76ec89b\\\",\\\"systemUUID\\\":\\\"b85e9547-662e-4455-bbaa-2d2f2aaad904\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:38Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:38 crc kubenswrapper[4782]: I0202 10:39:38.064073 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:38 crc kubenswrapper[4782]: I0202 10:39:38.064103 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
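Every patch attempt in this window fails for the same reason: the serving certificate of the "node.network-node-identity.openshift.io" webhook at 127.0.0.1:9743 expired on 2025-08-24T17:21:41Z, while the node clock reads 2026-02-02. A minimal diagnostic sketch in Go (not part of the kubelet; only the address is taken from the log above) reproduces the x509 validity comparison behind the error:

    // certcheck: dial the webhook endpoint from the log and report the
    // serving certificate's validity window. InsecureSkipVerify is used
    // only so the handshake completes and the certificate can be read.
    package main

    import (
        "crypto/tls"
        "fmt"
        "time"
    )

    func main() {
        conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
        if err != nil {
            fmt.Println("dial:", err)
            return
        }
        defer conn.Close()

        cert := conn.ConnectionState().PeerCertificates[0]
        now := time.Now().UTC()
        fmt.Printf("NotBefore=%s NotAfter=%s\n",
            cert.NotBefore.UTC().Format(time.RFC3339), cert.NotAfter.UTC().Format(time.RFC3339))
        if now.After(cert.NotAfter) {
            // The same comparison crypto/x509 makes before reporting
            // "certificate has expired or is not yet valid".
            fmt.Printf("expired: current time %s is after %s\n",
                now.Format(time.RFC3339), cert.NotAfter.UTC().Format(time.RFC3339))
        }
    }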
event="NodeHasNoDiskPressure" Feb 02 10:39:38 crc kubenswrapper[4782]: I0202 10:39:38.064111 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:38 crc kubenswrapper[4782]: I0202 10:39:38.064125 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:38 crc kubenswrapper[4782]: I0202 10:39:38.064133 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:38Z","lastTransitionTime":"2026-02-02T10:39:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:38 crc kubenswrapper[4782]: E0202 10:39:38.077800 4782 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9f06aea5-54f4-4b11-8fec-22fbe76ec89b\\\",\\\"systemUUID\\\":\\\"b85e9547-662e-4455-bbaa-2d2f2aaad904\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:38Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:38 crc kubenswrapper[4782]: I0202 10:39:38.085797 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:38 crc kubenswrapper[4782]: I0202 10:39:38.085946 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
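The pattern of "Error updating node status, will retry" followed a few attempts later by "Unable to update node status ... exceeds retry count" is the kubelet's bounded retry loop for node status updates. A sketch of that pattern follows; the constant mirrors nodeStatusUpdateRetry = 5 in the upstream kubelet, while tryUpdateNodeStatus here is a stand-in, not the real signature:

    package main

    import (
        "errors"
        "fmt"
    )

    // nodeStatusUpdateRetry mirrors the constant of the same name in the
    // upstream kubelet; everything else here is an illustrative stand-in.
    const nodeStatusUpdateRetry = 5

    // tryUpdateNodeStatus stands in for the PATCH of the Node object, which
    // in the log above keeps failing while the webhook certificate is expired.
    func tryUpdateNodeStatus() error {
        return errors.New("Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\"")
    }

    func updateNodeStatus() error {
        for i := 0; i < nodeStatusUpdateRetry; i++ {
            if err := tryUpdateNodeStatus(); err != nil {
                fmt.Printf("Error updating node status, will retry: %v\n", err)
                continue
            }
            return nil
        }
        return fmt.Errorf("update node status exceeds retry count")
    }

    func main() {
        if err := updateNodeStatus(); err != nil {
            fmt.Println("Unable to update node status:", err)
        }
    }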
event="NodeHasNoDiskPressure" Feb 02 10:39:38 crc kubenswrapper[4782]: I0202 10:39:38.085959 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:38 crc kubenswrapper[4782]: I0202 10:39:38.085974 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:38 crc kubenswrapper[4782]: I0202 10:39:38.085986 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:38Z","lastTransitionTime":"2026-02-02T10:39:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:38 crc kubenswrapper[4782]: E0202 10:39:38.099314 4782 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9f06aea5-54f4-4b11-8fec-22fbe76ec89b\\\",\\\"systemUUID\\\":\\\"b85e9547-662e-4455-bbaa-2d2f2aaad904\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:38Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:38 crc kubenswrapper[4782]: E0202 10:39:38.099461 4782 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 02 10:39:38 crc kubenswrapper[4782]: I0202 10:39:38.101001 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
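Independently of the webhook failure, every Ready condition in this window carries the same message: "no CNI configuration file in /etc/kubernetes/cni/net.d/". The runtime reports NetworkReady=false until at least one CNI network config appears in that directory. A sketch of that readiness probe in Go (the path is from the log above; the extensions follow the usual libcni conventions, assumed here rather than taken from this log):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    // cniConfigPresent reports whether dir holds at least one CNI network
    // configuration (.conf, .conflist or .json).
    func cniConfigPresent(dir string) bool {
        entries, err := os.ReadDir(dir)
        if err != nil {
            return false
        }
        for _, e := range entries {
            switch filepath.Ext(e.Name()) {
            case ".conf", ".conflist", ".json":
                return true
            }
        }
        return false
    }

    func main() {
        dir := "/etc/kubernetes/cni/net.d"
        if !cniConfigPresent(dir) {
            fmt.Printf("network plugin not ready: no CNI configuration file in %s. Has your network provider started?\n", dir)
        }
    }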
event="NodeHasSufficientMemory" Feb 02 10:39:38 crc kubenswrapper[4782]: I0202 10:39:38.101022 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:38 crc kubenswrapper[4782]: I0202 10:39:38.101030 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:38 crc kubenswrapper[4782]: I0202 10:39:38.101043 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:38 crc kubenswrapper[4782]: I0202 10:39:38.101052 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:38Z","lastTransitionTime":"2026-02-02T10:39:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:38 crc kubenswrapper[4782]: I0202 10:39:38.203755 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:38 crc kubenswrapper[4782]: I0202 10:39:38.203800 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:38 crc kubenswrapper[4782]: I0202 10:39:38.203810 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:38 crc kubenswrapper[4782]: I0202 10:39:38.203826 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:38 crc kubenswrapper[4782]: I0202 10:39:38.203838 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:38Z","lastTransitionTime":"2026-02-02T10:39:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:38 crc kubenswrapper[4782]: I0202 10:39:38.306248 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:38 crc kubenswrapper[4782]: I0202 10:39:38.306277 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:38 crc kubenswrapper[4782]: I0202 10:39:38.306285 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:38 crc kubenswrapper[4782]: I0202 10:39:38.306297 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:38 crc kubenswrapper[4782]: I0202 10:39:38.306306 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:38Z","lastTransitionTime":"2026-02-02T10:39:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:38 crc kubenswrapper[4782]: I0202 10:39:38.408631 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:38 crc kubenswrapper[4782]: I0202 10:39:38.408722 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:38 crc kubenswrapper[4782]: I0202 10:39:38.408736 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:38 crc kubenswrapper[4782]: I0202 10:39:38.408757 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:38 crc kubenswrapper[4782]: I0202 10:39:38.408769 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:38Z","lastTransitionTime":"2026-02-02T10:39:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:38 crc kubenswrapper[4782]: I0202 10:39:38.422245 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4e23db96-3af7-4c29-b00f-5920a9431f01-metrics-certs\") pod \"network-metrics-daemon-tv4xc\" (UID: \"4e23db96-3af7-4c29-b00f-5920a9431f01\") " pod="openshift-multus/network-metrics-daemon-tv4xc" Feb 02 10:39:38 crc kubenswrapper[4782]: E0202 10:39:38.422507 4782 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 10:39:38 crc kubenswrapper[4782]: E0202 10:39:38.422688 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4e23db96-3af7-4c29-b00f-5920a9431f01-metrics-certs podName:4e23db96-3af7-4c29-b00f-5920a9431f01 nodeName:}" failed. No retries permitted until 2026-02-02 10:39:54.422628627 +0000 UTC m=+74.306821533 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4e23db96-3af7-4c29-b00f-5920a9431f01-metrics-certs") pod "network-metrics-daemon-tv4xc" (UID: "4e23db96-3af7-4c29-b00f-5920a9431f01") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 10:39:38 crc kubenswrapper[4782]: I0202 10:39:38.511935 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:38 crc kubenswrapper[4782]: I0202 10:39:38.512013 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:38 crc kubenswrapper[4782]: I0202 10:39:38.512029 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:38 crc kubenswrapper[4782]: I0202 10:39:38.512053 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:38 crc kubenswrapper[4782]: I0202 10:39:38.512073 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:38Z","lastTransitionTime":"2026-02-02T10:39:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:38 crc kubenswrapper[4782]: I0202 10:39:38.615765 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:38 crc kubenswrapper[4782]: I0202 10:39:38.615828 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:38 crc kubenswrapper[4782]: I0202 10:39:38.615838 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:38 crc kubenswrapper[4782]: I0202 10:39:38.615861 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:38 crc kubenswrapper[4782]: I0202 10:39:38.615875 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:38Z","lastTransitionTime":"2026-02-02T10:39:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
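The "(durationBeforeRetry 16s)" in the nestedpendingoperations entry above comes from per-volume exponential backoff: each consecutive MountVolume failure roughly doubles the wait. The constants in this sketch mirror the upstream exponentialbackoff defaults used by nestedpendingoperations (an illustration under that assumption, not the kubelet's actual code); 16s corresponds to the sixth consecutive failure (0.5s doubled five times):

    package main

    import (
        "fmt"
        "time"
    )

    const (
        initialDurationBeforeRetry = 500 * time.Millisecond
        maxDurationBeforeRetry     = 2*time.Minute + 2*time.Second
    )

    // durationBeforeRetry doubles the initial wait once per prior failure,
    // capped at maxDurationBeforeRetry.
    func durationBeforeRetry(failures int) time.Duration {
        d := initialDurationBeforeRetry
        for i := 1; i < failures; i++ {
            d *= 2
            if d > maxDurationBeforeRetry {
                return maxDurationBeforeRetry
            }
        }
        return d
    }

    func main() {
        for f := 1; f <= 10; f++ {
            fmt.Printf("failure %2d -> durationBeforeRetry %v\n", f, durationBeforeRetry(f))
        }
        // failure 6 prints 16s, matching the log entry above.
    }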
Has your network provider started?"} Feb 02 10:39:38 crc kubenswrapper[4782]: I0202 10:39:38.718693 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:38 crc kubenswrapper[4782]: I0202 10:39:38.718741 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:38 crc kubenswrapper[4782]: I0202 10:39:38.718751 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:38 crc kubenswrapper[4782]: I0202 10:39:38.718765 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:38 crc kubenswrapper[4782]: I0202 10:39:38.718779 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:38Z","lastTransitionTime":"2026-02-02T10:39:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:38 crc kubenswrapper[4782]: I0202 10:39:38.793948 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 14:24:00.802416398 +0000 UTC Feb 02 10:39:38 crc kubenswrapper[4782]: I0202 10:39:38.820763 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:39:38 crc kubenswrapper[4782]: I0202 10:39:38.820820 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:39:38 crc kubenswrapper[4782]: E0202 10:39:38.820904 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:39:38 crc kubenswrapper[4782]: E0202 10:39:38.820976 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:39:38 crc kubenswrapper[4782]: I0202 10:39:38.821033 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tv4xc" Feb 02 10:39:38 crc kubenswrapper[4782]: E0202 10:39:38.821122 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-tv4xc" podUID="4e23db96-3af7-4c29-b00f-5920a9431f01" Feb 02 10:39:38 crc kubenswrapper[4782]: I0202 10:39:38.821248 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:38 crc kubenswrapper[4782]: I0202 10:39:38.821267 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:38 crc kubenswrapper[4782]: I0202 10:39:38.821274 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:38 crc kubenswrapper[4782]: I0202 10:39:38.821284 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:38 crc kubenswrapper[4782]: I0202 10:39:38.821293 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:38Z","lastTransitionTime":"2026-02-02T10:39:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:38 crc kubenswrapper[4782]: I0202 10:39:38.924615 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:38 crc kubenswrapper[4782]: I0202 10:39:38.924678 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:38 crc kubenswrapper[4782]: I0202 10:39:38.924688 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:38 crc kubenswrapper[4782]: I0202 10:39:38.924704 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:38 crc kubenswrapper[4782]: I0202 10:39:38.924717 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:38Z","lastTransitionTime":"2026-02-02T10:39:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:39 crc kubenswrapper[4782]: I0202 10:39:39.027603 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:39 crc kubenswrapper[4782]: I0202 10:39:39.027640 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:39 crc kubenswrapper[4782]: I0202 10:39:39.027666 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:39 crc kubenswrapper[4782]: I0202 10:39:39.027681 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:39 crc kubenswrapper[4782]: I0202 10:39:39.027690 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:39Z","lastTransitionTime":"2026-02-02T10:39:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:39 crc kubenswrapper[4782]: I0202 10:39:39.130751 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:39 crc kubenswrapper[4782]: I0202 10:39:39.130802 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:39 crc kubenswrapper[4782]: I0202 10:39:39.130818 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:39 crc kubenswrapper[4782]: I0202 10:39:39.130838 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:39 crc kubenswrapper[4782]: I0202 10:39:39.130853 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:39Z","lastTransitionTime":"2026-02-02T10:39:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:39 crc kubenswrapper[4782]: I0202 10:39:39.233007 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:39 crc kubenswrapper[4782]: I0202 10:39:39.233035 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:39 crc kubenswrapper[4782]: I0202 10:39:39.233048 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:39 crc kubenswrapper[4782]: I0202 10:39:39.233061 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:39 crc kubenswrapper[4782]: I0202 10:39:39.233071 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:39Z","lastTransitionTime":"2026-02-02T10:39:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:39 crc kubenswrapper[4782]: I0202 10:39:39.336455 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:39 crc kubenswrapper[4782]: I0202 10:39:39.336492 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:39 crc kubenswrapper[4782]: I0202 10:39:39.336502 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:39 crc kubenswrapper[4782]: I0202 10:39:39.336514 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:39 crc kubenswrapper[4782]: I0202 10:39:39.336524 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:39Z","lastTransitionTime":"2026-02-02T10:39:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:39 crc kubenswrapper[4782]: I0202 10:39:39.438742 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:39 crc kubenswrapper[4782]: I0202 10:39:39.438785 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:39 crc kubenswrapper[4782]: I0202 10:39:39.438794 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:39 crc kubenswrapper[4782]: I0202 10:39:39.438809 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:39 crc kubenswrapper[4782]: I0202 10:39:39.438822 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:39Z","lastTransitionTime":"2026-02-02T10:39:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:39 crc kubenswrapper[4782]: I0202 10:39:39.542144 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:39 crc kubenswrapper[4782]: I0202 10:39:39.542220 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:39 crc kubenswrapper[4782]: I0202 10:39:39.542231 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:39 crc kubenswrapper[4782]: I0202 10:39:39.542248 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:39 crc kubenswrapper[4782]: I0202 10:39:39.542259 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:39Z","lastTransitionTime":"2026-02-02T10:39:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:39 crc kubenswrapper[4782]: I0202 10:39:39.644998 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:39 crc kubenswrapper[4782]: I0202 10:39:39.645075 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:39 crc kubenswrapper[4782]: I0202 10:39:39.645093 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:39 crc kubenswrapper[4782]: I0202 10:39:39.645122 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:39 crc kubenswrapper[4782]: I0202 10:39:39.645137 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:39Z","lastTransitionTime":"2026-02-02T10:39:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:39 crc kubenswrapper[4782]: I0202 10:39:39.748304 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:39 crc kubenswrapper[4782]: I0202 10:39:39.748347 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:39 crc kubenswrapper[4782]: I0202 10:39:39.748357 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:39 crc kubenswrapper[4782]: I0202 10:39:39.748372 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:39 crc kubenswrapper[4782]: I0202 10:39:39.748383 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:39Z","lastTransitionTime":"2026-02-02T10:39:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:39 crc kubenswrapper[4782]: I0202 10:39:39.794130 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 00:41:35.690558828 +0000 UTC Feb 02 10:39:39 crc kubenswrapper[4782]: I0202 10:39:39.820673 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:39:39 crc kubenswrapper[4782]: E0202 10:39:39.820794 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:39:39 crc kubenswrapper[4782]: I0202 10:39:39.850806 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:39 crc kubenswrapper[4782]: I0202 10:39:39.850852 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:39 crc kubenswrapper[4782]: I0202 10:39:39.850873 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:39 crc kubenswrapper[4782]: I0202 10:39:39.850888 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:39 crc kubenswrapper[4782]: I0202 10:39:39.850908 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:39Z","lastTransitionTime":"2026-02-02T10:39:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:39 crc kubenswrapper[4782]: I0202 10:39:39.953955 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:39 crc kubenswrapper[4782]: I0202 10:39:39.954225 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:39 crc kubenswrapper[4782]: I0202 10:39:39.954241 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:39 crc kubenswrapper[4782]: I0202 10:39:39.954260 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:39 crc kubenswrapper[4782]: I0202 10:39:39.954276 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:39Z","lastTransitionTime":"2026-02-02T10:39:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:40 crc kubenswrapper[4782]: I0202 10:39:40.057355 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:40 crc kubenswrapper[4782]: I0202 10:39:40.057410 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:40 crc kubenswrapper[4782]: I0202 10:39:40.057421 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:40 crc kubenswrapper[4782]: I0202 10:39:40.057438 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:40 crc kubenswrapper[4782]: I0202 10:39:40.057448 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:40Z","lastTransitionTime":"2026-02-02T10:39:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:40 crc kubenswrapper[4782]: I0202 10:39:40.160548 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:40 crc kubenswrapper[4782]: I0202 10:39:40.160578 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:40 crc kubenswrapper[4782]: I0202 10:39:40.160611 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:40 crc kubenswrapper[4782]: I0202 10:39:40.160625 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:40 crc kubenswrapper[4782]: I0202 10:39:40.160634 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:40Z","lastTransitionTime":"2026-02-02T10:39:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:40 crc kubenswrapper[4782]: I0202 10:39:40.263379 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:40 crc kubenswrapper[4782]: I0202 10:39:40.263419 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:40 crc kubenswrapper[4782]: I0202 10:39:40.263431 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:40 crc kubenswrapper[4782]: I0202 10:39:40.263446 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:40 crc kubenswrapper[4782]: I0202 10:39:40.263456 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:40Z","lastTransitionTime":"2026-02-02T10:39:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:40 crc kubenswrapper[4782]: I0202 10:39:40.366796 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:40 crc kubenswrapper[4782]: I0202 10:39:40.366836 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:40 crc kubenswrapper[4782]: I0202 10:39:40.366847 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:40 crc kubenswrapper[4782]: I0202 10:39:40.366866 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:40 crc kubenswrapper[4782]: I0202 10:39:40.366878 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:40Z","lastTransitionTime":"2026-02-02T10:39:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:40 crc kubenswrapper[4782]: I0202 10:39:40.470340 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:40 crc kubenswrapper[4782]: I0202 10:39:40.470398 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:40 crc kubenswrapper[4782]: I0202 10:39:40.470414 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:40 crc kubenswrapper[4782]: I0202 10:39:40.470443 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:40 crc kubenswrapper[4782]: I0202 10:39:40.470471 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:40Z","lastTransitionTime":"2026-02-02T10:39:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:40 crc kubenswrapper[4782]: I0202 10:39:40.574746 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:40 crc kubenswrapper[4782]: I0202 10:39:40.574838 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:40 crc kubenswrapper[4782]: I0202 10:39:40.574860 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:40 crc kubenswrapper[4782]: I0202 10:39:40.574891 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:40 crc kubenswrapper[4782]: I0202 10:39:40.574927 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:40Z","lastTransitionTime":"2026-02-02T10:39:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:40 crc kubenswrapper[4782]: I0202 10:39:40.679563 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:40 crc kubenswrapper[4782]: I0202 10:39:40.679617 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:40 crc kubenswrapper[4782]: I0202 10:39:40.679629 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:40 crc kubenswrapper[4782]: I0202 10:39:40.679672 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:40 crc kubenswrapper[4782]: I0202 10:39:40.679689 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:40Z","lastTransitionTime":"2026-02-02T10:39:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:40 crc kubenswrapper[4782]: I0202 10:39:40.782607 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:40 crc kubenswrapper[4782]: I0202 10:39:40.782677 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:40 crc kubenswrapper[4782]: I0202 10:39:40.782689 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:40 crc kubenswrapper[4782]: I0202 10:39:40.782707 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:40 crc kubenswrapper[4782]: I0202 10:39:40.782718 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:40Z","lastTransitionTime":"2026-02-02T10:39:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:40 crc kubenswrapper[4782]: I0202 10:39:40.795279 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 09:30:09.217440259 +0000 UTC Feb 02 10:39:40 crc kubenswrapper[4782]: I0202 10:39:40.820153 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:39:40 crc kubenswrapper[4782]: I0202 10:39:40.820182 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:39:40 crc kubenswrapper[4782]: I0202 10:39:40.820199 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-tv4xc" Feb 02 10:39:40 crc kubenswrapper[4782]: E0202 10:39:40.820280 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:39:40 crc kubenswrapper[4782]: E0202 10:39:40.820446 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tv4xc" podUID="4e23db96-3af7-4c29-b00f-5920a9431f01" Feb 02 10:39:40 crc kubenswrapper[4782]: E0202 10:39:40.820575 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:39:40 crc kubenswrapper[4782]: I0202 10:39:40.845802 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59347d2e-80ed-46da-8b2f-87cc48c5b564\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7303c0a57d425d661744c913b418fb647290581f745f973af1507563fa70b860\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10b2c4d8
1a795c40e715af0855002f75e3018dc14560eb97b551186a48c52744\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://667df661cba8be98b22059905ac9fe87f5d2af093d3184cfead863474441574f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfe76e80657cb875a6dd9b9c977e6120a670fc5cd1dad2e0346b21a467b00f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58cd2e688da72a74340b25a755164b61039f7615ccda8b24b12b2b4025f89584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\
\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:40Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:40 crc kubenswrapper[4782]: I0202 10:39:40.864295 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b6dd1dee15020005db2d8655165e09c72a90eaade00a66c0742c0980ccc669e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:40Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:40 crc kubenswrapper[4782]: I0202 10:39:40.878449 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aebafb5717a3d34f83e5f1ba346c51bccea6c08fb6ddc2fadc47d8e1a4df1bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://643eee6982fcc4c02b72fb6daf1a18fc5ae263ebf8ac506d1fa745d6c311b38a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:40Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:40 crc kubenswrapper[4782]: I0202 10:39:40.888262 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:40 crc kubenswrapper[4782]: I0202 10:39:40.888325 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:40 crc kubenswrapper[4782]: I0202 10:39:40.888336 4782 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 02 10:39:40 crc kubenswrapper[4782]: I0202 10:39:40.888353 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:40 crc kubenswrapper[4782]: I0202 10:39:40.888367 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:40Z","lastTransitionTime":"2026-02-02T10:39:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:40 crc kubenswrapper[4782]: I0202 10:39:40.896516 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fptzv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa0a3c57-fe47-43dd-8905-00df4cae4fb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb00c5826e8950791c82faf09cb4417f633c04908afcaed816c63b81c05aae48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6np9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fptzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:40Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:40 crc kubenswrapper[4782]: I0202 10:39:40.909060 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-thvm5" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70faa63d-a86d-45aa-b6fd-81fa90436da2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb015f1ff28b0f28114d4c5d3c643fdb9af2c24d6d3c4a3f34c051677c815e3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrknp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-thvm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:40Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:40 crc kubenswrapper[4782]: I0202 10:39:40.925431 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfc52d94-656d-4294-b105-0f83d22c9664\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"message\\\":\\\"containers 
with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66dc97daee138ae6c8403d57638f6bd3cc1c5a08ce7e1631129017dc7046ebc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9de866cf92cbd982802a4bfa9c7f6b9839c69efb72d01f3f52e65e2355d47ded\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a81a5434992af16888e8ed5ec9555e57dd4485dd6c396816421ad07c181dae8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd62da31b65707d98011292c190f6f44ab2e60bd1339f47cc289d0b445425b60\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b089d9f482be084f3c2be3bc369e38cef6607d91af0c82b338da5c29883f6057\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 10:39:00.270088 1 builder.go:272] unable to get owner 
reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:39:00.270415 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:39:00.271477 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3666386151/tls.crt::/tmp/serving-cert-3666386151/tls.key\\\\\\\"\\\\nI0202 10:39:00.760414 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:39:00.783275 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:39:00.783307 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:39:00.783330 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:39:00.783335 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 10:39:00.810370 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:39:00.810474 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810498 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 10:39:00.810539 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:39:00.810558 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:39:00.810577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 10:39:00.810783 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 10:39:00.814783 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26b0186a9067ba0a405e849b58fc2f39e93d443ddf69ebd0d4fd4877f45e805\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:40Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:40 crc kubenswrapper[4782]: I0202 10:39:40.943051 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:40Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:40 crc kubenswrapper[4782]: I0202 10:39:40.959707 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://959c7dca46e7a68f4472c3aa2104c49d4e47190f0fab5c7ad2225e27e5c4585a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:40Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:40 crc kubenswrapper[4782]: I0202 10:39:40.987588 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2642ee4e-c16a-4e6e-9654-a67666f1bff8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7728a9bcd84ff5e2fad4910e321a0461ab043b453efe05ddc36d018dba38315a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://371a331afaa5bd7d90cd98ba3363865d54d8cac621ba37a40e51f0cddb4eb02b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6886dd3d4ca2fac733bc3cc105d17aff972828ba91abf5373769f272e5454ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://189c170f233964d95aebd085e016304a66076971b44f050d41d755305d11743f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://540e6587db608dea5be7483bfc3a5c4d44da51ecf3e9bfe12550aa8bf5a74fac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7645944a876f5f8139960a66e91d19644df23b8bfbdb64c9d14eec7298ac150\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://648f8ec38e8c54dd9feeec43b13f9ae38917d67d
8be850ecfb2bcbd51b68a592\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://648f8ec38e8c54dd9feeec43b13f9ae38917d67d8be850ecfb2bcbd51b68a592\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T10:39:25Z\\\",\\\"message\\\":\\\"35118-cf1b-4568-bd31-2f50308bf69d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0202 10:39:24.995439 6179 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:73135118-cf1b-4568-bd31-2f50308bf69d}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0202 10:39:24.994996 6179 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-additional-cni-plugins-8lwfx in node crc\\\\nI0202 10:39:24.995506 6179 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-additional-cni-plugins-8lwfx after 0 failed attempt(s)\\\\nF0202 10:39:24.994899 6179 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webho\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-prbrn_openshift-ovn-kubernetes(2642ee4e-c16a-4e6e-9654-a67666f1bff8)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://344eacf0d6239238cbf13b94f80d88a36589057a177a0f7a3d5629f7975c02d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-prbrn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:40Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:40 crc kubenswrapper[4782]: I0202 10:39:40.990390 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:40 crc kubenswrapper[4782]: I0202 10:39:40.990436 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:40 crc kubenswrapper[4782]: I0202 10:39:40.990447 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:40 crc kubenswrapper[4782]: I0202 10:39:40.990465 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:40 crc kubenswrapper[4782]: I0202 10:39:40.990476 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:40Z","lastTransitionTime":"2026-02-02T10:39:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:41 crc kubenswrapper[4782]: I0202 10:39:41.002886 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:41Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:41 crc kubenswrapper[4782]: I0202 10:39:41.019638 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:41Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:41 crc kubenswrapper[4782]: I0202 10:39:41.034017 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fsqgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04d9744a-e730-45b4-9f0c-bbb5b02cd311\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9855ee45d6787f239c2099e99544f50b4939dc9037eb5f4cfe36af9a0bed8937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\
\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7nrfs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fsqgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:41Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:41 crc kubenswrapper[4782]: I0202 10:39:41.049902 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-tv4xc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e23db96-3af7-4c29-b00f-5920a9431f01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpm5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpm5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:22Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-tv4xc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:41Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:41 crc kubenswrapper[4782]: I0202 10:39:41.066456 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"83b81d23-4cbf-48f2-a90d-aa5b4cb3ce45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://638971b143e4defe2f04fba48e554aff3023d52863a0eb6f36c29d394de7eeb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18f8ad22dcd04bba0dd895d584affcd359ac6221153a18ee542dee979b9efe25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3c260f298316c830b06f5e3634cd6c09bd10bbaad77b46e427eb4b039eebb53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://745ed5274a247d61c320043043077f9ecb39fa3734db15aefa07a3bbb6225c26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:41Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:41 crc kubenswrapper[4782]: I0202 10:39:41.082351 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35774ab2-362c-466b-9f87-5e152d4c8235\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1f97a7bc0ebb9c8dca5e77de93b5ad8744a3ed0a3939e31500e0bb10648b1c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7d3a0cdcdd628fdec78799be1bb9aeab47b7566b765ba0b033b9e925ece0be6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a72caeec33753f69102774c7bb1501dd1c0f304ab8e821616a7d6748b4b6a23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d8715a950ba202dd87b57bd0b7465a0ca0648a865e89ee9bd94848c15675501\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d8715a950ba202dd87b57bd0b7465a0ca0648a865e89ee9bd94848c15675501\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:41Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:41 crc kubenswrapper[4782]: I0202 10:39:41.093476 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:41 crc kubenswrapper[4782]: I0202 10:39:41.093528 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:41 crc kubenswrapper[4782]: I0202 10:39:41.093541 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:41 crc kubenswrapper[4782]: I0202 10:39:41.093558 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:41 crc kubenswrapper[4782]: I0202 
10:39:41.093569 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:41Z","lastTransitionTime":"2026-02-02T10:39:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:41 crc kubenswrapper[4782]: I0202 10:39:41.098567 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7919e98f-cc47-4f3c-9c53-6313850ea7b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://550747a1eaba426f3a6b264d3a5227e51b0171729134224fa76ba55d8ad47c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://362bf5c8cbdc4831ca6ceecf9323bad1217467a40b0e8dd491098c1ae4f42810\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11
\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bhdgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:41Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:41 crc kubenswrapper[4782]: I0202 10:39:41.117904 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8lwfx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1edc5703-bb51-4f8a-9b73-68ba48a40ce8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2765f9fa77bc99e4983b0d6883a7156c960f2dce2c80845cd1e0810199c50eac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e2a2f7b1b3c3236d77b4858e19cacf5554526262f0fc5703460a169d3b26f7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e2a2f7b1b3c3236d77b4858e19cacf5554526262f0fc5703460a169d3b26f7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"
name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d41d28339830a5090ce01715a5bd4d985c3081932cb71594b1e928b900dc353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d41d28339830a5090ce01715a5bd4d985c3081932cb71594b1e928b900dc353\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://615aa0c2b172d7e7ab8a484a6860646491265e3ebb25f57857c45bc591d40335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://615aa0c2b172d7e7ab8a484a6860646491265e3ebb25f57857c45bc591d40335\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://143776c340b6d3345f6902124169235fcf74289d9e107b8e6bdb5023f47b73df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eea
de0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://143776c340b6d3345f6902124169235fcf74289d9e107b8e6bdb5023f47b73df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://754a40bbe5b071214dbd49089838b58b26d7586fdd8479a6f8522ef49c03e255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://754a40bbe5b071214dbd49089838b58b26d7586fdd8479a6f8522ef49c03e255\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e2e1801b719e8d922f05eab190ff758e85d0d05f4e26f8b8a9d5812ac5ffe6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e2e1801b719e8d922f05eab190ff758e85d0d05f4e26f8b8a9d5812ac5ffe6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39
:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8lwfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:41Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:41 crc kubenswrapper[4782]: I0202 10:39:41.129212 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x49wn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"324c55ff-8d31-4452-bb4e-2a57fbdb23c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://025c5d7b0067cd9bfd8f87926e7ec57759b83410b2be1bfddc02029f4c8e5f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxkkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e8896767e0b6039745c672852e48a5fceb954162cac8a06257129bcc84efff3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxkkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-x49wn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:41Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:41 crc kubenswrapper[4782]: I0202 10:39:41.196896 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:41 crc kubenswrapper[4782]: I0202 10:39:41.196952 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:41 crc kubenswrapper[4782]: I0202 10:39:41.196962 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:41 crc kubenswrapper[4782]: I0202 10:39:41.196978 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:41 crc kubenswrapper[4782]: I0202 10:39:41.196987 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:41Z","lastTransitionTime":"2026-02-02T10:39:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:41 crc kubenswrapper[4782]: I0202 10:39:41.299026 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:41 crc kubenswrapper[4782]: I0202 10:39:41.299096 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:41 crc kubenswrapper[4782]: I0202 10:39:41.299109 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:41 crc kubenswrapper[4782]: I0202 10:39:41.299128 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:41 crc kubenswrapper[4782]: I0202 10:39:41.299168 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:41Z","lastTransitionTime":"2026-02-02T10:39:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:41 crc kubenswrapper[4782]: I0202 10:39:41.401315 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:41 crc kubenswrapper[4782]: I0202 10:39:41.401377 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:41 crc kubenswrapper[4782]: I0202 10:39:41.401388 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:41 crc kubenswrapper[4782]: I0202 10:39:41.401403 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:41 crc kubenswrapper[4782]: I0202 10:39:41.401414 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:41Z","lastTransitionTime":"2026-02-02T10:39:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:41 crc kubenswrapper[4782]: I0202 10:39:41.503920 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:41 crc kubenswrapper[4782]: I0202 10:39:41.504001 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:41 crc kubenswrapper[4782]: I0202 10:39:41.504034 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:41 crc kubenswrapper[4782]: I0202 10:39:41.504054 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:41 crc kubenswrapper[4782]: I0202 10:39:41.504067 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:41Z","lastTransitionTime":"2026-02-02T10:39:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:41 crc kubenswrapper[4782]: I0202 10:39:41.606895 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:41 crc kubenswrapper[4782]: I0202 10:39:41.606944 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:41 crc kubenswrapper[4782]: I0202 10:39:41.606956 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:41 crc kubenswrapper[4782]: I0202 10:39:41.606973 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:41 crc kubenswrapper[4782]: I0202 10:39:41.606993 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:41Z","lastTransitionTime":"2026-02-02T10:39:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:41 crc kubenswrapper[4782]: I0202 10:39:41.710486 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:41 crc kubenswrapper[4782]: I0202 10:39:41.710524 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:41 crc kubenswrapper[4782]: I0202 10:39:41.710534 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:41 crc kubenswrapper[4782]: I0202 10:39:41.710549 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:41 crc kubenswrapper[4782]: I0202 10:39:41.710559 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:41Z","lastTransitionTime":"2026-02-02T10:39:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:41 crc kubenswrapper[4782]: I0202 10:39:41.796219 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 12:52:19.632419508 +0000 UTC Feb 02 10:39:41 crc kubenswrapper[4782]: I0202 10:39:41.813012 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:41 crc kubenswrapper[4782]: I0202 10:39:41.813047 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:41 crc kubenswrapper[4782]: I0202 10:39:41.813056 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:41 crc kubenswrapper[4782]: I0202 10:39:41.813069 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:41 crc kubenswrapper[4782]: I0202 10:39:41.813077 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:41Z","lastTransitionTime":"2026-02-02T10:39:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:41 crc kubenswrapper[4782]: I0202 10:39:41.820929 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:39:41 crc kubenswrapper[4782]: E0202 10:39:41.821064 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:39:41 crc kubenswrapper[4782]: I0202 10:39:41.916316 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:41 crc kubenswrapper[4782]: I0202 10:39:41.916374 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:41 crc kubenswrapper[4782]: I0202 10:39:41.916391 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:41 crc kubenswrapper[4782]: I0202 10:39:41.916411 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:41 crc kubenswrapper[4782]: I0202 10:39:41.916426 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:41Z","lastTransitionTime":"2026-02-02T10:39:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:42 crc kubenswrapper[4782]: I0202 10:39:42.006532 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 10:39:42 crc kubenswrapper[4782]: I0202 10:39:42.019527 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:42 crc kubenswrapper[4782]: I0202 10:39:42.019575 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:42 crc kubenswrapper[4782]: I0202 10:39:42.019587 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:42 crc kubenswrapper[4782]: I0202 10:39:42.019606 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:42 crc kubenswrapper[4782]: I0202 10:39:42.019620 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:42Z","lastTransitionTime":"2026-02-02T10:39:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:42 crc kubenswrapper[4782]: I0202 10:39:42.025950 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aebafb5717a3d34f83e5f1ba346c51bccea6c08fb6ddc2fadc47d8e1a4df1bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://643eee6982fcc4c02b72fb6daf1a18fc5ae263ebf8ac506d1fa745d6c311b38a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:42Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:42 crc kubenswrapper[4782]: I0202 10:39:42.046793 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fptzv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa0a3c57-fe47-43dd-8905-00df4cae4fb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb00c5826e8950791c82faf09cb4417f633c04908afcaed816c63b81c05aae48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6np9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fptzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:42Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:42 crc kubenswrapper[4782]: I0202 10:39:42.063105 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-thvm5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"70faa63d-a86d-45aa-b6fd-81fa90436da2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb015f1ff28b0f28114d4c5d3c643fdb9af2c24d6d3c4a3f34c051677c815e3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrknp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-thvm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:42Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:42 crc kubenswrapper[4782]: I0202 10:39:42.087669 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59347d2e-80ed-46da-8b2f-87cc48c5b564\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7303c0a57d425d661744c913b418fb647290581f745f973af1507563fa70b860\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10b2c4d81a795c40e715af0855002f75e3018dc14560eb97b551186a48c52744\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://667df661cba8be98b22059905ac9fe87f5d2af093d3184cfead863474441574f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfe76e80657cb875a6dd9b9c977e6120a670fc5
cd1dad2e0346b21a467b00f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58cd2e688da72a74340b25a755164b61039f7615ccda8b24b12b2b4025f89584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:42Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:42 crc kubenswrapper[4782]: I0202 10:39:42.103698 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b6dd1dee15020005db2d8655165e09c72a90eaade00a66c0742c0980ccc669e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:42Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:42 crc kubenswrapper[4782]: I0202 10:39:42.120565 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:42Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:42 crc kubenswrapper[4782]: I0202 10:39:42.122247 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:42 crc kubenswrapper[4782]: I0202 10:39:42.122411 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:42 crc kubenswrapper[4782]: I0202 10:39:42.122871 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:42 crc kubenswrapper[4782]: I0202 10:39:42.123268 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:42 crc 
kubenswrapper[4782]: I0202 10:39:42.123676 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:42Z","lastTransitionTime":"2026-02-02T10:39:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:42 crc kubenswrapper[4782]: I0202 10:39:42.134724 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://959c7dca46e7a68f4472c3aa2104c49d4e47190f0fab5c7ad2225e27e5c4585a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:42Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:42 crc kubenswrapper[4782]: I0202 10:39:42.159899 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2642ee4e-c16a-4e6e-9654-a67666f1bff8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7728a9bcd84ff5e2fad4910e321a0461ab043b453efe05ddc36d018dba38315a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://371a331afaa5bd7d90cd98ba3363865d54d8cac621ba37a40e51f0cddb4eb02b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6886dd3d4ca2fac733bc3cc105d17aff972828ba91abf5373769f272e5454ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://189c170f233964d95aebd085e016304a66076971b44f050d41d755305d11743f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://540e6587db608dea5be7483bfc3a5c4d44da51ecf3e9bfe12550aa8bf5a74fac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7645944a876f5f8139960a66e91d19644df23b8bfbdb64c9d14eec7298ac150\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://648f8ec38e8c54dd9feeec43b13f9ae38917d67d8be850ecfb2bcbd51b68a592\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://648f8ec38e8c54dd9feeec43b13f9ae38917d67d8be850ecfb2bcbd51b68a592\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T10:39:25Z\\\",\\\"message\\\":\\\"35118-cf1b-4568-bd31-2f50308bf69d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0202 10:39:24.995439 6179 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:73135118-cf1b-4568-bd31-2f50308bf69d}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0202 10:39:24.994996 6179 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-additional-cni-plugins-8lwfx in node crc\\\\nI0202 10:39:24.995506 6179 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-additional-cni-plugins-8lwfx after 0 failed attempt(s)\\\\nF0202 10:39:24.994899 6179 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webho\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-prbrn_openshift-ovn-kubernetes(2642ee4e-c16a-4e6e-9654-a67666f1bff8)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://344eacf0d6239238cbf13b94f80d88a36589057a177a0f7a3d5629f7975c02d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-prbrn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:42Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:42 crc kubenswrapper[4782]: I0202 10:39:42.181918 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfc52d94-656d-4294-b105-0f83d22c9664\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66dc97daee138ae6c8403d57638f6bd3cc1c5a08ce7e1631129017dc7046ebc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9de866cf92cbd982802a4bfa9c7f6b9839c69efb72d01f3f52e65e2355d47ded\\\",\\\"i
mage\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a81a5434992af16888e8ed5ec9555e57dd4485dd6c396816421ad07c181dae8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd62da31b65707d98011292c190f6f44ab2e60bd1339f47cc289d0b445425b60\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b089d9f482be084f3c2be3bc369e38cef6607d91af0c82b338da5c29883f6057\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 10:39:00.270088 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:39:00.270415 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:39:00.271477 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3666386151/tls.crt::/tmp/serving-cert-3666386151/tls.key\\\\\\\"\\\\nI0202 10:39:00.760414 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:39:00.783275 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:39:00.783307 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:39:00.783330 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:39:00.783335 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 10:39:00.810370 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:39:00.810474 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810498 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 10:39:00.810539 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:39:00.810558 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:39:00.810577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 10:39:00.810783 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 10:39:00.814783 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26b0186a9067ba0a405e849b58fc2f39e93d443ddf69ebd0d4fd4877f45e805\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:42Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:42 crc kubenswrapper[4782]: I0202 10:39:42.198530 4782 status_manager.go:875] "Failed to update status 
for pod" pod="openshift-multus/multus-fsqgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04d9744a-e730-45b4-9f0c-bbb5b02cd311\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9855ee45d6787f239c2099e99544f50b4939dc9037eb5f4cfe36af9a0bed8937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7nrfs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-fsqgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:42Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:42 crc kubenswrapper[4782]: I0202 10:39:42.211916 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:42Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:42 crc kubenswrapper[4782]: I0202 10:39:42.224861 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:42Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:42 crc kubenswrapper[4782]: I0202 10:39:42.226914 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:42 crc kubenswrapper[4782]: I0202 10:39:42.226961 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:42 crc kubenswrapper[4782]: I0202 10:39:42.226971 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:42 crc kubenswrapper[4782]: I0202 10:39:42.226984 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:42 crc kubenswrapper[4782]: I0202 10:39:42.226993 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:42Z","lastTransitionTime":"2026-02-02T10:39:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:42 crc kubenswrapper[4782]: I0202 10:39:42.236480 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7919e98f-cc47-4f3c-9c53-6313850ea7b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://550747a1eaba426f3a6b264d3a5227e51b0171729134224fa76ba55d8ad47c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://362bf5c8cbdc4831ca6ceecf9323bad1217467a40b0e8dd491098c1ae4f42810\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bhdgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:42Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:42 crc kubenswrapper[4782]: I0202 10:39:42.253197 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8lwfx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1edc5703-bb51-4f8a-9b73-68ba48a40ce8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2765f9fa77bc99e4983b0d6883a7156c960f2dce2c80845cd1e0810199c50eac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e2a2f7b1b3c3236d77b4858e19cacf5554526262f0fc5703460a169d3b26f7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e2a2f7b1b3c3236d77b4858e19cacf5554526262f0fc5703460a169d3b26f7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d41d28339830a5090ce01715a5bd4d985c3081932cb71594b1e928b900dc353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d41d28339830a5090ce01715a5bd4d985c3081932cb71594b1e928b900dc353\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://615aa0c2b172d7e7ab8a484a6860646491265e3ebb25f57857c45bc591d40335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://615aa0c2b172d7e7ab8a484a6860646491265e3ebb25f57857c45bc591d40335\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://143776c340b6d3345f6902124169235fcf74289d9e107b8e6bdb5023f47b73df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://143776c340b6d3345f6902124169235fcf74289d9e107b8e6bdb5023f47b73df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:14Z\\\",\\\"reason\\\":\\\"Completed
\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://754a40bbe5b071214dbd49089838b58b26d7586fdd8479a6f8522ef49c03e255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://754a40bbe5b071214dbd49089838b58b26d7586fdd8479a6f8522ef49c03e255\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e2e1801b719e8d922f05eab190ff758e85d0d05f4e26f8b8a9d5812ac5ffe6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e2e1801b719e8d922f05eab190ff758e85d0d05f4e26f8b8a9d5812ac5ffe6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8lwfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:42Z is after 
2025-08-24T17:21:41Z" Feb 02 10:39:42 crc kubenswrapper[4782]: I0202 10:39:42.265819 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x49wn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"324c55ff-8d31-4452-bb4e-2a57fbdb23c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://025c5d7b0067cd9bfd8f87926e7ec57759b83410b2be1bfddc02029f4c8e5f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxkkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e8896767e0b6039745c672852e48a5fceb954162cac8a06257129bcc84efff3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxkkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-x49wn\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:42Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:42 crc kubenswrapper[4782]: I0202 10:39:42.278058 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-tv4xc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e23db96-3af7-4c29-b00f-5920a9431f01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpm5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpm5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:22Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-tv4xc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:42Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:42 crc 
kubenswrapper[4782]: I0202 10:39:42.294350 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83b81d23-4cbf-48f2-a90d-aa5b4cb3ce45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://638971b143e4defe2f04fba48e554aff3023d52863a0eb6f36c29d394de7eeb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18f8ad22dcd04bba0dd895d584affcd359ac6221153a18ee542dee979b9efe25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3c260f298316c830b06f5e3634cd6c09bd10bbaad77b46e427eb4b039eebb53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\
":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://745ed5274a247d61c320043043077f9ecb39fa3734db15aefa07a3bbb6225c26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:42Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:42 crc kubenswrapper[4782]: I0202 10:39:42.311420 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35774ab2-362c-466b-9f87-5e152d4c8235\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1f97a7bc0ebb9c8dca5e77de93b5ad8744a3ed0a3939e31500e0bb10648b1c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7d3a0cdcdd628fdec78799be1bb9aeab47b7566b765ba0b033b9e925ece0be6\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a72caeec33753f69102774c7bb1501dd1c0f304ab8e821616a7d6748b4b6a23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d8715a950ba202dd87b57bd0b7465a0ca0648a865e89ee9bd94848c15675501\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d8715a950ba202dd87b57bd0b7465a0ca0648a865e89ee9bd94848c15675501\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:42Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:42 crc kubenswrapper[4782]: I0202 10:39:42.329411 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:42 crc kubenswrapper[4782]: I0202 10:39:42.329460 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:42 crc kubenswrapper[4782]: I0202 10:39:42.329469 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:42 crc 
kubenswrapper[4782]: I0202 10:39:42.329498 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:42 crc kubenswrapper[4782]: I0202 10:39:42.329515 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:42Z","lastTransitionTime":"2026-02-02T10:39:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:42 crc kubenswrapper[4782]: I0202 10:39:42.432224 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:42 crc kubenswrapper[4782]: I0202 10:39:42.432271 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:42 crc kubenswrapper[4782]: I0202 10:39:42.432283 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:42 crc kubenswrapper[4782]: I0202 10:39:42.432303 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:42 crc kubenswrapper[4782]: I0202 10:39:42.432317 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:42Z","lastTransitionTime":"2026-02-02T10:39:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:42 crc kubenswrapper[4782]: I0202 10:39:42.534911 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:42 crc kubenswrapper[4782]: I0202 10:39:42.534947 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:42 crc kubenswrapper[4782]: I0202 10:39:42.534956 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:42 crc kubenswrapper[4782]: I0202 10:39:42.534970 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:42 crc kubenswrapper[4782]: I0202 10:39:42.534978 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:42Z","lastTransitionTime":"2026-02-02T10:39:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:42 crc kubenswrapper[4782]: I0202 10:39:42.638165 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:42 crc kubenswrapper[4782]: I0202 10:39:42.638220 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:42 crc kubenswrapper[4782]: I0202 10:39:42.638233 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:42 crc kubenswrapper[4782]: I0202 10:39:42.638249 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:42 crc kubenswrapper[4782]: I0202 10:39:42.638258 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:42Z","lastTransitionTime":"2026-02-02T10:39:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:42 crc kubenswrapper[4782]: I0202 10:39:42.740991 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:42 crc kubenswrapper[4782]: I0202 10:39:42.741024 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:42 crc kubenswrapper[4782]: I0202 10:39:42.741033 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:42 crc kubenswrapper[4782]: I0202 10:39:42.741045 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:42 crc kubenswrapper[4782]: I0202 10:39:42.741054 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:42Z","lastTransitionTime":"2026-02-02T10:39:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:42 crc kubenswrapper[4782]: I0202 10:39:42.797287 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 18:13:38.17618963 +0000 UTC Feb 02 10:39:42 crc kubenswrapper[4782]: I0202 10:39:42.820234 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tv4xc" Feb 02 10:39:42 crc kubenswrapper[4782]: I0202 10:39:42.820265 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:39:42 crc kubenswrapper[4782]: E0202 10:39:42.820375 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-tv4xc" podUID="4e23db96-3af7-4c29-b00f-5920a9431f01" Feb 02 10:39:42 crc kubenswrapper[4782]: I0202 10:39:42.820425 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:39:42 crc kubenswrapper[4782]: E0202 10:39:42.820500 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:39:42 crc kubenswrapper[4782]: E0202 10:39:42.820579 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:39:42 crc kubenswrapper[4782]: I0202 10:39:42.821348 4782 scope.go:117] "RemoveContainer" containerID="648f8ec38e8c54dd9feeec43b13f9ae38917d67d8be850ecfb2bcbd51b68a592" Feb 02 10:39:42 crc kubenswrapper[4782]: I0202 10:39:42.845987 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:42 crc kubenswrapper[4782]: I0202 10:39:42.846040 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:42 crc kubenswrapper[4782]: I0202 10:39:42.846053 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:42 crc kubenswrapper[4782]: I0202 10:39:42.846072 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:42 crc kubenswrapper[4782]: I0202 10:39:42.846084 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:42Z","lastTransitionTime":"2026-02-02T10:39:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:42 crc kubenswrapper[4782]: I0202 10:39:42.949231 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:42 crc kubenswrapper[4782]: I0202 10:39:42.949503 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:42 crc kubenswrapper[4782]: I0202 10:39:42.949512 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:42 crc kubenswrapper[4782]: I0202 10:39:42.949527 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:42 crc kubenswrapper[4782]: I0202 10:39:42.949537 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:42Z","lastTransitionTime":"2026-02-02T10:39:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:43 crc kubenswrapper[4782]: I0202 10:39:43.051445 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:43 crc kubenswrapper[4782]: I0202 10:39:43.051500 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:43 crc kubenswrapper[4782]: I0202 10:39:43.051514 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:43 crc kubenswrapper[4782]: I0202 10:39:43.051530 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:43 crc kubenswrapper[4782]: I0202 10:39:43.051542 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:43Z","lastTransitionTime":"2026-02-02T10:39:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:43 crc kubenswrapper[4782]: I0202 10:39:43.153969 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:43 crc kubenswrapper[4782]: I0202 10:39:43.154012 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:43 crc kubenswrapper[4782]: I0202 10:39:43.154021 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:43 crc kubenswrapper[4782]: I0202 10:39:43.154036 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:43 crc kubenswrapper[4782]: I0202 10:39:43.154047 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:43Z","lastTransitionTime":"2026-02-02T10:39:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:43 crc kubenswrapper[4782]: I0202 10:39:43.191768 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-prbrn_2642ee4e-c16a-4e6e-9654-a67666f1bff8/ovnkube-controller/1.log" Feb 02 10:39:43 crc kubenswrapper[4782]: I0202 10:39:43.195507 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" event={"ID":"2642ee4e-c16a-4e6e-9654-a67666f1bff8","Type":"ContainerStarted","Data":"c03e79e9d8e50ea1ff1ec473550eb74b39c5ba1a114e03a38c7c6ceb1ca6094a"} Feb 02 10:39:43 crc kubenswrapper[4782]: I0202 10:39:43.196287 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" Feb 02 10:39:43 crc kubenswrapper[4782]: I0202 10:39:43.209691 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aebafb5717a3d34f83e5f1ba346c51bccea6c08fb6ddc2fadc47d8e1a4df1bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://643eee6982fcc4c02b72fb6daf1a18fc5ae263ebf8ac506d1fa745d6c311b38a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\
",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:43Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:43 crc kubenswrapper[4782]: I0202 10:39:43.220810 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fptzv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa0a3c57-fe47-43dd-8905-00df4cae4fb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb00c5826e8950791c82faf09cb4417f633c04908afcaed816c63b81c05aae48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6np9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fptzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:43Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:43 crc kubenswrapper[4782]: I0202 10:39:43.230334 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-thvm5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"70faa63d-a86d-45aa-b6fd-81fa90436da2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb015f1ff28b0f28114d4c5d3c643fdb9af2c24d6d3c4a3f34c051677c815e3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrknp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-thvm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:43Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:43 crc kubenswrapper[4782]: I0202 10:39:43.256679 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59347d2e-80ed-46da-8b2f-87cc48c5b564\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7303c0a57d425d661744c913b418fb647290581f745f973af1507563fa70b860\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10b2c4d81a795c40e715af0855002f75e3018dc14560eb97b551186a48c52744\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://667df661cba8be98b22059905ac9fe87f5d2af093d3184cfead863474441574f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfe76e80657cb875a6dd9b9c977e6120a670fc5
cd1dad2e0346b21a467b00f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58cd2e688da72a74340b25a755164b61039f7615ccda8b24b12b2b4025f89584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:43Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:43 crc kubenswrapper[4782]: I0202 10:39:43.256886 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:43 crc kubenswrapper[4782]: I0202 10:39:43.256923 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:43 crc kubenswrapper[4782]: I0202 10:39:43.256934 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:43 crc kubenswrapper[4782]: I0202 10:39:43.256950 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:43 crc kubenswrapper[4782]: I0202 10:39:43.256961 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:43Z","lastTransitionTime":"2026-02-02T10:39:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:43 crc kubenswrapper[4782]: I0202 10:39:43.272732 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b6dd1dee15020005db2d8655165e09c72a90eaade00a66c0742c0980ccc669e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:43Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:43 crc kubenswrapper[4782]: I0202 10:39:43.287452 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:43Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:43 crc kubenswrapper[4782]: I0202 10:39:43.298173 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://959c7dca46e7a68f4472c3aa2104c49d4e47190f0fab5c7ad2225e27e5c4585a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:43Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:43 crc kubenswrapper[4782]: I0202 10:39:43.321930 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2642ee4e-c16a-4e6e-9654-a67666f1bff8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7728a9bcd84ff5e2fad4910e321a0461ab043b453efe05ddc36d018dba38315a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://371a331afaa5bd7d90cd98ba3363865d54d8cac621ba37a40e51f0cddb4eb02b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics
-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6886dd3d4ca2fac733bc3cc105d17aff972828ba91abf5373769f272e5454ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://189c170f233964d95aebd085e016304a66076971b44f050d41d755305d11743f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://540e6587db608dea5be7483bfc3a5c4d44da51ecf3e9bfe12550aa8bf5a74fac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabl
ed\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7645944a876f5f8139960a66e91d19644df23b8bfbdb64c9d14eec7298ac150\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c03e79e9d8e50ea1ff1ec473550eb74b39c5ba1a114e03a38c7c6ceb1ca6094a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://648f8ec38e8c54dd9feeec43b13f9ae38917d67d8be850ecfb2bcbd51b68a592\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T10:39:25Z\\\",\\\"message\\\":\\\"35118-cf1b-4568-bd31-2f50308bf69d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0202 10:39:24.995439 6179 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:73135118-cf1b-4568-bd31-2f50308bf69d}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0202 10:39:24.994996 6179 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-additional-cni-plugins-8lwfx in node crc\\\\nI0202 10:39:24.995506 6179 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-additional-cni-plugins-8lwfx after 0 failed attempt(s)\\\\nF0202 10:39:24.994899 6179 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error 
occurred: failed calling webho\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://344eacf0d6239238cbf13b94f80d88a36589057a177a0f7a3d5629f7975c02d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initCon
tainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-prbrn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:43Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:43 crc kubenswrapper[4782]: I0202 10:39:43.339220 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfc52d94-656d-4294-b105-0f83d22c9664\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66dc97daee138ae6c8403d57638f6bd3cc1c5a08ce7e1631129017dc7046ebc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9de866cf92cbd982802a4bfa9c7f6b9839c69efb72d01f3f52e65e2355d47ded\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a81a5434992af16888e8ed5ec9555e57dd4485dd6c396816421ad07c181dae8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd62da31b65707d98011292c190f6f44ab2e60bd1339f47cc289d0b445425b60\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b089d9f482be084f3c2be3bc369e38cef6607d91af0c82b338da5c29883f6057\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 10:39:00.270088 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:39:00.270415 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:39:00.271477 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3666386151/tls.crt::/tmp/serving-cert-3666386151/tls.key\\\\\\\"\\\\nI0202 10:39:00.760414 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:39:00.783275 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:39:00.783307 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:39:00.783330 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:39:00.783335 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 10:39:00.810370 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:39:00.810474 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810498 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 10:39:00.810539 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:39:00.810558 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:39:00.810577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 10:39:00.810783 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 10:39:00.814783 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26b0186a9067ba0a405e849b58fc2f39e93d443ddf69ebd0d4fd4877f45e805\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:43Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:43 crc kubenswrapper[4782]: I0202 10:39:43.355420 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fsqgq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"04d9744a-e730-45b4-9f0c-bbb5b02cd311\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9855ee45d6787f239c2099e99544f50b4939dc9037eb5f4cfe36af9a0bed8937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7nrfs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fsqgq\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:43Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:43 crc kubenswrapper[4782]: I0202 10:39:43.359405 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:43 crc kubenswrapper[4782]: I0202 10:39:43.359428 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:43 crc kubenswrapper[4782]: I0202 10:39:43.359438 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:43 crc kubenswrapper[4782]: I0202 10:39:43.359454 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:43 crc kubenswrapper[4782]: I0202 10:39:43.359465 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:43Z","lastTransitionTime":"2026-02-02T10:39:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:43 crc kubenswrapper[4782]: I0202 10:39:43.369810 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:43Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:43 crc kubenswrapper[4782]: I0202 10:39:43.382278 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:43Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:43 crc kubenswrapper[4782]: I0202 10:39:43.393390 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7919e98f-cc47-4f3c-9c53-6313850ea7b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://550747a1eaba426f3a6b264d3a5227e51b0171729134224fa76ba55d8ad47c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://362bf5c8cbdc4831ca6ceecf9323bad1217467a40b0e8dd491098c1ae4f42810\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":tru
e,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bhdgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:43Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:43 crc kubenswrapper[4782]: I0202 10:39:43.406439 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8lwfx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1edc5703-bb51-4f8a-9b73-68ba48a40ce8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2765f9fa77bc99e4983b0d6883a7156c960f2dce2c80845cd1e0810199c50eac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e2a2f7b1b3c3236d77b4858e19cacf5554526262f0fc5703460a169d3b26f7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"
,\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e2a2f7b1b3c3236d77b4858e19cacf5554526262f0fc5703460a169d3b26f7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d41d28339830a5090ce01715a5bd4d985c3081932cb71594b1e928b900dc353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d41d28339830a5090ce01715a5bd4d985c3081932cb71594b1e928b900dc353\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://615aa0c2b172d7e7ab8a484a6860646491265e3ebb25f57857c45bc591d40335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://615aa0c2b172d7e7ab8a484a6860646491265e3ebb25f57857c45bc591d40335\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacc
ount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://143776c340b6d3345f6902124169235fcf74289d9e107b8e6bdb5023f47b73df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://143776c340b6d3345f6902124169235fcf74289d9e107b8e6bdb5023f47b73df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://754a40bbe5b071214dbd49089838b58b26d7586fdd8479a6f8522ef49c03e255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://754a40bbe5b071214dbd49089838b58b26d7586fdd8479a6f8522ef49c03e255\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e2e1801b719e8d922f05eab190ff758e85d0d05f4e26f8b8a9d5812ac5ffe6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e2e1801b719e8d922f05eab190ff758e85d0d05f4e26f8b8a9d5812ac5ffe6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\"
:\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8lwfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:43Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:43 crc kubenswrapper[4782]: I0202 10:39:43.415733 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x49wn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"324c55ff-8d31-4452-bb4e-2a57fbdb23c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://025c5d7b0067cd9bfd8f87926e7ec57759b83410b2be1bfddc02029f4c8e5f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxkkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e8896767e0b6039745c672852e48a5fceb954162cac8a06257129bcc84efff3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"s
tate\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxkkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-x49wn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:43Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:43 crc kubenswrapper[4782]: I0202 10:39:43.424935 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-tv4xc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e23db96-3af7-4c29-b00f-5920a9431f01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpm5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpm5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:22Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-tv4xc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:43Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:43 crc kubenswrapper[4782]: I0202 10:39:43.436554 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"83b81d23-4cbf-48f2-a90d-aa5b4cb3ce45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://638971b143e4defe2f04fba48e554aff3023d52863a0eb6f36c29d394de7eeb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18f8ad22dcd04bba0dd895d584affcd359ac6221153a18ee542dee979b9efe25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3c260f298316c830b06f5e3634cd6c09bd10bbaad77b46e427eb4b039eebb53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://745ed5274a247d61c320043043077f9ecb39fa3734db15aefa07a3bbb6225c26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:43Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:43 crc kubenswrapper[4782]: I0202 10:39:43.445994 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35774ab2-362c-466b-9f87-5e152d4c8235\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1f97a7bc0ebb9c8dca5e77de93b5ad8744a3ed0a3939e31500e0bb10648b1c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7d3a0cdcdd628fdec78799be1bb9aeab47b7566b765ba0b033b9e925ece0be6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a72caeec33753f69102774c7bb1501dd1c0f304ab8e821616a7d6748b4b6a23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d8715a950ba202dd87b57bd0b7465a0ca0648a865e89ee9bd94848c15675501\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d8715a950ba202dd87b57bd0b7465a0ca0648a865e89ee9bd94848c15675501\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:43Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:43 crc kubenswrapper[4782]: I0202 10:39:43.461852 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:43 crc kubenswrapper[4782]: I0202 10:39:43.461895 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:43 crc kubenswrapper[4782]: I0202 10:39:43.461908 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:43 crc kubenswrapper[4782]: I0202 10:39:43.461927 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:43 crc kubenswrapper[4782]: I0202 
10:39:43.461939 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:43Z","lastTransitionTime":"2026-02-02T10:39:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:43 crc kubenswrapper[4782]: I0202 10:39:43.564116 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:43 crc kubenswrapper[4782]: I0202 10:39:43.564163 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:43 crc kubenswrapper[4782]: I0202 10:39:43.564175 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:43 crc kubenswrapper[4782]: I0202 10:39:43.564190 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:43 crc kubenswrapper[4782]: I0202 10:39:43.564199 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:43Z","lastTransitionTime":"2026-02-02T10:39:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:43 crc kubenswrapper[4782]: I0202 10:39:43.666533 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:43 crc kubenswrapper[4782]: I0202 10:39:43.666601 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:43 crc kubenswrapper[4782]: I0202 10:39:43.666616 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:43 crc kubenswrapper[4782]: I0202 10:39:43.666634 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:43 crc kubenswrapper[4782]: I0202 10:39:43.666661 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:43Z","lastTransitionTime":"2026-02-02T10:39:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:43 crc kubenswrapper[4782]: I0202 10:39:43.769073 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:43 crc kubenswrapper[4782]: I0202 10:39:43.769108 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:43 crc kubenswrapper[4782]: I0202 10:39:43.769115 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:43 crc kubenswrapper[4782]: I0202 10:39:43.769129 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:43 crc kubenswrapper[4782]: I0202 10:39:43.769137 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:43Z","lastTransitionTime":"2026-02-02T10:39:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:43 crc kubenswrapper[4782]: I0202 10:39:43.797878 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 20:42:13.845368485 +0000 UTC Feb 02 10:39:43 crc kubenswrapper[4782]: I0202 10:39:43.820112 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:39:43 crc kubenswrapper[4782]: E0202 10:39:43.820248 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:39:43 crc kubenswrapper[4782]: I0202 10:39:43.871495 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:43 crc kubenswrapper[4782]: I0202 10:39:43.871532 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:43 crc kubenswrapper[4782]: I0202 10:39:43.871543 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:43 crc kubenswrapper[4782]: I0202 10:39:43.871569 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:43 crc kubenswrapper[4782]: I0202 10:39:43.871585 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:43Z","lastTransitionTime":"2026-02-02T10:39:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:43 crc kubenswrapper[4782]: I0202 10:39:43.974977 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:43 crc kubenswrapper[4782]: I0202 10:39:43.975016 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:43 crc kubenswrapper[4782]: I0202 10:39:43.975025 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:43 crc kubenswrapper[4782]: I0202 10:39:43.975039 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:43 crc kubenswrapper[4782]: I0202 10:39:43.975050 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:43Z","lastTransitionTime":"2026-02-02T10:39:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:44 crc kubenswrapper[4782]: I0202 10:39:44.077460 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:44 crc kubenswrapper[4782]: I0202 10:39:44.077512 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:44 crc kubenswrapper[4782]: I0202 10:39:44.077528 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:44 crc kubenswrapper[4782]: I0202 10:39:44.077541 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:44 crc kubenswrapper[4782]: I0202 10:39:44.077551 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:44Z","lastTransitionTime":"2026-02-02T10:39:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:44 crc kubenswrapper[4782]: I0202 10:39:44.179711 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:44 crc kubenswrapper[4782]: I0202 10:39:44.179982 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:44 crc kubenswrapper[4782]: I0202 10:39:44.179990 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:44 crc kubenswrapper[4782]: I0202 10:39:44.180003 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:44 crc kubenswrapper[4782]: I0202 10:39:44.180012 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:44Z","lastTransitionTime":"2026-02-02T10:39:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:44 crc kubenswrapper[4782]: I0202 10:39:44.199532 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-prbrn_2642ee4e-c16a-4e6e-9654-a67666f1bff8/ovnkube-controller/2.log" Feb 02 10:39:44 crc kubenswrapper[4782]: I0202 10:39:44.199966 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-prbrn_2642ee4e-c16a-4e6e-9654-a67666f1bff8/ovnkube-controller/1.log" Feb 02 10:39:44 crc kubenswrapper[4782]: I0202 10:39:44.202520 4782 generic.go:334] "Generic (PLEG): container finished" podID="2642ee4e-c16a-4e6e-9654-a67666f1bff8" containerID="c03e79e9d8e50ea1ff1ec473550eb74b39c5ba1a114e03a38c7c6ceb1ca6094a" exitCode=1 Feb 02 10:39:44 crc kubenswrapper[4782]: I0202 10:39:44.202556 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" event={"ID":"2642ee4e-c16a-4e6e-9654-a67666f1bff8","Type":"ContainerDied","Data":"c03e79e9d8e50ea1ff1ec473550eb74b39c5ba1a114e03a38c7c6ceb1ca6094a"} Feb 02 10:39:44 crc kubenswrapper[4782]: I0202 10:39:44.202594 4782 scope.go:117] "RemoveContainer" containerID="648f8ec38e8c54dd9feeec43b13f9ae38917d67d8be850ecfb2bcbd51b68a592" Feb 02 10:39:44 crc kubenswrapper[4782]: I0202 10:39:44.203432 4782 scope.go:117] "RemoveContainer" containerID="c03e79e9d8e50ea1ff1ec473550eb74b39c5ba1a114e03a38c7c6ceb1ca6094a" Feb 02 10:39:44 crc kubenswrapper[4782]: E0202 10:39:44.204079 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-prbrn_openshift-ovn-kubernetes(2642ee4e-c16a-4e6e-9654-a67666f1bff8)\"" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" podUID="2642ee4e-c16a-4e6e-9654-a67666f1bff8" Feb 02 10:39:44 crc kubenswrapper[4782]: I0202 10:39:44.220312 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://959c7dca46e7a68f4472c3aa2104c49d4e47190f0fab5c7ad2225e27e5c4585a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:44Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:44 crc kubenswrapper[4782]: I0202 10:39:44.237160 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2642ee4e-c16a-4e6e-9654-a67666f1bff8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7728a9bcd84ff5e2fad4910e321a0461ab043b453efe05ddc36d018dba38315a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://371a331afaa5bd7d90cd98ba3363865d54d8cac621ba37a40e51f0cddb4eb02b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6886dd3d4ca2fac733bc3cc105d17aff972828ba91abf5373769f272e5454ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://189c170f233964d95aebd085e016304a66076971b44f050d41d755305d11743f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://540e6587db608dea5be7483bfc3a5c4d44da51ecf3e9bfe12550aa8bf5a74fac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7645944a876f5f8139960a66e91d19644df23b8bfbdb64c9d14eec7298ac150\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c03e79e9d8e50ea1ff1ec473550eb74b39c5ba1a
114e03a38c7c6ceb1ca6094a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://648f8ec38e8c54dd9feeec43b13f9ae38917d67d8be850ecfb2bcbd51b68a592\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T10:39:25Z\\\",\\\"message\\\":\\\"35118-cf1b-4568-bd31-2f50308bf69d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0202 10:39:24.995439 6179 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:73135118-cf1b-4568-bd31-2f50308bf69d}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0202 10:39:24.994996 6179 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-additional-cni-plugins-8lwfx in node crc\\\\nI0202 10:39:24.995506 6179 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-additional-cni-plugins-8lwfx after 0 failed attempt(s)\\\\nF0202 10:39:24.994899 6179 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webho\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c03e79e9d8e50ea1ff1ec473550eb74b39c5ba1a114e03a38c7c6ceb1ca6094a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T10:39:43Z\\\",\\\"message\\\":\\\"{},},Conditions:[]Condition{},},}\\\\nI0202 10:39:43.908257 6438 lb_config.go:1031] Cluster endpoints for openshift-operator-lifecycle-manager/olm-operator-metrics for network=default are: map[]\\\\nI0202 10:39:43.908265 6438 services_controller.go:443] Built service openshift-operator-lifecycle-manager/olm-operator-metrics LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.5.168\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:8443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0202 10:39:43.908275 6438 services_controller.go:444] Built service openshift-operator-lifecycle-manager/olm-operator-metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0202 10:39:43.908281 6438 services_controller.go:445] Built service openshift-operator-lifecycle-manager/olm-operator-metrics LB template configs for network=default: []services.lbConfig(nil)\\\\nF0202 
10:39:43.908156 6438 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller ini\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://344eacf0d6239238cbf13b94f80d88a36589057a177a0f7a3d5629f7975c02d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\
",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-prbrn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:44Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:44 crc kubenswrapper[4782]: I0202 10:39:44.249733 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfc52d94-656d-4294-b105-0f83d22c9664\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66dc97daee138ae6c8403d57638f6bd3cc1c5a08ce7e1631129017dc7046ebc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9de866cf92cbd982802a4bfa9c7f6b9839c69efb72d01f3f52e65e2355d47ded\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a81a5434992af16888e8ed5ec9555e57dd4485dd6c396816421ad07c181dae8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd62da31b65707d98011292c190f6f44ab2e60bd1339f47cc289d0b445425b60\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b089d9f482be084f3c2be3bc369e38cef6607d91af0c82b338da5c29883f6057\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 10:39:00.270088 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:39:00.270415 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:39:00.271477 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3666386151/tls.crt::/tmp/serving-cert-3666386151/tls.key\\\\\\\"\\\\nI0202 10:39:00.760414 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:39:00.783275 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:39:00.783307 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:39:00.783330 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:39:00.783335 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 10:39:00.810370 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:39:00.810474 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810498 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 10:39:00.810539 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:39:00.810558 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:39:00.810577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 10:39:00.810783 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 10:39:00.814783 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26b0186a9067ba0a405e849b58fc2f39e93d443ddf69ebd0d4fd4877f45e805\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:44Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:44 crc kubenswrapper[4782]: I0202 10:39:44.264690 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:44Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:44 crc kubenswrapper[4782]: I0202 10:39:44.276122 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:44Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:44 crc kubenswrapper[4782]: I0202 10:39:44.281396 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:44 crc kubenswrapper[4782]: I0202 10:39:44.281442 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:44 crc kubenswrapper[4782]: I0202 10:39:44.281453 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:44 crc kubenswrapper[4782]: I0202 10:39:44.281467 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:44 crc kubenswrapper[4782]: I0202 10:39:44.281478 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:44Z","lastTransitionTime":"2026-02-02T10:39:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:44 crc kubenswrapper[4782]: I0202 10:39:44.287920 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:44Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:44 crc kubenswrapper[4782]: I0202 10:39:44.299429 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fsqgq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"04d9744a-e730-45b4-9f0c-bbb5b02cd311\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9855ee45d6787f239c2099e99544f50b4939dc9037eb5f4cfe36af9a0bed8937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7nrfs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fsqgq\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:44Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:44 crc kubenswrapper[4782]: I0202 10:39:44.311614 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8lwfx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1edc5703-bb51-4f8a-9b73-68ba48a40ce8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2765f9fa77bc99e4983b0d6883a7156c960f2dce2c80845cd1e0810199c50eac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e2a2f7b1b3c3236d77b4858e19cacf5554526262f0fc5703460a169d3b26f7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e2a2f7b1b3c3236d77b4858e19cacf5554526262f0fc5703460a169d3b26f7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d41d28339830a5090ce01715a5bd4d985c3081932cb71594b1e928b900dc353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d41d28339830a5090ce01715a5bd4d985c3081932cb71594b1e928b900dc353\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://615aa0c2b172d7e7ab8a484a6860646491265e3ebb25f57857c45bc591d40335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://615aa0c2b172d7e7ab8a484a6860646491265e3ebb25f57857c45bc591d40335\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://143776c340b6d3345f6902124169235fcf74289d9e107b8e6bdb5023f47b73df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://143776c340b6d3345f6902124169235fcf74289d9e107b8e6bdb5023f47b73df\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2026-02-02T10:39:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://754a40bbe5b071214dbd49089838b58b26d7586fdd8479a6f8522ef49c03e255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://754a40bbe5b071214dbd49089838b58b26d7586fdd8479a6f8522ef49c03e255\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e2e1801b719e8d922f05eab190ff758e85d0d05f4e26f8b8a9d5812ac5ffe6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e2e1801b719e8d922f05eab190ff758e85d0d05f4e26f8b8a9d5812ac5ffe6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8lwfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-02T10:39:44Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:44 crc kubenswrapper[4782]: I0202 10:39:44.322353 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x49wn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"324c55ff-8d31-4452-bb4e-2a57fbdb23c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://025c5d7b0067cd9bfd8f87926e7ec57759b83410b2be1bfddc02029f4c8e5f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxkkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e8896767e0b6039745c672852e48a5fceb954162cac8a06257129bcc84efff3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxkkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-x49wn\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:44Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:44 crc kubenswrapper[4782]: I0202 10:39:44.332914 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-tv4xc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e23db96-3af7-4c29-b00f-5920a9431f01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpm5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpm5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:22Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-tv4xc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:44Z is 
after 2025-08-24T17:21:41Z" Feb 02 10:39:44 crc kubenswrapper[4782]: I0202 10:39:44.344876 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83b81d23-4cbf-48f2-a90d-aa5b4cb3ce45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://638971b143e4defe2f04fba48e554aff3023d52863a0eb6f36c29d394de7eeb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18f8ad22dcd04bba0dd895d584affcd359ac6221153a18ee542dee979b9efe25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3c260f298316c830b06f5e3634cd6c09bd10bbaad77b46e427eb4b039eebb53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"
/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://745ed5274a247d61c320043043077f9ecb39fa3734db15aefa07a3bbb6225c26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:44Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:44 crc kubenswrapper[4782]: I0202 10:39:44.357890 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"35774ab2-362c-466b-9f87-5e152d4c8235\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1f97a7bc0ebb9c8dca5e77de93b5ad8744a3ed0a3939e31500e0bb10648b1c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7d3a0cdcdd628fdec78799be1bb9aeab47b7566b765ba0b033b9e925ece0be6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a72caeec33753f69102774c7bb1501dd1c0f304ab8e821616a7d6748b4b6a23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d8715a950ba202dd87b57bd0b7465a0ca0648a865e89ee9bd94848c15675501\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d8715a950ba202dd87b57bd0b7465a0ca0648a865e89ee9bd94848c15675501\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:44Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:44 crc kubenswrapper[4782]: I0202 10:39:44.372298 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7919e98f-cc47-4f3c-9c53-6313850ea7b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://550747a1eaba426f3a6b264d3a5227e51b0171729134224fa76ba55d8ad47c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://362bf5c8cbdc4831ca6ceecf9323bad1217467a40b0e8dd491098c1ae4f42810\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bhdgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:44Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:44 crc kubenswrapper[4782]: I0202 10:39:44.382484 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fptzv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa0a3c57-fe47-43dd-8905-00df4cae4fb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb00c5826e8950791c82faf09cb4417f633c04908afcaed816c63b81c05aae48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6np9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fptzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:44Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:44 crc kubenswrapper[4782]: I0202 10:39:44.383690 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:44 crc kubenswrapper[4782]: I0202 10:39:44.383721 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:44 crc kubenswrapper[4782]: I0202 10:39:44.383732 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:44 crc kubenswrapper[4782]: I0202 10:39:44.383756 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:44 crc kubenswrapper[4782]: I0202 10:39:44.383769 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:44Z","lastTransitionTime":"2026-02-02T10:39:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:44 crc kubenswrapper[4782]: I0202 10:39:44.394368 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-thvm5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"70faa63d-a86d-45aa-b6fd-81fa90436da2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb015f1ff28b0f28114d4c5d3c643fdb9af2c24d6d3c4a3f34c051677c815e3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrknp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-thvm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:44Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:44 crc kubenswrapper[4782]: I0202 10:39:44.411866 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59347d2e-80ed-46da-8b2f-87cc48c5b564\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7303c0a57d425d661744c913b418fb647290581f745f973af1507563fa70b860\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10b2c4d81a795c40e715af0855002f75e3018dc14560eb97b551186a48c52744\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://667df661cba8be98b22059905ac9fe87f5d2af093d3184cfead863474441574f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfe76e80657cb875a6dd9b9c977e6120a670fc5
cd1dad2e0346b21a467b00f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58cd2e688da72a74340b25a755164b61039f7615ccda8b24b12b2b4025f89584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:44Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:44 crc kubenswrapper[4782]: I0202 10:39:44.427197 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b6dd1dee15020005db2d8655165e09c72a90eaade00a66c0742c0980ccc669e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:44Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:44 crc kubenswrapper[4782]: I0202 10:39:44.438071 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aebafb5717a3d34f83e5f1ba346c51bccea6c08fb6ddc2fadc47d8e1a4df1bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://643eee6982fcc4c02b72fb6daf1a18fc5ae263ebf8ac506d1fa745d6c311b38a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-02T10:39:44Z is after 2025-08-24T17:21:41Z"
Feb 02 10:39:44 crc kubenswrapper[4782]: I0202 10:39:44.485947 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:39:44 crc kubenswrapper[4782]: I0202 10:39:44.485975 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:39:44 crc kubenswrapper[4782]: I0202 10:39:44.485983 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:39:44 crc kubenswrapper[4782]: I0202 10:39:44.486011 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:39:44 crc kubenswrapper[4782]: I0202 10:39:44.486020 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:44Z","lastTransitionTime":"2026-02-02T10:39:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:39:44 crc kubenswrapper[4782]: I0202 10:39:44.588831 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:39:44 crc kubenswrapper[4782]: I0202 10:39:44.588875 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:39:44 crc kubenswrapper[4782]: I0202 10:39:44.588889 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:39:44 crc kubenswrapper[4782]: I0202 10:39:44.588906 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:39:44 crc kubenswrapper[4782]: I0202 10:39:44.588919 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:44Z","lastTransitionTime":"2026-02-02T10:39:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:39:44 crc kubenswrapper[4782]: I0202 10:39:44.691337 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:39:44 crc kubenswrapper[4782]: I0202 10:39:44.691374 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:39:44 crc kubenswrapper[4782]: I0202 10:39:44.691383 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:39:44 crc kubenswrapper[4782]: I0202 10:39:44.691398 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:39:44 crc kubenswrapper[4782]: I0202 10:39:44.691407 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:44Z","lastTransitionTime":"2026-02-02T10:39:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:39:44 crc kubenswrapper[4782]: I0202 10:39:44.794060 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:39:44 crc kubenswrapper[4782]: I0202 10:39:44.794095 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:39:44 crc kubenswrapper[4782]: I0202 10:39:44.794106 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:39:44 crc kubenswrapper[4782]: I0202 10:39:44.794119 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:39:44 crc kubenswrapper[4782]: I0202 10:39:44.794129 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:44Z","lastTransitionTime":"2026-02-02T10:39:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:39:44 crc kubenswrapper[4782]: I0202 10:39:44.798373 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 15:49:25.390452457 +0000 UTC
Feb 02 10:39:44 crc kubenswrapper[4782]: I0202 10:39:44.820679 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 10:39:44 crc kubenswrapper[4782]: E0202 10:39:44.820812 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 02 10:39:44 crc kubenswrapper[4782]: I0202 10:39:44.820878 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 10:39:44 crc kubenswrapper[4782]: I0202 10:39:44.820679 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tv4xc"
Feb 02 10:39:44 crc kubenswrapper[4782]: E0202 10:39:44.821004 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 02 10:39:44 crc kubenswrapper[4782]: E0202 10:39:44.821060 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tv4xc" podUID="4e23db96-3af7-4c29-b00f-5920a9431f01"
Feb 02 10:39:44 crc kubenswrapper[4782]: I0202 10:39:44.895980 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:39:44 crc kubenswrapper[4782]: I0202 10:39:44.896019 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:39:44 crc kubenswrapper[4782]: I0202 10:39:44.896032 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:39:44 crc kubenswrapper[4782]: I0202 10:39:44.896154 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:39:44 crc kubenswrapper[4782]: I0202 10:39:44.896172 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:44Z","lastTransitionTime":"2026-02-02T10:39:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:39:44 crc kubenswrapper[4782]: I0202 10:39:44.998070 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:39:44 crc kubenswrapper[4782]: I0202 10:39:44.998112 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:39:44 crc kubenswrapper[4782]: I0202 10:39:44.998121 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:39:44 crc kubenswrapper[4782]: I0202 10:39:44.998135 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:39:44 crc kubenswrapper[4782]: I0202 10:39:44.998148 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:44Z","lastTransitionTime":"2026-02-02T10:39:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:39:45 crc kubenswrapper[4782]: I0202 10:39:45.100333 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:39:45 crc kubenswrapper[4782]: I0202 10:39:45.100703 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:39:45 crc kubenswrapper[4782]: I0202 10:39:45.100789 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:39:45 crc kubenswrapper[4782]: I0202 10:39:45.100885 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:39:45 crc kubenswrapper[4782]: I0202 10:39:45.100980 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:45Z","lastTransitionTime":"2026-02-02T10:39:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:39:45 crc kubenswrapper[4782]: I0202 10:39:45.203933 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:39:45 crc kubenswrapper[4782]: I0202 10:39:45.203968 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:39:45 crc kubenswrapper[4782]: I0202 10:39:45.203989 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:39:45 crc kubenswrapper[4782]: I0202 10:39:45.204016 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:39:45 crc kubenswrapper[4782]: I0202 10:39:45.204029 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:45Z","lastTransitionTime":"2026-02-02T10:39:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:45 crc kubenswrapper[4782]: I0202 10:39:45.207378 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-prbrn_2642ee4e-c16a-4e6e-9654-a67666f1bff8/ovnkube-controller/2.log" Feb 02 10:39:45 crc kubenswrapper[4782]: I0202 10:39:45.210838 4782 scope.go:117] "RemoveContainer" containerID="c03e79e9d8e50ea1ff1ec473550eb74b39c5ba1a114e03a38c7c6ceb1ca6094a" Feb 02 10:39:45 crc kubenswrapper[4782]: E0202 10:39:45.211036 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-prbrn_openshift-ovn-kubernetes(2642ee4e-c16a-4e6e-9654-a67666f1bff8)\"" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" podUID="2642ee4e-c16a-4e6e-9654-a67666f1bff8" Feb 02 10:39:45 crc kubenswrapper[4782]: I0202 10:39:45.225481 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfc52d94-656d-4294-b105-0f83d22c9664\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66dc97daee138ae6c8403d57638f6bd3cc1c5a08ce7e1631129017dc7046ebc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9de866cf92cbd982802a4bfa9c7f6b9839c69efb72d01f3f52e65e2355d47ded\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a81a5434992af16888e8ed5ec9555e57dd4485dd6c396816421ad07c181dae8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd62da31b65707d98011292c190f6f44ab2e60bd1339f47cc289d0b445425b60\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b089d9f482be084f3c2be3bc369e38cef6607d91af0c82b338da5c29883f6057\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 10:39:00.270088 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:39:00.270415 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:39:00.271477 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3666386151/tls.crt::/tmp/serving-cert-3666386151/tls.key\\\\\\\"\\\\nI0202 10:39:00.760414 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:39:00.783275 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:39:00.783307 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:39:00.783330 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:39:00.783335 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 10:39:00.810370 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:39:00.810474 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810498 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 10:39:00.810539 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:39:00.810558 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:39:00.810577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 10:39:00.810783 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0202 10:39:00.814783 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26b0186a9067ba0a405e849b58fc2f39e93d443ddf69ebd0d4fd4877f45e805\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:45Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:45 crc kubenswrapper[4782]: I0202 10:39:45.239690 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:45Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:45 crc kubenswrapper[4782]: I0202 10:39:45.255280 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://959c7dca46e7a68f4472c3aa2104c49d4e47190f0fab5c7ad2225e27e5c4585a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:45Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:45 crc kubenswrapper[4782]: I0202 10:39:45.278829 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2642ee4e-c16a-4e6e-9654-a67666f1bff8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7728a9bcd84ff5e2fad4910e321a0461ab043b453efe05ddc36d018dba38315a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://371a331afaa5bd7d90cd98ba3363865d54d8cac621ba37a40e51f0cddb4eb02b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6886dd3d4ca2fac733bc3cc105d17aff972828ba91abf5373769f272e5454ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://189c170f233964d95aebd085e016304a66076971b44f050d41d755305d11743f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://540e6587db608dea5be7483bfc3a5c4d44da51ecf3e9bfe12550aa8bf5a74fac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7645944a876f5f8139960a66e91d19644df23b8bfbdb64c9d14eec7298ac150\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c03e79e9d8e50ea1ff1ec473550eb74b39c5ba1a
114e03a38c7c6ceb1ca6094a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c03e79e9d8e50ea1ff1ec473550eb74b39c5ba1a114e03a38c7c6ceb1ca6094a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T10:39:43Z\\\",\\\"message\\\":\\\"{},},Conditions:[]Condition{},},}\\\\nI0202 10:39:43.908257 6438 lb_config.go:1031] Cluster endpoints for openshift-operator-lifecycle-manager/olm-operator-metrics for network=default are: map[]\\\\nI0202 10:39:43.908265 6438 services_controller.go:443] Built service openshift-operator-lifecycle-manager/olm-operator-metrics LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.5.168\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:8443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0202 10:39:43.908275 6438 services_controller.go:444] Built service openshift-operator-lifecycle-manager/olm-operator-metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0202 10:39:43.908281 6438 services_controller.go:445] Built service openshift-operator-lifecycle-manager/olm-operator-metrics LB template configs for network=default: []services.lbConfig(nil)\\\\nF0202 10:39:43.908156 6438 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller ini\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:43Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-prbrn_openshift-ovn-kubernetes(2642ee4e-c16a-4e6e-9654-a67666f1bff8)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://344eacf0d6239238cbf13b94f80d88a36589057a177a0f7a3d5629f7975c02d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-prbrn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:45Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:45 crc kubenswrapper[4782]: I0202 10:39:45.291294 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:45Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:45 crc kubenswrapper[4782]: I0202 10:39:45.304541 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:45Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:45 crc kubenswrapper[4782]: I0202 10:39:45.306058 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:45 crc kubenswrapper[4782]: I0202 10:39:45.306087 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:45 crc kubenswrapper[4782]: I0202 10:39:45.306102 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:45 crc kubenswrapper[4782]: I0202 10:39:45.306115 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:45 crc kubenswrapper[4782]: I0202 10:39:45.306125 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:45Z","lastTransitionTime":"2026-02-02T10:39:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:45 crc kubenswrapper[4782]: I0202 10:39:45.324794 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fsqgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04d9744a-e730-45b4-9f0c-bbb5b02cd311\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9855ee45d6787f239c2099e99544f50b4939dc9037eb5f4cfe36af9a0bed8937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7nrfs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fsqgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:45Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:45 crc kubenswrapper[4782]: I0202 10:39:45.347934 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83b81d23-4cbf-48f2-a90d-aa5b4cb3ce45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://638971b143e4defe2f04fba48e554aff3023d52863a0eb6f36c29d394de7eeb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18f8ad22dcd04bba0dd895d584affcd359ac6221153a18ee542dee979b9efe25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3c260f298316c830b06f5e3634cd6c09bd10bbaad77b46e427eb4b039eebb53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-oper
ator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://745ed5274a247d61c320043043077f9ecb39fa3734db15aefa07a3bbb6225c26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:45Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:45 crc kubenswrapper[4782]: I0202 10:39:45.362373 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"35774ab2-362c-466b-9f87-5e152d4c8235\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1f97a7bc0ebb9c8dca5e77de93b5ad8744a3ed0a3939e31500e0bb10648b1c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7d3a0cdcdd628fdec78799be1bb9aeab47b7566b765ba0b033b9e925ece0be6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a72caeec33753f69102774c7bb1501dd1c0f304ab8e821616a7d6748b4b6a23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d8715a950ba202dd87b57bd0b7465a0ca0648a865e89ee9bd94848c15675501\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d8715a950ba202dd87b57bd0b7465a0ca0648a865e89ee9bd94848c15675501\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:45Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:45 crc kubenswrapper[4782]: I0202 10:39:45.375057 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7919e98f-cc47-4f3c-9c53-6313850ea7b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://550747a1eaba426f3a6b264d3a5227e51b0171729134224fa76ba55d8ad47c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://362bf5c8cbdc4831ca6ceecf9323bad1217467a40b0e8dd491098c1ae4f42810\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bhdgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:45Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:45 crc kubenswrapper[4782]: I0202 10:39:45.390870 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8lwfx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1edc5703-bb51-4f8a-9b73-68ba48a40ce8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2765f9fa77bc99e4983b0d6883a7156c960f2dce2c80845cd1e0810199c50eac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e2a2f7b1b3c3236d77b4858e19cacf5554526262f0fc5
703460a169d3b26f7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e2a2f7b1b3c3236d77b4858e19cacf5554526262f0fc5703460a169d3b26f7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d41d28339830a5090ce01715a5bd4d985c3081932cb71594b1e928b900dc353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d41d28339830a5090ce01715a5bd4d985c3081932cb71594b1e928b900dc353\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://615aa0c2b172d7e7ab8a484a6860646491265e3ebb25f57857c45bc591d40335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://615aa0c2b172d7e7ab8a484a6860646491265e3ebb25f57857c45bc591d40335\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\
"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://143776c340b6d3345f6902124169235fcf74289d9e107b8e6bdb5023f47b73df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://143776c340b6d3345f6902124169235fcf74289d9e107b8e6bdb5023f47b73df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://754a40bbe5b071214dbd49089838b58b26d7586fdd8479a6f8522ef49c03e255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://754a40bbe5b071214dbd49089838b58b26d7586fdd8479a6f8522ef49c03e255\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e2e1801b719e8d922f05eab190ff758e85d0d05f4e26f8b8a9d5812ac5ffe6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"
terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e2e1801b719e8d922f05eab190ff758e85d0d05f4e26f8b8a9d5812ac5ffe6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8lwfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:45Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:45 crc kubenswrapper[4782]: I0202 10:39:45.403862 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x49wn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"324c55ff-8d31-4452-bb4e-2a57fbdb23c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://025c5d7b0067cd9bfd8f87926e7ec57759b83410b2be1bfddc02029f4c8e5f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxkkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e8896767e0b6039745c672852e48a5fceb954162cac8a06257129bcc84efff3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265
a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxkkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-x49wn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:45Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:45 crc kubenswrapper[4782]: I0202 10:39:45.408566 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:45 crc kubenswrapper[4782]: I0202 10:39:45.408631 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:45 crc kubenswrapper[4782]: I0202 10:39:45.408659 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:45 crc kubenswrapper[4782]: I0202 10:39:45.408677 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:45 crc kubenswrapper[4782]: I0202 10:39:45.408696 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:45Z","lastTransitionTime":"2026-02-02T10:39:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:45 crc kubenswrapper[4782]: I0202 10:39:45.417618 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-tv4xc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e23db96-3af7-4c29-b00f-5920a9431f01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpm5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpm5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:22Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-tv4xc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:45Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:45 crc kubenswrapper[4782]: I0202 10:39:45.438626 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59347d2e-80ed-46da-8b2f-87cc48c5b564\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7303c0a57d425d661744c913b418fb647290581f745f973af1507563fa70b860\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10b2c4d81a795c40e715af0855002f75e3018dc14560eb97b551186a48c52744\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://667df661cba8be98b22059905ac9fe87f5d2af093d3184cfead863474441574f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfe76e80657cb875a6dd9b9c977e6120a670fc5
cd1dad2e0346b21a467b00f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58cd2e688da72a74340b25a755164b61039f7615ccda8b24b12b2b4025f89584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:45Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:45 crc kubenswrapper[4782]: I0202 10:39:45.455690 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b6dd1dee15020005db2d8655165e09c72a90eaade00a66c0742c0980ccc669e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:45Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:45 crc kubenswrapper[4782]: I0202 10:39:45.470079 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aebafb5717a3d34f83e5f1ba346c51bccea6c08fb6ddc2fadc47d8e1a4df1bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://643eee6982fcc4c02b72fb6daf1a18fc5ae263ebf8ac506d1fa745d6c311b38a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-02T10:39:45Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:45 crc kubenswrapper[4782]: I0202 10:39:45.483498 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fptzv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa0a3c57-fe47-43dd-8905-00df4cae4fb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb00c5826e8950791c82faf09cb4417f633c04908afcaed816c63b81c05aae48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6np9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fptzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:45Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:45 crc kubenswrapper[4782]: I0202 10:39:45.496133 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-thvm5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"70faa63d-a86d-45aa-b6fd-81fa90436da2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb015f1ff28b0f28114d4c5d3c643fdb9af2c24d6d3c4a3f34c051677c815e3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrknp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-thvm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:45Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:45 crc kubenswrapper[4782]: I0202 10:39:45.511172 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:45 crc kubenswrapper[4782]: I0202 10:39:45.511207 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:45 crc kubenswrapper[4782]: I0202 10:39:45.511219 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:45 crc kubenswrapper[4782]: I0202 10:39:45.511235 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:45 crc kubenswrapper[4782]: I0202 10:39:45.511246 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:45Z","lastTransitionTime":"2026-02-02T10:39:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:45 crc kubenswrapper[4782]: I0202 10:39:45.613788 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:45 crc kubenswrapper[4782]: I0202 10:39:45.613872 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:45 crc kubenswrapper[4782]: I0202 10:39:45.613884 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:45 crc kubenswrapper[4782]: I0202 10:39:45.613904 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:45 crc kubenswrapper[4782]: I0202 10:39:45.613916 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:45Z","lastTransitionTime":"2026-02-02T10:39:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:45 crc kubenswrapper[4782]: I0202 10:39:45.717481 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:45 crc kubenswrapper[4782]: I0202 10:39:45.717740 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:45 crc kubenswrapper[4782]: I0202 10:39:45.717832 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:45 crc kubenswrapper[4782]: I0202 10:39:45.717898 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:45 crc kubenswrapper[4782]: I0202 10:39:45.717971 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:45Z","lastTransitionTime":"2026-02-02T10:39:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:45 crc kubenswrapper[4782]: I0202 10:39:45.799387 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-06 19:00:14.504754402 +0000 UTC Feb 02 10:39:45 crc kubenswrapper[4782]: I0202 10:39:45.820153 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:39:45 crc kubenswrapper[4782]: I0202 10:39:45.820354 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:45 crc kubenswrapper[4782]: I0202 10:39:45.820392 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:45 crc kubenswrapper[4782]: I0202 10:39:45.820404 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:45 crc kubenswrapper[4782]: I0202 10:39:45.820419 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:45 crc kubenswrapper[4782]: I0202 10:39:45.820430 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:45Z","lastTransitionTime":"2026-02-02T10:39:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:45 crc kubenswrapper[4782]: E0202 10:39:45.820816 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:39:45 crc kubenswrapper[4782]: I0202 10:39:45.922082 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:45 crc kubenswrapper[4782]: I0202 10:39:45.922131 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:45 crc kubenswrapper[4782]: I0202 10:39:45.922142 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:45 crc kubenswrapper[4782]: I0202 10:39:45.922155 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:45 crc kubenswrapper[4782]: I0202 10:39:45.922165 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:45Z","lastTransitionTime":"2026-02-02T10:39:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:46 crc kubenswrapper[4782]: I0202 10:39:46.024512 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:46 crc kubenswrapper[4782]: I0202 10:39:46.024789 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:46 crc kubenswrapper[4782]: I0202 10:39:46.024911 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:46 crc kubenswrapper[4782]: I0202 10:39:46.025003 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:46 crc kubenswrapper[4782]: I0202 10:39:46.025124 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:46Z","lastTransitionTime":"2026-02-02T10:39:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:46 crc kubenswrapper[4782]: I0202 10:39:46.128040 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:46 crc kubenswrapper[4782]: I0202 10:39:46.128361 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:46 crc kubenswrapper[4782]: I0202 10:39:46.128475 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:46 crc kubenswrapper[4782]: I0202 10:39:46.128576 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:46 crc kubenswrapper[4782]: I0202 10:39:46.128685 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:46Z","lastTransitionTime":"2026-02-02T10:39:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:46 crc kubenswrapper[4782]: I0202 10:39:46.231894 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:46 crc kubenswrapper[4782]: I0202 10:39:46.232265 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:46 crc kubenswrapper[4782]: I0202 10:39:46.232385 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:46 crc kubenswrapper[4782]: I0202 10:39:46.232475 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:46 crc kubenswrapper[4782]: I0202 10:39:46.232554 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:46Z","lastTransitionTime":"2026-02-02T10:39:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:46 crc kubenswrapper[4782]: I0202 10:39:46.335232 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:46 crc kubenswrapper[4782]: I0202 10:39:46.335310 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:46 crc kubenswrapper[4782]: I0202 10:39:46.335324 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:46 crc kubenswrapper[4782]: I0202 10:39:46.335347 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:46 crc kubenswrapper[4782]: I0202 10:39:46.335363 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:46Z","lastTransitionTime":"2026-02-02T10:39:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:46 crc kubenswrapper[4782]: I0202 10:39:46.438300 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:46 crc kubenswrapper[4782]: I0202 10:39:46.438657 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:46 crc kubenswrapper[4782]: I0202 10:39:46.438761 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:46 crc kubenswrapper[4782]: I0202 10:39:46.438867 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:46 crc kubenswrapper[4782]: I0202 10:39:46.439283 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:46Z","lastTransitionTime":"2026-02-02T10:39:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:46 crc kubenswrapper[4782]: I0202 10:39:46.541586 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:46 crc kubenswrapper[4782]: I0202 10:39:46.541609 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:46 crc kubenswrapper[4782]: I0202 10:39:46.541618 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:46 crc kubenswrapper[4782]: I0202 10:39:46.541631 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:46 crc kubenswrapper[4782]: I0202 10:39:46.541643 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:46Z","lastTransitionTime":"2026-02-02T10:39:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:46 crc kubenswrapper[4782]: I0202 10:39:46.644597 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:46 crc kubenswrapper[4782]: I0202 10:39:46.644661 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:46 crc kubenswrapper[4782]: I0202 10:39:46.644673 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:46 crc kubenswrapper[4782]: I0202 10:39:46.644691 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:46 crc kubenswrapper[4782]: I0202 10:39:46.644703 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:46Z","lastTransitionTime":"2026-02-02T10:39:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:46 crc kubenswrapper[4782]: I0202 10:39:46.747110 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:46 crc kubenswrapper[4782]: I0202 10:39:46.747153 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:46 crc kubenswrapper[4782]: I0202 10:39:46.747165 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:46 crc kubenswrapper[4782]: I0202 10:39:46.747181 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:46 crc kubenswrapper[4782]: I0202 10:39:46.747193 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:46Z","lastTransitionTime":"2026-02-02T10:39:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:46 crc kubenswrapper[4782]: I0202 10:39:46.799569 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 18:01:15.408211465 +0000 UTC Feb 02 10:39:46 crc kubenswrapper[4782]: I0202 10:39:46.820946 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:39:46 crc kubenswrapper[4782]: E0202 10:39:46.821079 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:39:46 crc kubenswrapper[4782]: I0202 10:39:46.821150 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:39:46 crc kubenswrapper[4782]: E0202 10:39:46.821270 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:39:46 crc kubenswrapper[4782]: I0202 10:39:46.821531 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tv4xc" Feb 02 10:39:46 crc kubenswrapper[4782]: E0202 10:39:46.821819 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tv4xc" podUID="4e23db96-3af7-4c29-b00f-5920a9431f01" Feb 02 10:39:46 crc kubenswrapper[4782]: I0202 10:39:46.832830 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Feb 02 10:39:46 crc kubenswrapper[4782]: I0202 10:39:46.849481 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:46 crc kubenswrapper[4782]: I0202 10:39:46.849793 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:46 crc kubenswrapper[4782]: I0202 10:39:46.849917 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:46 crc kubenswrapper[4782]: I0202 10:39:46.850069 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:46 crc kubenswrapper[4782]: I0202 10:39:46.850189 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:46Z","lastTransitionTime":"2026-02-02T10:39:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:46 crc kubenswrapper[4782]: I0202 10:39:46.952687 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:46 crc kubenswrapper[4782]: I0202 10:39:46.952734 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:46 crc kubenswrapper[4782]: I0202 10:39:46.952747 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:46 crc kubenswrapper[4782]: I0202 10:39:46.952769 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:46 crc kubenswrapper[4782]: I0202 10:39:46.952784 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:46Z","lastTransitionTime":"2026-02-02T10:39:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:47 crc kubenswrapper[4782]: I0202 10:39:47.055353 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:47 crc kubenswrapper[4782]: I0202 10:39:47.055675 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:47 crc kubenswrapper[4782]: I0202 10:39:47.055776 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:47 crc kubenswrapper[4782]: I0202 10:39:47.055862 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:47 crc kubenswrapper[4782]: I0202 10:39:47.055964 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:47Z","lastTransitionTime":"2026-02-02T10:39:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:47 crc kubenswrapper[4782]: I0202 10:39:47.158784 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:47 crc kubenswrapper[4782]: I0202 10:39:47.159137 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:47 crc kubenswrapper[4782]: I0202 10:39:47.159254 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:47 crc kubenswrapper[4782]: I0202 10:39:47.159366 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:47 crc kubenswrapper[4782]: I0202 10:39:47.159474 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:47Z","lastTransitionTime":"2026-02-02T10:39:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:47 crc kubenswrapper[4782]: I0202 10:39:47.261306 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:47 crc kubenswrapper[4782]: I0202 10:39:47.261340 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:47 crc kubenswrapper[4782]: I0202 10:39:47.261349 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:47 crc kubenswrapper[4782]: I0202 10:39:47.261364 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:47 crc kubenswrapper[4782]: I0202 10:39:47.261375 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:47Z","lastTransitionTime":"2026-02-02T10:39:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:47 crc kubenswrapper[4782]: I0202 10:39:47.363699 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:47 crc kubenswrapper[4782]: I0202 10:39:47.364058 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:47 crc kubenswrapper[4782]: I0202 10:39:47.364138 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:47 crc kubenswrapper[4782]: I0202 10:39:47.364229 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:47 crc kubenswrapper[4782]: I0202 10:39:47.364333 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:47Z","lastTransitionTime":"2026-02-02T10:39:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:47 crc kubenswrapper[4782]: I0202 10:39:47.466101 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:47 crc kubenswrapper[4782]: I0202 10:39:47.466606 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:47 crc kubenswrapper[4782]: I0202 10:39:47.466737 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:47 crc kubenswrapper[4782]: I0202 10:39:47.466813 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:47 crc kubenswrapper[4782]: I0202 10:39:47.466870 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:47Z","lastTransitionTime":"2026-02-02T10:39:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:47 crc kubenswrapper[4782]: I0202 10:39:47.569775 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:47 crc kubenswrapper[4782]: I0202 10:39:47.569833 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:47 crc kubenswrapper[4782]: I0202 10:39:47.569847 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:47 crc kubenswrapper[4782]: I0202 10:39:47.569865 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:47 crc kubenswrapper[4782]: I0202 10:39:47.569882 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:47Z","lastTransitionTime":"2026-02-02T10:39:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:47 crc kubenswrapper[4782]: I0202 10:39:47.672674 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:47 crc kubenswrapper[4782]: I0202 10:39:47.672772 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:47 crc kubenswrapper[4782]: I0202 10:39:47.672783 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:47 crc kubenswrapper[4782]: I0202 10:39:47.672802 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:47 crc kubenswrapper[4782]: I0202 10:39:47.672813 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:47Z","lastTransitionTime":"2026-02-02T10:39:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:47 crc kubenswrapper[4782]: I0202 10:39:47.775722 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:47 crc kubenswrapper[4782]: I0202 10:39:47.775981 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:47 crc kubenswrapper[4782]: I0202 10:39:47.776060 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:47 crc kubenswrapper[4782]: I0202 10:39:47.776129 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:47 crc kubenswrapper[4782]: I0202 10:39:47.776207 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:47Z","lastTransitionTime":"2026-02-02T10:39:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:47 crc kubenswrapper[4782]: I0202 10:39:47.800218 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 05:29:50.75133862 +0000 UTC Feb 02 10:39:47 crc kubenswrapper[4782]: I0202 10:39:47.820771 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:39:47 crc kubenswrapper[4782]: E0202 10:39:47.820903 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:39:47 crc kubenswrapper[4782]: I0202 10:39:47.878196 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:47 crc kubenswrapper[4782]: I0202 10:39:47.878436 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:47 crc kubenswrapper[4782]: I0202 10:39:47.878500 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:47 crc kubenswrapper[4782]: I0202 10:39:47.878569 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:47 crc kubenswrapper[4782]: I0202 10:39:47.878630 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:47Z","lastTransitionTime":"2026-02-02T10:39:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:47 crc kubenswrapper[4782]: I0202 10:39:47.981533 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:47 crc kubenswrapper[4782]: I0202 10:39:47.981576 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:47 crc kubenswrapper[4782]: I0202 10:39:47.981588 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:47 crc kubenswrapper[4782]: I0202 10:39:47.981603 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:47 crc kubenswrapper[4782]: I0202 10:39:47.981614 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:47Z","lastTransitionTime":"2026-02-02T10:39:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:48 crc kubenswrapper[4782]: I0202 10:39:48.084041 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:48 crc kubenswrapper[4782]: I0202 10:39:48.084101 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:48 crc kubenswrapper[4782]: I0202 10:39:48.084112 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:48 crc kubenswrapper[4782]: I0202 10:39:48.084129 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:48 crc kubenswrapper[4782]: I0202 10:39:48.084141 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:48Z","lastTransitionTime":"2026-02-02T10:39:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:48 crc kubenswrapper[4782]: I0202 10:39:48.187013 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:48 crc kubenswrapper[4782]: I0202 10:39:48.187306 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:48 crc kubenswrapper[4782]: I0202 10:39:48.187400 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:48 crc kubenswrapper[4782]: I0202 10:39:48.187480 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:48 crc kubenswrapper[4782]: I0202 10:39:48.187544 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:48Z","lastTransitionTime":"2026-02-02T10:39:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:48 crc kubenswrapper[4782]: I0202 10:39:48.292538 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:48 crc kubenswrapper[4782]: I0202 10:39:48.292603 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:48 crc kubenswrapper[4782]: I0202 10:39:48.292616 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:48 crc kubenswrapper[4782]: I0202 10:39:48.292631 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:48 crc kubenswrapper[4782]: I0202 10:39:48.292671 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:48Z","lastTransitionTime":"2026-02-02T10:39:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:48 crc kubenswrapper[4782]: I0202 10:39:48.395013 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:48 crc kubenswrapper[4782]: I0202 10:39:48.395043 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:48 crc kubenswrapper[4782]: I0202 10:39:48.395051 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:48 crc kubenswrapper[4782]: I0202 10:39:48.395065 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:48 crc kubenswrapper[4782]: I0202 10:39:48.395075 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:48Z","lastTransitionTime":"2026-02-02T10:39:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:48 crc kubenswrapper[4782]: I0202 10:39:48.429661 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:48 crc kubenswrapper[4782]: I0202 10:39:48.429693 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:48 crc kubenswrapper[4782]: I0202 10:39:48.429702 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:48 crc kubenswrapper[4782]: I0202 10:39:48.429716 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:48 crc kubenswrapper[4782]: I0202 10:39:48.429725 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:48Z","lastTransitionTime":"2026-02-02T10:39:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:48 crc kubenswrapper[4782]: E0202 10:39:48.443919 4782 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9f06aea5-54f4-4b11-8fec-22fbe76ec89b\\\",\\\"systemUUID\\\":\\\"b85e9547-662e-4455-bbaa-2d2f2aaad904\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:48Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:48 crc kubenswrapper[4782]: I0202 10:39:48.447808 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:48 crc kubenswrapper[4782]: I0202 10:39:48.447843 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 10:39:48 crc kubenswrapper[4782]: I0202 10:39:48.447854 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:48 crc kubenswrapper[4782]: I0202 10:39:48.447868 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:48 crc kubenswrapper[4782]: I0202 10:39:48.447880 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:48Z","lastTransitionTime":"2026-02-02T10:39:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:48 crc kubenswrapper[4782]: E0202 10:39:48.460908 4782 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Feb 02 10:39:48 crc kubenswrapper[4782]: I0202 10:39:48.465784 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:48 crc kubenswrapper[4782]: I0202 10:39:48.465916 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc"
event="NodeHasNoDiskPressure" Feb 02 10:39:48 crc kubenswrapper[4782]: I0202 10:39:48.466001 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:48 crc kubenswrapper[4782]: I0202 10:39:48.466082 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:48 crc kubenswrapper[4782]: I0202 10:39:48.466145 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:48Z","lastTransitionTime":"2026-02-02T10:39:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:48 crc kubenswrapper[4782]: E0202 10:39:48.480029 4782 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Feb 02 10:39:48 crc kubenswrapper[4782]: I0202 10:39:48.483689 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:48 crc kubenswrapper[4782]: I0202 10:39:48.483730 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc"
event="NodeHasNoDiskPressure" Feb 02 10:39:48 crc kubenswrapper[4782]: I0202 10:39:48.483743 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:48 crc kubenswrapper[4782]: I0202 10:39:48.483760 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:48 crc kubenswrapper[4782]: I0202 10:39:48.483772 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:48Z","lastTransitionTime":"2026-02-02T10:39:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:48 crc kubenswrapper[4782]: E0202 10:39:48.496983 4782 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Feb 02 10:39:48 crc kubenswrapper[4782]: I0202 10:39:48.500662 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:48 crc kubenswrapper[4782]: I0202 10:39:48.500801 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc"
event="NodeHasNoDiskPressure" Feb 02 10:39:48 crc kubenswrapper[4782]: I0202 10:39:48.500879 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:48 crc kubenswrapper[4782]: I0202 10:39:48.500953 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:48 crc kubenswrapper[4782]: I0202 10:39:48.501019 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:48Z","lastTransitionTime":"2026-02-02T10:39:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:48 crc kubenswrapper[4782]: E0202 10:39:48.513507 4782 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Feb 02 10:39:48 crc kubenswrapper[4782]: E0202 10:39:48.513628 4782 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 02 10:39:48 crc kubenswrapper[4782]: I0202 10:39:48.514981 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc"
event="NodeHasSufficientMemory" Feb 02 10:39:48 crc kubenswrapper[4782]: I0202 10:39:48.515024 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:48 crc kubenswrapper[4782]: I0202 10:39:48.515036 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:48 crc kubenswrapper[4782]: I0202 10:39:48.515055 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:48 crc kubenswrapper[4782]: I0202 10:39:48.515067 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:48Z","lastTransitionTime":"2026-02-02T10:39:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:48 crc kubenswrapper[4782]: I0202 10:39:48.617221 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:48 crc kubenswrapper[4782]: I0202 10:39:48.617262 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:48 crc kubenswrapper[4782]: I0202 10:39:48.617270 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:48 crc kubenswrapper[4782]: I0202 10:39:48.617286 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:48 crc kubenswrapper[4782]: I0202 10:39:48.617295 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:48Z","lastTransitionTime":"2026-02-02T10:39:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:48 crc kubenswrapper[4782]: I0202 10:39:48.718861 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:48 crc kubenswrapper[4782]: I0202 10:39:48.719103 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:48 crc kubenswrapper[4782]: I0202 10:39:48.719175 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:48 crc kubenswrapper[4782]: I0202 10:39:48.719255 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:48 crc kubenswrapper[4782]: I0202 10:39:48.719324 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:48Z","lastTransitionTime":"2026-02-02T10:39:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:48 crc kubenswrapper[4782]: I0202 10:39:48.801133 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 04:22:54.873496645 +0000 UTC Feb 02 10:39:48 crc kubenswrapper[4782]: I0202 10:39:48.820184 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tv4xc" Feb 02 10:39:48 crc kubenswrapper[4782]: I0202 10:39:48.820255 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:39:48 crc kubenswrapper[4782]: I0202 10:39:48.820441 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:39:48 crc kubenswrapper[4782]: E0202 10:39:48.820764 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:39:48 crc kubenswrapper[4782]: E0202 10:39:48.820654 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tv4xc" podUID="4e23db96-3af7-4c29-b00f-5920a9431f01" Feb 02 10:39:48 crc kubenswrapper[4782]: E0202 10:39:48.821041 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:39:48 crc kubenswrapper[4782]: I0202 10:39:48.821404 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:48 crc kubenswrapper[4782]: I0202 10:39:48.821425 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:48 crc kubenswrapper[4782]: I0202 10:39:48.821433 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:48 crc kubenswrapper[4782]: I0202 10:39:48.821444 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:48 crc kubenswrapper[4782]: I0202 10:39:48.821452 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:48Z","lastTransitionTime":"2026-02-02T10:39:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:48 crc kubenswrapper[4782]: I0202 10:39:48.923937 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:48 crc kubenswrapper[4782]: I0202 10:39:48.923997 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:48 crc kubenswrapper[4782]: I0202 10:39:48.924042 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:48 crc kubenswrapper[4782]: I0202 10:39:48.924060 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:48 crc kubenswrapper[4782]: I0202 10:39:48.924070 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:48Z","lastTransitionTime":"2026-02-02T10:39:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:49 crc kubenswrapper[4782]: I0202 10:39:49.026238 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:49 crc kubenswrapper[4782]: I0202 10:39:49.026278 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:49 crc kubenswrapper[4782]: I0202 10:39:49.026289 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:49 crc kubenswrapper[4782]: I0202 10:39:49.026307 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:49 crc kubenswrapper[4782]: I0202 10:39:49.026319 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:49Z","lastTransitionTime":"2026-02-02T10:39:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:49 crc kubenswrapper[4782]: I0202 10:39:49.128959 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:49 crc kubenswrapper[4782]: I0202 10:39:49.128994 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:49 crc kubenswrapper[4782]: I0202 10:39:49.129003 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:49 crc kubenswrapper[4782]: I0202 10:39:49.129016 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:49 crc kubenswrapper[4782]: I0202 10:39:49.129025 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:49Z","lastTransitionTime":"2026-02-02T10:39:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:49 crc kubenswrapper[4782]: I0202 10:39:49.231391 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:49 crc kubenswrapper[4782]: I0202 10:39:49.231430 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:49 crc kubenswrapper[4782]: I0202 10:39:49.231440 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:49 crc kubenswrapper[4782]: I0202 10:39:49.231454 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:49 crc kubenswrapper[4782]: I0202 10:39:49.231465 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:49Z","lastTransitionTime":"2026-02-02T10:39:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:49 crc kubenswrapper[4782]: I0202 10:39:49.335046 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:49 crc kubenswrapper[4782]: I0202 10:39:49.335094 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:49 crc kubenswrapper[4782]: I0202 10:39:49.335103 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:49 crc kubenswrapper[4782]: I0202 10:39:49.335118 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:49 crc kubenswrapper[4782]: I0202 10:39:49.335131 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:49Z","lastTransitionTime":"2026-02-02T10:39:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:49 crc kubenswrapper[4782]: I0202 10:39:49.437412 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:49 crc kubenswrapper[4782]: I0202 10:39:49.437481 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:49 crc kubenswrapper[4782]: I0202 10:39:49.437495 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:49 crc kubenswrapper[4782]: I0202 10:39:49.437514 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:49 crc kubenswrapper[4782]: I0202 10:39:49.437531 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:49Z","lastTransitionTime":"2026-02-02T10:39:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:49 crc kubenswrapper[4782]: I0202 10:39:49.540478 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:49 crc kubenswrapper[4782]: I0202 10:39:49.540519 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:49 crc kubenswrapper[4782]: I0202 10:39:49.540529 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:49 crc kubenswrapper[4782]: I0202 10:39:49.540547 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:49 crc kubenswrapper[4782]: I0202 10:39:49.540563 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:49Z","lastTransitionTime":"2026-02-02T10:39:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:49 crc kubenswrapper[4782]: I0202 10:39:49.644621 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:49 crc kubenswrapper[4782]: I0202 10:39:49.644674 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:49 crc kubenswrapper[4782]: I0202 10:39:49.644685 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:49 crc kubenswrapper[4782]: I0202 10:39:49.644704 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:49 crc kubenswrapper[4782]: I0202 10:39:49.644714 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:49Z","lastTransitionTime":"2026-02-02T10:39:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:49 crc kubenswrapper[4782]: I0202 10:39:49.747862 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:49 crc kubenswrapper[4782]: I0202 10:39:49.747931 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:49 crc kubenswrapper[4782]: I0202 10:39:49.747947 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:49 crc kubenswrapper[4782]: I0202 10:39:49.747981 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:49 crc kubenswrapper[4782]: I0202 10:39:49.747997 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:49Z","lastTransitionTime":"2026-02-02T10:39:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:49 crc kubenswrapper[4782]: I0202 10:39:49.801790 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 08:23:48.074524929 +0000 UTC Feb 02 10:39:49 crc kubenswrapper[4782]: I0202 10:39:49.820195 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:39:49 crc kubenswrapper[4782]: E0202 10:39:49.820313 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:39:49 crc kubenswrapper[4782]: I0202 10:39:49.850957 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:49 crc kubenswrapper[4782]: I0202 10:39:49.851010 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:49 crc kubenswrapper[4782]: I0202 10:39:49.851022 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:49 crc kubenswrapper[4782]: I0202 10:39:49.851038 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:49 crc kubenswrapper[4782]: I0202 10:39:49.851050 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:49Z","lastTransitionTime":"2026-02-02T10:39:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:49 crc kubenswrapper[4782]: I0202 10:39:49.953014 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:49 crc kubenswrapper[4782]: I0202 10:39:49.953042 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:49 crc kubenswrapper[4782]: I0202 10:39:49.953049 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:49 crc kubenswrapper[4782]: I0202 10:39:49.953062 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:49 crc kubenswrapper[4782]: I0202 10:39:49.953070 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:49Z","lastTransitionTime":"2026-02-02T10:39:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:50 crc kubenswrapper[4782]: I0202 10:39:50.055206 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:50 crc kubenswrapper[4782]: I0202 10:39:50.055231 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:50 crc kubenswrapper[4782]: I0202 10:39:50.055239 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:50 crc kubenswrapper[4782]: I0202 10:39:50.055250 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:50 crc kubenswrapper[4782]: I0202 10:39:50.055261 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:50Z","lastTransitionTime":"2026-02-02T10:39:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:50 crc kubenswrapper[4782]: I0202 10:39:50.157746 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:50 crc kubenswrapper[4782]: I0202 10:39:50.157802 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:50 crc kubenswrapper[4782]: I0202 10:39:50.157828 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:50 crc kubenswrapper[4782]: I0202 10:39:50.157845 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:50 crc kubenswrapper[4782]: I0202 10:39:50.157860 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:50Z","lastTransitionTime":"2026-02-02T10:39:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:50 crc kubenswrapper[4782]: I0202 10:39:50.259922 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:50 crc kubenswrapper[4782]: I0202 10:39:50.259965 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:50 crc kubenswrapper[4782]: I0202 10:39:50.259974 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:50 crc kubenswrapper[4782]: I0202 10:39:50.259988 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:50 crc kubenswrapper[4782]: I0202 10:39:50.259997 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:50Z","lastTransitionTime":"2026-02-02T10:39:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:50 crc kubenswrapper[4782]: I0202 10:39:50.362337 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:50 crc kubenswrapper[4782]: I0202 10:39:50.362698 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:50 crc kubenswrapper[4782]: I0202 10:39:50.362776 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:50 crc kubenswrapper[4782]: I0202 10:39:50.362847 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:50 crc kubenswrapper[4782]: I0202 10:39:50.362917 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:50Z","lastTransitionTime":"2026-02-02T10:39:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:50 crc kubenswrapper[4782]: I0202 10:39:50.464744 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:50 crc kubenswrapper[4782]: I0202 10:39:50.464778 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:50 crc kubenswrapper[4782]: I0202 10:39:50.464788 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:50 crc kubenswrapper[4782]: I0202 10:39:50.464800 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:50 crc kubenswrapper[4782]: I0202 10:39:50.464809 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:50Z","lastTransitionTime":"2026-02-02T10:39:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:50 crc kubenswrapper[4782]: I0202 10:39:50.566829 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:50 crc kubenswrapper[4782]: I0202 10:39:50.566863 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:50 crc kubenswrapper[4782]: I0202 10:39:50.566872 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:50 crc kubenswrapper[4782]: I0202 10:39:50.566884 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:50 crc kubenswrapper[4782]: I0202 10:39:50.566895 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:50Z","lastTransitionTime":"2026-02-02T10:39:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:50 crc kubenswrapper[4782]: I0202 10:39:50.669590 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:50 crc kubenswrapper[4782]: I0202 10:39:50.669669 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:50 crc kubenswrapper[4782]: I0202 10:39:50.669686 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:50 crc kubenswrapper[4782]: I0202 10:39:50.669703 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:50 crc kubenswrapper[4782]: I0202 10:39:50.669714 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:50Z","lastTransitionTime":"2026-02-02T10:39:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:50 crc kubenswrapper[4782]: I0202 10:39:50.771472 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:50 crc kubenswrapper[4782]: I0202 10:39:50.771728 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:50 crc kubenswrapper[4782]: I0202 10:39:50.771810 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:50 crc kubenswrapper[4782]: I0202 10:39:50.771890 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:50 crc kubenswrapper[4782]: I0202 10:39:50.771955 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:50Z","lastTransitionTime":"2026-02-02T10:39:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:50 crc kubenswrapper[4782]: I0202 10:39:50.802233 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 02:19:45.252441173 +0000 UTC Feb 02 10:39:50 crc kubenswrapper[4782]: I0202 10:39:50.820611 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tv4xc" Feb 02 10:39:50 crc kubenswrapper[4782]: I0202 10:39:50.820654 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:39:50 crc kubenswrapper[4782]: I0202 10:39:50.820738 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:39:50 crc kubenswrapper[4782]: E0202 10:39:50.821123 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:39:50 crc kubenswrapper[4782]: E0202 10:39:50.821012 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tv4xc" podUID="4e23db96-3af7-4c29-b00f-5920a9431f01" Feb 02 10:39:50 crc kubenswrapper[4782]: E0202 10:39:50.821254 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:39:50 crc kubenswrapper[4782]: I0202 10:39:50.838555 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83b81d23-4cbf-48f2-a90d-aa5b4cb3ce45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://638971b143e4defe2f04fba48e554aff3023d52863a0eb6f36c29d394de7eeb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18f8ad22dcd04bba0dd895d584affcd359ac6221153a18ee542dee979b9efe25\\\",\\\"image\\\":\\\"qu
ay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3c260f298316c830b06f5e3634cd6c09bd10bbaad77b46e427eb4b039eebb53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://745ed5274a247d61c320043043077f9ecb39fa3734db15aefa07a3bbb6225c26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:50Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:50 crc kubenswrapper[4782]: I0202 10:39:50.850167 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"35774ab2-362c-466b-9f87-5e152d4c8235\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1f97a7bc0ebb9c8dca5e77de93b5ad8744a3ed0a3939e31500e0bb10648b1c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7d3a0cdcdd628fdec78799be1bb9aeab47b7566b765ba0b033b9e925ece0be6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a72caeec33753f69102774c7bb1501dd1c0f304ab8e821616a7d6748b4b6a23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d8715a950ba202dd87b57bd0b7465a0ca0648a865e89ee9bd94848c15675501\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d8715a950ba202dd87b57bd0b7465a0ca0648a865e89ee9bd94848c15675501\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:50Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:50 crc kubenswrapper[4782]: I0202 10:39:50.863517 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7919e98f-cc47-4f3c-9c53-6313850ea7b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://550747a1eaba426f3a6b264d3a5227e51b0171729134224fa76ba55d8ad47c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://362bf5c8cbdc4831ca6ceecf9323bad1217467a40b0e8dd491098c1ae4f42810\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bhdgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:50Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:50 crc kubenswrapper[4782]: I0202 10:39:50.875424 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:50 crc kubenswrapper[4782]: I0202 10:39:50.875461 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:50 crc kubenswrapper[4782]: I0202 10:39:50.875471 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:50 crc kubenswrapper[4782]: I0202 10:39:50.875485 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:50 crc kubenswrapper[4782]: I0202 10:39:50.875497 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:50Z","lastTransitionTime":"2026-02-02T10:39:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
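The status_manager failures above all bottom out in the same x509 error: the serving certificate for the pod.network-node-identity.openshift.io webhook at 127.0.0.1:9743 expired on 2025-08-24, long before the logged clock of 2026-02-02, so every status patch is rejected. The sketch below reproduces just the validity-window half of that check against the endpoint named in the log (it assumes it is run on the node itself; InsecureSkipVerify is deliberate so the expired certificate can be inspected rather than rejected at the handshake).

    package main

    import (
        "crypto/tls"
        "fmt"
        "time"
    )

    func main() {
        addr := "127.0.0.1:9743" // endpoint from the failed webhook Post above

        conn, err := tls.Dial("tcp", addr, &tls.Config{InsecureSkipVerify: true})
        if err != nil {
            fmt.Println("dial:", err)
            return
        }
        defer conn.Close()

        // First peer certificate is the server's leaf cert.
        cert := conn.ConnectionState().PeerCertificates[0]
        now := time.Now()
        fmt.Printf("subject=%s notBefore=%s notAfter=%s\n", cert.Subject, cert.NotBefore, cert.NotAfter)
        if now.After(cert.NotAfter) {
            // Same condition the log reports: current time is after NotAfter.
            fmt.Println("expired: certificate has expired or is not yet valid")
        }
    }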
Has your network provider started?"} Feb 02 10:39:50 crc kubenswrapper[4782]: I0202 10:39:50.881435 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8lwfx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1edc5703-bb51-4f8a-9b73-68ba48a40ce8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2765f9fa77bc99e4983b0d6883a7156c960f2dce2c80845cd1e0810199c50eac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e2a2f7b1b3c3236d77b4858e19cacf5554526262f0fc5703460a169d3b26f7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e2a2f7b1b3c3236d77b4858e19cacf5554526262f0fc5703460a169d3b26f7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d41d28339830a5090ce01715a5bd4d985c3081932cb71594b1e928b900dc353\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d41d28339830a5090ce01715a5bd4d985c3081932cb71594b1e928b900dc353\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://615aa0c2b172d7e7ab8a484a6860646491265e3ebb25f57857c45bc591d40335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://615aa0c2b172d7e7ab8a484a6860646491265e3ebb25f57857c45bc591d40335\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://143776c340b6d3345f6902124169235fcf74289d9e107b8e6bdb5023f47b73df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://143776c340b6d3345f6902124169235fcf74289d9e107b8e6bdb5023f47b73df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://754a40bbe5b071214dbd49089838b58b26d7586fdd8479a6f8522ef49c03e255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://754a40bbe5b071214dbd49089838b58b26d7586fdd8479a6f8522ef49c03e255\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e2e1801b719e8d922f05eab190ff758e85d0d05f4e26f8b8a9d5812ac5ffe6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e2e1801b719e8d922f05eab190ff758e85d0d05f4e26f8b8a9d5812ac5ffe6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8lwfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:50Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:50 crc kubenswrapper[4782]: I0202 10:39:50.894256 4782 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x49wn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"324c55ff-8d31-4452-bb4e-2a57fbdb23c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://025c5d7b0067cd9bfd8f87926e7ec57759b83410b2be1bfddc02029f4c8e5f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxkkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e8896767e0b6039745c672852e48a5fceb954162cac8a06257129bcc84efff3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxkkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-x49wn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-02T10:39:50Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:50 crc kubenswrapper[4782]: I0202 10:39:50.907915 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-tv4xc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e23db96-3af7-4c29-b00f-5920a9431f01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpm5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpm5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:22Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-tv4xc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:50Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:50 crc kubenswrapper[4782]: I0202 10:39:50.938037 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59347d2e-80ed-46da-8b2f-87cc48c5b564\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7303c0a57d425d661744c913b418fb647290581f745f973af1507563fa70b860\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10b2c4d81a795c40e715af0855002f75e3018dc14560eb97b551186a48c52744\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://667df661cba8be98b22059905ac9fe87f5d2af093d3184cfead863474441574f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfe76e80657cb875a6dd9b9c977e6120a670fc5
cd1dad2e0346b21a467b00f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58cd2e688da72a74340b25a755164b61039f7615ccda8b24b12b2b4025f89584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:50Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:50 crc kubenswrapper[4782]: I0202 10:39:50.976749 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:50 crc kubenswrapper[4782]: I0202 10:39:50.976782 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:50 crc kubenswrapper[4782]: I0202 10:39:50.976792 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:50 crc kubenswrapper[4782]: I0202 10:39:50.976806 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:50 crc kubenswrapper[4782]: I0202 10:39:50.976815 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:50Z","lastTransitionTime":"2026-02-02T10:39:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:50 crc kubenswrapper[4782]: I0202 10:39:50.997095 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b6dd1dee15020005db2d8655165e09c72a90eaade00a66c0742c0980ccc669e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:50Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:51 crc kubenswrapper[4782]: I0202 10:39:51.010002 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aebafb5717a3d34f83e5f1ba346c51bccea6c08fb6ddc2fadc47d8e1a4df1bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://643eee6982fcc4c02b72fb6daf1a18fc5ae263ebf8ac506d1fa745d6c311b38a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:51Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:51 crc kubenswrapper[4782]: I0202 10:39:51.020572 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fptzv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa0a3c57-fe47-43dd-8905-00df4cae4fb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb00c5826e8950791c82faf09cb4417f633c04908afcaed816c63b81c05aae48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6np9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fptzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:51Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:51 crc kubenswrapper[4782]: I0202 10:39:51.031785 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-thvm5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"70faa63d-a86d-45aa-b6fd-81fa90436da2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb015f1ff28b0f28114d4c5d3c643fdb9af2c24d6d3c4a3f34c051677c815e3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrknp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-thvm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:51Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:51 crc kubenswrapper[4782]: I0202 10:39:51.043700 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a20e66f3-5da8-4f1e-97c3-28808caf938b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ada79c9ce4369d59a46cc02abd6cf48de6fdc8fdbe39ce9111864f17c15d7b42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4740cbe92e6575bbdc497589f7ef325d88070385a35c69f1ddf7ca0865bb2624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4740cbe92e6575bbdc497589f7ef325d88070385a35c69f1ddf7ca0865bb2624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:51Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:51 crc kubenswrapper[4782]: I0202 10:39:51.060349 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfc52d94-656d-4294-b105-0f83d22c9664\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66dc97daee138ae6c8403d57638f6bd3cc1c5a08ce7e1631129017dc7046ebc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9de866cf92cbd982802a4bfa9c7f6b9839c69efb72d01f3f52e65e2355d47ded\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a81a5434992af16888e8ed5ec9555e57dd4485dd6c396816421ad07c181dae8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd62da31b65707d98011292c190f6f44ab2e60bd1339f47cc289d0b445425b60\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b089d9f482be084f3c2be3bc369e38cef6607d91af0c82b338da5c29883f6057\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 10:39:00.270088 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:39:00.270415 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:39:00.271477 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3666386151/tls.crt::/tmp/serving-cert-3666386151/tls.key\\\\\\\"\\\\nI0202 10:39:00.760414 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:39:00.783275 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:39:00.783307 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:39:00.783330 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:39:00.783335 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 10:39:00.810370 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:39:00.810474 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810498 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 10:39:00.810539 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:39:00.810558 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:39:00.810577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 10:39:00.810783 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 10:39:00.814783 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26b0186a9067ba0a405e849b58fc2f39e93d443ddf69ebd0d4fd4877f45e805\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:51Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:51 crc kubenswrapper[4782]: I0202 10:39:51.073109 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:51Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:51 crc kubenswrapper[4782]: I0202 10:39:51.078952 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:51 crc kubenswrapper[4782]: I0202 10:39:51.078989 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:51 crc kubenswrapper[4782]: I0202 10:39:51.079002 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:51 crc kubenswrapper[4782]: I0202 10:39:51.079019 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:51 crc kubenswrapper[4782]: I0202 10:39:51.079029 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:51Z","lastTransitionTime":"2026-02-02T10:39:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:51 crc kubenswrapper[4782]: I0202 10:39:51.147061 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://959c7dca46e7a68f4472c3aa2104c49d4e47190f0fab5c7ad2225e27e5c4585a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:51Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:51 crc kubenswrapper[4782]: I0202 10:39:51.169089 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2642ee4e-c16a-4e6e-9654-a67666f1bff8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7728a9bcd84ff5e2fad4910e321a0461ab043b453efe05ddc36d018dba38315a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://371a331afaa5bd7d90cd98ba3363865d54d8cac621ba37a40e51f0cddb4eb02b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6886dd3d4ca2fac733bc3cc105d17aff972828ba91abf5373769f272e5454ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://189c170f233964d95aebd085e016304a66076971b44f050d41d755305d11743f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://540e6587db608dea5be7483bfc3a5c4d44da51ecf3e9bfe12550aa8bf5a74fac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7645944a876f5f8139960a66e91d19644df23b8bfbdb64c9d14eec7298ac150\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c03e79e9d8e50ea1ff1ec473550eb74b39c5ba1a114e03a38c7c6ceb1ca6094a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c03e79e9d8e50ea1ff1ec473550eb74b39c5ba1a114e03a38c7c6ceb1ca6094a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T10:39:43Z\\\",\\\"message\\\":\\\"{},},Conditions:[]Condition{},},}\\\\nI0202 10:39:43.908257 6438 lb_config.go:1031] Cluster endpoints for openshift-operator-lifecycle-manager/olm-operator-metrics for network=default are: map[]\\\\nI0202 10:39:43.908265 6438 services_controller.go:443] Built service openshift-operator-lifecycle-manager/olm-operator-metrics LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.5.168\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:8443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0202 10:39:43.908275 6438 services_controller.go:444] Built service openshift-operator-lifecycle-manager/olm-operator-metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0202 10:39:43.908281 6438 services_controller.go:445] Built service openshift-operator-lifecycle-manager/olm-operator-metrics LB template configs for network=default: []services.lbConfig(nil)\\\\nF0202 10:39:43.908156 6438 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller ini\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:43Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-prbrn_openshift-ovn-kubernetes(2642ee4e-c16a-4e6e-9654-a67666f1bff8)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://344eacf0d6239238cbf13b94f80d88a36589057a177a0f7a3d5629f7975c02d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-prbrn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:51Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:51 crc kubenswrapper[4782]: I0202 10:39:51.181029 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:51 crc kubenswrapper[4782]: I0202 10:39:51.181073 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:51 crc kubenswrapper[4782]: I0202 10:39:51.181083 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:51 crc kubenswrapper[4782]: I0202 10:39:51.181098 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:51 crc kubenswrapper[4782]: I0202 10:39:51.181108 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:51Z","lastTransitionTime":"2026-02-02T10:39:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:51 crc kubenswrapper[4782]: I0202 10:39:51.182484 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:51Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:51 crc kubenswrapper[4782]: I0202 10:39:51.193958 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:51Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:51 crc kubenswrapper[4782]: I0202 10:39:51.211432 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fsqgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04d9744a-e730-45b4-9f0c-bbb5b02cd311\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9855ee45d6787f239c2099e99544f50b4939dc9037eb5f4cfe36af9a0bed8937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\
\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7nrfs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fsqgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:51Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:51 crc kubenswrapper[4782]: I0202 10:39:51.283962 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:51 crc kubenswrapper[4782]: I0202 10:39:51.283998 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:51 crc kubenswrapper[4782]: I0202 10:39:51.284009 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:51 crc kubenswrapper[4782]: I0202 10:39:51.284024 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:51 crc kubenswrapper[4782]: I0202 10:39:51.284034 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:51Z","lastTransitionTime":"2026-02-02T10:39:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:51 crc kubenswrapper[4782]: I0202 10:39:51.386961 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:51 crc kubenswrapper[4782]: I0202 10:39:51.387027 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:51 crc kubenswrapper[4782]: I0202 10:39:51.387045 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:51 crc kubenswrapper[4782]: I0202 10:39:51.387074 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:51 crc kubenswrapper[4782]: I0202 10:39:51.387095 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:51Z","lastTransitionTime":"2026-02-02T10:39:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:51 crc kubenswrapper[4782]: I0202 10:39:51.490393 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:51 crc kubenswrapper[4782]: I0202 10:39:51.490458 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:51 crc kubenswrapper[4782]: I0202 10:39:51.490470 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:51 crc kubenswrapper[4782]: I0202 10:39:51.490491 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:51 crc kubenswrapper[4782]: I0202 10:39:51.490501 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:51Z","lastTransitionTime":"2026-02-02T10:39:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:51 crc kubenswrapper[4782]: I0202 10:39:51.592936 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:51 crc kubenswrapper[4782]: I0202 10:39:51.592994 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:51 crc kubenswrapper[4782]: I0202 10:39:51.593004 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:51 crc kubenswrapper[4782]: I0202 10:39:51.593018 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:51 crc kubenswrapper[4782]: I0202 10:39:51.593027 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:51Z","lastTransitionTime":"2026-02-02T10:39:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:51 crc kubenswrapper[4782]: I0202 10:39:51.696299 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:51 crc kubenswrapper[4782]: I0202 10:39:51.696360 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:51 crc kubenswrapper[4782]: I0202 10:39:51.696378 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:51 crc kubenswrapper[4782]: I0202 10:39:51.696397 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:51 crc kubenswrapper[4782]: I0202 10:39:51.696411 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:51Z","lastTransitionTime":"2026-02-02T10:39:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:51 crc kubenswrapper[4782]: I0202 10:39:51.799068 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:51 crc kubenswrapper[4782]: I0202 10:39:51.799102 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:51 crc kubenswrapper[4782]: I0202 10:39:51.799111 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:51 crc kubenswrapper[4782]: I0202 10:39:51.799126 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:51 crc kubenswrapper[4782]: I0202 10:39:51.799141 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:51Z","lastTransitionTime":"2026-02-02T10:39:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:51 crc kubenswrapper[4782]: I0202 10:39:51.803190 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 13:05:49.677161228 +0000 UTC Feb 02 10:39:51 crc kubenswrapper[4782]: I0202 10:39:51.820418 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:39:51 crc kubenswrapper[4782]: E0202 10:39:51.820533 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:39:51 crc kubenswrapper[4782]: I0202 10:39:51.901893 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:51 crc kubenswrapper[4782]: I0202 10:39:51.901989 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:51 crc kubenswrapper[4782]: I0202 10:39:51.902006 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:51 crc kubenswrapper[4782]: I0202 10:39:51.902024 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:51 crc kubenswrapper[4782]: I0202 10:39:51.902041 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:51Z","lastTransitionTime":"2026-02-02T10:39:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:52 crc kubenswrapper[4782]: I0202 10:39:52.005572 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:52 crc kubenswrapper[4782]: I0202 10:39:52.005607 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:52 crc kubenswrapper[4782]: I0202 10:39:52.005621 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:52 crc kubenswrapper[4782]: I0202 10:39:52.005649 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:52 crc kubenswrapper[4782]: I0202 10:39:52.005660 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:52Z","lastTransitionTime":"2026-02-02T10:39:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:52 crc kubenswrapper[4782]: I0202 10:39:52.107954 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:52 crc kubenswrapper[4782]: I0202 10:39:52.108002 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:52 crc kubenswrapper[4782]: I0202 10:39:52.108013 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:52 crc kubenswrapper[4782]: I0202 10:39:52.108028 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:52 crc kubenswrapper[4782]: I0202 10:39:52.108039 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:52Z","lastTransitionTime":"2026-02-02T10:39:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:52 crc kubenswrapper[4782]: I0202 10:39:52.210594 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:52 crc kubenswrapper[4782]: I0202 10:39:52.210636 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:52 crc kubenswrapper[4782]: I0202 10:39:52.210660 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:52 crc kubenswrapper[4782]: I0202 10:39:52.210674 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:52 crc kubenswrapper[4782]: I0202 10:39:52.210684 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:52Z","lastTransitionTime":"2026-02-02T10:39:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:52 crc kubenswrapper[4782]: I0202 10:39:52.313620 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:52 crc kubenswrapper[4782]: I0202 10:39:52.313662 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:52 crc kubenswrapper[4782]: I0202 10:39:52.313673 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:52 crc kubenswrapper[4782]: I0202 10:39:52.313687 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:52 crc kubenswrapper[4782]: I0202 10:39:52.313697 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:52Z","lastTransitionTime":"2026-02-02T10:39:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:52 crc kubenswrapper[4782]: I0202 10:39:52.416598 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:52 crc kubenswrapper[4782]: I0202 10:39:52.416650 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:52 crc kubenswrapper[4782]: I0202 10:39:52.416659 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:52 crc kubenswrapper[4782]: I0202 10:39:52.416672 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:52 crc kubenswrapper[4782]: I0202 10:39:52.416681 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:52Z","lastTransitionTime":"2026-02-02T10:39:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:52 crc kubenswrapper[4782]: I0202 10:39:52.519288 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:52 crc kubenswrapper[4782]: I0202 10:39:52.519330 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:52 crc kubenswrapper[4782]: I0202 10:39:52.519341 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:52 crc kubenswrapper[4782]: I0202 10:39:52.519357 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:52 crc kubenswrapper[4782]: I0202 10:39:52.519368 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:52Z","lastTransitionTime":"2026-02-02T10:39:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:52 crc kubenswrapper[4782]: I0202 10:39:52.621447 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:52 crc kubenswrapper[4782]: I0202 10:39:52.621476 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:52 crc kubenswrapper[4782]: I0202 10:39:52.621484 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:52 crc kubenswrapper[4782]: I0202 10:39:52.621496 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:52 crc kubenswrapper[4782]: I0202 10:39:52.621505 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:52Z","lastTransitionTime":"2026-02-02T10:39:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:52 crc kubenswrapper[4782]: I0202 10:39:52.723993 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:52 crc kubenswrapper[4782]: I0202 10:39:52.724034 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:52 crc kubenswrapper[4782]: I0202 10:39:52.724046 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:52 crc kubenswrapper[4782]: I0202 10:39:52.724062 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:52 crc kubenswrapper[4782]: I0202 10:39:52.724073 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:52Z","lastTransitionTime":"2026-02-02T10:39:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:52 crc kubenswrapper[4782]: I0202 10:39:52.804073 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 07:05:59.754969304 +0000 UTC Feb 02 10:39:52 crc kubenswrapper[4782]: I0202 10:39:52.820904 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tv4xc" Feb 02 10:39:52 crc kubenswrapper[4782]: I0202 10:39:52.820924 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:39:52 crc kubenswrapper[4782]: E0202 10:39:52.821106 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tv4xc" podUID="4e23db96-3af7-4c29-b00f-5920a9431f01" Feb 02 10:39:52 crc kubenswrapper[4782]: I0202 10:39:52.821198 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:39:52 crc kubenswrapper[4782]: E0202 10:39:52.821300 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:39:52 crc kubenswrapper[4782]: E0202 10:39:52.821393 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:39:52 crc kubenswrapper[4782]: I0202 10:39:52.830201 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:52 crc kubenswrapper[4782]: I0202 10:39:52.830251 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:52 crc kubenswrapper[4782]: I0202 10:39:52.830267 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:52 crc kubenswrapper[4782]: I0202 10:39:52.830284 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:52 crc kubenswrapper[4782]: I0202 10:39:52.830294 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:52Z","lastTransitionTime":"2026-02-02T10:39:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:52 crc kubenswrapper[4782]: I0202 10:39:52.936258 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:52 crc kubenswrapper[4782]: I0202 10:39:52.936292 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:52 crc kubenswrapper[4782]: I0202 10:39:52.936301 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:52 crc kubenswrapper[4782]: I0202 10:39:52.936314 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:52 crc kubenswrapper[4782]: I0202 10:39:52.936322 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:52Z","lastTransitionTime":"2026-02-02T10:39:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:53 crc kubenswrapper[4782]: I0202 10:39:53.038720 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:53 crc kubenswrapper[4782]: I0202 10:39:53.038774 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:53 crc kubenswrapper[4782]: I0202 10:39:53.038791 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:53 crc kubenswrapper[4782]: I0202 10:39:53.038812 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:53 crc kubenswrapper[4782]: I0202 10:39:53.038830 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:53Z","lastTransitionTime":"2026-02-02T10:39:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:53 crc kubenswrapper[4782]: I0202 10:39:53.141401 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:53 crc kubenswrapper[4782]: I0202 10:39:53.141442 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:53 crc kubenswrapper[4782]: I0202 10:39:53.141452 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:53 crc kubenswrapper[4782]: I0202 10:39:53.141470 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:53 crc kubenswrapper[4782]: I0202 10:39:53.141482 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:53Z","lastTransitionTime":"2026-02-02T10:39:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:53 crc kubenswrapper[4782]: I0202 10:39:53.244155 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:53 crc kubenswrapper[4782]: I0202 10:39:53.244199 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:53 crc kubenswrapper[4782]: I0202 10:39:53.244211 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:53 crc kubenswrapper[4782]: I0202 10:39:53.244227 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:53 crc kubenswrapper[4782]: I0202 10:39:53.244242 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:53Z","lastTransitionTime":"2026-02-02T10:39:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:53 crc kubenswrapper[4782]: I0202 10:39:53.346720 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:53 crc kubenswrapper[4782]: I0202 10:39:53.346752 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:53 crc kubenswrapper[4782]: I0202 10:39:53.346761 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:53 crc kubenswrapper[4782]: I0202 10:39:53.346773 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:53 crc kubenswrapper[4782]: I0202 10:39:53.346782 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:53Z","lastTransitionTime":"2026-02-02T10:39:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:53 crc kubenswrapper[4782]: I0202 10:39:53.448965 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:53 crc kubenswrapper[4782]: I0202 10:39:53.449004 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:53 crc kubenswrapper[4782]: I0202 10:39:53.449013 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:53 crc kubenswrapper[4782]: I0202 10:39:53.449027 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:53 crc kubenswrapper[4782]: I0202 10:39:53.449054 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:53Z","lastTransitionTime":"2026-02-02T10:39:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:53 crc kubenswrapper[4782]: I0202 10:39:53.551203 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:53 crc kubenswrapper[4782]: I0202 10:39:53.551231 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:53 crc kubenswrapper[4782]: I0202 10:39:53.551247 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:53 crc kubenswrapper[4782]: I0202 10:39:53.551260 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:53 crc kubenswrapper[4782]: I0202 10:39:53.551269 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:53Z","lastTransitionTime":"2026-02-02T10:39:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:53 crc kubenswrapper[4782]: I0202 10:39:53.653579 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:53 crc kubenswrapper[4782]: I0202 10:39:53.653609 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:53 crc kubenswrapper[4782]: I0202 10:39:53.653617 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:53 crc kubenswrapper[4782]: I0202 10:39:53.653629 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:53 crc kubenswrapper[4782]: I0202 10:39:53.653652 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:53Z","lastTransitionTime":"2026-02-02T10:39:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:53 crc kubenswrapper[4782]: I0202 10:39:53.756313 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:53 crc kubenswrapper[4782]: I0202 10:39:53.756346 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:53 crc kubenswrapper[4782]: I0202 10:39:53.756358 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:53 crc kubenswrapper[4782]: I0202 10:39:53.756371 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:53 crc kubenswrapper[4782]: I0202 10:39:53.756382 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:53Z","lastTransitionTime":"2026-02-02T10:39:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:53 crc kubenswrapper[4782]: I0202 10:39:53.804861 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 02:30:34.6657645 +0000 UTC Feb 02 10:39:53 crc kubenswrapper[4782]: I0202 10:39:53.820243 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:39:53 crc kubenswrapper[4782]: E0202 10:39:53.820371 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:39:53 crc kubenswrapper[4782]: I0202 10:39:53.858695 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:53 crc kubenswrapper[4782]: I0202 10:39:53.858734 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:53 crc kubenswrapper[4782]: I0202 10:39:53.858746 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:53 crc kubenswrapper[4782]: I0202 10:39:53.858762 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:53 crc kubenswrapper[4782]: I0202 10:39:53.858772 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:53Z","lastTransitionTime":"2026-02-02T10:39:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:53 crc kubenswrapper[4782]: I0202 10:39:53.961134 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:53 crc kubenswrapper[4782]: I0202 10:39:53.961174 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:53 crc kubenswrapper[4782]: I0202 10:39:53.961185 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:53 crc kubenswrapper[4782]: I0202 10:39:53.961201 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:53 crc kubenswrapper[4782]: I0202 10:39:53.961212 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:53Z","lastTransitionTime":"2026-02-02T10:39:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:54 crc kubenswrapper[4782]: I0202 10:39:54.064907 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:54 crc kubenswrapper[4782]: I0202 10:39:54.064948 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:54 crc kubenswrapper[4782]: I0202 10:39:54.064960 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:54 crc kubenswrapper[4782]: I0202 10:39:54.064979 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:54 crc kubenswrapper[4782]: I0202 10:39:54.064991 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:54Z","lastTransitionTime":"2026-02-02T10:39:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:54 crc kubenswrapper[4782]: I0202 10:39:54.167093 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:54 crc kubenswrapper[4782]: I0202 10:39:54.167127 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:54 crc kubenswrapper[4782]: I0202 10:39:54.167138 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:54 crc kubenswrapper[4782]: I0202 10:39:54.167152 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:54 crc kubenswrapper[4782]: I0202 10:39:54.167164 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:54Z","lastTransitionTime":"2026-02-02T10:39:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:54 crc kubenswrapper[4782]: I0202 10:39:54.270538 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:54 crc kubenswrapper[4782]: I0202 10:39:54.270581 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:54 crc kubenswrapper[4782]: I0202 10:39:54.270591 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:54 crc kubenswrapper[4782]: I0202 10:39:54.270607 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:54 crc kubenswrapper[4782]: I0202 10:39:54.270621 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:54Z","lastTransitionTime":"2026-02-02T10:39:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:54 crc kubenswrapper[4782]: I0202 10:39:54.372872 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:54 crc kubenswrapper[4782]: I0202 10:39:54.372926 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:54 crc kubenswrapper[4782]: I0202 10:39:54.372935 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:54 crc kubenswrapper[4782]: I0202 10:39:54.372954 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:54 crc kubenswrapper[4782]: I0202 10:39:54.372963 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:54Z","lastTransitionTime":"2026-02-02T10:39:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:54 crc kubenswrapper[4782]: I0202 10:39:54.476204 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:54 crc kubenswrapper[4782]: I0202 10:39:54.476256 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:54 crc kubenswrapper[4782]: I0202 10:39:54.476267 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:54 crc kubenswrapper[4782]: I0202 10:39:54.476286 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:54 crc kubenswrapper[4782]: I0202 10:39:54.476299 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:54Z","lastTransitionTime":"2026-02-02T10:39:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:54 crc kubenswrapper[4782]: I0202 10:39:54.491851 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4e23db96-3af7-4c29-b00f-5920a9431f01-metrics-certs\") pod \"network-metrics-daemon-tv4xc\" (UID: \"4e23db96-3af7-4c29-b00f-5920a9431f01\") " pod="openshift-multus/network-metrics-daemon-tv4xc" Feb 02 10:39:54 crc kubenswrapper[4782]: E0202 10:39:54.492048 4782 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 10:39:54 crc kubenswrapper[4782]: E0202 10:39:54.492154 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4e23db96-3af7-4c29-b00f-5920a9431f01-metrics-certs podName:4e23db96-3af7-4c29-b00f-5920a9431f01 nodeName:}" failed. No retries permitted until 2026-02-02 10:40:26.492130179 +0000 UTC m=+106.376322895 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4e23db96-3af7-4c29-b00f-5920a9431f01-metrics-certs") pod "network-metrics-daemon-tv4xc" (UID: "4e23db96-3af7-4c29-b00f-5920a9431f01") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 10:39:54 crc kubenswrapper[4782]: I0202 10:39:54.580236 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:54 crc kubenswrapper[4782]: I0202 10:39:54.580283 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:54 crc kubenswrapper[4782]: I0202 10:39:54.580294 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:54 crc kubenswrapper[4782]: I0202 10:39:54.580316 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:54 crc kubenswrapper[4782]: I0202 10:39:54.580328 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:54Z","lastTransitionTime":"2026-02-02T10:39:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:54 crc kubenswrapper[4782]: I0202 10:39:54.683451 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:54 crc kubenswrapper[4782]: I0202 10:39:54.683481 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:54 crc kubenswrapper[4782]: I0202 10:39:54.683490 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:54 crc kubenswrapper[4782]: I0202 10:39:54.683505 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:54 crc kubenswrapper[4782]: I0202 10:39:54.683514 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:54Z","lastTransitionTime":"2026-02-02T10:39:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:54 crc kubenswrapper[4782]: I0202 10:39:54.785677 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:54 crc kubenswrapper[4782]: I0202 10:39:54.785952 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:54 crc kubenswrapper[4782]: I0202 10:39:54.785966 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:54 crc kubenswrapper[4782]: I0202 10:39:54.785982 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:54 crc kubenswrapper[4782]: I0202 10:39:54.785996 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:54Z","lastTransitionTime":"2026-02-02T10:39:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:54 crc kubenswrapper[4782]: I0202 10:39:54.805457 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 16:32:11.050291067 +0000 UTC Feb 02 10:39:54 crc kubenswrapper[4782]: I0202 10:39:54.820801 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:39:54 crc kubenswrapper[4782]: I0202 10:39:54.820827 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tv4xc" Feb 02 10:39:54 crc kubenswrapper[4782]: I0202 10:39:54.820814 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:39:54 crc kubenswrapper[4782]: E0202 10:39:54.820924 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:39:54 crc kubenswrapper[4782]: E0202 10:39:54.821004 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:39:54 crc kubenswrapper[4782]: E0202 10:39:54.821071 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-tv4xc" podUID="4e23db96-3af7-4c29-b00f-5920a9431f01" Feb 02 10:39:54 crc kubenswrapper[4782]: I0202 10:39:54.891684 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:54 crc kubenswrapper[4782]: I0202 10:39:54.891721 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:54 crc kubenswrapper[4782]: I0202 10:39:54.891729 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:54 crc kubenswrapper[4782]: I0202 10:39:54.891742 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:54 crc kubenswrapper[4782]: I0202 10:39:54.891752 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:54Z","lastTransitionTime":"2026-02-02T10:39:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:54 crc kubenswrapper[4782]: I0202 10:39:54.994420 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:54 crc kubenswrapper[4782]: I0202 10:39:54.994448 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:54 crc kubenswrapper[4782]: I0202 10:39:54.994458 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:54 crc kubenswrapper[4782]: I0202 10:39:54.994469 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:54 crc kubenswrapper[4782]: I0202 10:39:54.994478 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:54Z","lastTransitionTime":"2026-02-02T10:39:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:55 crc kubenswrapper[4782]: I0202 10:39:55.097814 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:55 crc kubenswrapper[4782]: I0202 10:39:55.097869 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:55 crc kubenswrapper[4782]: I0202 10:39:55.097884 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:55 crc kubenswrapper[4782]: I0202 10:39:55.097903 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:55 crc kubenswrapper[4782]: I0202 10:39:55.097917 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:55Z","lastTransitionTime":"2026-02-02T10:39:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:55 crc kubenswrapper[4782]: I0202 10:39:55.201723 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:55 crc kubenswrapper[4782]: I0202 10:39:55.201799 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:55 crc kubenswrapper[4782]: I0202 10:39:55.201814 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:55 crc kubenswrapper[4782]: I0202 10:39:55.201840 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:55 crc kubenswrapper[4782]: I0202 10:39:55.201854 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:55Z","lastTransitionTime":"2026-02-02T10:39:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:55 crc kubenswrapper[4782]: I0202 10:39:55.305298 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:55 crc kubenswrapper[4782]: I0202 10:39:55.305367 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:55 crc kubenswrapper[4782]: I0202 10:39:55.305381 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:55 crc kubenswrapper[4782]: I0202 10:39:55.305402 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:55 crc kubenswrapper[4782]: I0202 10:39:55.305421 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:55Z","lastTransitionTime":"2026-02-02T10:39:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:55 crc kubenswrapper[4782]: I0202 10:39:55.409072 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:55 crc kubenswrapper[4782]: I0202 10:39:55.409130 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:55 crc kubenswrapper[4782]: I0202 10:39:55.409144 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:55 crc kubenswrapper[4782]: I0202 10:39:55.409165 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:55 crc kubenswrapper[4782]: I0202 10:39:55.409179 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:55Z","lastTransitionTime":"2026-02-02T10:39:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:55 crc kubenswrapper[4782]: I0202 10:39:55.512687 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:55 crc kubenswrapper[4782]: I0202 10:39:55.513193 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:55 crc kubenswrapper[4782]: I0202 10:39:55.513285 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:55 crc kubenswrapper[4782]: I0202 10:39:55.513414 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:55 crc kubenswrapper[4782]: I0202 10:39:55.513513 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:55Z","lastTransitionTime":"2026-02-02T10:39:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:55 crc kubenswrapper[4782]: I0202 10:39:55.616882 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:55 crc kubenswrapper[4782]: I0202 10:39:55.617379 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:55 crc kubenswrapper[4782]: I0202 10:39:55.617484 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:55 crc kubenswrapper[4782]: I0202 10:39:55.617593 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:55 crc kubenswrapper[4782]: I0202 10:39:55.617732 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:55Z","lastTransitionTime":"2026-02-02T10:39:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:55 crc kubenswrapper[4782]: I0202 10:39:55.721531 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:55 crc kubenswrapper[4782]: I0202 10:39:55.721597 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:55 crc kubenswrapper[4782]: I0202 10:39:55.721608 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:55 crc kubenswrapper[4782]: I0202 10:39:55.721626 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:55 crc kubenswrapper[4782]: I0202 10:39:55.721654 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:55Z","lastTransitionTime":"2026-02-02T10:39:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:55 crc kubenswrapper[4782]: I0202 10:39:55.805581 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 09:26:23.367748959 +0000 UTC Feb 02 10:39:55 crc kubenswrapper[4782]: I0202 10:39:55.821140 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:39:55 crc kubenswrapper[4782]: E0202 10:39:55.821321 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:39:55 crc kubenswrapper[4782]: I0202 10:39:55.824831 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:55 crc kubenswrapper[4782]: I0202 10:39:55.824860 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:55 crc kubenswrapper[4782]: I0202 10:39:55.824874 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:55 crc kubenswrapper[4782]: I0202 10:39:55.824894 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:55 crc kubenswrapper[4782]: I0202 10:39:55.824909 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:55Z","lastTransitionTime":"2026-02-02T10:39:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:55 crc kubenswrapper[4782]: I0202 10:39:55.928009 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:55 crc kubenswrapper[4782]: I0202 10:39:55.928042 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:55 crc kubenswrapper[4782]: I0202 10:39:55.928049 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:55 crc kubenswrapper[4782]: I0202 10:39:55.928063 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:55 crc kubenswrapper[4782]: I0202 10:39:55.928072 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:55Z","lastTransitionTime":"2026-02-02T10:39:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:56 crc kubenswrapper[4782]: I0202 10:39:56.031071 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:56 crc kubenswrapper[4782]: I0202 10:39:56.031117 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:56 crc kubenswrapper[4782]: I0202 10:39:56.031126 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:56 crc kubenswrapper[4782]: I0202 10:39:56.031143 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:56 crc kubenswrapper[4782]: I0202 10:39:56.031154 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:56Z","lastTransitionTime":"2026-02-02T10:39:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:56 crc kubenswrapper[4782]: I0202 10:39:56.134270 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:56 crc kubenswrapper[4782]: I0202 10:39:56.134316 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:56 crc kubenswrapper[4782]: I0202 10:39:56.134330 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:56 crc kubenswrapper[4782]: I0202 10:39:56.134347 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:56 crc kubenswrapper[4782]: I0202 10:39:56.134359 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:56Z","lastTransitionTime":"2026-02-02T10:39:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:56 crc kubenswrapper[4782]: I0202 10:39:56.237220 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:56 crc kubenswrapper[4782]: I0202 10:39:56.237268 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:56 crc kubenswrapper[4782]: I0202 10:39:56.237277 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:56 crc kubenswrapper[4782]: I0202 10:39:56.237292 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:56 crc kubenswrapper[4782]: I0202 10:39:56.237305 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:56Z","lastTransitionTime":"2026-02-02T10:39:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:56 crc kubenswrapper[4782]: I0202 10:39:56.339802 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:56 crc kubenswrapper[4782]: I0202 10:39:56.339868 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:56 crc kubenswrapper[4782]: I0202 10:39:56.339882 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:56 crc kubenswrapper[4782]: I0202 10:39:56.339900 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:56 crc kubenswrapper[4782]: I0202 10:39:56.339914 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:56Z","lastTransitionTime":"2026-02-02T10:39:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:56 crc kubenswrapper[4782]: I0202 10:39:56.443103 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:56 crc kubenswrapper[4782]: I0202 10:39:56.443152 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:56 crc kubenswrapper[4782]: I0202 10:39:56.443166 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:56 crc kubenswrapper[4782]: I0202 10:39:56.443189 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:56 crc kubenswrapper[4782]: I0202 10:39:56.443203 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:56Z","lastTransitionTime":"2026-02-02T10:39:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:56 crc kubenswrapper[4782]: I0202 10:39:56.546312 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:56 crc kubenswrapper[4782]: I0202 10:39:56.546363 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:56 crc kubenswrapper[4782]: I0202 10:39:56.546374 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:56 crc kubenswrapper[4782]: I0202 10:39:56.546396 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:56 crc kubenswrapper[4782]: I0202 10:39:56.546409 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:56Z","lastTransitionTime":"2026-02-02T10:39:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:56 crc kubenswrapper[4782]: I0202 10:39:56.650506 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:56 crc kubenswrapper[4782]: I0202 10:39:56.650953 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:56 crc kubenswrapper[4782]: I0202 10:39:56.651066 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:56 crc kubenswrapper[4782]: I0202 10:39:56.651160 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:56 crc kubenswrapper[4782]: I0202 10:39:56.651265 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:56Z","lastTransitionTime":"2026-02-02T10:39:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:56 crc kubenswrapper[4782]: I0202 10:39:56.754619 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:56 crc kubenswrapper[4782]: I0202 10:39:56.755578 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:56 crc kubenswrapper[4782]: I0202 10:39:56.755706 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:56 crc kubenswrapper[4782]: I0202 10:39:56.755808 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:56 crc kubenswrapper[4782]: I0202 10:39:56.755915 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:56Z","lastTransitionTime":"2026-02-02T10:39:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:56 crc kubenswrapper[4782]: I0202 10:39:56.806175 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 08:06:02.522544272 +0000 UTC Feb 02 10:39:56 crc kubenswrapper[4782]: I0202 10:39:56.820775 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tv4xc" Feb 02 10:39:56 crc kubenswrapper[4782]: I0202 10:39:56.820827 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:39:56 crc kubenswrapper[4782]: I0202 10:39:56.820858 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:39:56 crc kubenswrapper[4782]: E0202 10:39:56.820931 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tv4xc" podUID="4e23db96-3af7-4c29-b00f-5920a9431f01" Feb 02 10:39:56 crc kubenswrapper[4782]: E0202 10:39:56.821031 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:39:56 crc kubenswrapper[4782]: E0202 10:39:56.821109 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:39:56 crc kubenswrapper[4782]: I0202 10:39:56.859004 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:56 crc kubenswrapper[4782]: I0202 10:39:56.859340 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:56 crc kubenswrapper[4782]: I0202 10:39:56.859496 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:56 crc kubenswrapper[4782]: I0202 10:39:56.859588 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:56 crc kubenswrapper[4782]: I0202 10:39:56.859773 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:56Z","lastTransitionTime":"2026-02-02T10:39:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:56 crc kubenswrapper[4782]: I0202 10:39:56.966484 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:56 crc kubenswrapper[4782]: I0202 10:39:56.966521 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:56 crc kubenswrapper[4782]: I0202 10:39:56.966530 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:56 crc kubenswrapper[4782]: I0202 10:39:56.966546 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:56 crc kubenswrapper[4782]: I0202 10:39:56.966555 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:56Z","lastTransitionTime":"2026-02-02T10:39:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:57 crc kubenswrapper[4782]: I0202 10:39:57.068900 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:57 crc kubenswrapper[4782]: I0202 10:39:57.068935 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:57 crc kubenswrapper[4782]: I0202 10:39:57.068945 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:57 crc kubenswrapper[4782]: I0202 10:39:57.068958 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:57 crc kubenswrapper[4782]: I0202 10:39:57.068968 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:57Z","lastTransitionTime":"2026-02-02T10:39:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:57 crc kubenswrapper[4782]: I0202 10:39:57.171699 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:57 crc kubenswrapper[4782]: I0202 10:39:57.171739 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:57 crc kubenswrapper[4782]: I0202 10:39:57.171761 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:57 crc kubenswrapper[4782]: I0202 10:39:57.171777 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:57 crc kubenswrapper[4782]: I0202 10:39:57.171790 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:57Z","lastTransitionTime":"2026-02-02T10:39:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:57 crc kubenswrapper[4782]: I0202 10:39:57.273831 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:57 crc kubenswrapper[4782]: I0202 10:39:57.273867 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:57 crc kubenswrapper[4782]: I0202 10:39:57.273877 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:57 crc kubenswrapper[4782]: I0202 10:39:57.273893 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:57 crc kubenswrapper[4782]: I0202 10:39:57.273905 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:57Z","lastTransitionTime":"2026-02-02T10:39:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:57 crc kubenswrapper[4782]: I0202 10:39:57.378596 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:57 crc kubenswrapper[4782]: I0202 10:39:57.378688 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:57 crc kubenswrapper[4782]: I0202 10:39:57.378701 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:57 crc kubenswrapper[4782]: I0202 10:39:57.378719 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:57 crc kubenswrapper[4782]: I0202 10:39:57.378733 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:57Z","lastTransitionTime":"2026-02-02T10:39:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:57 crc kubenswrapper[4782]: I0202 10:39:57.482468 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:57 crc kubenswrapper[4782]: I0202 10:39:57.482511 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:57 crc kubenswrapper[4782]: I0202 10:39:57.482524 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:57 crc kubenswrapper[4782]: I0202 10:39:57.482542 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:57 crc kubenswrapper[4782]: I0202 10:39:57.482556 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:57Z","lastTransitionTime":"2026-02-02T10:39:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:57 crc kubenswrapper[4782]: I0202 10:39:57.586522 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:57 crc kubenswrapper[4782]: I0202 10:39:57.586569 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:57 crc kubenswrapper[4782]: I0202 10:39:57.586581 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:57 crc kubenswrapper[4782]: I0202 10:39:57.586599 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:57 crc kubenswrapper[4782]: I0202 10:39:57.586611 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:57Z","lastTransitionTime":"2026-02-02T10:39:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:57 crc kubenswrapper[4782]: I0202 10:39:57.689968 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:57 crc kubenswrapper[4782]: I0202 10:39:57.690028 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:57 crc kubenswrapper[4782]: I0202 10:39:57.690046 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:57 crc kubenswrapper[4782]: I0202 10:39:57.690071 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:57 crc kubenswrapper[4782]: I0202 10:39:57.690090 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:57Z","lastTransitionTime":"2026-02-02T10:39:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:57 crc kubenswrapper[4782]: I0202 10:39:57.792127 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:57 crc kubenswrapper[4782]: I0202 10:39:57.792195 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:57 crc kubenswrapper[4782]: I0202 10:39:57.792209 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:57 crc kubenswrapper[4782]: I0202 10:39:57.792232 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:57 crc kubenswrapper[4782]: I0202 10:39:57.792246 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:57Z","lastTransitionTime":"2026-02-02T10:39:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:57 crc kubenswrapper[4782]: I0202 10:39:57.806555 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 10:25:16.886851728 +0000 UTC Feb 02 10:39:57 crc kubenswrapper[4782]: I0202 10:39:57.820908 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:39:57 crc kubenswrapper[4782]: E0202 10:39:57.821084 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:39:57 crc kubenswrapper[4782]: I0202 10:39:57.895173 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:57 crc kubenswrapper[4782]: I0202 10:39:57.895219 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:57 crc kubenswrapper[4782]: I0202 10:39:57.895233 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:57 crc kubenswrapper[4782]: I0202 10:39:57.895248 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:57 crc kubenswrapper[4782]: I0202 10:39:57.895260 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:57Z","lastTransitionTime":"2026-02-02T10:39:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:57 crc kubenswrapper[4782]: I0202 10:39:57.999010 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:57 crc kubenswrapper[4782]: I0202 10:39:57.999065 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:57 crc kubenswrapper[4782]: I0202 10:39:57.999075 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:57 crc kubenswrapper[4782]: I0202 10:39:57.999091 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:57 crc kubenswrapper[4782]: I0202 10:39:57.999102 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:57Z","lastTransitionTime":"2026-02-02T10:39:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:58 crc kubenswrapper[4782]: I0202 10:39:58.102218 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:58 crc kubenswrapper[4782]: I0202 10:39:58.102320 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:58 crc kubenswrapper[4782]: I0202 10:39:58.102345 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:58 crc kubenswrapper[4782]: I0202 10:39:58.102375 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:58 crc kubenswrapper[4782]: I0202 10:39:58.102397 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:58Z","lastTransitionTime":"2026-02-02T10:39:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:58 crc kubenswrapper[4782]: I0202 10:39:58.205695 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:58 crc kubenswrapper[4782]: I0202 10:39:58.205749 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:58 crc kubenswrapper[4782]: I0202 10:39:58.205760 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:58 crc kubenswrapper[4782]: I0202 10:39:58.205783 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:58 crc kubenswrapper[4782]: I0202 10:39:58.205797 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:58Z","lastTransitionTime":"2026-02-02T10:39:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:58 crc kubenswrapper[4782]: I0202 10:39:58.309206 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:58 crc kubenswrapper[4782]: I0202 10:39:58.309259 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:58 crc kubenswrapper[4782]: I0202 10:39:58.309272 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:58 crc kubenswrapper[4782]: I0202 10:39:58.309289 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:58 crc kubenswrapper[4782]: I0202 10:39:58.309302 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:58Z","lastTransitionTime":"2026-02-02T10:39:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:58 crc kubenswrapper[4782]: I0202 10:39:58.412475 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:58 crc kubenswrapper[4782]: I0202 10:39:58.412523 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:58 crc kubenswrapper[4782]: I0202 10:39:58.412540 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:58 crc kubenswrapper[4782]: I0202 10:39:58.412555 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:58 crc kubenswrapper[4782]: I0202 10:39:58.412567 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:58Z","lastTransitionTime":"2026-02-02T10:39:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:58 crc kubenswrapper[4782]: I0202 10:39:58.515533 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:58 crc kubenswrapper[4782]: I0202 10:39:58.515564 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:58 crc kubenswrapper[4782]: I0202 10:39:58.515572 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:58 crc kubenswrapper[4782]: I0202 10:39:58.515585 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:58 crc kubenswrapper[4782]: I0202 10:39:58.515595 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:58Z","lastTransitionTime":"2026-02-02T10:39:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:58 crc kubenswrapper[4782]: I0202 10:39:58.579417 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:58 crc kubenswrapper[4782]: I0202 10:39:58.579448 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:58 crc kubenswrapper[4782]: I0202 10:39:58.579458 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:58 crc kubenswrapper[4782]: I0202 10:39:58.579471 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:58 crc kubenswrapper[4782]: I0202 10:39:58.579479 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:58Z","lastTransitionTime":"2026-02-02T10:39:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:58 crc kubenswrapper[4782]: E0202 10:39:58.592054 4782 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:39:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9f06aea5-54f4-4b11-8fec-22fbe76ec89b\\\",\\\"systemUUID\\\":\\\"b85e9547-662e-4455-bbaa-2d2f2aaad904\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:58Z is after 2025-08-24T17:21:41Z" Feb 02 10:39:58 crc kubenswrapper[4782]: I0202 10:39:58.595045 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:58 crc kubenswrapper[4782]: I0202 10:39:58.595074 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure"
Feb 02 10:39:58 crc kubenswrapper[4782]: I0202 10:39:58.595082 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:39:58 crc kubenswrapper[4782]: I0202 10:39:58.595094 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:39:58 crc kubenswrapper[4782]: I0202 10:39:58.595104 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:58Z","lastTransitionTime":"2026-02-02T10:39:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:39:58 crc kubenswrapper[4782]: E0202 10:39:58.606850 4782 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status [...] for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:58Z is after 2025-08-24T17:21:41Z"
Feb 02 10:39:58 crc kubenswrapper[4782]: I0202 10:39:58.610149 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:39:58 crc kubenswrapper[4782]: I0202 10:39:58.610195 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:39:58 crc kubenswrapper[4782]: I0202 10:39:58.610209 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:39:58 crc kubenswrapper[4782]: I0202 10:39:58.610223 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:39:58 crc kubenswrapper[4782]: I0202 10:39:58.610232 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:58Z","lastTransitionTime":"2026-02-02T10:39:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:39:58 crc kubenswrapper[4782]: E0202 10:39:58.622567 4782 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status [...] for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:58Z is after 2025-08-24T17:21:41Z"
Feb 02 10:39:58 crc kubenswrapper[4782]: I0202 10:39:58.626383 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:39:58 crc kubenswrapper[4782]: I0202 10:39:58.626428 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:39:58 crc kubenswrapper[4782]: I0202 10:39:58.626437 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:39:58 crc kubenswrapper[4782]: I0202 10:39:58.626454 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:39:58 crc kubenswrapper[4782]: I0202 10:39:58.626481 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:58Z","lastTransitionTime":"2026-02-02T10:39:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:39:58 crc kubenswrapper[4782]: E0202 10:39:58.640167 4782 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status [...] for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:58Z is after 2025-08-24T17:21:41Z"
Feb 02 10:39:58 crc kubenswrapper[4782]: I0202 10:39:58.644028 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:39:58 crc kubenswrapper[4782]: I0202 10:39:58.644056 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:39:58 crc kubenswrapper[4782]: I0202 10:39:58.644064 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:39:58 crc kubenswrapper[4782]: I0202 10:39:58.644078 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:39:58 crc kubenswrapper[4782]: I0202 10:39:58.644087 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:58Z","lastTransitionTime":"2026-02-02T10:39:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:39:58 crc kubenswrapper[4782]: E0202 10:39:58.659371 4782 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status [...] for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:39:58Z is after 2025-08-24T17:21:41Z"
Feb 02 10:39:58 crc kubenswrapper[4782]: E0202 10:39:58.659592 4782 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count"
Feb 02 10:39:58 crc kubenswrapper[4782]: I0202 10:39:58.661764 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
event="NodeHasSufficientMemory" Feb 02 10:39:58 crc kubenswrapper[4782]: I0202 10:39:58.661813 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:58 crc kubenswrapper[4782]: I0202 10:39:58.661826 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:58 crc kubenswrapper[4782]: I0202 10:39:58.661845 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:58 crc kubenswrapper[4782]: I0202 10:39:58.661858 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:58Z","lastTransitionTime":"2026-02-02T10:39:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:58 crc kubenswrapper[4782]: I0202 10:39:58.764813 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:58 crc kubenswrapper[4782]: I0202 10:39:58.764919 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:58 crc kubenswrapper[4782]: I0202 10:39:58.764937 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:58 crc kubenswrapper[4782]: I0202 10:39:58.764956 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:58 crc kubenswrapper[4782]: I0202 10:39:58.764971 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:58Z","lastTransitionTime":"2026-02-02T10:39:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:58 crc kubenswrapper[4782]: I0202 10:39:58.807178 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 16:52:40.861644665 +0000 UTC Feb 02 10:39:58 crc kubenswrapper[4782]: I0202 10:39:58.820606 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tv4xc" Feb 02 10:39:58 crc kubenswrapper[4782]: I0202 10:39:58.820901 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:39:58 crc kubenswrapper[4782]: E0202 10:39:58.821017 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tv4xc" podUID="4e23db96-3af7-4c29-b00f-5920a9431f01" Feb 02 10:39:58 crc kubenswrapper[4782]: I0202 10:39:58.821147 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:39:58 crc kubenswrapper[4782]: E0202 10:39:58.821249 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:39:58 crc kubenswrapper[4782]: E0202 10:39:58.821300 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:39:58 crc kubenswrapper[4782]: I0202 10:39:58.868439 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:58 crc kubenswrapper[4782]: I0202 10:39:58.868520 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:58 crc kubenswrapper[4782]: I0202 10:39:58.868533 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:58 crc kubenswrapper[4782]: I0202 10:39:58.868557 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:58 crc kubenswrapper[4782]: I0202 10:39:58.868576 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:58Z","lastTransitionTime":"2026-02-02T10:39:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:58 crc kubenswrapper[4782]: I0202 10:39:58.972013 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:58 crc kubenswrapper[4782]: I0202 10:39:58.972085 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:58 crc kubenswrapper[4782]: I0202 10:39:58.972097 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:58 crc kubenswrapper[4782]: I0202 10:39:58.972123 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:58 crc kubenswrapper[4782]: I0202 10:39:58.972138 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:58Z","lastTransitionTime":"2026-02-02T10:39:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:39:59 crc kubenswrapper[4782]: I0202 10:39:59.074990 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:59 crc kubenswrapper[4782]: I0202 10:39:59.075033 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:59 crc kubenswrapper[4782]: I0202 10:39:59.075042 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:59 crc kubenswrapper[4782]: I0202 10:39:59.075061 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:59 crc kubenswrapper[4782]: I0202 10:39:59.075072 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:59Z","lastTransitionTime":"2026-02-02T10:39:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:59 crc kubenswrapper[4782]: I0202 10:39:59.178825 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:59 crc kubenswrapper[4782]: I0202 10:39:59.178893 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:59 crc kubenswrapper[4782]: I0202 10:39:59.178906 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:59 crc kubenswrapper[4782]: I0202 10:39:59.178930 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:59 crc kubenswrapper[4782]: I0202 10:39:59.178948 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:59Z","lastTransitionTime":"2026-02-02T10:39:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:39:59 crc kubenswrapper[4782]: I0202 10:39:59.282262 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:39:59 crc kubenswrapper[4782]: I0202 10:39:59.282307 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:39:59 crc kubenswrapper[4782]: I0202 10:39:59.282325 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:39:59 crc kubenswrapper[4782]: I0202 10:39:59.282344 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:39:59 crc kubenswrapper[4782]: I0202 10:39:59.282356 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:59Z","lastTransitionTime":"2026-02-02T10:39:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Feb 02 10:39:59 crc kubenswrapper[4782]: I0202 10:39:59.386051 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:39:59 crc kubenswrapper[4782]: I0202 10:39:59.386115 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:39:59 crc kubenswrapper[4782]: I0202 10:39:59.386127 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:39:59 crc kubenswrapper[4782]: I0202 10:39:59.386148 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:39:59 crc kubenswrapper[4782]: I0202 10:39:59.386161 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:59Z","lastTransitionTime":"2026-02-02T10:39:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:39:59 crc kubenswrapper[4782]: I0202 10:39:59.491132 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:39:59 crc kubenswrapper[4782]: I0202 10:39:59.491183 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:39:59 crc kubenswrapper[4782]: I0202 10:39:59.491193 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:39:59 crc kubenswrapper[4782]: I0202 10:39:59.491213 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:39:59 crc kubenswrapper[4782]: I0202 10:39:59.491224 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:59Z","lastTransitionTime":"2026-02-02T10:39:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:39:59 crc kubenswrapper[4782]: I0202 10:39:59.594211 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:39:59 crc kubenswrapper[4782]: I0202 10:39:59.594675 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:39:59 crc kubenswrapper[4782]: I0202 10:39:59.594798 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:39:59 crc kubenswrapper[4782]: I0202 10:39:59.594916 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:39:59 crc kubenswrapper[4782]: I0202 10:39:59.595004 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:59Z","lastTransitionTime":"2026-02-02T10:39:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:39:59 crc kubenswrapper[4782]: I0202 10:39:59.698423 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:39:59 crc kubenswrapper[4782]: I0202 10:39:59.698848 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:39:59 crc kubenswrapper[4782]: I0202 10:39:59.698919 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:39:59 crc kubenswrapper[4782]: I0202 10:39:59.699070 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:39:59 crc kubenswrapper[4782]: I0202 10:39:59.699171 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:59Z","lastTransitionTime":"2026-02-02T10:39:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:39:59 crc kubenswrapper[4782]: I0202 10:39:59.802527 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:39:59 crc kubenswrapper[4782]: I0202 10:39:59.802809 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:39:59 crc kubenswrapper[4782]: I0202 10:39:59.802925 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:39:59 crc kubenswrapper[4782]: I0202 10:39:59.803017 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:39:59 crc kubenswrapper[4782]: I0202 10:39:59.803103 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:59Z","lastTransitionTime":"2026-02-02T10:39:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:39:59 crc kubenswrapper[4782]: I0202 10:39:59.807869 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 04:00:46.031524147 +0000 UTC
Feb 02 10:39:59 crc kubenswrapper[4782]: I0202 10:39:59.820162 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 10:39:59 crc kubenswrapper[4782]: E0202 10:39:59.820330 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 02 10:39:59 crc kubenswrapper[4782]: I0202 10:39:59.906381 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:39:59 crc kubenswrapper[4782]: I0202 10:39:59.906436 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:39:59 crc kubenswrapper[4782]: I0202 10:39:59.906453 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:39:59 crc kubenswrapper[4782]: I0202 10:39:59.906475 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:39:59 crc kubenswrapper[4782]: I0202 10:39:59.906489 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:39:59Z","lastTransitionTime":"2026-02-02T10:39:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:40:00 crc kubenswrapper[4782]: I0202 10:40:00.010155 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:40:00 crc kubenswrapper[4782]: I0202 10:40:00.010228 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:40:00 crc kubenswrapper[4782]: I0202 10:40:00.010245 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:40:00 crc kubenswrapper[4782]: I0202 10:40:00.010265 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:40:00 crc kubenswrapper[4782]: I0202 10:40:00.010280 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:00Z","lastTransitionTime":"2026-02-02T10:40:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:40:00 crc kubenswrapper[4782]: I0202 10:40:00.113051 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:40:00 crc kubenswrapper[4782]: I0202 10:40:00.113117 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:40:00 crc kubenswrapper[4782]: I0202 10:40:00.113132 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:40:00 crc kubenswrapper[4782]: I0202 10:40:00.113152 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:40:00 crc kubenswrapper[4782]: I0202 10:40:00.113418 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:00Z","lastTransitionTime":"2026-02-02T10:40:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:40:00 crc kubenswrapper[4782]: I0202 10:40:00.216604 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:40:00 crc kubenswrapper[4782]: I0202 10:40:00.217049 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:40:00 crc kubenswrapper[4782]: I0202 10:40:00.217129 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:40:00 crc kubenswrapper[4782]: I0202 10:40:00.217216 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:40:00 crc kubenswrapper[4782]: I0202 10:40:00.217296 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:00Z","lastTransitionTime":"2026-02-02T10:40:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:40:00 crc kubenswrapper[4782]: I0202 10:40:00.320736 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:40:00 crc kubenswrapper[4782]: I0202 10:40:00.321121 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:40:00 crc kubenswrapper[4782]: I0202 10:40:00.321293 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:40:00 crc kubenswrapper[4782]: I0202 10:40:00.321381 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:40:00 crc kubenswrapper[4782]: I0202 10:40:00.321455 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:00Z","lastTransitionTime":"2026-02-02T10:40:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:40:00 crc kubenswrapper[4782]: I0202 10:40:00.424607 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:40:00 crc kubenswrapper[4782]: I0202 10:40:00.424708 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:40:00 crc kubenswrapper[4782]: I0202 10:40:00.424719 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:40:00 crc kubenswrapper[4782]: I0202 10:40:00.424739 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:40:00 crc kubenswrapper[4782]: I0202 10:40:00.424752 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:00Z","lastTransitionTime":"2026-02-02T10:40:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:40:00 crc kubenswrapper[4782]: I0202 10:40:00.528683 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:40:00 crc kubenswrapper[4782]: I0202 10:40:00.528730 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:40:00 crc kubenswrapper[4782]: I0202 10:40:00.528776 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:40:00 crc kubenswrapper[4782]: I0202 10:40:00.528802 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:40:00 crc kubenswrapper[4782]: I0202 10:40:00.528818 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:00Z","lastTransitionTime":"2026-02-02T10:40:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:40:00 crc kubenswrapper[4782]: I0202 10:40:00.632463 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:40:00 crc kubenswrapper[4782]: I0202 10:40:00.632548 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:40:00 crc kubenswrapper[4782]: I0202 10:40:00.632564 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:40:00 crc kubenswrapper[4782]: I0202 10:40:00.632592 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:40:00 crc kubenswrapper[4782]: I0202 10:40:00.632611 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:00Z","lastTransitionTime":"2026-02-02T10:40:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:40:00 crc kubenswrapper[4782]: I0202 10:40:00.736187 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:40:00 crc kubenswrapper[4782]: I0202 10:40:00.736239 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:40:00 crc kubenswrapper[4782]: I0202 10:40:00.736249 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:40:00 crc kubenswrapper[4782]: I0202 10:40:00.736269 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:40:00 crc kubenswrapper[4782]: I0202 10:40:00.736283 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:00Z","lastTransitionTime":"2026-02-02T10:40:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:40:00 crc kubenswrapper[4782]: I0202 10:40:00.809347 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 04:25:54.541697627 +0000 UTC
Feb 02 10:40:00 crc kubenswrapper[4782]: I0202 10:40:00.821015 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tv4xc"
Feb 02 10:40:00 crc kubenswrapper[4782]: E0202 10:40:00.821223 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tv4xc" podUID="4e23db96-3af7-4c29-b00f-5920a9431f01"
Feb 02 10:40:00 crc kubenswrapper[4782]: I0202 10:40:00.821525 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 10:40:00 crc kubenswrapper[4782]: I0202 10:40:00.822028 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 10:40:00 crc kubenswrapper[4782]: E0202 10:40:00.822031 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 02 10:40:00 crc kubenswrapper[4782]: E0202 10:40:00.822196 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:40:00 crc kubenswrapper[4782]: I0202 10:40:00.823092 4782 scope.go:117] "RemoveContainer" containerID="c03e79e9d8e50ea1ff1ec473550eb74b39c5ba1a114e03a38c7c6ceb1ca6094a" Feb 02 10:40:00 crc kubenswrapper[4782]: E0202 10:40:00.823492 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-prbrn_openshift-ovn-kubernetes(2642ee4e-c16a-4e6e-9654-a67666f1bff8)\"" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" podUID="2642ee4e-c16a-4e6e-9654-a67666f1bff8" Feb 02 10:40:00 crc kubenswrapper[4782]: I0202 10:40:00.839711 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:00Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:00 crc kubenswrapper[4782]: I0202 10:40:00.840461 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:00 crc kubenswrapper[4782]: I0202 10:40:00.840610 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:00 crc kubenswrapper[4782]: I0202 10:40:00.840734 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:00 crc kubenswrapper[4782]: I0202 10:40:00.840835 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:00 crc kubenswrapper[4782]: I0202 10:40:00.840912 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:00Z","lastTransitionTime":"2026-02-02T10:40:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:40:00 crc kubenswrapper[4782]: I0202 10:40:00.858154 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:00Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:00 crc kubenswrapper[4782]: I0202 10:40:00.874309 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fsqgq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"04d9744a-e730-45b4-9f0c-bbb5b02cd311\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9855ee45d6787f239c2099e99544f50b4939dc9037eb5f4cfe36af9a0bed8937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7nrfs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fsqgq\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:00Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:00 crc kubenswrapper[4782]: I0202 10:40:00.887455 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-tv4xc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e23db96-3af7-4c29-b00f-5920a9431f01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpm5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpm5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:22Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-tv4xc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:00Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:00 crc 
Feb 02 10:40:00 crc kubenswrapper[4782]: I0202 10:40:00.904977 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83b81d23-4cbf-48f2-a90d-aa5b4cb3ce45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://638971b143e4defe2f04fba48e554aff3023d52863a0eb6f36c29d394de7eeb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18f8ad22dcd04bba0dd895d584affcd359ac6221153a18ee542dee979b9efe25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3c260f298316c830b06f5e3634cd6c09bd10bbaad77b46e427eb4b039eebb53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://745ed5274a247d61c320043043077f9ecb39fa3734db15aefa07a3bbb6225c26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:00Z is after 2025-08-24T17:21:41Z"
Feb 02 10:40:00 crc kubenswrapper[4782]: I0202 10:40:00.919168 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35774ab2-362c-466b-9f87-5e152d4c8235\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1f97a7bc0ebb9c8dca5e77de93b5ad8744a3ed0a3939e31500e0bb10648b1c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7d3a0cdcdd628fdec78799be1bb9aeab47b7566b765ba0b033b9e925ece0be6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a72caeec33753f69102774c7bb1501dd1c0f304ab8e821616a7d6748b4b6a23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d8715a950ba202dd87b57bd0b7465a0ca0648a865e89ee9bd94848c15675501\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d8715a950ba202dd87b57bd0b7465a0ca0648a865e89ee9bd94848c15675501\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:00Z is after 2025-08-24T17:21:41Z"
Feb 02 10:40:00 crc kubenswrapper[4782]: I0202 10:40:00.932268 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7919e98f-cc47-4f3c-9c53-6313850ea7b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://550747a1eaba426f3a6b264d3a5227e51b0171729134224fa76ba55d8ad47c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://362bf5c8cbdc4831ca6ceecf9323bad1217467a40b0e8dd491098c1ae4f42810\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bhdgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:00Z is after 2025-08-24T17:21:41Z"
Feb 02 10:40:00 crc kubenswrapper[4782]: I0202 10:40:00.943672 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:00 crc kubenswrapper[4782]: I0202 10:40:00.943925 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:00 crc kubenswrapper[4782]: I0202 10:40:00.944011 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:00 crc kubenswrapper[4782]: I0202 10:40:00.944113 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:00 crc kubenswrapper[4782]: I0202 10:40:00.944265 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:00Z","lastTransitionTime":"2026-02-02T10:40:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:00 crc kubenswrapper[4782]: I0202 10:40:00.948849 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8lwfx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1edc5703-bb51-4f8a-9b73-68ba48a40ce8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2765f9fa77bc99e4983b0d6883a7156c960f2dce2c80845cd1e0810199c50eac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e2a2f7b1b3c3236d77b4858e19cacf5554526262f0fc5703460a169d3b26f7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2
c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e2a2f7b1b3c3236d77b4858e19cacf5554526262f0fc5703460a169d3b26f7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d41d28339830a5090ce01715a5bd4d985c3081932cb71594b1e928b900dc353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d41d28339830a5090ce01715a5bd4d985c3081932cb71594b1e928b900dc353\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://615aa0c2b172d7e7ab8a484a6860646491265e3ebb25f57857c45bc591d40335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://615aa0c2b172d7e7ab8a484a6860646491265e3ebb25f57857c45bc591d40335\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/
secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://143776c340b6d3345f6902124169235fcf74289d9e107b8e6bdb5023f47b73df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://143776c340b6d3345f6902124169235fcf74289d9e107b8e6bdb5023f47b73df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://754a40bbe5b071214dbd49089838b58b26d7586fdd8479a6f8522ef49c03e255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://754a40bbe5b071214dbd49089838b58b26d7586fdd8479a6f8522ef49c03e255\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e2e1801b719e8d922f05eab190ff758e85d0d05f4e26f8b8a9d5812ac5ffe6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e2e1801b719e8d922f05eab190ff758e85d0d05f4e26f8b8a9d5812ac5ffe6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:19Z\\\"}},\\\"volu
meMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8lwfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:00Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:00 crc kubenswrapper[4782]: I0202 10:40:00.963333 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x49wn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"324c55ff-8d31-4452-bb4e-2a57fbdb23c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://025c5d7b0067cd9bfd8f87926e7ec57759b83410b2be1bfddc02029f4c8e5f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxkkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e8896767e0b6039745c672852e48a5fceb954162cac8a06257129bcc84efff3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\
\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxkkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-x49wn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:00Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:00 crc kubenswrapper[4782]: I0202 10:40:00.984065 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59347d2e-80ed-46da-8b2f-87cc48c5b564\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7303c0a57d425d661744c913b418fb647290581f745f973af1507563fa70b860\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10b2c4d81a795c40e715af0855002f75e3018dc14560eb97b551186a48c52744\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":
{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://667df661cba8be98b22059905ac9fe87f5d2af093d3184cfead863474441574f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfe76e80657cb875a6dd9b9c977e6120a670fc5cd1dad2e0346b21a467b00f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58cd2e688da72a74340b25a755164b61039f7615ccda8b24b12b2b4025f89584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCou
nt\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:00Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:00 crc kubenswrapper[4782]: I0202 10:40:00.999149 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b6dd1dee15020005db2d8655165e09c72a90eaade00a66c0742c0980ccc669e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:00Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:01 crc kubenswrapper[4782]: I0202 10:40:01.017155 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aebafb5717a3d34f83e5f1ba346c51bccea6c08fb6ddc2fadc47d8e1a4df1bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://643eee6982fcc4c02b72fb6daf1a18fc5ae263ebf8ac506d1fa745d6c311b38a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:01Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:01 crc kubenswrapper[4782]: I0202 10:40:01.033430 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fptzv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa0a3c57-fe47-43dd-8905-00df4cae4fb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb00c5826e8950791c82faf09cb4417f633c04908afcaed816c63b81c05aae48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6np9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fptzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:01Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:01 crc kubenswrapper[4782]: I0202 10:40:01.047186 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:01 crc kubenswrapper[4782]: I0202 10:40:01.047244 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:01 crc kubenswrapper[4782]: I0202 10:40:01.047258 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:01 crc kubenswrapper[4782]: I0202 10:40:01.047295 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:01 crc kubenswrapper[4782]: I0202 10:40:01.047339 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:01Z","lastTransitionTime":"2026-02-02T10:40:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:01 crc kubenswrapper[4782]: I0202 10:40:01.054023 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-thvm5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70faa63d-a86d-45aa-b6fd-81fa90436da2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb015f1ff28b0f28114d4c5d3c643fdb9af2c24d6d3c4a3f34c051677c815e3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrknp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-thvm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:01Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:01 crc kubenswrapper[4782]: I0202 10:40:01.070872 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a20e66f3-5da8-4f1e-97c3-28808caf938b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ada79c9ce4369d59a46cc02abd6cf48de6fdc8fdbe39ce9111864f17c15d7b42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4740cbe92e6575bbdc497589f7ef325d88070385a35c69f1ddf7ca0865bb2624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4740cbe92e6575bbdc497589f7ef325d88070385a35c69f1ddf7ca0865bb2624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:01Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:01 crc kubenswrapper[4782]: I0202 10:40:01.088817 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfc52d94-656d-4294-b105-0f83d22c9664\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66dc97daee138ae6c8403d57638f6bd3cc1c5a08ce7e1631129017dc7046ebc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9de866cf92cbd982802a4bfa9c7f6b9839c69efb72d01f3f52e65e2355d47ded\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a81a5434992af16888e8ed5ec9555e57dd4485dd6c396816421ad07c181dae8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd62da31b65707d98011292c190f6f44ab2e60bd1339f47cc289d0b445425b60\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b089d9f482be084f3c2be3bc369e38cef6607d91af0c82b338da5c29883f6057\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 10:39:00.270088 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:39:00.270415 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:39:00.271477 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3666386151/tls.crt::/tmp/serving-cert-3666386151/tls.key\\\\\\\"\\\\nI0202 10:39:00.760414 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:39:00.783275 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:39:00.783307 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:39:00.783330 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:39:00.783335 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 10:39:00.810370 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:39:00.810474 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810498 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 10:39:00.810539 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:39:00.810558 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:39:00.810577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 10:39:00.810783 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 10:39:00.814783 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26b0186a9067ba0a405e849b58fc2f39e93d443ddf69ebd0d4fd4877f45e805\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:01Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:01 crc kubenswrapper[4782]: I0202 10:40:01.105266 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:01Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:01 crc kubenswrapper[4782]: I0202 10:40:01.123519 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://959c7dca46e7a68f4472c3aa2104c49d4e47190f0fab5c7ad2225e27e5c4585a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:01Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:01 crc kubenswrapper[4782]: I0202 10:40:01.146603 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2642ee4e-c16a-4e6e-9654-a67666f1bff8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7728a9bcd84ff5e2fad4910e321a0461ab043b453efe05ddc36d018dba38315a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://371a331afaa5bd7d90cd98ba3363865d54d8cac621ba37a40e51f0cddb4eb02b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6886dd3d4ca2fac733bc3cc105d17aff972828ba91abf5373769f272e5454ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://189c170f233964d95aebd085e016304a66076971b44f050d41d755305d11743f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://540e6587db608dea5be7483bfc3a5c4d44da51ecf3e9bfe12550aa8bf5a74fac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7645944a876f5f8139960a66e91d19644df23b8bfbdb64c9d14eec7298ac150\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c03e79e9d8e50ea1ff1ec473550eb74b39c5ba1a114e03a38c7c6ceb1ca6094a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c03e79e9d8e50ea1ff1ec473550eb74b39c5ba1a114e03a38c7c6ceb1ca6094a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T10:39:43Z\\\",\\\"message\\\":\\\"{},},Conditions:[]Condition{},},}\\\\nI0202 10:39:43.908257 6438 lb_config.go:1031] Cluster endpoints for openshift-operator-lifecycle-manager/olm-operator-metrics for network=default are: map[]\\\\nI0202 10:39:43.908265 6438 services_controller.go:443] Built service openshift-operator-lifecycle-manager/olm-operator-metrics LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.5.168\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:8443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0202 10:39:43.908275 6438 services_controller.go:444] Built service openshift-operator-lifecycle-manager/olm-operator-metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0202 10:39:43.908281 6438 services_controller.go:445] Built service openshift-operator-lifecycle-manager/olm-operator-metrics LB template configs for network=default: []services.lbConfig(nil)\\\\nF0202 10:39:43.908156 6438 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller ini\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:43Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-prbrn_openshift-ovn-kubernetes(2642ee4e-c16a-4e6e-9654-a67666f1bff8)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://344eacf0d6239238cbf13b94f80d88a36589057a177a0f7a3d5629f7975c02d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-prbrn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:01Z is after 2025-08-24T17:21:41Z"
Feb 02 10:40:01 crc kubenswrapper[4782]: I0202 10:40:01.150114 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:40:01 crc kubenswrapper[4782]: I0202 10:40:01.150148 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:40:01 crc kubenswrapper[4782]: I0202 10:40:01.150158 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:40:01 crc kubenswrapper[4782]: I0202 10:40:01.150172 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:40:01 crc kubenswrapper[4782]: I0202 10:40:01.150182 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:01Z","lastTransitionTime":"2026-02-02T10:40:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:40:01 crc kubenswrapper[4782]: I0202 10:40:01.252588 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:40:01 crc kubenswrapper[4782]: I0202 10:40:01.252663 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:40:01 crc kubenswrapper[4782]: I0202 10:40:01.252672 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:40:01 crc kubenswrapper[4782]: I0202 10:40:01.252685 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:40:01 crc kubenswrapper[4782]: I0202 10:40:01.252693 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:01Z","lastTransitionTime":"2026-02-02T10:40:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:40:01 crc kubenswrapper[4782]: I0202 10:40:01.260527 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-fsqgq_04d9744a-e730-45b4-9f0c-bbb5b02cd311/kube-multus/0.log"
Feb 02 10:40:01 crc kubenswrapper[4782]: I0202 10:40:01.260579 4782 generic.go:334] "Generic (PLEG): container finished" podID="04d9744a-e730-45b4-9f0c-bbb5b02cd311" containerID="9855ee45d6787f239c2099e99544f50b4939dc9037eb5f4cfe36af9a0bed8937" exitCode=1
Feb 02 10:40:01 crc kubenswrapper[4782]: I0202 10:40:01.260613 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-fsqgq" event={"ID":"04d9744a-e730-45b4-9f0c-bbb5b02cd311","Type":"ContainerDied","Data":"9855ee45d6787f239c2099e99544f50b4939dc9037eb5f4cfe36af9a0bed8937"}
Feb 02 10:40:01 crc kubenswrapper[4782]: I0202 10:40:01.261043 4782 scope.go:117] "RemoveContainer" containerID="9855ee45d6787f239c2099e99544f50b4939dc9037eb5f4cfe36af9a0bed8937"
Feb 02 10:40:01 crc kubenswrapper[4782]: I0202 10:40:01.283231 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2642ee4e-c16a-4e6e-9654-a67666f1bff8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7728a9bcd84ff5e2fad4910e321a0461ab043b453efe05ddc36d018dba38315a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://371a331afaa5bd7d90cd98ba3363865d54d8cac621ba37a40e51f0cddb4eb02b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6886dd3d4ca2fac733bc3cc105d17aff972828ba91abf5373769f272e5454ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://189c170f233964d95aebd085e016304a66076971b44f050d41d755305d11743f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://540e6587db608dea5be7483bfc3a5c4d44da51ecf3e9bfe12550aa8bf5a74fac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7645944a876f5f8139960a66e91d19644df23b8bfbdb64c9d14eec7298ac150\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c03e79e9d8e50ea1ff1ec473550eb74b39c5ba1a114e03a38c7c6ceb1ca6094a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c03e79e9d8e50ea1ff1ec473550eb74b39c5ba1a114e03a38c7c6ceb1ca6094a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T10:39:43Z\\\",\\\"message\\\":\\\"{},},Conditions:[]Condition{},},}\\\\nI0202 10:39:43.908257 6438 lb_config.go:1031] Cluster endpoints for openshift-operator-lifecycle-manager/olm-operator-metrics for network=default are: map[]\\\\nI0202 10:39:43.908265 6438 services_controller.go:443] Built service openshift-operator-lifecycle-manager/olm-operator-metrics LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.5.168\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:8443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0202 10:39:43.908275 6438 services_controller.go:444] Built service openshift-operator-lifecycle-manager/olm-operator-metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0202 10:39:43.908281 6438 services_controller.go:445] Built service openshift-operator-lifecycle-manager/olm-operator-metrics LB template configs for network=default: []services.lbConfig(nil)\\\\nF0202 10:39:43.908156 6438 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller ini\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:43Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-prbrn_openshift-ovn-kubernetes(2642ee4e-c16a-4e6e-9654-a67666f1bff8)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://344eacf0d6239238cbf13b94f80d88a36589057a177a0f7a3d5629f7975c02d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-prbrn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:01Z is after 2025-08-24T17:21:41Z"
Feb 02 10:40:01 crc kubenswrapper[4782]: I0202 10:40:01.295555 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a20e66f3-5da8-4f1e-97c3-28808caf938b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ada79c9ce4369d59a46cc02abd6cf48de6fdc8fdbe39ce9111864f17c15d7b42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4740cbe92e6575bbdc497589f7ef325d88070385a35c69f1ddf7ca0865bb2624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4740cbe92e6575bbdc497589f7ef325d88070385a35c69f1ddf7ca0865bb2624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:01Z is after 2025-08-24T17:21:41Z"
Feb 02 10:40:01 crc kubenswrapper[4782]: I0202 10:40:01.308910 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfc52d94-656d-4294-b105-0f83d22c9664\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66dc97daee138ae6c8403d57638f6bd3cc1c5a08ce7e1631129017dc7046ebc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9de866cf92cbd982802a4bfa9c7f6b9839c69efb72d01f3f52e65e2355d47ded\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a81a5434992af16888e8ed5ec9555e57dd4485dd6c396816421ad07c181dae8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd62da31b65707d98011292c190f6f44ab2e60bd1339f47cc289d0b445425b60\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b089d9f482be084f3c2be3bc369e38cef6607d91af0c82b338da5c29883f6057\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 10:39:00.270088 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:39:00.270415 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:39:00.271477 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3666386151/tls.crt::/tmp/serving-cert-3666386151/tls.key\\\\\\\"\\\\nI0202 10:39:00.760414 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:39:00.783275 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:39:00.783307 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:39:00.783330 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:39:00.783335 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 10:39:00.810370 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:39:00.810474 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810498 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 10:39:00.810539 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:39:00.810558 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:39:00.810577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 10:39:00.810783 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 10:39:00.814783 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26b0186a9067ba0a405e849b58fc2f39e93d443ddf69ebd0d4fd4877f45e805\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:01Z is after 2025-08-24T17:21:41Z"
Feb 02 10:40:01 crc kubenswrapper[4782]: I0202 10:40:01.321573 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:01Z is after 2025-08-24T17:21:41Z"
Feb 02 10:40:01 crc kubenswrapper[4782]: I0202 10:40:01.334463 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://959c7dca46e7a68f4472c3aa2104c49d4e47190f0fab5c7ad2225e27e5c4585a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:01Z is after 2025-08-24T17:21:41Z"
Feb 02 10:40:01 crc kubenswrapper[4782]: I0202 10:40:01.345666 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:01Z is after 2025-08-24T17:21:41Z"
Feb 02 10:40:01 crc kubenswrapper[4782]: I0202 10:40:01.355616 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:40:01 crc kubenswrapper[4782]: I0202 10:40:01.355683 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:40:01 crc kubenswrapper[4782]: I0202 10:40:01.355700 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:40:01 crc kubenswrapper[4782]: I0202 10:40:01.355719 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:40:01 crc kubenswrapper[4782]: I0202 10:40:01.355729 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:01Z","lastTransitionTime":"2026-02-02T10:40:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:40:01 crc kubenswrapper[4782]: I0202 10:40:01.357888 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:01Z is after 2025-08-24T17:21:41Z"
Feb 02 10:40:01 crc kubenswrapper[4782]: I0202 10:40:01.370707 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fsqgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04d9744a-e730-45b4-9f0c-bbb5b02cd311\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:40:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:40:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9855ee45d6787f239c2099e99544f50b4939dc9037eb5f4cfe36af9a0bed8937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9855ee45d6787f239c2099e99544f50b4939dc9037eb5f4cfe36af9a0bed8937\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T10:40:01Z\\\",\\\"message\\\":\\\"2026-02-02T10:39:14+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_5834fa1a-51a3-468c-a754-b8eb3749cc9c\\\\n2026-02-02T10:39:14+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_5834fa1a-51a3-468c-a754-b8eb3749cc9c to /host/opt/cni/bin/\\\\n2026-02-02T10:39:15Z [verbose] multus-daemon started\\\\n2026-02-02T10:39:15Z [verbose] Readiness Indicator file check\\\\n2026-02-02T10:40:00Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7nrfs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fsqgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:01Z is after 2025-08-24T17:21:41Z"
Feb 02 10:40:01 crc kubenswrapper[4782]: I0202 10:40:01.381156 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x49wn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"324c55ff-8d31-4452-bb4e-2a57fbdb23c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://025c5d7b0067cd9bfd8f87926e7ec57759b83410b2be1bfddc02029f4c8e5f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxkkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e8896767e0b6039745c672852e48a5fceb954162cac8a06257129bcc84efff3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxkkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-x49wn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:01Z is after 2025-08-24T17:21:41Z"
Feb 02 10:40:01 crc kubenswrapper[4782]: I0202 10:40:01.389449 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-tv4xc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e23db96-3af7-4c29-b00f-5920a9431f01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpm5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpm5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:22Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-tv4xc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:01Z is after 2025-08-24T17:21:41Z"
Feb 02 10:40:01 crc kubenswrapper[4782]: I0202 10:40:01.399513 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83b81d23-4cbf-48f2-a90d-aa5b4cb3ce45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://638971b143e4defe2f04fba48e554aff3023d52863a0eb6f36c29d394de7eeb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18f8ad22dcd04bba0dd895d584affcd359ac6221153a18ee542dee979b9efe25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3c260f298316c830b06f5e3634cd6c09bd10bbaad77b46e427eb4b039eebb53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://745ed5274a247d61c320043043077f9ecb39fa3734db15aefa07a3bbb6225c26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:01Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:01 crc kubenswrapper[4782]: I0202 10:40:01.411713 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35774ab2-362c-466b-9f87-5e152d4c8235\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1f97a7bc0ebb9c8dca5e77de93b5ad8744a3ed0a3939e31500e0bb10648b1c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7d3a0cdcdd628fdec78799be1bb9aeab47b7566b765ba0b033b9e925ece0be6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a72caeec33753f69102774c7bb1501dd1c0f304ab8e821616a7d6748b4b6a23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d8715a950ba202dd87b57bd0b7465a0ca0648a865e89ee9bd94848c15675501\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d8715a950ba202dd87b57bd0b7465a0ca0648a865e89ee9bd94848c15675501\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:01Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:01 crc kubenswrapper[4782]: I0202 10:40:01.423911 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7919e98f-cc47-4f3c-9c53-6313850ea7b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://550747a1eaba426f3a6b264d3a5227e51b0171729134224fa76ba55d8ad47c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://362bf5c8cbdc4831ca6ceecf9323bad1217467a40b0e8dd491098c1ae4f42810\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bhdgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:01Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:01 crc kubenswrapper[4782]: I0202 10:40:01.440308 4782 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8lwfx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1edc5703-bb51-4f8a-9b73-68ba48a40ce8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2765f9fa77bc99e4983b0d6883a7156c960f2dce2c80845cd1e0810199c50eac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e2a2f7b1b3c3236d77b4858e19cacf5554526262f0fc5703460a169d3b26f7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e2a2f7b1b3c3236d77b4858e19cacf5554526262f0fc5703460a169d3b26f7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d41d28339830a5090ce01715a5bd4d985c3081932cb71594b1e928b900dc353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d41d28339830a5090ce01715a5bd4d985c3081932cb71594b1e928b900dc353\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://615aa0c2b172d7e7ab8a484a6860646491265e3ebb25f57857c45bc591d40335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://615aa0c2b172d7e7ab8a484a6860646491265e3ebb25f57857c45bc591d40335\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://143776c340b6d3345f6902124169235fcf74289d9e107b8e6bdb5023f47b73df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://143776c340b6d3345f6902124169235fcf74289d9e107b8e6bdb5023f47b73df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://754a40bbe5b071214dbd49089838b58b26d7586fdd8479a6f8522ef49c03e255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://754a40bbe5b071214dbd49089838b58b26d7586fdd8479a6f8522ef49c03e255\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e2e1801b719e8d922f05eab190ff758e85d0d05f4e26f8b8a9d5812ac5ffe6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e2e1801b719e8d922f05eab190ff758e85d0d05f4e26f8b8a9d5812ac5ffe6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8lwfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:01Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:01 crc kubenswrapper[4782]: I0202 10:40:01.451440 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-thvm5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"70faa63d-a86d-45aa-b6fd-81fa90436da2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb015f1ff28b0f28114d4c5d3c643fdb9af2c24d6d3c4a3f34c051677c815e3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrknp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-thvm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:01Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:01 crc kubenswrapper[4782]: I0202 10:40:01.458086 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:01 crc kubenswrapper[4782]: I0202 10:40:01.458116 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:01 crc kubenswrapper[4782]: I0202 10:40:01.458125 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:01 crc kubenswrapper[4782]: I0202 10:40:01.458140 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:01 crc kubenswrapper[4782]: I0202 10:40:01.458150 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:01Z","lastTransitionTime":"2026-02-02T10:40:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:01 crc kubenswrapper[4782]: I0202 10:40:01.474664 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59347d2e-80ed-46da-8b2f-87cc48c5b564\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7303c0a57d425d661744c913b418fb647290581f745f973af1507563fa70b860\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10b2c4d81a795c40e715af0855002f75e3018dc14560eb97b551186a48c52744\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://667df661cba8be98b22059905ac9fe87f5d2af093d3184cfead863474441574f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":
0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfe76e80657cb875a6dd9b9c977e6120a670fc5cd1dad2e0346b21a467b00f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58cd2e688da72a74340b25a755164b61039f7615ccda8b24b12b2b4025f89584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termi
nated\\\":{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:01Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:01 crc kubenswrapper[4782]: I0202 10:40:01.489748 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b6dd1dee15020005db2d8655165e09c72a90eaade00a66c0742c0980ccc669e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:01Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:01 crc kubenswrapper[4782]: I0202 10:40:01.505018 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aebafb5717a3d34f83e5f1ba346c51bccea6c08fb6ddc2fadc47d8e1a4df1bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://643eee6982fcc4c02b72fb6daf1a18fc5ae263ebf8ac506d1fa745d6c311b38a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:01Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:01 crc kubenswrapper[4782]: I0202 10:40:01.517003 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fptzv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa0a3c57-fe47-43dd-8905-00df4cae4fb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb00c5826e8950791c82faf09cb4417f633c04908afcaed816c63b81c05aae48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6np9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fptzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:01Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:01 crc kubenswrapper[4782]: I0202 10:40:01.560973 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:01 crc kubenswrapper[4782]: I0202 10:40:01.561026 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:01 crc kubenswrapper[4782]: I0202 10:40:01.561040 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:01 crc kubenswrapper[4782]: I0202 10:40:01.561060 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:01 crc kubenswrapper[4782]: I0202 10:40:01.561073 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:01Z","lastTransitionTime":"2026-02-02T10:40:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:01 crc kubenswrapper[4782]: I0202 10:40:01.663996 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:01 crc kubenswrapper[4782]: I0202 10:40:01.664045 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:01 crc kubenswrapper[4782]: I0202 10:40:01.664054 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:01 crc kubenswrapper[4782]: I0202 10:40:01.664069 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:01 crc kubenswrapper[4782]: I0202 10:40:01.664080 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:01Z","lastTransitionTime":"2026-02-02T10:40:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:01 crc kubenswrapper[4782]: I0202 10:40:01.766787 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:01 crc kubenswrapper[4782]: I0202 10:40:01.766827 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:01 crc kubenswrapper[4782]: I0202 10:40:01.766837 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:01 crc kubenswrapper[4782]: I0202 10:40:01.766851 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:01 crc kubenswrapper[4782]: I0202 10:40:01.766861 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:01Z","lastTransitionTime":"2026-02-02T10:40:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:01 crc kubenswrapper[4782]: I0202 10:40:01.810431 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 18:45:12.209195467 +0000 UTC Feb 02 10:40:01 crc kubenswrapper[4782]: I0202 10:40:01.820773 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:40:01 crc kubenswrapper[4782]: E0202 10:40:01.820902 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:40:01 crc kubenswrapper[4782]: I0202 10:40:01.872057 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:01 crc kubenswrapper[4782]: I0202 10:40:01.872108 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:01 crc kubenswrapper[4782]: I0202 10:40:01.872118 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:01 crc kubenswrapper[4782]: I0202 10:40:01.872137 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:01 crc kubenswrapper[4782]: I0202 10:40:01.872153 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:01Z","lastTransitionTime":"2026-02-02T10:40:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:01 crc kubenswrapper[4782]: I0202 10:40:01.975278 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:01 crc kubenswrapper[4782]: I0202 10:40:01.975332 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:01 crc kubenswrapper[4782]: I0202 10:40:01.975342 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:01 crc kubenswrapper[4782]: I0202 10:40:01.975354 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:01 crc kubenswrapper[4782]: I0202 10:40:01.975364 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:01Z","lastTransitionTime":"2026-02-02T10:40:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:40:02 crc kubenswrapper[4782]: I0202 10:40:02.077514 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:02 crc kubenswrapper[4782]: I0202 10:40:02.077561 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:02 crc kubenswrapper[4782]: I0202 10:40:02.077570 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:02 crc kubenswrapper[4782]: I0202 10:40:02.077584 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:02 crc kubenswrapper[4782]: I0202 10:40:02.077595 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:02Z","lastTransitionTime":"2026-02-02T10:40:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:02 crc kubenswrapper[4782]: I0202 10:40:02.180209 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:02 crc kubenswrapper[4782]: I0202 10:40:02.180257 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:02 crc kubenswrapper[4782]: I0202 10:40:02.180269 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:02 crc kubenswrapper[4782]: I0202 10:40:02.180286 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:02 crc kubenswrapper[4782]: I0202 10:40:02.180298 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:02Z","lastTransitionTime":"2026-02-02T10:40:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:40:02 crc kubenswrapper[4782]: I0202 10:40:02.265461 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-fsqgq_04d9744a-e730-45b4-9f0c-bbb5b02cd311/kube-multus/0.log" Feb 02 10:40:02 crc kubenswrapper[4782]: I0202 10:40:02.265535 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-fsqgq" event={"ID":"04d9744a-e730-45b4-9f0c-bbb5b02cd311","Type":"ContainerStarted","Data":"b95cef2b56d3accf4543313f016af02ffe4af02c759d6f688c31f7d9749e0aad"} Feb 02 10:40:02 crc kubenswrapper[4782]: I0202 10:40:02.278091 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:02Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:02 crc kubenswrapper[4782]: I0202 10:40:02.282162 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:02 crc kubenswrapper[4782]: I0202 10:40:02.282225 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:02 crc kubenswrapper[4782]: I0202 10:40:02.282238 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:02 crc kubenswrapper[4782]: I0202 10:40:02.282256 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:02 crc kubenswrapper[4782]: I0202 10:40:02.282268 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:02Z","lastTransitionTime":"2026-02-02T10:40:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:40:02 crc kubenswrapper[4782]: I0202 10:40:02.290484 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:02Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:02 crc kubenswrapper[4782]: I0202 10:40:02.302943 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fsqgq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"04d9744a-e730-45b4-9f0c-bbb5b02cd311\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:40:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:40:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b95cef2b56d3accf4543313f016af02ffe4af02c759d6f688c31f7d9749e0aad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9855ee45d6787f239c2099e99544f50b4939dc9037eb5f4cfe36af9a0bed8937\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T10:40:01Z\\\",\\\"message\\\":\\\"2026-02-02T10:39:14+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_5834fa1a-51a3-468c-a754-b8eb3749cc9c\\\\n2026-02-02T10:39:14+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_5834fa1a-51a3-468c-a754-b8eb3749cc9c to /host/opt/cni/bin/\\\\n2026-02-02T10:39:15Z [verbose] multus-daemon started\\\\n2026-02-02T10:39:15Z [verbose] Readiness Indicator file check\\\\n2026-02-02T10:40:00Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:40:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7nrfs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fsqgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:02Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:02 crc kubenswrapper[4782]: I0202 10:40:02.316967 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"83b81d23-4cbf-48f2-a90d-aa5b4cb3ce45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://638971b143e4defe2f04fba48e554aff3023d52863a0eb6f36c29d394de7eeb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18f8ad22dcd04bba0dd895d584affcd359ac6221153a18ee542dee979b9efe25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3c260f298316c830b06f5e3634cd6c09bd10bbaad77b46e427eb4b039eebb53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://745ed5274a247d61c320043043077f9ecb39fa3734db15aefa07a3bbb6225c26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:02Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:02 crc kubenswrapper[4782]: I0202 10:40:02.330722 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35774ab2-362c-466b-9f87-5e152d4c8235\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1f97a7bc0ebb9c8dca5e77de93b5ad8744a3ed0a3939e31500e0bb10648b1c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7d3a0cdcdd628fdec78799be1bb9aeab47b7566b765ba0b033b9e925ece0be6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a72caeec33753f69102774c7bb1501dd1c0f304ab8e821616a7d6748b4b6a23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d8715a950ba202dd87b57bd0b7465a0ca0648a865e89ee9bd94848c15675501\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d8715a950ba202dd87b57bd0b7465a0ca0648a865e89ee9bd94848c15675501\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:02Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:02 crc kubenswrapper[4782]: I0202 10:40:02.341245 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7919e98f-cc47-4f3c-9c53-6313850ea7b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://550747a1eaba426f3a6b264d3a5227e51b0171729134224fa76ba55d8ad47c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://362bf5c8cbdc4831ca6ceecf9323bad1217467a40b0e8dd491098c1ae4f42810\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bhdgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:02Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:02 crc kubenswrapper[4782]: I0202 10:40:02.354621 4782 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8lwfx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1edc5703-bb51-4f8a-9b73-68ba48a40ce8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2765f9fa77bc99e4983b0d6883a7156c960f2dce2c80845cd1e0810199c50eac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e2a2f7b1b3c3236d77b4858e19cacf5554526262f0fc5703460a169d3b26f7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e2a2f7b1b3c3236d77b4858e19cacf5554526262f0fc5703460a169d3b26f7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d41d28339830a5090ce01715a5bd4d985c3081932cb71594b1e928b900dc353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d41d28339830a5090ce01715a5bd4d985c3081932cb71594b1e928b900dc353\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://615aa0c2b172d7e7ab8a484a6860646491265e3ebb25f57857c45bc591d40335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://615aa0c2b172d7e7ab8a484a6860646491265e3ebb25f57857c45bc591d40335\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://143776c340b6d3345f6902124169235fcf74289d9e107b8e6bdb5023f47b73df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://143776c340b6d3345f6902124169235fcf74289d9e107b8e6bdb5023f47b73df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://754a40bbe5b071214dbd49089838b58b26d7586fdd8479a6f8522ef49c03e255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://754a40bbe5b071214dbd49089838b58b26d7586fdd8479a6f8522ef49c03e255\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e2e1801b719e8d922f05eab190ff758e85d0d05f4e26f8b8a9d5812ac5ffe6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e2e1801b719e8d922f05eab190ff758e85d0d05f4e26f8b8a9d5812ac5ffe6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8lwfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:02Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:02 crc kubenswrapper[4782]: I0202 10:40:02.365930 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x49wn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"324c55ff-8d31-4452-bb4e-2a57fbdb23c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://025c5d7b0067cd9bfd8f87926e7ec57759b83410b2be1bfddc02029f4c8e5f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxkkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e8896767e0b6039745c672852e48a5fceb954162cac8a06257129bcc84efff3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxkkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-x49wn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:02Z is after 2025-08-24T17:21:41Z" Feb 02 
10:40:02 crc kubenswrapper[4782]: I0202 10:40:02.378946 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-tv4xc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e23db96-3af7-4c29-b00f-5920a9431f01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpm5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpm5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:22Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-tv4xc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:02Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:02 crc kubenswrapper[4782]: I0202 10:40:02.384065 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:02 crc kubenswrapper[4782]: I0202 10:40:02.384247 4782 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:02 crc kubenswrapper[4782]: I0202 10:40:02.384316 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:02 crc kubenswrapper[4782]: I0202 10:40:02.384389 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:02 crc kubenswrapper[4782]: I0202 10:40:02.384504 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:02Z","lastTransitionTime":"2026-02-02T10:40:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:02 crc kubenswrapper[4782]: I0202 10:40:02.404593 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59347d2e-80ed-46da-8b2f-87cc48c5b564\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7303c0a57d425d661744c913b418fb647290581f745f973af1507563fa70b860\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10b2c4d81a795c40e715af0855002f75e3018dc14560eb97b551186a48c52744\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernet
es/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://667df661cba8be98b22059905ac9fe87f5d2af093d3184cfead863474441574f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfe76e80657cb875a6dd9b9c977e6120a670fc5cd1dad2e0346b21a467b00f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58cd2e688da72a74340b25a755164b61039f7615ccda8b24b12b2b4025f89584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"r
eason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:02Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:02 crc kubenswrapper[4782]: I0202 10:40:02.418612 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b6dd1dee15020005db2d8655165e09c72a90eaade00a66c0742c0980ccc669e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:02Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:02 crc kubenswrapper[4782]: I0202 10:40:02.431726 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aebafb5717a3d34f83e5f1ba346c51bccea6c08fb6ddc2fadc47d8e1a4df1bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://643eee6982fcc4c02b72fb6daf1a18fc5ae263ebf8ac506d1fa745d6c311b38a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:02Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:02 crc kubenswrapper[4782]: I0202 10:40:02.441760 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fptzv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa0a3c57-fe47-43dd-8905-00df4cae4fb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb00c5826e8950791c82faf09cb4417f633c04908afcaed816c63b81c05aae48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6np9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fptzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:02Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:02 crc kubenswrapper[4782]: I0202 10:40:02.452782 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-thvm5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"70faa63d-a86d-45aa-b6fd-81fa90436da2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb015f1ff28b0f28114d4c5d3c643fdb9af2c24d6d3c4a3f34c051677c815e3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrknp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-thvm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:02Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:02 crc kubenswrapper[4782]: I0202 10:40:02.464347 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a20e66f3-5da8-4f1e-97c3-28808caf938b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ada79c9ce4369d59a46cc02abd6cf48de6fdc8fdbe39ce9111864f17c15d7b42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4740cbe92e6575bbdc497589f7ef325d88070385a35c69f1ddf7ca0865bb2624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4740cbe92e6575bbdc497589f7ef325d88070385a35c69f1ddf7ca0865bb2624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:02Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:02 crc kubenswrapper[4782]: I0202 10:40:02.480161 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfc52d94-656d-4294-b105-0f83d22c9664\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66dc97daee138ae6c8403d57638f6bd3cc1c5a08ce7e1631129017dc7046ebc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9de866cf92cbd982802a4bfa9c7f6b9839c69efb72d01f3f52e65e2355d47ded\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a81a5434992af16888e8ed5ec9555e57dd4485dd6c396816421ad07c181dae8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd62da31b65707d98011292c190f6f44ab2e60bd1339f47cc289d0b445425b60\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b089d9f482be084f3c2be3bc369e38cef6607d91af0c82b338da5c29883f6057\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 10:39:00.270088 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:39:00.270415 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:39:00.271477 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3666386151/tls.crt::/tmp/serving-cert-3666386151/tls.key\\\\\\\"\\\\nI0202 10:39:00.760414 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:39:00.783275 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:39:00.783307 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:39:00.783330 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:39:00.783335 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 10:39:00.810370 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:39:00.810474 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810498 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 10:39:00.810539 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:39:00.810558 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:39:00.810577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 10:39:00.810783 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 10:39:00.814783 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26b0186a9067ba0a405e849b58fc2f39e93d443ddf69ebd0d4fd4877f45e805\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:02Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:02 crc kubenswrapper[4782]: I0202 10:40:02.487275 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:02 crc kubenswrapper[4782]: I0202 10:40:02.487482 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:02 crc kubenswrapper[4782]: I0202 10:40:02.487548 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:02 crc kubenswrapper[4782]: I0202 10:40:02.487658 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:02 crc kubenswrapper[4782]: I0202 10:40:02.487722 4782 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:02Z","lastTransitionTime":"2026-02-02T10:40:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:02 crc kubenswrapper[4782]: I0202 10:40:02.494601 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:02Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:02 crc kubenswrapper[4782]: I0202 10:40:02.507333 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://959c7dca46e7a68f4472c3aa2104c49d4e47190f0fab5c7ad2225e27e5c4585a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:02Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:02 crc kubenswrapper[4782]: I0202 10:40:02.529045 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2642ee4e-c16a-4e6e-9654-a67666f1bff8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7728a9bcd84ff5e2fad4910e321a0461ab043b453efe05ddc36d018dba38315a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://371a331afaa5bd7d90cd98ba3363865d54d8cac621ba37a40e51f0cddb4eb02b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6886dd3d4ca2fac733bc3cc105d17aff972828ba91abf5373769f272e5454ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\
"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://189c170f233964d95aebd085e016304a66076971b44f050d41d755305d11743f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://540e6587db608dea5be7483bfc3a5c4d44da51ecf3e9bfe12550aa8bf5a74fac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7645944a876f5f8139960a66e91d19644df23b8bfbdb64c9d14eec7298ac150\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started
\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c03e79e9d8e50ea1ff1ec473550eb74b39c5ba1a114e03a38c7c6ceb1ca6094a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c03e79e9d8e50ea1ff1ec473550eb74b39c5ba1a114e03a38c7c6ceb1ca6094a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T10:39:43Z\\\",\\\"message\\\":\\\"{},},Conditions:[]Condition{},},}\\\\nI0202 10:39:43.908257 6438 lb_config.go:1031] Cluster endpoints for openshift-operator-lifecycle-manager/olm-operator-metrics for network=default are: map[]\\\\nI0202 10:39:43.908265 6438 services_controller.go:443] Built service openshift-operator-lifecycle-manager/olm-operator-metrics LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.5.168\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:8443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0202 10:39:43.908275 6438 services_controller.go:444] Built service openshift-operator-lifecycle-manager/olm-operator-metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0202 10:39:43.908281 6438 services_controller.go:445] Built service openshift-operator-lifecycle-manager/olm-operator-metrics LB template configs for network=default: []services.lbConfig(nil)\\\\nF0202 10:39:43.908156 6438 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller ini\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:43Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-prbrn_openshift-ovn-kubernetes(2642ee4e-c16a-4e6e-9654-a67666f1bff8)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://344eacf0d6239238cbf13b94f80d88a36589057a177a0f7a3d5629f7975c02d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-prbrn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:02Z is after 2025-08-24T17:21:41Z"
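Every "Failed to update status for pod" entry above fails the same way: the API server must consult the validating webhook "pod.network-node-identity.openshift.io" at https://127.0.0.1:9743 before accepting the status patch, and the TLS handshake is rejected because the webhook's serving certificate expired on 2025-08-24T17:21:41Z while the node clock reads 2026-02-02. Below is a minimal Go sketch of the validity-window check that crypto/x509 applies during verification and that yields exactly this error string; the certificate path mirrors the webhook container's /etc/webhook-cert/ mount seen in the pod status above, but the program itself is an illustration, not the webhook's code.

```go
// expirycheck.go - minimal sketch of the x509 validity-window check.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// checkValidity mirrors the NotBefore/NotAfter comparison that makes the
// handshake fail with "certificate has expired or is not yet valid".
func checkValidity(certPEM []byte, now time.Time) error {
	block, _ := pem.Decode(certPEM)
	if block == nil {
		return fmt.Errorf("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return err
	}
	if now.Before(cert.NotBefore) || now.After(cert.NotAfter) {
		return x509.CertificateInvalidError{
			Cert:   cert,
			Reason: x509.Expired,
			Detail: fmt.Sprintf("current time %s is after %s",
				now.Format(time.RFC3339), cert.NotAfter.Format(time.RFC3339)),
		}
	}
	return nil
}

func main() {
	// Path assumed from the webhook container's volumeMounts above.
	pemBytes, err := os.ReadFile("/etc/webhook-cert/tls.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if err := checkValidity(pemBytes, time.Now()); err != nil {
		fmt.Println("verify failed:", err)
	}
}
```

Until that serving certificate is renewed (or the node clock corrected), every status patch that reaches the webhook will keep failing with the same message, which is why the identical error repeats for each pod below.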
Has your network provider started?"} Feb 02 10:40:02 crc kubenswrapper[4782]: I0202 10:40:02.694124 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:02 crc kubenswrapper[4782]: I0202 10:40:02.694159 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:02 crc kubenswrapper[4782]: I0202 10:40:02.694171 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:02 crc kubenswrapper[4782]: I0202 10:40:02.694188 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:02 crc kubenswrapper[4782]: I0202 10:40:02.694200 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:02Z","lastTransitionTime":"2026-02-02T10:40:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:02 crc kubenswrapper[4782]: I0202 10:40:02.796961 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:02 crc kubenswrapper[4782]: I0202 10:40:02.796990 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:02 crc kubenswrapper[4782]: I0202 10:40:02.797000 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:02 crc kubenswrapper[4782]: I0202 10:40:02.797013 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:02 crc kubenswrapper[4782]: I0202 10:40:02.797023 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:02Z","lastTransitionTime":"2026-02-02T10:40:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:02 crc kubenswrapper[4782]: I0202 10:40:02.810922 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 04:29:14.163688219 +0000 UTC Feb 02 10:40:02 crc kubenswrapper[4782]: I0202 10:40:02.820629 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:40:02 crc kubenswrapper[4782]: E0202 10:40:02.820856 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:40:02 crc kubenswrapper[4782]: I0202 10:40:02.820973 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:40:02 crc kubenswrapper[4782]: E0202 10:40:02.821363 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:40:02 crc kubenswrapper[4782]: I0202 10:40:02.821567 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tv4xc" Feb 02 10:40:02 crc kubenswrapper[4782]: E0202 10:40:02.821944 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tv4xc" podUID="4e23db96-3af7-4c29-b00f-5920a9431f01" Feb 02 10:40:02 crc kubenswrapper[4782]: I0202 10:40:02.899165 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:02 crc kubenswrapper[4782]: I0202 10:40:02.899203 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:02 crc kubenswrapper[4782]: I0202 10:40:02.899214 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:02 crc kubenswrapper[4782]: I0202 10:40:02.899231 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:02 crc kubenswrapper[4782]: I0202 10:40:02.899245 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:02Z","lastTransitionTime":"2026-02-02T10:40:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:40:03 crc kubenswrapper[4782]: I0202 10:40:03.001375 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:03 crc kubenswrapper[4782]: I0202 10:40:03.001411 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:03 crc kubenswrapper[4782]: I0202 10:40:03.001423 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:03 crc kubenswrapper[4782]: I0202 10:40:03.001439 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:03 crc kubenswrapper[4782]: I0202 10:40:03.001450 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:03Z","lastTransitionTime":"2026-02-02T10:40:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:03 crc kubenswrapper[4782]: I0202 10:40:03.103700 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:03 crc kubenswrapper[4782]: I0202 10:40:03.103745 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:03 crc kubenswrapper[4782]: I0202 10:40:03.103756 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:03 crc kubenswrapper[4782]: I0202 10:40:03.103774 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:03 crc kubenswrapper[4782]: I0202 10:40:03.103788 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:03Z","lastTransitionTime":"2026-02-02T10:40:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:03 crc kubenswrapper[4782]: I0202 10:40:03.205463 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:03 crc kubenswrapper[4782]: I0202 10:40:03.205496 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:03 crc kubenswrapper[4782]: I0202 10:40:03.205504 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:03 crc kubenswrapper[4782]: I0202 10:40:03.205518 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:03 crc kubenswrapper[4782]: I0202 10:40:03.205527 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:03Z","lastTransitionTime":"2026-02-02T10:40:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:40:03 crc kubenswrapper[4782]: I0202 10:40:03.309218 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:03 crc kubenswrapper[4782]: I0202 10:40:03.309504 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:03 crc kubenswrapper[4782]: I0202 10:40:03.309595 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:03 crc kubenswrapper[4782]: I0202 10:40:03.309715 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:03 crc kubenswrapper[4782]: I0202 10:40:03.309807 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:03Z","lastTransitionTime":"2026-02-02T10:40:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:03 crc kubenswrapper[4782]: I0202 10:40:03.432039 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:03 crc kubenswrapper[4782]: I0202 10:40:03.432127 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:03 crc kubenswrapper[4782]: I0202 10:40:03.432142 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:03 crc kubenswrapper[4782]: I0202 10:40:03.432165 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:03 crc kubenswrapper[4782]: I0202 10:40:03.432181 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:03Z","lastTransitionTime":"2026-02-02T10:40:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:03 crc kubenswrapper[4782]: I0202 10:40:03.535144 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:03 crc kubenswrapper[4782]: I0202 10:40:03.535253 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:03 crc kubenswrapper[4782]: I0202 10:40:03.535278 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:03 crc kubenswrapper[4782]: I0202 10:40:03.535320 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:03 crc kubenswrapper[4782]: I0202 10:40:03.535346 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:03Z","lastTransitionTime":"2026-02-02T10:40:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:40:03 crc kubenswrapper[4782]: I0202 10:40:03.639307 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:03 crc kubenswrapper[4782]: I0202 10:40:03.639401 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:03 crc kubenswrapper[4782]: I0202 10:40:03.639446 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:03 crc kubenswrapper[4782]: I0202 10:40:03.639472 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:03 crc kubenswrapper[4782]: I0202 10:40:03.639488 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:03Z","lastTransitionTime":"2026-02-02T10:40:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:03 crc kubenswrapper[4782]: I0202 10:40:03.744011 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:03 crc kubenswrapper[4782]: I0202 10:40:03.744077 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:03 crc kubenswrapper[4782]: I0202 10:40:03.744094 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:03 crc kubenswrapper[4782]: I0202 10:40:03.744117 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:03 crc kubenswrapper[4782]: I0202 10:40:03.744134 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:03Z","lastTransitionTime":"2026-02-02T10:40:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:03 crc kubenswrapper[4782]: I0202 10:40:03.811945 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 09:31:54.871479391 +0000 UTC Feb 02 10:40:03 crc kubenswrapper[4782]: I0202 10:40:03.820316 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:40:03 crc kubenswrapper[4782]: E0202 10:40:03.820547 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:40:03 crc kubenswrapper[4782]: I0202 10:40:03.847770 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:03 crc kubenswrapper[4782]: I0202 10:40:03.848149 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:03 crc kubenswrapper[4782]: I0202 10:40:03.848250 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:03 crc kubenswrapper[4782]: I0202 10:40:03.848344 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:03 crc kubenswrapper[4782]: I0202 10:40:03.848416 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:03Z","lastTransitionTime":"2026-02-02T10:40:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:03 crc kubenswrapper[4782]: I0202 10:40:03.951594 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:03 crc kubenswrapper[4782]: I0202 10:40:03.951651 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:03 crc kubenswrapper[4782]: I0202 10:40:03.951661 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:03 crc kubenswrapper[4782]: I0202 10:40:03.951675 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:03 crc kubenswrapper[4782]: I0202 10:40:03.951683 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:03Z","lastTransitionTime":"2026-02-02T10:40:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:40:04 crc kubenswrapper[4782]: I0202 10:40:04.054296 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:04 crc kubenswrapper[4782]: I0202 10:40:04.054329 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:04 crc kubenswrapper[4782]: I0202 10:40:04.054338 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:04 crc kubenswrapper[4782]: I0202 10:40:04.054357 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:04 crc kubenswrapper[4782]: I0202 10:40:04.054371 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:04Z","lastTransitionTime":"2026-02-02T10:40:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:04 crc kubenswrapper[4782]: I0202 10:40:04.157756 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:04 crc kubenswrapper[4782]: I0202 10:40:04.157818 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:04 crc kubenswrapper[4782]: I0202 10:40:04.157829 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:04 crc kubenswrapper[4782]: I0202 10:40:04.157844 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:04 crc kubenswrapper[4782]: I0202 10:40:04.157854 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:04Z","lastTransitionTime":"2026-02-02T10:40:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:04 crc kubenswrapper[4782]: I0202 10:40:04.261118 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:04 crc kubenswrapper[4782]: I0202 10:40:04.261175 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:04 crc kubenswrapper[4782]: I0202 10:40:04.261188 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:04 crc kubenswrapper[4782]: I0202 10:40:04.261211 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:04 crc kubenswrapper[4782]: I0202 10:40:04.261226 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:04Z","lastTransitionTime":"2026-02-02T10:40:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:40:04 crc kubenswrapper[4782]: I0202 10:40:04.363774 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:04 crc kubenswrapper[4782]: I0202 10:40:04.364103 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:04 crc kubenswrapper[4782]: I0202 10:40:04.364172 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:04 crc kubenswrapper[4782]: I0202 10:40:04.364292 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:04 crc kubenswrapper[4782]: I0202 10:40:04.364357 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:04Z","lastTransitionTime":"2026-02-02T10:40:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:04 crc kubenswrapper[4782]: I0202 10:40:04.466872 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:04 crc kubenswrapper[4782]: I0202 10:40:04.467500 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:04 crc kubenswrapper[4782]: I0202 10:40:04.467598 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:04 crc kubenswrapper[4782]: I0202 10:40:04.467735 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:04 crc kubenswrapper[4782]: I0202 10:40:04.467837 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:04Z","lastTransitionTime":"2026-02-02T10:40:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:04 crc kubenswrapper[4782]: I0202 10:40:04.570343 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:04 crc kubenswrapper[4782]: I0202 10:40:04.570390 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:04 crc kubenswrapper[4782]: I0202 10:40:04.570402 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:04 crc kubenswrapper[4782]: I0202 10:40:04.570420 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:04 crc kubenswrapper[4782]: I0202 10:40:04.570432 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:04Z","lastTransitionTime":"2026-02-02T10:40:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:40:04 crc kubenswrapper[4782]: I0202 10:40:04.673117 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:04 crc kubenswrapper[4782]: I0202 10:40:04.673358 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:04 crc kubenswrapper[4782]: I0202 10:40:04.673569 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:04 crc kubenswrapper[4782]: I0202 10:40:04.673778 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:04 crc kubenswrapper[4782]: I0202 10:40:04.673859 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:04Z","lastTransitionTime":"2026-02-02T10:40:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:04 crc kubenswrapper[4782]: I0202 10:40:04.775874 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:04 crc kubenswrapper[4782]: I0202 10:40:04.775923 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:04 crc kubenswrapper[4782]: I0202 10:40:04.775935 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:04 crc kubenswrapper[4782]: I0202 10:40:04.775956 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:04 crc kubenswrapper[4782]: I0202 10:40:04.775967 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:04Z","lastTransitionTime":"2026-02-02T10:40:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
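The block above is the kubelet's network-readiness gate: the node is held NotReady, and the same Ready condition is re-recorded several times a second, until some CNI plugin drops a config file into /etc/kubernetes/cni/net.d/. On this CRC/OpenShift node that file is written by the cluster network components (OVN-Kubernetes via Multus), so the remedy is to get those network pods running rather than to author a file by hand. Purely as a sketch of the shape the kubelet is polling for, a minimal generic conflist would look like the following; the file name, network name, and subnet are illustrative placeholders, not values from this cluster.

/etc/kubernetes/cni/net.d/99-example.conflist (illustrative placeholder only):

{
  "cniVersion": "0.4.0",
  "name": "example-bridge-net",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.88.0.0/16"
      }
    }
  ]
}

Once any valid file appears in that directory (for example an ovn-kubernetes conf written by the network operator), the runtime reports NetworkReady=true and this NodeNotReady churn stops.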
Feb 02 10:40:04 crc kubenswrapper[4782]: I0202 10:40:04.812422 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 12:30:49.408254869 +0000 UTC
Feb 02 10:40:04 crc kubenswrapper[4782]: I0202 10:40:04.817897 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 10:40:04 crc kubenswrapper[4782]: I0202 10:40:04.818015 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 10:40:04 crc kubenswrapper[4782]: I0202 10:40:04.818048 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 10:40:04 crc kubenswrapper[4782]: I0202 10:40:04.818077 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 10:40:04 crc kubenswrapper[4782]: I0202 10:40:04.818099 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 10:40:04 crc kubenswrapper[4782]: E0202 10:40:04.818215 4782 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Feb 02 10:40:04 crc kubenswrapper[4782]: E0202 10:40:04.818259 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 10:41:08.818246724 +0000 UTC m=+148.702439440 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Feb 02 10:40:04 crc kubenswrapper[4782]: E0202 10:40:04.818408 4782 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Feb 02 10:40:04 crc kubenswrapper[4782]: E0202 10:40:04.818513 4782 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Feb 02 10:40:04 crc kubenswrapper[4782]: E0202 10:40:04.818548 4782 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 02 10:40:04 crc kubenswrapper[4782]: E0202 10:40:04.818560 4782 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 02 10:40:04 crc kubenswrapper[4782]: E0202 10:40:04.818433 4782 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Feb 02 10:40:04 crc kubenswrapper[4782]: E0202 10:40:04.818595 4782 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 02 10:40:04 crc kubenswrapper[4782]: E0202 10:40:04.818602 4782 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 02 10:40:04 crc kubenswrapper[4782]: E0202 10:40:04.818474 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:41:08.81846804 +0000 UTC m=+148.702660756 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 10:40:04 crc kubenswrapper[4782]: E0202 10:40:04.818630 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 10:41:08.818620825 +0000 UTC m=+148.702813541 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 10:40:04 crc kubenswrapper[4782]: E0202 10:40:04.818670 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-02 10:41:08.818635695 +0000 UTC m=+148.702828411 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 10:40:04 crc kubenswrapper[4782]: E0202 10:40:04.818684 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-02 10:41:08.818678636 +0000 UTC m=+148.702871352 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 10:40:04 crc kubenswrapper[4782]: I0202 10:40:04.821012 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:40:04 crc kubenswrapper[4782]: E0202 10:40:04.821091 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:40:04 crc kubenswrapper[4782]: I0202 10:40:04.821014 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tv4xc" Feb 02 10:40:04 crc kubenswrapper[4782]: E0202 10:40:04.821336 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tv4xc" podUID="4e23db96-3af7-4c29-b00f-5920a9431f01" Feb 02 10:40:04 crc kubenswrapper[4782]: I0202 10:40:04.821209 4782 util.go:30] "No sandbox for pod can be found. 
Feb 02 10:40:04 crc kubenswrapper[4782]: E0202 10:40:04.821571 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 02 10:40:04 crc kubenswrapper[4782]: I0202 10:40:04.878789 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:40:04 crc kubenswrapper[4782]: I0202 10:40:04.879043 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:40:04 crc kubenswrapper[4782]: I0202 10:40:04.879133 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:40:04 crc kubenswrapper[4782]: I0202 10:40:04.879225 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:40:04 crc kubenswrapper[4782]: I0202 10:40:04.879319 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:04Z","lastTransitionTime":"2026-02-02T10:40:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:40:04 crc kubenswrapper[4782]: I0202 10:40:04.982452 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:40:04 crc kubenswrapper[4782]: I0202 10:40:04.982496 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:40:04 crc kubenswrapper[4782]: I0202 10:40:04.982508 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:40:04 crc kubenswrapper[4782]: I0202 10:40:04.982526 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:40:04 crc kubenswrapper[4782]: I0202 10:40:04.982538 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:04Z","lastTransitionTime":"2026-02-02T10:40:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:40:05 crc kubenswrapper[4782]: I0202 10:40:05.085067 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:40:05 crc kubenswrapper[4782]: I0202 10:40:05.085102 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:40:05 crc kubenswrapper[4782]: I0202 10:40:05.085162 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:40:05 crc kubenswrapper[4782]: I0202 10:40:05.085181 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:40:05 crc kubenswrapper[4782]: I0202 10:40:05.085193 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:05Z","lastTransitionTime":"2026-02-02T10:40:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:40:05 crc kubenswrapper[4782]: I0202 10:40:05.187870 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:40:05 crc kubenswrapper[4782]: I0202 10:40:05.187928 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:40:05 crc kubenswrapper[4782]: I0202 10:40:05.187958 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:40:05 crc kubenswrapper[4782]: I0202 10:40:05.187979 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:40:05 crc kubenswrapper[4782]: I0202 10:40:05.187996 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:05Z","lastTransitionTime":"2026-02-02T10:40:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:40:05 crc kubenswrapper[4782]: I0202 10:40:05.290090 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:40:05 crc kubenswrapper[4782]: I0202 10:40:05.290128 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:40:05 crc kubenswrapper[4782]: I0202 10:40:05.290139 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:40:05 crc kubenswrapper[4782]: I0202 10:40:05.290157 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:40:05 crc kubenswrapper[4782]: I0202 10:40:05.290168 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:05Z","lastTransitionTime":"2026-02-02T10:40:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:40:05 crc kubenswrapper[4782]: I0202 10:40:05.393131 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:40:05 crc kubenswrapper[4782]: I0202 10:40:05.393187 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:40:05 crc kubenswrapper[4782]: I0202 10:40:05.393200 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:40:05 crc kubenswrapper[4782]: I0202 10:40:05.393218 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:40:05 crc kubenswrapper[4782]: I0202 10:40:05.393258 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:05Z","lastTransitionTime":"2026-02-02T10:40:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:40:05 crc kubenswrapper[4782]: I0202 10:40:05.495410 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:40:05 crc kubenswrapper[4782]: I0202 10:40:05.495466 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:40:05 crc kubenswrapper[4782]: I0202 10:40:05.495477 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:40:05 crc kubenswrapper[4782]: I0202 10:40:05.495493 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:40:05 crc kubenswrapper[4782]: I0202 10:40:05.495505 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:05Z","lastTransitionTime":"2026-02-02T10:40:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:40:05 crc kubenswrapper[4782]: I0202 10:40:05.597774 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:40:05 crc kubenswrapper[4782]: I0202 10:40:05.597822 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:40:05 crc kubenswrapper[4782]: I0202 10:40:05.597834 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:40:05 crc kubenswrapper[4782]: I0202 10:40:05.597854 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:40:05 crc kubenswrapper[4782]: I0202 10:40:05.597873 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:05Z","lastTransitionTime":"2026-02-02T10:40:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:40:05 crc kubenswrapper[4782]: I0202 10:40:05.700631 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:40:05 crc kubenswrapper[4782]: I0202 10:40:05.700724 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:40:05 crc kubenswrapper[4782]: I0202 10:40:05.700747 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:40:05 crc kubenswrapper[4782]: I0202 10:40:05.700777 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:40:05 crc kubenswrapper[4782]: I0202 10:40:05.700799 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:05Z","lastTransitionTime":"2026-02-02T10:40:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:40:05 crc kubenswrapper[4782]: I0202 10:40:05.803056 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:40:05 crc kubenswrapper[4782]: I0202 10:40:05.803099 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:40:05 crc kubenswrapper[4782]: I0202 10:40:05.803109 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:40:05 crc kubenswrapper[4782]: I0202 10:40:05.803124 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:40:05 crc kubenswrapper[4782]: I0202 10:40:05.803134 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:05Z","lastTransitionTime":"2026-02-02T10:40:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:40:05 crc kubenswrapper[4782]: I0202 10:40:05.813488 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 13:41:09.777358582 +0000 UTC
Feb 02 10:40:05 crc kubenswrapper[4782]: I0202 10:40:05.820966 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 10:40:05 crc kubenswrapper[4782]: E0202 10:40:05.821158 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
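The certificate_manager entries print a different rotation deadline each second, and the sampled deadlines (2025-12-11, 2025-12-11 again, and later 2026-01-10 and 2025-11-14) all lie in the past relative to this log, which is why the manager keeps re-evaluating: rotation is due but cannot complete while the node is not ready. The values are consistent with client-go's certificate manager, which, as recalled from upstream defaults (an assumption, not shown in this log), samples a jittered deadline uniformly between 70% and 90% of the certificate's validity window; for a one-year certificate expiring 2026-02-24 that window runs from early November 2025 to mid-January 2026, exactly where the logged deadlines fall. A sketch of that sampling:

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// jitteredDeadline samples a rotation time uniformly in [70%, 90%] of the
// certificate's validity window, mirroring client-go's certificate manager.
// The 0.7/0.9 bounds and the one-year notBefore are assumptions recalled
// from upstream defaults, not values read out of this log.
func jitteredDeadline(notBefore, notAfter time.Time) time.Time {
	total := notAfter.Sub(notBefore)
	fraction := 0.7 + 0.2*rand.Float64()
	return notBefore.Add(time.Duration(fraction * float64(total)))
}

func main() {
	notBefore := time.Date(2025, 2, 24, 5, 53, 3, 0, time.UTC) // assumed issue time
	notAfter := time.Date(2026, 2, 24, 5, 53, 3, 0, time.UTC)  // expiry from the log
	for i := 0; i < 3; i++ {
		fmt.Println("rotation deadline:", jitteredDeadline(notBefore, notAfter))
	}
}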
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:40:05 crc kubenswrapper[4782]: I0202 10:40:05.906436 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:05 crc kubenswrapper[4782]: I0202 10:40:05.906481 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:05 crc kubenswrapper[4782]: I0202 10:40:05.906493 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:05 crc kubenswrapper[4782]: I0202 10:40:05.906509 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:05 crc kubenswrapper[4782]: I0202 10:40:05.906520 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:05Z","lastTransitionTime":"2026-02-02T10:40:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:06 crc kubenswrapper[4782]: I0202 10:40:06.008842 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:06 crc kubenswrapper[4782]: I0202 10:40:06.008903 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:06 crc kubenswrapper[4782]: I0202 10:40:06.008917 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:06 crc kubenswrapper[4782]: I0202 10:40:06.008940 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:06 crc kubenswrapper[4782]: I0202 10:40:06.008954 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:06Z","lastTransitionTime":"2026-02-02T10:40:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:40:06 crc kubenswrapper[4782]: I0202 10:40:06.113341 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:06 crc kubenswrapper[4782]: I0202 10:40:06.113414 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:06 crc kubenswrapper[4782]: I0202 10:40:06.113430 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:06 crc kubenswrapper[4782]: I0202 10:40:06.113452 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:06 crc kubenswrapper[4782]: I0202 10:40:06.113476 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:06Z","lastTransitionTime":"2026-02-02T10:40:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:06 crc kubenswrapper[4782]: I0202 10:40:06.216285 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:06 crc kubenswrapper[4782]: I0202 10:40:06.216590 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:06 crc kubenswrapper[4782]: I0202 10:40:06.216733 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:06 crc kubenswrapper[4782]: I0202 10:40:06.216844 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:06 crc kubenswrapper[4782]: I0202 10:40:06.216955 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:06Z","lastTransitionTime":"2026-02-02T10:40:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:06 crc kubenswrapper[4782]: I0202 10:40:06.319852 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:06 crc kubenswrapper[4782]: I0202 10:40:06.320311 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:06 crc kubenswrapper[4782]: I0202 10:40:06.320415 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:06 crc kubenswrapper[4782]: I0202 10:40:06.320517 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:06 crc kubenswrapper[4782]: I0202 10:40:06.320831 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:06Z","lastTransitionTime":"2026-02-02T10:40:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:40:06 crc kubenswrapper[4782]: I0202 10:40:06.424754 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:06 crc kubenswrapper[4782]: I0202 10:40:06.425280 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:06 crc kubenswrapper[4782]: I0202 10:40:06.425406 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:06 crc kubenswrapper[4782]: I0202 10:40:06.425533 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:06 crc kubenswrapper[4782]: I0202 10:40:06.425629 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:06Z","lastTransitionTime":"2026-02-02T10:40:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:06 crc kubenswrapper[4782]: I0202 10:40:06.529203 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:06 crc kubenswrapper[4782]: I0202 10:40:06.529704 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:06 crc kubenswrapper[4782]: I0202 10:40:06.529845 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:06 crc kubenswrapper[4782]: I0202 10:40:06.529945 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:06 crc kubenswrapper[4782]: I0202 10:40:06.530018 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:06Z","lastTransitionTime":"2026-02-02T10:40:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:06 crc kubenswrapper[4782]: I0202 10:40:06.633445 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:06 crc kubenswrapper[4782]: I0202 10:40:06.633788 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:06 crc kubenswrapper[4782]: I0202 10:40:06.633917 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:06 crc kubenswrapper[4782]: I0202 10:40:06.634020 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:06 crc kubenswrapper[4782]: I0202 10:40:06.634126 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:06Z","lastTransitionTime":"2026-02-02T10:40:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:40:06 crc kubenswrapper[4782]: I0202 10:40:06.737279 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:06 crc kubenswrapper[4782]: I0202 10:40:06.737315 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:06 crc kubenswrapper[4782]: I0202 10:40:06.737324 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:06 crc kubenswrapper[4782]: I0202 10:40:06.737339 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:06 crc kubenswrapper[4782]: I0202 10:40:06.737348 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:06Z","lastTransitionTime":"2026-02-02T10:40:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:06 crc kubenswrapper[4782]: I0202 10:40:06.814174 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 12:47:22.92528083 +0000 UTC Feb 02 10:40:06 crc kubenswrapper[4782]: I0202 10:40:06.820568 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:40:06 crc kubenswrapper[4782]: E0202 10:40:06.820772 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:40:06 crc kubenswrapper[4782]: I0202 10:40:06.820940 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:40:06 crc kubenswrapper[4782]: I0202 10:40:06.820567 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tv4xc" Feb 02 10:40:06 crc kubenswrapper[4782]: E0202 10:40:06.821303 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tv4xc" podUID="4e23db96-3af7-4c29-b00f-5920a9431f01" Feb 02 10:40:06 crc kubenswrapper[4782]: E0202 10:40:06.821321 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
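For anyone replaying this incident, the condition the kubelet keeps writing here can be read straight back from the API with standard kubectl (assuming access to the cluster's kubeconfig):

kubectl get node crc -o jsonpath='{.status.conditions[?(@.type=="Ready")].message}'

which at this point in the log would return the same "no CNI configuration file in /etc/kubernetes/cni/net.d/" message, and

kubectl get pods -n openshift-network-diagnostics -o wide

lists the network-check pods that the pod workers in the surrounding entries keep skipping.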
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:40:06 crc kubenswrapper[4782]: I0202 10:40:06.840879 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:06 crc kubenswrapper[4782]: I0202 10:40:06.840935 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:06 crc kubenswrapper[4782]: I0202 10:40:06.840949 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:06 crc kubenswrapper[4782]: I0202 10:40:06.841015 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:06 crc kubenswrapper[4782]: I0202 10:40:06.841031 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:06Z","lastTransitionTime":"2026-02-02T10:40:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:06 crc kubenswrapper[4782]: I0202 10:40:06.944014 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:06 crc kubenswrapper[4782]: I0202 10:40:06.944070 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:06 crc kubenswrapper[4782]: I0202 10:40:06.944088 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:06 crc kubenswrapper[4782]: I0202 10:40:06.944113 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:06 crc kubenswrapper[4782]: I0202 10:40:06.944156 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:06Z","lastTransitionTime":"2026-02-02T10:40:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:40:07 crc kubenswrapper[4782]: I0202 10:40:07.046280 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:07 crc kubenswrapper[4782]: I0202 10:40:07.046672 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:07 crc kubenswrapper[4782]: I0202 10:40:07.046781 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:07 crc kubenswrapper[4782]: I0202 10:40:07.046873 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:07 crc kubenswrapper[4782]: I0202 10:40:07.046942 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:07Z","lastTransitionTime":"2026-02-02T10:40:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:07 crc kubenswrapper[4782]: I0202 10:40:07.149910 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:07 crc kubenswrapper[4782]: I0202 10:40:07.149954 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:07 crc kubenswrapper[4782]: I0202 10:40:07.149965 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:07 crc kubenswrapper[4782]: I0202 10:40:07.149984 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:07 crc kubenswrapper[4782]: I0202 10:40:07.149997 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:07Z","lastTransitionTime":"2026-02-02T10:40:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:07 crc kubenswrapper[4782]: I0202 10:40:07.253080 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:07 crc kubenswrapper[4782]: I0202 10:40:07.253116 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:07 crc kubenswrapper[4782]: I0202 10:40:07.253128 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:07 crc kubenswrapper[4782]: I0202 10:40:07.253146 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:07 crc kubenswrapper[4782]: I0202 10:40:07.253159 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:07Z","lastTransitionTime":"2026-02-02T10:40:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:40:07 crc kubenswrapper[4782]: I0202 10:40:07.355709 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:07 crc kubenswrapper[4782]: I0202 10:40:07.355748 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:07 crc kubenswrapper[4782]: I0202 10:40:07.355761 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:07 crc kubenswrapper[4782]: I0202 10:40:07.355778 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:07 crc kubenswrapper[4782]: I0202 10:40:07.355791 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:07Z","lastTransitionTime":"2026-02-02T10:40:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:07 crc kubenswrapper[4782]: I0202 10:40:07.458740 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:07 crc kubenswrapper[4782]: I0202 10:40:07.458768 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:07 crc kubenswrapper[4782]: I0202 10:40:07.458777 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:07 crc kubenswrapper[4782]: I0202 10:40:07.458791 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:07 crc kubenswrapper[4782]: I0202 10:40:07.458800 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:07Z","lastTransitionTime":"2026-02-02T10:40:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:07 crc kubenswrapper[4782]: I0202 10:40:07.562383 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:07 crc kubenswrapper[4782]: I0202 10:40:07.562416 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:07 crc kubenswrapper[4782]: I0202 10:40:07.562425 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:07 crc kubenswrapper[4782]: I0202 10:40:07.562438 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:07 crc kubenswrapper[4782]: I0202 10:40:07.562448 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:07Z","lastTransitionTime":"2026-02-02T10:40:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:40:07 crc kubenswrapper[4782]: I0202 10:40:07.664852 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:07 crc kubenswrapper[4782]: I0202 10:40:07.664887 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:07 crc kubenswrapper[4782]: I0202 10:40:07.664907 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:07 crc kubenswrapper[4782]: I0202 10:40:07.664926 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:07 crc kubenswrapper[4782]: I0202 10:40:07.664946 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:07Z","lastTransitionTime":"2026-02-02T10:40:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:07 crc kubenswrapper[4782]: I0202 10:40:07.766614 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:07 crc kubenswrapper[4782]: I0202 10:40:07.766723 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:07 crc kubenswrapper[4782]: I0202 10:40:07.766742 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:07 crc kubenswrapper[4782]: I0202 10:40:07.766766 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:07 crc kubenswrapper[4782]: I0202 10:40:07.766777 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:07Z","lastTransitionTime":"2026-02-02T10:40:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:07 crc kubenswrapper[4782]: I0202 10:40:07.814968 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 06:05:01.502008235 +0000 UTC Feb 02 10:40:07 crc kubenswrapper[4782]: I0202 10:40:07.820258 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:40:07 crc kubenswrapper[4782]: E0202 10:40:07.820395 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:40:07 crc kubenswrapper[4782]: I0202 10:40:07.869345 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:07 crc kubenswrapper[4782]: I0202 10:40:07.869412 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:07 crc kubenswrapper[4782]: I0202 10:40:07.869424 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:07 crc kubenswrapper[4782]: I0202 10:40:07.869444 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:07 crc kubenswrapper[4782]: I0202 10:40:07.869455 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:07Z","lastTransitionTime":"2026-02-02T10:40:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:07 crc kubenswrapper[4782]: I0202 10:40:07.972034 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:07 crc kubenswrapper[4782]: I0202 10:40:07.972075 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:07 crc kubenswrapper[4782]: I0202 10:40:07.972085 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:07 crc kubenswrapper[4782]: I0202 10:40:07.972131 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:07 crc kubenswrapper[4782]: I0202 10:40:07.972143 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:07Z","lastTransitionTime":"2026-02-02T10:40:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:40:08 crc kubenswrapper[4782]: I0202 10:40:08.074471 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:08 crc kubenswrapper[4782]: I0202 10:40:08.074501 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:08 crc kubenswrapper[4782]: I0202 10:40:08.074510 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:08 crc kubenswrapper[4782]: I0202 10:40:08.074523 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:08 crc kubenswrapper[4782]: I0202 10:40:08.074534 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:08Z","lastTransitionTime":"2026-02-02T10:40:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:08 crc kubenswrapper[4782]: I0202 10:40:08.176828 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:08 crc kubenswrapper[4782]: I0202 10:40:08.176904 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:08 crc kubenswrapper[4782]: I0202 10:40:08.176918 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:08 crc kubenswrapper[4782]: I0202 10:40:08.176937 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:08 crc kubenswrapper[4782]: I0202 10:40:08.176948 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:08Z","lastTransitionTime":"2026-02-02T10:40:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:08 crc kubenswrapper[4782]: I0202 10:40:08.280668 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:08 crc kubenswrapper[4782]: I0202 10:40:08.281139 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:08 crc kubenswrapper[4782]: I0202 10:40:08.281215 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:08 crc kubenswrapper[4782]: I0202 10:40:08.281891 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:08 crc kubenswrapper[4782]: I0202 10:40:08.281983 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:08Z","lastTransitionTime":"2026-02-02T10:40:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:40:08 crc kubenswrapper[4782]: I0202 10:40:08.385519 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:08 crc kubenswrapper[4782]: I0202 10:40:08.385573 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:08 crc kubenswrapper[4782]: I0202 10:40:08.385588 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:08 crc kubenswrapper[4782]: I0202 10:40:08.385609 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:08 crc kubenswrapper[4782]: I0202 10:40:08.385621 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:08Z","lastTransitionTime":"2026-02-02T10:40:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:08 crc kubenswrapper[4782]: I0202 10:40:08.489113 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:08 crc kubenswrapper[4782]: I0202 10:40:08.489176 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:08 crc kubenswrapper[4782]: I0202 10:40:08.489239 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:08 crc kubenswrapper[4782]: I0202 10:40:08.489265 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:08 crc kubenswrapper[4782]: I0202 10:40:08.489279 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:08Z","lastTransitionTime":"2026-02-02T10:40:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:08 crc kubenswrapper[4782]: I0202 10:40:08.593094 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:08 crc kubenswrapper[4782]: I0202 10:40:08.593159 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:08 crc kubenswrapper[4782]: I0202 10:40:08.593173 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:08 crc kubenswrapper[4782]: I0202 10:40:08.593191 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:08 crc kubenswrapper[4782]: I0202 10:40:08.593206 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:08Z","lastTransitionTime":"2026-02-02T10:40:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:40:08 crc kubenswrapper[4782]: I0202 10:40:08.696927 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:08 crc kubenswrapper[4782]: I0202 10:40:08.696988 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:08 crc kubenswrapper[4782]: I0202 10:40:08.697003 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:08 crc kubenswrapper[4782]: I0202 10:40:08.697033 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:08 crc kubenswrapper[4782]: I0202 10:40:08.697047 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:08Z","lastTransitionTime":"2026-02-02T10:40:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:08 crc kubenswrapper[4782]: I0202 10:40:08.786066 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:08 crc kubenswrapper[4782]: I0202 10:40:08.786134 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:08 crc kubenswrapper[4782]: I0202 10:40:08.786144 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:08 crc kubenswrapper[4782]: I0202 10:40:08.786162 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:08 crc kubenswrapper[4782]: I0202 10:40:08.786175 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:08Z","lastTransitionTime":"2026-02-02T10:40:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:40:08 crc kubenswrapper[4782]: E0202 10:40:08.802372 4782 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:40:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:40:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:40:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:40:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:40:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:40:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:40:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:40:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9f06aea5-54f4-4b11-8fec-22fbe76ec89b\\\",\\\"systemUUID\\\":\\\"b85e9547-662e-4455-bbaa-2d2f2aaad904\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:08Z is after 
2025-08-24T17:21:41Z" Feb 02 10:40:08 crc kubenswrapper[4782]: I0202 10:40:08.806542 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:08 crc kubenswrapper[4782]: I0202 10:40:08.806585 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:08 crc kubenswrapper[4782]: I0202 10:40:08.806618 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:08 crc kubenswrapper[4782]: I0202 10:40:08.806668 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:08 crc kubenswrapper[4782]: I0202 10:40:08.806680 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:08Z","lastTransitionTime":"2026-02-02T10:40:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:08 crc kubenswrapper[4782]: I0202 10:40:08.815525 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 23:26:08.861555919 +0000 UTC Feb 02 10:40:08 crc kubenswrapper[4782]: I0202 10:40:08.820791 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:40:08 crc kubenswrapper[4782]: E0202 10:40:08.820992 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:40:08 crc kubenswrapper[4782]: I0202 10:40:08.821062 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tv4xc" Feb 02 10:40:08 crc kubenswrapper[4782]: I0202 10:40:08.821249 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:40:08 crc kubenswrapper[4782]: E0202 10:40:08.821303 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tv4xc" podUID="4e23db96-3af7-4c29-b00f-5920a9431f01" Feb 02 10:40:08 crc kubenswrapper[4782]: E0202 10:40:08.821566 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:40:08 crc kubenswrapper[4782]: E0202 10:40:08.821944 4782 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:40:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:40:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:40:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:40:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:40:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:40:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:40:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:40:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9f06aea5-54f4-4b11-8fec-22fbe76ec89b\\\",\\\"systemUUID\\\":\\\"b85e9547-662e-4455-bbaa-2d2f2aaad904\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:08Z is after 
2025-08-24T17:21:41Z" Feb 02 10:40:08 crc kubenswrapper[4782]: I0202 10:40:08.827788 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:08 crc kubenswrapper[4782]: I0202 10:40:08.828018 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:08 crc kubenswrapper[4782]: I0202 10:40:08.828118 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:08 crc kubenswrapper[4782]: I0202 10:40:08.828217 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:08 crc kubenswrapper[4782]: I0202 10:40:08.828296 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:08Z","lastTransitionTime":"2026-02-02T10:40:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:08 crc kubenswrapper[4782]: E0202 10:40:08.874236 4782 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:40:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:40:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:40:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:40:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:40:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:40:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:40:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:40:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9f06aea5-54f4-4b11-8fec-22fbe76ec89b\\\",\\\"systemUUID\\\":\\\"b85e9547-662e-4455-bbaa-2d2f2aaad904\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:08Z is after 
2025-08-24T17:21:41Z" Feb 02 10:40:08 crc kubenswrapper[4782]: I0202 10:40:08.879493 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:08 crc kubenswrapper[4782]: I0202 10:40:08.879544 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:08 crc kubenswrapper[4782]: I0202 10:40:08.879552 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:08 crc kubenswrapper[4782]: I0202 10:40:08.879571 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:08 crc kubenswrapper[4782]: I0202 10:40:08.879580 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:08Z","lastTransitionTime":"2026-02-02T10:40:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:08 crc kubenswrapper[4782]: E0202 10:40:08.895274 4782 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:40:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:40:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:40:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:40:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:40:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:40:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:40:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:40:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9f06aea5-54f4-4b11-8fec-22fbe76ec89b\\\",\\\"systemUUID\\\":\\\"b85e9547-662e-4455-bbaa-2d2f2aaad904\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:08Z is after 
2025-08-24T17:21:41Z" Feb 02 10:40:08 crc kubenswrapper[4782]: I0202 10:40:08.902435 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:08 crc kubenswrapper[4782]: I0202 10:40:08.902518 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:08 crc kubenswrapper[4782]: I0202 10:40:08.902537 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:08 crc kubenswrapper[4782]: I0202 10:40:08.902568 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:08 crc kubenswrapper[4782]: I0202 10:40:08.902581 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:08Z","lastTransitionTime":"2026-02-02T10:40:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:08 crc kubenswrapper[4782]: E0202 10:40:08.917549 4782 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:40:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:40:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:40:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:40:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:40:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:40:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:40:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:40:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9f06aea5-54f4-4b11-8fec-22fbe76ec89b\\\",\\\"systemUUID\\\":\\\"b85e9547-662e-4455-bbaa-2d2f2aaad904\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:08Z is after 
2025-08-24T17:21:41Z" Feb 02 10:40:08 crc kubenswrapper[4782]: E0202 10:40:08.917738 4782 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 02 10:40:08 crc kubenswrapper[4782]: I0202 10:40:08.919469 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:08 crc kubenswrapper[4782]: I0202 10:40:08.919498 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:08 crc kubenswrapper[4782]: I0202 10:40:08.919510 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:08 crc kubenswrapper[4782]: I0202 10:40:08.919529 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:08 crc kubenswrapper[4782]: I0202 10:40:08.919541 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:08Z","lastTransitionTime":"2026-02-02T10:40:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:09 crc kubenswrapper[4782]: I0202 10:40:09.021904 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:09 crc kubenswrapper[4782]: I0202 10:40:09.021948 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:09 crc kubenswrapper[4782]: I0202 10:40:09.021959 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:09 crc kubenswrapper[4782]: I0202 10:40:09.021977 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:09 crc kubenswrapper[4782]: I0202 10:40:09.021987 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:09Z","lastTransitionTime":"2026-02-02T10:40:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Feb 02 10:40:09 crc kubenswrapper[4782]: I0202 10:40:09.124778 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:40:09 crc kubenswrapper[4782]: I0202 10:40:09.124864 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:40:09 crc kubenswrapper[4782]: I0202 10:40:09.124906 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:40:09 crc kubenswrapper[4782]: I0202 10:40:09.124928 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:40:09 crc kubenswrapper[4782]: I0202 10:40:09.124939 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:09Z","lastTransitionTime":"2026-02-02T10:40:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:40:09 crc kubenswrapper[4782]: I0202 10:40:09.228085 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:40:09 crc kubenswrapper[4782]: I0202 10:40:09.228535 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:40:09 crc kubenswrapper[4782]: I0202 10:40:09.228622 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:40:09 crc kubenswrapper[4782]: I0202 10:40:09.228750 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:40:09 crc kubenswrapper[4782]: I0202 10:40:09.228843 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:09Z","lastTransitionTime":"2026-02-02T10:40:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:40:09 crc kubenswrapper[4782]: I0202 10:40:09.331556 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:40:09 crc kubenswrapper[4782]: I0202 10:40:09.331605 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:40:09 crc kubenswrapper[4782]: I0202 10:40:09.331616 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:40:09 crc kubenswrapper[4782]: I0202 10:40:09.331662 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:40:09 crc kubenswrapper[4782]: I0202 10:40:09.331676 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:09Z","lastTransitionTime":"2026-02-02T10:40:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:40:09 crc kubenswrapper[4782]: I0202 10:40:09.434268 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:09 crc kubenswrapper[4782]: I0202 10:40:09.434322 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:09 crc kubenswrapper[4782]: I0202 10:40:09.434334 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:09 crc kubenswrapper[4782]: I0202 10:40:09.434352 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:09 crc kubenswrapper[4782]: I0202 10:40:09.434364 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:09Z","lastTransitionTime":"2026-02-02T10:40:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:09 crc kubenswrapper[4782]: I0202 10:40:09.536716 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:09 crc kubenswrapper[4782]: I0202 10:40:09.536998 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:09 crc kubenswrapper[4782]: I0202 10:40:09.537022 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:09 crc kubenswrapper[4782]: I0202 10:40:09.537040 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:09 crc kubenswrapper[4782]: I0202 10:40:09.537049 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:09Z","lastTransitionTime":"2026-02-02T10:40:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:09 crc kubenswrapper[4782]: I0202 10:40:09.639477 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:09 crc kubenswrapper[4782]: I0202 10:40:09.639523 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:09 crc kubenswrapper[4782]: I0202 10:40:09.639531 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:09 crc kubenswrapper[4782]: I0202 10:40:09.639545 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:09 crc kubenswrapper[4782]: I0202 10:40:09.639556 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:09Z","lastTransitionTime":"2026-02-02T10:40:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:40:09 crc kubenswrapper[4782]: I0202 10:40:09.743033 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:09 crc kubenswrapper[4782]: I0202 10:40:09.743091 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:09 crc kubenswrapper[4782]: I0202 10:40:09.743155 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:09 crc kubenswrapper[4782]: I0202 10:40:09.743173 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:09 crc kubenswrapper[4782]: I0202 10:40:09.743183 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:09Z","lastTransitionTime":"2026-02-02T10:40:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:09 crc kubenswrapper[4782]: I0202 10:40:09.816962 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 00:32:25.974111583 +0000 UTC Feb 02 10:40:09 crc kubenswrapper[4782]: I0202 10:40:09.820360 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:40:09 crc kubenswrapper[4782]: E0202 10:40:09.820763 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:40:09 crc kubenswrapper[4782]: I0202 10:40:09.846720 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:09 crc kubenswrapper[4782]: I0202 10:40:09.847074 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:09 crc kubenswrapper[4782]: I0202 10:40:09.847196 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:09 crc kubenswrapper[4782]: I0202 10:40:09.847350 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:09 crc kubenswrapper[4782]: I0202 10:40:09.847597 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:09Z","lastTransitionTime":"2026-02-02T10:40:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
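Has your network provider started?"}

The certificate_manager.go lines interleaved here concern a different certificate from the one failing above: the kubelet-serving certificate is valid until 2026-02-24, but its rotation deadline is recomputed with random jitter on every check, which is why consecutive occurrences of this line print different deadlines (2025-12-14 here, 2025-12-21 about a second later). Both jittered deadlines are already behind the node clock, consistent with the kubelet repeatedly coming back around to rotate. A rough sketch of that computation, assuming an issue date one year before the logged expiry and approximating the upstream jitter as a uniform draw over the 70-90% span of the certificate lifetime (the exact constants vary by release):

    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    // rotationDeadline picks a jittered point late in the certificate's
    // lifetime; it is re-randomized on every call, so repeated log lines
    // report different deadlines for the same certificate.
    func rotationDeadline(notBefore, notAfter time.Time) time.Time {
        lifetime := notAfter.Sub(notBefore)
        jittered := time.Duration((0.7 + 0.2*rand.Float64()) * float64(lifetime))
        return notBefore.Add(jittered)
    }

    func main() {
        // Assumed issue date; the expiry is taken from the log line above.
        notBefore := time.Date(2025, 2, 24, 5, 53, 3, 0, time.UTC)
        notAfter := time.Date(2026, 2, 24, 5, 53, 3, 0, time.UTC)
        for i := 0; i < 3; i++ {
            fmt.Println("rotation deadline:", rotationDeadline(notBefore, notAfter).Format(time.RFC3339))
        }
    }

Under these assumptions the two logged deadlines sit at roughly 80% and 82% of a one-year lifetime, squarely inside the jitter window.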
Feb 02 10:40:09 crc kubenswrapper[4782]: I0202 10:40:09.951651 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:40:09 crc kubenswrapper[4782]: I0202 10:40:09.951697 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:40:09 crc kubenswrapper[4782]: I0202 10:40:09.951709 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:40:09 crc kubenswrapper[4782]: I0202 10:40:09.951731 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:40:09 crc kubenswrapper[4782]: I0202 10:40:09.951747 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:09Z","lastTransitionTime":"2026-02-02T10:40:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:40:10 crc kubenswrapper[4782]: I0202 10:40:10.054947 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:40:10 crc kubenswrapper[4782]: I0202 10:40:10.054986 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:40:10 crc kubenswrapper[4782]: I0202 10:40:10.054996 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:40:10 crc kubenswrapper[4782]: I0202 10:40:10.055011 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:40:10 crc kubenswrapper[4782]: I0202 10:40:10.055023 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:10Z","lastTransitionTime":"2026-02-02T10:40:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:40:10 crc kubenswrapper[4782]: I0202 10:40:10.157867 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:40:10 crc kubenswrapper[4782]: I0202 10:40:10.158316 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:40:10 crc kubenswrapper[4782]: I0202 10:40:10.158401 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:40:10 crc kubenswrapper[4782]: I0202 10:40:10.158472 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:40:10 crc kubenswrapper[4782]: I0202 10:40:10.158543 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:10Z","lastTransitionTime":"2026-02-02T10:40:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:40:10 crc kubenswrapper[4782]: I0202 10:40:10.261702 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:10 crc kubenswrapper[4782]: I0202 10:40:10.261763 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:10 crc kubenswrapper[4782]: I0202 10:40:10.261783 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:10 crc kubenswrapper[4782]: I0202 10:40:10.261808 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:10 crc kubenswrapper[4782]: I0202 10:40:10.261826 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:10Z","lastTransitionTime":"2026-02-02T10:40:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:10 crc kubenswrapper[4782]: I0202 10:40:10.364720 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:10 crc kubenswrapper[4782]: I0202 10:40:10.365062 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:10 crc kubenswrapper[4782]: I0202 10:40:10.365303 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:10 crc kubenswrapper[4782]: I0202 10:40:10.365475 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:10 crc kubenswrapper[4782]: I0202 10:40:10.365621 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:10Z","lastTransitionTime":"2026-02-02T10:40:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:10 crc kubenswrapper[4782]: I0202 10:40:10.468148 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:10 crc kubenswrapper[4782]: I0202 10:40:10.468454 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:10 crc kubenswrapper[4782]: I0202 10:40:10.468591 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:10 crc kubenswrapper[4782]: I0202 10:40:10.468692 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:10 crc kubenswrapper[4782]: I0202 10:40:10.468852 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:10Z","lastTransitionTime":"2026-02-02T10:40:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:40:10 crc kubenswrapper[4782]: I0202 10:40:10.571908 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:10 crc kubenswrapper[4782]: I0202 10:40:10.571961 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:10 crc kubenswrapper[4782]: I0202 10:40:10.571975 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:10 crc kubenswrapper[4782]: I0202 10:40:10.571997 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:10 crc kubenswrapper[4782]: I0202 10:40:10.572015 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:10Z","lastTransitionTime":"2026-02-02T10:40:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:10 crc kubenswrapper[4782]: I0202 10:40:10.674805 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:10 crc kubenswrapper[4782]: I0202 10:40:10.674911 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:10 crc kubenswrapper[4782]: I0202 10:40:10.674921 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:10 crc kubenswrapper[4782]: I0202 10:40:10.674934 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:10 crc kubenswrapper[4782]: I0202 10:40:10.674943 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:10Z","lastTransitionTime":"2026-02-02T10:40:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:10 crc kubenswrapper[4782]: I0202 10:40:10.778448 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:10 crc kubenswrapper[4782]: I0202 10:40:10.778489 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:10 crc kubenswrapper[4782]: I0202 10:40:10.778501 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:10 crc kubenswrapper[4782]: I0202 10:40:10.778518 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:10 crc kubenswrapper[4782]: I0202 10:40:10.778530 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:10Z","lastTransitionTime":"2026-02-02T10:40:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
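Has your network provider started?"}

The entries that follow show the practical effect of the NetworkReady=false condition repeating above: pods whose sandboxes are gone (network-metrics-daemon, networking-console-plugin, network-check-target) cannot be re-synced, because the kubelet will not start a new sandbox while the runtime reports the network plugin as not ready. The runtime, in turn, flips that flag on the presence of a CNI configuration file; the node stays NotReady until a network plugin writes one. A minimal sketch of such a presence check, with the directory taken from the log messages and the extension list an assumption borrowed from common CNI config loaders:

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    // cniConfigPresent reports whether any CNI network config exists in dir.
    func cniConfigPresent(dir string) bool {
        for _, pat := range []string{"*.conf", "*.conflist", "*.json"} {
            if matches, _ := filepath.Glob(filepath.Join(dir, pat)); len(matches) > 0 {
                return true
            }
        }
        return false
    }

    func main() {
        dir := "/etc/kubernetes/cni/net.d" // path from the log messages
        if !cniConfigPresent(dir) {
            fmt.Printf("NetworkReady=false: no CNI configuration file in %s\n", dir)
            os.Exit(1)
        }
        fmt.Println("NetworkReady=true")
    }

On this node the directory is evidently empty, so every pod needing a fresh sandbox is skipped with the same "network is not ready" error below.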
Feb 02 10:40:10 crc kubenswrapper[4782]: I0202 10:40:10.818327 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 09:57:51.324902372 +0000 UTC
Feb 02 10:40:10 crc kubenswrapper[4782]: I0202 10:40:10.820739 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tv4xc"
Feb 02 10:40:10 crc kubenswrapper[4782]: E0202 10:40:10.820893 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tv4xc" podUID="4e23db96-3af7-4c29-b00f-5920a9431f01"
Feb 02 10:40:10 crc kubenswrapper[4782]: I0202 10:40:10.820964 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 10:40:10 crc kubenswrapper[4782]: E0202 10:40:10.821044 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 02 10:40:10 crc kubenswrapper[4782]: I0202 10:40:10.822533 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 10:40:10 crc kubenswrapper[4782]: E0202 10:40:10.823412 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:40:10 crc kubenswrapper[4782]: I0202 10:40:10.840664 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8lwfx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1edc5703-bb51-4f8a-9b73-68ba48a40ce8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2765f9fa77bc99e4983b0d6883a7156c960f2dce2c80845cd1e0810199c50eac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e2a2f7b1b3c3236d77b4858e19cacf5554526262f0fc5703460a169d3b26f7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e2a2f7b1b3c3236d77b4858e19cacf5554526262f0fc5703460a169d3b26f7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://3d41d28339830a5090ce01715a5bd4d985c3081932cb71594b1e928b900dc353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d41d28339830a5090ce01715a5bd4d985c3081932cb71594b1e928b900dc353\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://615aa0c2b172d7e7ab8a484a6860646491265e3ebb25f57857c45bc591d40335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://615aa0c2b172d7e7ab8a484a6860646491265e3ebb25f57857c45bc591d40335\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://143776c340b6d3345f6902124169235fcf74289d9e107b8e6bdb5023f47b73df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://143776c340b6d3345f6902124169235fcf74289d9e107b8e6bdb5023f47b73df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mount
Path\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://754a40bbe5b071214dbd49089838b58b26d7586fdd8479a6f8522ef49c03e255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://754a40bbe5b071214dbd49089838b58b26d7586fdd8479a6f8522ef49c03e255\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e2e1801b719e8d922f05eab190ff758e85d0d05f4e26f8b8a9d5812ac5ffe6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e2e1801b719e8d922f05eab190ff758e85d0d05f4e26f8b8a9d5812ac5ffe6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8lwfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:10Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:10 crc kubenswrapper[4782]: I0202 10:40:10.854722 
4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x49wn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"324c55ff-8d31-4452-bb4e-2a57fbdb23c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://025c5d7b0067cd9bfd8f87926e7ec57759b83410b2be1bfddc02029f4c8e5f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxkkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e8896767e0b6039745c672852e48a5fceb954162cac8a06257129bcc84efff3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxkkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-x49wn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": 
tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:10Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:10 crc kubenswrapper[4782]: I0202 10:40:10.868410 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-tv4xc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e23db96-3af7-4c29-b00f-5920a9431f01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpm5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpm5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:22Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-tv4xc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:10Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:10 crc kubenswrapper[4782]: I0202 10:40:10.881594 4782 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:10 crc kubenswrapper[4782]: I0202 10:40:10.881626 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:10 crc kubenswrapper[4782]: I0202 10:40:10.881635 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:10 crc kubenswrapper[4782]: I0202 10:40:10.881678 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:10 crc kubenswrapper[4782]: I0202 10:40:10.881689 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:10Z","lastTransitionTime":"2026-02-02T10:40:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:10 crc kubenswrapper[4782]: I0202 10:40:10.885737 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83b81d23-4cbf-48f2-a90d-aa5b4cb3ce45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://638971b143e4defe2f04fba48e554aff3023d52863a0eb6f36c29d394de7eeb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18f8ad22dcd04bba0dd895d584affcd359ac6221153a18ee542dee979b9efe25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:
38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3c260f298316c830b06f5e3634cd6c09bd10bbaad77b46e427eb4b039eebb53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://745ed5274a247d61c320043043077f9ecb39fa3734db15aefa07a3bbb6225c26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:10Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:10 crc kubenswrapper[4782]: I0202 10:40:10.896404 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"35774ab2-362c-466b-9f87-5e152d4c8235\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1f97a7bc0ebb9c8dca5e77de93b5ad8744a3ed0a3939e31500e0bb10648b1c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7d3a0cdcdd628fdec78799be1bb9aeab47b7566b765ba0b033b9e925ece0be6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a72caeec33753f69102774c7bb1501dd1c0f304ab8e821616a7d6748b4b6a23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d8715a950ba202dd87b57bd0b7465a0ca0648a865e89ee9bd94848c15675501\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d8715a950ba202dd87b57bd0b7465a0ca0648a865e89ee9bd94848c15675501\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:10Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:10 crc kubenswrapper[4782]: I0202 10:40:10.910759 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7919e98f-cc47-4f3c-9c53-6313850ea7b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://550747a1eaba426f3a6b264d3a5227e51b0171729134224fa76ba55d8ad47c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://362bf5c8cbdc4831ca6ceecf9323bad1217467a40b0e8dd491098c1ae4f42810\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bhdgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:10Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:10 crc kubenswrapper[4782]: I0202 10:40:10.923678 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fptzv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa0a3c57-fe47-43dd-8905-00df4cae4fb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb00c5826e8950791c82faf09cb4417f633c04908afcaed816c63b81c05aae48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6np9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fptzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:10Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:10 crc kubenswrapper[4782]: I0202 10:40:10.934626 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-thvm5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70faa63d-a86d-45aa-b6fd-81fa90436da2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb015f1ff28b0f28114d4c5d3c643fdb9af2c24d6d3c4a3f34c051677c815e3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrknp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-thvm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:10Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:10 crc kubenswrapper[4782]: I0202 10:40:10.955897 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59347d2e-80ed-46da-8b2f-87cc48c5b564\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7303c0a57d425d661744c913b418fb647290581f745f973af1507563fa70b860\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10b2c4d81a795c40e715af0855002f75e3018dc14560eb97b551186a48c52744\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://667df661cba8be98b22059905ac9fe87f5d2af093d3184cfead863474441574f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfe76e80657cb875a6dd9b9c977e6120a670fc5
cd1dad2e0346b21a467b00f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58cd2e688da72a74340b25a755164b61039f7615ccda8b24b12b2b4025f89584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:10Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:10 crc kubenswrapper[4782]: I0202 10:40:10.970718 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b6dd1dee15020005db2d8655165e09c72a90eaade00a66c0742c0980ccc669e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:10Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:10 crc kubenswrapper[4782]: I0202 10:40:10.983514 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:10 crc kubenswrapper[4782]: I0202 10:40:10.983553 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:10 crc kubenswrapper[4782]: I0202 10:40:10.983564 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:10 crc kubenswrapper[4782]: I0202 10:40:10.983581 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:10 crc kubenswrapper[4782]: I0202 10:40:10.983781 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:10Z","lastTransitionTime":"2026-02-02T10:40:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:10 crc kubenswrapper[4782]: I0202 10:40:10.985938 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aebafb5717a3d34f83e5f1ba346c51bccea6c08fb6ddc2fadc47d8e1a4df1bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://643eee6982fcc4c02b72fb6daf1a18fc5ae263ebf8ac506d1fa745d6c311b38a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d7732574532
65a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:10Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:10 crc kubenswrapper[4782]: I0202 10:40:10.998155 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://959c7dca46e7a68f4472c3aa2104c49d4e47190f0fab5c7ad2225e27e5c4585a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:10Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:11 crc 
kubenswrapper[4782]: I0202 10:40:11.020756 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2642ee4e-c16a-4e6e-9654-a67666f1bff8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7728a9bcd84ff5e2fad4910e321a0461ab043b453efe05ddc36d018dba38315a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://371a331afaa5bd7d90cd98ba3363865d54d8cac621ba37a40e51f0cddb4eb02b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6886dd3d4ca2fac733bc3cc105d17aff972828ba91
abf5373769f272e5454ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://189c170f233964d95aebd085e016304a66076971b44f050d41d755305d11743f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://540e6587db608dea5be7483bfc3a5c4d44da51ecf3e9bfe12550aa8bf5a74fac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7645944a876f5f8139960a66e91d19644df23b8bfbdb64c9d14eec7298ac150\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd
47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c03e79e9d8e50ea1ff1ec473550eb74b39c5ba1a114e03a38c7c6ceb1ca6094a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c03e79e9d8e50ea1ff1ec473550eb74b39c5ba1a114e03a38c7c6ceb1ca6094a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T10:39:43Z\\\",\\\"message\\\":\\\"{},},Conditions:[]Condition{},},}\\\\nI0202 10:39:43.908257 6438 lb_config.go:1031] Cluster endpoints for openshift-operator-lifecycle-manager/olm-operator-metrics for network=default are: map[]\\\\nI0202 10:39:43.908265 6438 services_controller.go:443] Built service openshift-operator-lifecycle-manager/olm-operator-metrics LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.5.168\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:8443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0202 10:39:43.908275 6438 services_controller.go:444] Built service openshift-operator-lifecycle-manager/olm-operator-metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0202 10:39:43.908281 6438 services_controller.go:445] Built service openshift-operator-lifecycle-manager/olm-operator-metrics LB template configs for network=default: []services.lbConfig(nil)\\\\nF0202 10:39:43.908156 6438 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller ini\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:43Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-prbrn_openshift-ovn-kubernetes(2642ee4e-c16a-4e6e-9654-a67666f1bff8)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://344eacf0d6239238cbf13b94f80d88a36589057a177a0f7a3d5629f7975c02d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-prbrn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:11Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:11 crc kubenswrapper[4782]: I0202 10:40:11.031896 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a20e66f3-5da8-4f1e-97c3-28808caf938b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ada79c9ce4369d59a46cc02abd6cf48de6fdc8fdbe39ce9111864f17c15d7b42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4740cbe92e6575bbdc497589f7ef325d88070385a35c69
f1ddf7ca0865bb2624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4740cbe92e6575bbdc497589f7ef325d88070385a35c69f1ddf7ca0865bb2624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:11Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:11 crc kubenswrapper[4782]: I0202 10:40:11.048506 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfc52d94-656d-4294-b105-0f83d22c9664\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66dc97daee138ae6c8403d57638f6bd3cc1c5a08ce7e1631129017dc7046ebc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9de866cf92cbd982802a4bfa9c7f6b9839c69efb72d01f3f52e65e2355d47ded\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4
f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a81a5434992af16888e8ed5ec9555e57dd4485dd6c396816421ad07c181dae8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd62da31b65707d98011292c190f6f44ab2e60bd1339f47cc289d0b445425b60\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b089d9f482be084f3c2be3bc369e38cef6607d91af0c82b338da5c29883f6057\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 10:39:00.270088 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:39:00.270415 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:39:00.271477 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3666386151/tls.crt::/tmp/serving-cert-3666386151/tls.key\\\\\\\"\\\\nI0202 10:39:00.760414 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:39:00.783275 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:39:00.783307 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:39:00.783330 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:39:00.783335 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 10:39:00.810370 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:39:00.810474 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810498 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810519 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 10:39:00.810539 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:39:00.810558 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:39:00.810577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 10:39:00.810783 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 10:39:00.814783 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26b0186a9067ba0a405e849b58fc2f39e93d443ddf69ebd0d4fd4877f45e805\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:11Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:11 crc kubenswrapper[4782]: I0202 10:40:11.065828 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:11Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:11 crc kubenswrapper[4782]: I0202 10:40:11.079789 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:11Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:11 crc kubenswrapper[4782]: I0202 10:40:11.087204 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:11 crc kubenswrapper[4782]: I0202 10:40:11.087278 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:11 crc kubenswrapper[4782]: I0202 10:40:11.087291 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:11 crc kubenswrapper[4782]: I0202 10:40:11.087330 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:11 crc kubenswrapper[4782]: I0202 10:40:11.087343 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:11Z","lastTransitionTime":"2026-02-02T10:40:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:40:11 crc kubenswrapper[4782]: I0202 10:40:11.099159 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:11Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:11 crc kubenswrapper[4782]: I0202 10:40:11.115595 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fsqgq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"04d9744a-e730-45b4-9f0c-bbb5b02cd311\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:40:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:40:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b95cef2b56d3accf4543313f016af02ffe4af02c759d6f688c31f7d9749e0aad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9855ee45d6787f239c2099e99544f50b4939dc9037eb5f4cfe36af9a0bed8937\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T10:40:01Z\\\",\\\"message\\\":\\\"2026-02-02T10:39:14+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_5834fa1a-51a3-468c-a754-b8eb3749cc9c\\\\n2026-02-02T10:39:14+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_5834fa1a-51a3-468c-a754-b8eb3749cc9c to /host/opt/cni/bin/\\\\n2026-02-02T10:39:15Z [verbose] multus-daemon started\\\\n2026-02-02T10:39:15Z [verbose] Readiness Indicator file check\\\\n2026-02-02T10:40:00Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:40:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7nrfs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fsqgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:11Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:11 crc kubenswrapper[4782]: I0202 10:40:11.190281 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:11 crc kubenswrapper[4782]: I0202 10:40:11.190326 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:11 crc kubenswrapper[4782]: I0202 10:40:11.190337 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:11 crc kubenswrapper[4782]: I0202 10:40:11.190356 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:11 crc kubenswrapper[4782]: I0202 10:40:11.190368 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:11Z","lastTransitionTime":"2026-02-02T10:40:11Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:40:11 crc kubenswrapper[4782]: I0202 10:40:11.819560 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-06 23:59:26.955982449 +0000 UTC
Feb 02 10:40:11 crc kubenswrapper[4782]: I0202 10:40:11.820664 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 10:40:11 crc kubenswrapper[4782]: E0202 10:40:11.821112 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 02 10:40:11 crc kubenswrapper[4782]: I0202 10:40:11.821534 4782 scope.go:117] "RemoveContainer" containerID="c03e79e9d8e50ea1ff1ec473550eb74b39c5ba1a114e03a38c7c6ceb1ca6094a"
Feb 02 10:40:12 crc kubenswrapper[4782]: I0202 10:40:12.223871 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:40:12 crc kubenswrapper[4782]: I0202 10:40:12.223937 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:40:12 crc kubenswrapper[4782]: I0202 10:40:12.223949 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:40:12 crc kubenswrapper[4782]: I0202 10:40:12.223972 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:40:12 crc kubenswrapper[4782]: I0202 10:40:12.223990 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:12Z","lastTransitionTime":"2026-02-02T10:40:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:40:12 crc kubenswrapper[4782]: I0202 10:40:12.300516 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-prbrn_2642ee4e-c16a-4e6e-9654-a67666f1bff8/ovnkube-controller/2.log" Feb 02 10:40:12 crc kubenswrapper[4782]: I0202 10:40:12.304245 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" event={"ID":"2642ee4e-c16a-4e6e-9654-a67666f1bff8","Type":"ContainerStarted","Data":"697e13df65c6182d51c322accad67b62474eb9c869cb328aa09bc10e419af952"} Feb 02 10:40:12 crc kubenswrapper[4782]: I0202 10:40:12.304800 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" Feb 02 10:40:12 crc kubenswrapper[4782]: I0202 10:40:12.325672 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b6dd1dee15020005db2d8655165e09c72a90eaade00a66c0742c0980ccc669e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:12Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:12 crc kubenswrapper[4782]: I0202 10:40:12.326513 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:12 crc kubenswrapper[4782]: I0202 10:40:12.326548 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:12 crc kubenswrapper[4782]: I0202 10:40:12.326559 4782 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:12 crc kubenswrapper[4782]: I0202 10:40:12.326573 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:12 crc kubenswrapper[4782]: I0202 10:40:12.326582 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:12Z","lastTransitionTime":"2026-02-02T10:40:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:12 crc kubenswrapper[4782]: I0202 10:40:12.351760 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aebafb5717a3d34f83e5f1ba346c51bccea6c08fb6ddc2fadc47d8e1a4df1bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://643eee6982fcc4c02b72fb6daf1a18fc5ae263ebf8ac506d1fa745d6c311b38a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acc
ess-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:12Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:12 crc kubenswrapper[4782]: I0202 10:40:12.365950 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fptzv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa0a3c57-fe47-43dd-8905-00df4cae4fb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb00c5826e8950791c82faf09cb4417f633c04908afcaed816c63b81c05aae48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6np9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fptzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:12Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:12 crc kubenswrapper[4782]: I0202 10:40:12.392371 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-thvm5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"70faa63d-a86d-45aa-b6fd-81fa90436da2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb015f1ff28b0f28114d4c5d3c643fdb9af2c24d6d3c4a3f34c051677c815e3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrknp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-thvm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:12Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:12 crc kubenswrapper[4782]: I0202 10:40:12.420169 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59347d2e-80ed-46da-8b2f-87cc48c5b564\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7303c0a57d425d661744c913b418fb647290581f745f973af1507563fa70b860\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10b2c4d81a795c40e715af0855002f75e3018dc14560eb97b551186a48c52744\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://667df661cba8be98b22059905ac9fe87f5d2af093d3184cfead863474441574f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfe76e80657cb875a6dd9b9c977e6120a670fc5
cd1dad2e0346b21a467b00f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58cd2e688da72a74340b25a755164b61039f7615ccda8b24b12b2b4025f89584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:12Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:12 crc kubenswrapper[4782]: I0202 10:40:12.429296 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:12 crc kubenswrapper[4782]: I0202 10:40:12.429347 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:12 crc kubenswrapper[4782]: I0202 10:40:12.429361 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:12 crc kubenswrapper[4782]: I0202 10:40:12.429379 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:12 crc kubenswrapper[4782]: I0202 10:40:12.429392 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:12Z","lastTransitionTime":"2026-02-02T10:40:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:40:12 crc kubenswrapper[4782]: I0202 10:40:12.442187 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfc52d94-656d-4294-b105-0f83d22c9664\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66dc97daee138ae6c8403d57638f6bd3cc1c5a08ce7e1631129017dc7046ebc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9de866cf92cbd982802a4bfa9c7f6b9839c69efb72d01f3f52e65e2355d47ded\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a81a5434992af16888e8ed5ec9555e57dd4485dd6c396816421ad07c181dae8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd62da31b65707d98011292c190f6f44ab2e60bd1339f47cc289d0b445425b60\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b089d9f482be084f3c2be3bc369e38cef6607d91af0c82b338da5c29883f6057\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 10:39:00.270088 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:39:00.270415 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:39:00.271477 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3666386151/tls.crt::/tmp/serving-cert-3666386151/tls.key\\\\\\\"\\\\nI0202 10:39:00.760414 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:39:00.783275 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:39:00.783307 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:39:00.783330 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:39:00.783335 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 10:39:00.810370 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:39:00.810474 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810498 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 10:39:00.810539 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:39:00.810558 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:39:00.810577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 10:39:00.810783 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 10:39:00.814783 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26b0186a9067ba0a405e849b58fc2f39e93d443ddf69ebd0d4fd4877f45e805\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:12Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:12 crc kubenswrapper[4782]: I0202 10:40:12.460012 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:12Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:12 crc kubenswrapper[4782]: I0202 10:40:12.476394 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://959c7dca46e7a68f4472c3aa2104c49d4e47190f0fab5c7ad2225e27e5c4585a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:12Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:12 crc kubenswrapper[4782]: I0202 10:40:12.498382 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2642ee4e-c16a-4e6e-9654-a67666f1bff8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7728a9bcd84ff5e2fad4910e321a0461ab043b453efe05ddc36d018dba38315a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://371a331afaa5bd7d90cd98ba3363865d54d8cac621ba37a40e51f0cddb4eb02b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6886dd3d4ca2fac733bc3cc105d17aff972828ba91abf5373769f272e5454ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://189c170f233964d95aebd085e016304a66076971b44f050d41d755305d11743f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://540e6587db608dea5be7483bfc3a5c4d44da51ecf3e9bfe12550aa8bf5a74fac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7645944a876f5f8139960a66e91d19644df23b8bfbdb64c9d14eec7298ac150\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://697e13df65c6182d51c322accad67b62474eb9c8
69cb328aa09bc10e419af952\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c03e79e9d8e50ea1ff1ec473550eb74b39c5ba1a114e03a38c7c6ceb1ca6094a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T10:39:43Z\\\",\\\"message\\\":\\\"{},},Conditions:[]Condition{},},}\\\\nI0202 10:39:43.908257 6438 lb_config.go:1031] Cluster endpoints for openshift-operator-lifecycle-manager/olm-operator-metrics for network=default are: map[]\\\\nI0202 10:39:43.908265 6438 services_controller.go:443] Built service openshift-operator-lifecycle-manager/olm-operator-metrics LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.5.168\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:8443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0202 10:39:43.908275 6438 services_controller.go:444] Built service openshift-operator-lifecycle-manager/olm-operator-metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0202 10:39:43.908281 6438 services_controller.go:445] Built service openshift-operator-lifecycle-manager/olm-operator-metrics LB template configs for network=default: []services.lbConfig(nil)\\\\nF0202 10:39:43.908156 6438 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller 
ini\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:43Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:40:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://344eacf0d6239238cbf13b94f80d88a36589057a177a0f7a3d5629f7975c02d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"co
ntainerID\\\":\\\"cri-o://c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-prbrn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:12Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:12 crc kubenswrapper[4782]: I0202 10:40:12.511108 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a20e66f3-5da8-4f1e-97c3-28808caf938b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ada79c9ce4369d59a46cc02abd6cf48de6fdc8fdbe39ce9111864f17c15d7b42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4740cbe92e6575bbdc497589f7ef325d88070385a35c69f1ddf7ca0865bb2624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4740cbe92e6575bbdc497589f7ef325d88070385a35c69f1ddf7ca0865bb2624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:12Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:12 crc kubenswrapper[4782]: I0202 10:40:12.525939 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:12Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:12 crc kubenswrapper[4782]: I0202 10:40:12.531717 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:12 crc kubenswrapper[4782]: I0202 10:40:12.531754 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:12 crc kubenswrapper[4782]: I0202 10:40:12.531764 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:12 crc kubenswrapper[4782]: I0202 10:40:12.531777 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:12 crc kubenswrapper[4782]: I0202 10:40:12.531787 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:12Z","lastTransitionTime":"2026-02-02T10:40:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:40:12 crc kubenswrapper[4782]: I0202 10:40:12.542987 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fsqgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04d9744a-e730-45b4-9f0c-bbb5b02cd311\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:40:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:40:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b95cef2b56d3accf4543313f016af02ffe4af02c759d6f688c31f7d9749e0aad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9855ee45d6787f239c2099e99544f50b4939dc9037eb5f4cfe36af9a0bed8937\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T10:40:01Z\\\",\\\"message\\\":\\\"2026-02-02T10:39:14+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_5834fa1a-51a3-468c-a754-b8eb3749cc9c\\\\n2026-02-02T10:39:14+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_5834fa1a-51a3-468c-a754-b8eb3749cc9c to /host/opt/cni/bin/\\\\n2026-02-02T10:39:15Z [verbose] multus-daemon started\\\\n2026-02-02T10:39:15Z [verbose] Readiness Indicator file check\\\\n2026-02-02T10:40:00Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:40:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7nrfs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fsqgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:12Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:12 crc kubenswrapper[4782]: I0202 10:40:12.557549 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:12Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:12 crc kubenswrapper[4782]: I0202 10:40:12.571328 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35774ab2-362c-466b-9f87-5e152d4c8235\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1f97a7bc0ebb9c8dca5e77de93b5ad8744a3ed0a3939e31500e0bb10648b1c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7d3a0cdcdd628fdec78799be1bb9aeab47b7566b765ba0b033b9e925ece0be6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d8
8c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a72caeec33753f69102774c7bb1501dd1c0f304ab8e821616a7d6748b4b6a23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d8715a950ba202dd87b57bd0b7465a0ca0648a865e89ee9bd94848c15675501\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d8715a950ba202dd87b57bd0b7465a0ca0648a865e89ee9bd94848c15675501\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:12Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:12 crc kubenswrapper[4782]: I0202 10:40:12.584077 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7919e98f-cc47-4f3c-9c53-6313850ea7b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://550747a1eaba426f3a6b264d3a5227e51b0171729134224fa76ba55d8ad47c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://362bf5c8cbdc4831ca6ceecf9323bad1217467a40b0e8dd491098c1ae4f42810\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bhdgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:12Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:12 crc kubenswrapper[4782]: I0202 10:40:12.601194 4782 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8lwfx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1edc5703-bb51-4f8a-9b73-68ba48a40ce8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2765f9fa77bc99e4983b0d6883a7156c960f2dce2c80845cd1e0810199c50eac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e2a2f7b1b3c3236d77b4858e19cacf5554526262f0fc5703460a169d3b26f7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e2a2f7b1b3c3236d77b4858e19cacf5554526262f0fc5703460a169d3b26f7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d41d28339830a5090ce01715a5bd4d985c3081932cb71594b1e928b900dc353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d41d28339830a5090ce01715a5bd4d985c3081932cb71594b1e928b900dc353\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://615aa0c2b172d7e7ab8a484a6860646491265e3ebb25f57857c45bc591d40335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://615aa0c2b172d7e7ab8a484a6860646491265e3ebb25f57857c45bc591d40335\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://143776c340b6d3345f6902124169235fcf74289d9e107b8e6bdb5023f47b73df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://143776c340b6d3345f6902124169235fcf74289d9e107b8e6bdb5023f47b73df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://754a40bbe5b071214dbd49089838b58b26d7586fdd8479a6f8522ef49c03e255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://754a40bbe5b071214dbd49089838b58b26d7586fdd8479a6f8522ef49c03e255\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e2e1801b719e8d922f05eab190ff758e85d0d05f4e26f8b8a9d5812ac5ffe6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e2e1801b719e8d922f05eab190ff758e85d0d05f4e26f8b8a9d5812ac5ffe6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8lwfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:12Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:12 crc kubenswrapper[4782]: I0202 10:40:12.614613 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x49wn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"324c55ff-8d31-4452-bb4e-2a57fbdb23c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://025c5d7b0067cd9bfd8f87926e7ec57759b83410b2be1bfddc02029f4c8e5f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxkkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e8896767e0b6039745c672852e48a5fceb954162cac8a06257129bcc84efff3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxkkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-x49wn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:12Z is after 2025-08-24T17:21:41Z" Feb 02 
10:40:12 crc kubenswrapper[4782]: I0202 10:40:12.630032 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-tv4xc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e23db96-3af7-4c29-b00f-5920a9431f01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpm5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpm5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:22Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-tv4xc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:12Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:12 crc kubenswrapper[4782]: I0202 10:40:12.637614 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:12 crc kubenswrapper[4782]: I0202 10:40:12.637672 4782 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:12 crc kubenswrapper[4782]: I0202 10:40:12.637690 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:12 crc kubenswrapper[4782]: I0202 10:40:12.637734 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:12 crc kubenswrapper[4782]: I0202 10:40:12.637749 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:12Z","lastTransitionTime":"2026-02-02T10:40:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:12 crc kubenswrapper[4782]: I0202 10:40:12.645427 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83b81d23-4cbf-48f2-a90d-aa5b4cb3ce45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://638971b143e4defe2f04fba48e554aff3023d52863a0eb6f36c29d394de7eeb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18f8ad22dcd04bba0dd895d584affcd359ac6221153a18ee542dee979b9efe25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3c260f298316c830b06f5e3634cd6c09bd10bbaad77b46e427eb4b039eebb53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://745ed5274a247d61c320043043077f9ecb39fa3734db15aefa07a3bbb6225c26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:12Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:12 crc kubenswrapper[4782]: I0202 10:40:12.740577 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:12 crc kubenswrapper[4782]: I0202 10:40:12.740621 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:12 crc kubenswrapper[4782]: I0202 10:40:12.740632 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:12 crc kubenswrapper[4782]: I0202 10:40:12.740681 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:12 crc kubenswrapper[4782]: I0202 10:40:12.740698 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:12Z","lastTransitionTime":"2026-02-02T10:40:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:12 crc kubenswrapper[4782]: I0202 10:40:12.823635 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:40:12 crc kubenswrapper[4782]: E0202 10:40:12.823933 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:40:12 crc kubenswrapper[4782]: I0202 10:40:12.824193 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tv4xc" Feb 02 10:40:12 crc kubenswrapper[4782]: I0202 10:40:12.824172 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 19:34:38.557072502 +0000 UTC Feb 02 10:40:12 crc kubenswrapper[4782]: I0202 10:40:12.824224 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:40:12 crc kubenswrapper[4782]: E0202 10:40:12.824487 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tv4xc" podUID="4e23db96-3af7-4c29-b00f-5920a9431f01" Feb 02 10:40:12 crc kubenswrapper[4782]: E0202 10:40:12.824705 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:40:12 crc kubenswrapper[4782]: I0202 10:40:12.843572 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:12 crc kubenswrapper[4782]: I0202 10:40:12.843618 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:12 crc kubenswrapper[4782]: I0202 10:40:12.843630 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:12 crc kubenswrapper[4782]: I0202 10:40:12.843688 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:12 crc kubenswrapper[4782]: I0202 10:40:12.843708 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:12Z","lastTransitionTime":"2026-02-02T10:40:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:40:12 crc kubenswrapper[4782]: I0202 10:40:12.947364 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:12 crc kubenswrapper[4782]: I0202 10:40:12.947419 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:12 crc kubenswrapper[4782]: I0202 10:40:12.947432 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:12 crc kubenswrapper[4782]: I0202 10:40:12.947451 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:12 crc kubenswrapper[4782]: I0202 10:40:12.947463 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:12Z","lastTransitionTime":"2026-02-02T10:40:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:13 crc kubenswrapper[4782]: I0202 10:40:13.049732 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:13 crc kubenswrapper[4782]: I0202 10:40:13.049763 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:13 crc kubenswrapper[4782]: I0202 10:40:13.049772 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:13 crc kubenswrapper[4782]: I0202 10:40:13.049783 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:13 crc kubenswrapper[4782]: I0202 10:40:13.049794 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:13Z","lastTransitionTime":"2026-02-02T10:40:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:13 crc kubenswrapper[4782]: I0202 10:40:13.152477 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:13 crc kubenswrapper[4782]: I0202 10:40:13.152520 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:13 crc kubenswrapper[4782]: I0202 10:40:13.152533 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:13 crc kubenswrapper[4782]: I0202 10:40:13.152552 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:13 crc kubenswrapper[4782]: I0202 10:40:13.152568 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:13Z","lastTransitionTime":"2026-02-02T10:40:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:40:13 crc kubenswrapper[4782]: I0202 10:40:13.254998 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:13 crc kubenswrapper[4782]: I0202 10:40:13.255040 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:13 crc kubenswrapper[4782]: I0202 10:40:13.255050 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:13 crc kubenswrapper[4782]: I0202 10:40:13.255066 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:13 crc kubenswrapper[4782]: I0202 10:40:13.255076 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:13Z","lastTransitionTime":"2026-02-02T10:40:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:13 crc kubenswrapper[4782]: I0202 10:40:13.311026 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-prbrn_2642ee4e-c16a-4e6e-9654-a67666f1bff8/ovnkube-controller/3.log" Feb 02 10:40:13 crc kubenswrapper[4782]: I0202 10:40:13.312441 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-prbrn_2642ee4e-c16a-4e6e-9654-a67666f1bff8/ovnkube-controller/2.log" Feb 02 10:40:13 crc kubenswrapper[4782]: I0202 10:40:13.317012 4782 generic.go:334] "Generic (PLEG): container finished" podID="2642ee4e-c16a-4e6e-9654-a67666f1bff8" containerID="697e13df65c6182d51c322accad67b62474eb9c869cb328aa09bc10e419af952" exitCode=1 Feb 02 10:40:13 crc kubenswrapper[4782]: I0202 10:40:13.317055 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" event={"ID":"2642ee4e-c16a-4e6e-9654-a67666f1bff8","Type":"ContainerDied","Data":"697e13df65c6182d51c322accad67b62474eb9c869cb328aa09bc10e419af952"} Feb 02 10:40:13 crc kubenswrapper[4782]: I0202 10:40:13.317121 4782 scope.go:117] "RemoveContainer" containerID="c03e79e9d8e50ea1ff1ec473550eb74b39c5ba1a114e03a38c7c6ceb1ca6094a" Feb 02 10:40:13 crc kubenswrapper[4782]: I0202 10:40:13.318245 4782 scope.go:117] "RemoveContainer" containerID="697e13df65c6182d51c322accad67b62474eb9c869cb328aa09bc10e419af952" Feb 02 10:40:13 crc kubenswrapper[4782]: E0202 10:40:13.318434 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-prbrn_openshift-ovn-kubernetes(2642ee4e-c16a-4e6e-9654-a67666f1bff8)\"" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" podUID="2642ee4e-c16a-4e6e-9654-a67666f1bff8" Feb 02 10:40:13 crc kubenswrapper[4782]: I0202 10:40:13.338295 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:13Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:13 crc kubenswrapper[4782]: I0202 10:40:13.353185 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:13Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:13 crc kubenswrapper[4782]: I0202 10:40:13.357810 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:13 crc kubenswrapper[4782]: I0202 10:40:13.357837 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:13 crc kubenswrapper[4782]: I0202 10:40:13.357848 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:13 crc kubenswrapper[4782]: I0202 10:40:13.357868 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:13 crc kubenswrapper[4782]: I0202 10:40:13.357880 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:13Z","lastTransitionTime":"2026-02-02T10:40:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:40:13 crc kubenswrapper[4782]: I0202 10:40:13.369195 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fsqgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04d9744a-e730-45b4-9f0c-bbb5b02cd311\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:40:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:40:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b95cef2b56d3accf4543313f016af02ffe4af02c759d6f688c31f7d9749e0aad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9855ee45d6787f239c2099e99544f50b4939dc9037eb5f4cfe36af9a0bed8937\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T10:40:01Z\\\",\\\"message\\\":\\\"2026-02-02T10:39:14+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_5834fa1a-51a3-468c-a754-b8eb3749cc9c\\\\n2026-02-02T10:39:14+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_5834fa1a-51a3-468c-a754-b8eb3749cc9c to /host/opt/cni/bin/\\\\n2026-02-02T10:39:15Z [verbose] multus-daemon started\\\\n2026-02-02T10:39:15Z [verbose] Readiness Indicator file check\\\\n2026-02-02T10:40:00Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:40:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7nrfs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fsqgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:13Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:13 crc kubenswrapper[4782]: I0202 10:40:13.383771 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"83b81d23-4cbf-48f2-a90d-aa5b4cb3ce45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://638971b143e4defe2f04fba48e554aff3023d52863a0eb6f36c29d394de7eeb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18f8ad22dcd04bba0dd895d584affcd359ac6221153a18ee542dee979b9efe25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3c260f298316c830b06f5e3634cd6c09bd10bbaad77b46e427eb4b039eebb53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://745ed5274a247d61c320043043077f9ecb39fa3734db15aefa07a3bbb6225c26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:13Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:13 crc kubenswrapper[4782]: I0202 10:40:13.398175 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35774ab2-362c-466b-9f87-5e152d4c8235\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1f97a7bc0ebb9c8dca5e77de93b5ad8744a3ed0a3939e31500e0bb10648b1c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7d3a0cdcdd628fdec78799be1bb9aeab47b7566b765ba0b033b9e925ece0be6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a72caeec33753f69102774c7bb1501dd1c0f304ab8e821616a7d6748b4b6a23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d8715a950ba202dd87b57bd0b7465a0ca0648a865e89ee9bd94848c15675501\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d8715a950ba202dd87b57bd0b7465a0ca0648a865e89ee9bd94848c15675501\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:13Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:13 crc kubenswrapper[4782]: I0202 10:40:13.412160 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7919e98f-cc47-4f3c-9c53-6313850ea7b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://550747a1eaba426f3a6b264d3a5227e51b0171729134224fa76ba55d8ad47c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://362bf5c8cbdc4831ca6ceecf9323bad1217467a40b0e8dd491098c1ae4f42810\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bhdgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:13Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:13 crc kubenswrapper[4782]: I0202 10:40:13.430404 4782 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8lwfx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1edc5703-bb51-4f8a-9b73-68ba48a40ce8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2765f9fa77bc99e4983b0d6883a7156c960f2dce2c80845cd1e0810199c50eac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e2a2f7b1b3c3236d77b4858e19cacf5554526262f0fc5703460a169d3b26f7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e2a2f7b1b3c3236d77b4858e19cacf5554526262f0fc5703460a169d3b26f7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d41d28339830a5090ce01715a5bd4d985c3081932cb71594b1e928b900dc353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d41d28339830a5090ce01715a5bd4d985c3081932cb71594b1e928b900dc353\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://615aa0c2b172d7e7ab8a484a6860646491265e3ebb25f57857c45bc591d40335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://615aa0c2b172d7e7ab8a484a6860646491265e3ebb25f57857c45bc591d40335\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://143776c340b6d3345f6902124169235fcf74289d9e107b8e6bdb5023f47b73df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://143776c340b6d3345f6902124169235fcf74289d9e107b8e6bdb5023f47b73df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://754a40bbe5b071214dbd49089838b58b26d7586fdd8479a6f8522ef49c03e255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://754a40bbe5b071214dbd49089838b58b26d7586fdd8479a6f8522ef49c03e255\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e2e1801b719e8d922f05eab190ff758e85d0d05f4e26f8b8a9d5812ac5ffe6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e2e1801b719e8d922f05eab190ff758e85d0d05f4e26f8b8a9d5812ac5ffe6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8lwfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:13Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:13 crc kubenswrapper[4782]: I0202 10:40:13.444003 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x49wn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"324c55ff-8d31-4452-bb4e-2a57fbdb23c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://025c5d7b0067cd9bfd8f87926e7ec57759b83410b2be1bfddc02029f4c8e5f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxkkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e8896767e0b6039745c672852e48a5fceb954162cac8a06257129bcc84efff3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxkkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-x49wn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:13Z is after 2025-08-24T17:21:41Z" Feb 02 
10:40:13 crc kubenswrapper[4782]: I0202 10:40:13.456860 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-tv4xc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e23db96-3af7-4c29-b00f-5920a9431f01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpm5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpm5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:22Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-tv4xc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:13Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:13 crc kubenswrapper[4782]: I0202 10:40:13.464727 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:13 crc kubenswrapper[4782]: I0202 10:40:13.464777 4782 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:13 crc kubenswrapper[4782]: I0202 10:40:13.464807 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:13 crc kubenswrapper[4782]: I0202 10:40:13.464826 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:13 crc kubenswrapper[4782]: I0202 10:40:13.464838 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:13Z","lastTransitionTime":"2026-02-02T10:40:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:13 crc kubenswrapper[4782]: I0202 10:40:13.479434 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59347d2e-80ed-46da-8b2f-87cc48c5b564\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7303c0a57d425d661744c913b418fb647290581f745f973af1507563fa70b860\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10b2c4d81a795c40e715af0855002f75e3018dc14560eb97b551186a48c52744\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernet
es/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://667df661cba8be98b22059905ac9fe87f5d2af093d3184cfead863474441574f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfe76e80657cb875a6dd9b9c977e6120a670fc5cd1dad2e0346b21a467b00f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58cd2e688da72a74340b25a755164b61039f7615ccda8b24b12b2b4025f89584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"r
eason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:13Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:13 crc kubenswrapper[4782]: I0202 10:40:13.496587 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b6dd1dee15020005db2d8655165e09c72a90eaade00a66c0742c0980ccc669e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:13Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:13 crc kubenswrapper[4782]: I0202 10:40:13.514790 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aebafb5717a3d34f83e5f1ba346c51bccea6c08fb6ddc2fadc47d8e1a4df1bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://643eee6982fcc4c02b72fb6daf1a18fc5ae263ebf8ac506d1fa745d6c311b38a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:13Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:13 crc kubenswrapper[4782]: I0202 10:40:13.530173 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fptzv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa0a3c57-fe47-43dd-8905-00df4cae4fb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb00c5826e8950791c82faf09cb4417f633c04908afcaed816c63b81c05aae48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6np9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fptzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:13Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:13 crc kubenswrapper[4782]: I0202 10:40:13.543678 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-thvm5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"70faa63d-a86d-45aa-b6fd-81fa90436da2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb015f1ff28b0f28114d4c5d3c643fdb9af2c24d6d3c4a3f34c051677c815e3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrknp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-thvm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:13Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:13 crc kubenswrapper[4782]: I0202 10:40:13.557589 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a20e66f3-5da8-4f1e-97c3-28808caf938b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ada79c9ce4369d59a46cc02abd6cf48de6fdc8fdbe39ce9111864f17c15d7b42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4740cbe92e6575bbdc497589f7ef325d88070385a35c69f1ddf7ca0865bb2624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4740cbe92e6575bbdc497589f7ef325d88070385a35c69f1ddf7ca0865bb2624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:13Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:13 crc kubenswrapper[4782]: I0202 10:40:13.567797 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:13 crc kubenswrapper[4782]: I0202 10:40:13.567844 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 10:40:13 crc kubenswrapper[4782]: I0202 10:40:13.567857 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:13 crc kubenswrapper[4782]: I0202 10:40:13.567878 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:13 crc kubenswrapper[4782]: I0202 10:40:13.567891 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:13Z","lastTransitionTime":"2026-02-02T10:40:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:13 crc kubenswrapper[4782]: I0202 10:40:13.574949 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfc52d94-656d-4294-b105-0f83d22c9664\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66dc97daee138ae6c8403d57638f6bd3cc1c5a08ce7e1631129017dc7046ebc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9de866cf92cbd982802a4bfa9c7f6b9839c69efb72d01f3f52e65e2355d47ded\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-po
d-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a81a5434992af16888e8ed5ec9555e57dd4485dd6c396816421ad07c181dae8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd62da31b65707d98011292c190f6f44ab2e60bd1339f47cc289d0b445425b60\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b089d9f482be084f3c2be3bc369e38cef6607d91af0c82b338da5c29883f6057\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 10:39:00.270088 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:39:00.270415 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:39:00.271477 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3666386151/tls.crt::/tmp/serving-cert-3666386151/tls.key\\\\\\\"\\\\nI0202 10:39:00.760414 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:39:00.783275 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:39:00.783307 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:39:00.783330 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:39:00.783335 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 10:39:00.810370 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:39:00.810474 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810498 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 10:39:00.810539 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:39:00.810558 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:39:00.810577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 10:39:00.810783 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery 
information is complete\\\\nF0202 10:39:00.814783 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26b0186a9067ba0a405e849b58fc2f39e93d443ddf69ebd0d4fd4877f45e805\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:13Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:13 crc kubenswrapper[4782]: I0202 10:40:13.588916 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready 
status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:13Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:13 crc kubenswrapper[4782]: I0202 10:40:13.613030 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://959c7dca46e7a68f4472c3aa2104c49d4e47190f0fab5c7ad2225e27e5c4585a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:13Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:13 crc kubenswrapper[4782]: I0202 10:40:13.636742 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2642ee4e-c16a-4e6e-9654-a67666f1bff8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7728a9bcd84ff5e2fad4910e321a0461ab043b453efe05ddc36d018dba38315a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://371a331afaa5bd7d90cd98ba3363865d54d8cac621ba37a40e51f0cddb4eb02b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6886dd3d4ca2fac733bc3cc105d17aff972828ba91abf5373769f272e5454ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://189c170f233964d95aebd085e016304a66076971b44f050d41d755305d11743f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://540e6587db608dea5be7483bfc3a5c4d44da51ecf3e9bfe12550aa8bf5a74fac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7645944a876f5f8139960a66e91d19644df23b8bfbdb64c9d14eec7298ac150\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://697e13df65c6182d51c322accad67b62474eb9c8
69cb328aa09bc10e419af952\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c03e79e9d8e50ea1ff1ec473550eb74b39c5ba1a114e03a38c7c6ceb1ca6094a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T10:39:43Z\\\",\\\"message\\\":\\\"{},},Conditions:[]Condition{},},}\\\\nI0202 10:39:43.908257 6438 lb_config.go:1031] Cluster endpoints for openshift-operator-lifecycle-manager/olm-operator-metrics for network=default are: map[]\\\\nI0202 10:39:43.908265 6438 services_controller.go:443] Built service openshift-operator-lifecycle-manager/olm-operator-metrics LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.5.168\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:8443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0202 10:39:43.908275 6438 services_controller.go:444] Built service openshift-operator-lifecycle-manager/olm-operator-metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0202 10:39:43.908281 6438 services_controller.go:445] Built service openshift-operator-lifecycle-manager/olm-operator-metrics LB template configs for network=default: []services.lbConfig(nil)\\\\nF0202 10:39:43.908156 6438 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller ini\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:43Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://697e13df65c6182d51c322accad67b62474eb9c869cb328aa09bc10e419af952\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T10:40:13Z\\\",\\\"message\\\":\\\"mers/factory.go:160\\\\nI0202 10:40:12.792477 6839 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 10:40:12.792747 6839 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 10:40:12.793003 6839 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 10:40:12.793147 6839 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 10:40:12.793807 6839 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 10:40:12.793810 6839 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0202 10:40:12.793956 6839 factory.go:656] Stopping watch factory\\\\nI0202 10:40:12.793991 6839 handler.go:208] Removed *v1.Node event handler 2\\\\nI0202 10:40:12.815740 6839 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0202 10:40:12.815893 6839 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0202 10:40:12.815971 6839 ovnkube.go:599] Stopped 
ovnkube\\\\nI0202 10:40:12.816011 6839 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0202 10:40:12.816108 6839 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:40:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://344eacf0d6239238cbf13b94f80d88a36589057a177a0f7a3d5629f7975c02d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"conta
inerID\\\":\\\"cri-o://c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-prbrn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:13Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:13 crc kubenswrapper[4782]: I0202 10:40:13.671229 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:13 crc kubenswrapper[4782]: I0202 10:40:13.671302 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:13 crc kubenswrapper[4782]: I0202 10:40:13.671314 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:13 crc kubenswrapper[4782]: I0202 10:40:13.671338 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:13 crc kubenswrapper[4782]: I0202 10:40:13.671357 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:13Z","lastTransitionTime":"2026-02-02T10:40:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:40:13 crc kubenswrapper[4782]: I0202 10:40:13.774332 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:13 crc kubenswrapper[4782]: I0202 10:40:13.774385 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:13 crc kubenswrapper[4782]: I0202 10:40:13.774398 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:13 crc kubenswrapper[4782]: I0202 10:40:13.774427 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:13 crc kubenswrapper[4782]: I0202 10:40:13.774441 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:13Z","lastTransitionTime":"2026-02-02T10:40:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:13 crc kubenswrapper[4782]: I0202 10:40:13.820496 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:40:13 crc kubenswrapper[4782]: E0202 10:40:13.820683 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:40:13 crc kubenswrapper[4782]: I0202 10:40:13.824433 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 20:23:51.531997056 +0000 UTC Feb 02 10:40:13 crc kubenswrapper[4782]: I0202 10:40:13.877253 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:13 crc kubenswrapper[4782]: I0202 10:40:13.877401 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:13 crc kubenswrapper[4782]: I0202 10:40:13.877419 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:13 crc kubenswrapper[4782]: I0202 10:40:13.877445 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:13 crc kubenswrapper[4782]: I0202 10:40:13.877465 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:13Z","lastTransitionTime":"2026-02-02T10:40:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:40:13 crc kubenswrapper[4782]: I0202 10:40:13.981102 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:13 crc kubenswrapper[4782]: I0202 10:40:13.981150 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:13 crc kubenswrapper[4782]: I0202 10:40:13.981163 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:13 crc kubenswrapper[4782]: I0202 10:40:13.981184 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:13 crc kubenswrapper[4782]: I0202 10:40:13.981198 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:13Z","lastTransitionTime":"2026-02-02T10:40:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:14 crc kubenswrapper[4782]: I0202 10:40:14.084574 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:14 crc kubenswrapper[4782]: I0202 10:40:14.084658 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:14 crc kubenswrapper[4782]: I0202 10:40:14.084678 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:14 crc kubenswrapper[4782]: I0202 10:40:14.084698 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:14 crc kubenswrapper[4782]: I0202 10:40:14.084715 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:14Z","lastTransitionTime":"2026-02-02T10:40:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:14 crc kubenswrapper[4782]: I0202 10:40:14.187842 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:14 crc kubenswrapper[4782]: I0202 10:40:14.187909 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:14 crc kubenswrapper[4782]: I0202 10:40:14.187932 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:14 crc kubenswrapper[4782]: I0202 10:40:14.187955 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:14 crc kubenswrapper[4782]: I0202 10:40:14.187970 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:14Z","lastTransitionTime":"2026-02-02T10:40:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:40:14 crc kubenswrapper[4782]: I0202 10:40:14.291120 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:14 crc kubenswrapper[4782]: I0202 10:40:14.291154 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:14 crc kubenswrapper[4782]: I0202 10:40:14.291165 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:14 crc kubenswrapper[4782]: I0202 10:40:14.291182 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:14 crc kubenswrapper[4782]: I0202 10:40:14.291194 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:14Z","lastTransitionTime":"2026-02-02T10:40:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:14 crc kubenswrapper[4782]: I0202 10:40:14.322488 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-prbrn_2642ee4e-c16a-4e6e-9654-a67666f1bff8/ovnkube-controller/3.log" Feb 02 10:40:14 crc kubenswrapper[4782]: I0202 10:40:14.334735 4782 scope.go:117] "RemoveContainer" containerID="697e13df65c6182d51c322accad67b62474eb9c869cb328aa09bc10e419af952" Feb 02 10:40:14 crc kubenswrapper[4782]: E0202 10:40:14.335675 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-prbrn_openshift-ovn-kubernetes(2642ee4e-c16a-4e6e-9654-a67666f1bff8)\"" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" podUID="2642ee4e-c16a-4e6e-9654-a67666f1bff8" Feb 02 10:40:14 crc kubenswrapper[4782]: I0202 10:40:14.350071 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b6dd1dee15020005db2d8655165e09c72a90eaade00a66c0742c0980ccc669e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:14Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:14 crc kubenswrapper[4782]: I0202 10:40:14.364015 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aebafb5717a3d34f83e5f1ba346c51bccea6c08fb6ddc2fadc47d8e1a4df1bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://643eee6982fcc4c02b72fb6daf1a18fc5ae263ebf8ac506d1fa745d6c311b38a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:14Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:14 crc kubenswrapper[4782]: I0202 10:40:14.375068 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fptzv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa0a3c57-fe47-43dd-8905-00df4cae4fb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb00c5826e8950791c82faf09cb4417f633c04908afcaed816c63b81c05aae48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6np9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fptzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:14Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:14 crc kubenswrapper[4782]: I0202 10:40:14.387317 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-thvm5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"70faa63d-a86d-45aa-b6fd-81fa90436da2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb015f1ff28b0f28114d4c5d3c643fdb9af2c24d6d3c4a3f34c051677c815e3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrknp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-thvm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:14Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:14 crc kubenswrapper[4782]: I0202 10:40:14.393030 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:14 crc kubenswrapper[4782]: I0202 10:40:14.393115 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:14 crc kubenswrapper[4782]: I0202 10:40:14.393128 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:14 crc kubenswrapper[4782]: I0202 10:40:14.393144 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:14 crc kubenswrapper[4782]: I0202 10:40:14.393156 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:14Z","lastTransitionTime":"2026-02-02T10:40:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:14 crc kubenswrapper[4782]: I0202 10:40:14.412728 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59347d2e-80ed-46da-8b2f-87cc48c5b564\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7303c0a57d425d661744c913b418fb647290581f745f973af1507563fa70b860\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10b2c4d81a795c40e715af0855002f75e3018dc14560eb97b551186a48c52744\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://667df661cba8be98b22059905ac9fe87f5d2af093d3184cfead863474441574f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":
0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfe76e80657cb875a6dd9b9c977e6120a670fc5cd1dad2e0346b21a467b00f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58cd2e688da72a74340b25a755164b61039f7615ccda8b24b12b2b4025f89584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termi
nated\\\":{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:14Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:14 crc kubenswrapper[4782]: I0202 10:40:14.428657 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfc52d94-656d-4294-b105-0f83d22c9664\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66dc97daee138ae6c8403d57638f6bd3cc1c5a08ce7e1631129017dc7046ebc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9de866cf92cbd982802a4bfa9c7f6b9839c69efb72d01f3f52e65e2355d47ded\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a81a5434992af16888e8ed5ec9555e57dd4485dd6c396816421ad07c181dae8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd62da31b65707d98011292c190f6f44ab2e60bd1339f47cc289d0b445425b60\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b089d9f482be084f3c2be3bc369e38cef6607d91af0c82b338da5c29883f6057\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 10:39:00.270088 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:39:00.270415 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:39:00.271477 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3666386151/tls.crt::/tmp/serving-cert-3666386151/tls.key\\\\\\\"\\\\nI0202 10:39:00.760414 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:39:00.783275 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:39:00.783307 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:39:00.783330 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:39:00.783335 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 10:39:00.810370 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:39:00.810474 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810498 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 10:39:00.810539 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:39:00.810558 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:39:00.810577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 10:39:00.810783 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 10:39:00.814783 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26b0186a9067ba0a405e849b58fc2f39e93d443ddf69ebd0d4fd4877f45e805\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:14Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:14 crc kubenswrapper[4782]: I0202 10:40:14.442836 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:14Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:14 crc kubenswrapper[4782]: I0202 10:40:14.453526 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://959c7dca46e7a68f4472c3aa2104c49d4e47190f0fab5c7ad2225e27e5c4585a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:14Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:14 crc kubenswrapper[4782]: I0202 10:40:14.469733 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2642ee4e-c16a-4e6e-9654-a67666f1bff8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7728a9bcd84ff5e2fad4910e321a0461ab043b453efe05ddc36d018dba38315a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://371a331afaa5bd7d90cd98ba3363865d54d8cac621ba37a40e51f0cddb4eb02b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6886dd3d4ca2fac733bc3cc105d17aff972828ba91abf5373769f272e5454ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://189c170f233964d95aebd085e016304a66076971b44f050d41d755305d11743f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://540e6587db608dea5be7483bfc3a5c4d44da51ecf3e9bfe12550aa8bf5a74fac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7645944a876f5f8139960a66e91d19644df23b8bfbdb64c9d14eec7298ac150\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://697e13df65c6182d51c322accad67b62474eb9c8
69cb328aa09bc10e419af952\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://697e13df65c6182d51c322accad67b62474eb9c869cb328aa09bc10e419af952\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T10:40:13Z\\\",\\\"message\\\":\\\"mers/factory.go:160\\\\nI0202 10:40:12.792477 6839 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 10:40:12.792747 6839 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 10:40:12.793003 6839 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 10:40:12.793147 6839 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 10:40:12.793807 6839 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 10:40:12.793810 6839 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0202 10:40:12.793956 6839 factory.go:656] Stopping watch factory\\\\nI0202 10:40:12.793991 6839 handler.go:208] Removed *v1.Node event handler 2\\\\nI0202 10:40:12.815740 6839 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0202 10:40:12.815893 6839 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0202 10:40:12.815971 6839 ovnkube.go:599] Stopped ovnkube\\\\nI0202 10:40:12.816011 6839 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0202 10:40:12.816108 6839 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:40:11Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-prbrn_openshift-ovn-kubernetes(2642ee4e-c16a-4e6e-9654-a67666f1bff8)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://344eacf0d6239238cbf13b94f80d88a36589057a177a0f7a3d5629f7975c02d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-prbrn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:14Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:14 crc kubenswrapper[4782]: I0202 10:40:14.478823 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a20e66f3-5da8-4f1e-97c3-28808caf938b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ada79c9ce4369d59a46cc02abd6cf48de6fdc8fdbe39ce9111864f17c15d7b42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4740cbe92e6575bbdc497589f7ef325d88070385a35c69
f1ddf7ca0865bb2624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4740cbe92e6575bbdc497589f7ef325d88070385a35c69f1ddf7ca0865bb2624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:14Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:14 crc kubenswrapper[4782]: I0202 10:40:14.489215 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:14Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:14 crc kubenswrapper[4782]: I0202 10:40:14.495732 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:14 crc kubenswrapper[4782]: I0202 10:40:14.495779 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:14 crc kubenswrapper[4782]: I0202 10:40:14.495791 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:14 crc kubenswrapper[4782]: I0202 10:40:14.495806 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:14 crc kubenswrapper[4782]: I0202 10:40:14.495816 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:14Z","lastTransitionTime":"2026-02-02T10:40:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:40:14 crc kubenswrapper[4782]: I0202 10:40:14.502143 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fsqgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04d9744a-e730-45b4-9f0c-bbb5b02cd311\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:40:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:40:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b95cef2b56d3accf4543313f016af02ffe4af02c759d6f688c31f7d9749e0aad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9855ee45d6787f239c2099e99544f50b4939dc9037eb5f4cfe36af9a0bed8937\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T10:40:01Z\\\",\\\"message\\\":\\\"2026-02-02T10:39:14+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_5834fa1a-51a3-468c-a754-b8eb3749cc9c\\\\n2026-02-02T10:39:14+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_5834fa1a-51a3-468c-a754-b8eb3749cc9c to /host/opt/cni/bin/\\\\n2026-02-02T10:39:15Z [verbose] multus-daemon started\\\\n2026-02-02T10:39:15Z [verbose] Readiness Indicator file check\\\\n2026-02-02T10:40:00Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:40:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7nrfs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fsqgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:14Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:14 crc kubenswrapper[4782]: I0202 10:40:14.513429 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:14Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:14 crc kubenswrapper[4782]: I0202 10:40:14.523847 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35774ab2-362c-466b-9f87-5e152d4c8235\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1f97a7bc0ebb9c8dca5e77de93b5ad8744a3ed0a3939e31500e0bb10648b1c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7d3a0cdcdd628fdec78799be1bb9aeab47b7566b765ba0b033b9e925ece0be6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d8
8c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a72caeec33753f69102774c7bb1501dd1c0f304ab8e821616a7d6748b4b6a23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d8715a950ba202dd87b57bd0b7465a0ca0648a865e89ee9bd94848c15675501\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d8715a950ba202dd87b57bd0b7465a0ca0648a865e89ee9bd94848c15675501\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:14Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:14 crc kubenswrapper[4782]: I0202 10:40:14.535259 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7919e98f-cc47-4f3c-9c53-6313850ea7b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://550747a1eaba426f3a6b264d3a5227e51b0171729134224fa76ba55d8ad47c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://362bf5c8cbdc4831ca6ceecf9323bad1217467a40b0e8dd491098c1ae4f42810\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bhdgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:14Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:14 crc kubenswrapper[4782]: I0202 10:40:14.548569 4782 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8lwfx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1edc5703-bb51-4f8a-9b73-68ba48a40ce8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2765f9fa77bc99e4983b0d6883a7156c960f2dce2c80845cd1e0810199c50eac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e2a2f7b1b3c3236d77b4858e19cacf5554526262f0fc5703460a169d3b26f7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e2a2f7b1b3c3236d77b4858e19cacf5554526262f0fc5703460a169d3b26f7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d41d28339830a5090ce01715a5bd4d985c3081932cb71594b1e928b900dc353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d41d28339830a5090ce01715a5bd4d985c3081932cb71594b1e928b900dc353\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://615aa0c2b172d7e7ab8a484a6860646491265e3ebb25f57857c45bc591d40335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://615aa0c2b172d7e7ab8a484a6860646491265e3ebb25f57857c45bc591d40335\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://143776c340b6d3345f6902124169235fcf74289d9e107b8e6bdb5023f47b73df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://143776c340b6d3345f6902124169235fcf74289d9e107b8e6bdb5023f47b73df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://754a40bbe5b071214dbd49089838b58b26d7586fdd8479a6f8522ef49c03e255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://754a40bbe5b071214dbd49089838b58b26d7586fdd8479a6f8522ef49c03e255\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e2e1801b719e8d922f05eab190ff758e85d0d05f4e26f8b8a9d5812ac5ffe6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e2e1801b719e8d922f05eab190ff758e85d0d05f4e26f8b8a9d5812ac5ffe6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8lwfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:14Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:14 crc kubenswrapper[4782]: I0202 10:40:14.562760 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x49wn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"324c55ff-8d31-4452-bb4e-2a57fbdb23c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://025c5d7b0067cd9bfd8f87926e7ec57759b83410b2be1bfddc02029f4c8e5f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxkkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e8896767e0b6039745c672852e48a5fceb954162cac8a06257129bcc84efff3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxkkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-x49wn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:14Z is after 2025-08-24T17:21:41Z" Feb 02 
10:40:14 crc kubenswrapper[4782]: I0202 10:40:14.574416 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-tv4xc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e23db96-3af7-4c29-b00f-5920a9431f01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpm5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpm5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:22Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-tv4xc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:14Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:14 crc kubenswrapper[4782]: I0202 10:40:14.587720 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"83b81d23-4cbf-48f2-a90d-aa5b4cb3ce45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://638971b143e4defe2f04fba48e554aff3023d52863a0eb6f36c29d394de7eeb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18f8ad22dcd04bba0dd895d584affcd359ac6221153a18ee542dee979b9efe25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3c260f298316c830b06f5e3634cd6c09bd10bbaad77b46e427eb4b039eebb53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://745ed5274a247d61c320043043077f9ecb39fa3734db15aefa07a3bbb6225c26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:14Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:14 crc kubenswrapper[4782]: I0202 10:40:14.598686 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:14 crc kubenswrapper[4782]: I0202 10:40:14.598868 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:14 crc kubenswrapper[4782]: I0202 10:40:14.598963 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:14 crc kubenswrapper[4782]: I0202 10:40:14.599030 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:14 crc kubenswrapper[4782]: I0202 10:40:14.599087 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:14Z","lastTransitionTime":"2026-02-02T10:40:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
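Has your network provider started?"}

The three "Failed to update status for pod" entries above share one root cause: every pod status patch must pass the pod.network-node-identity.openshift.io webhook at https://127.0.0.1:9743, and that webhook serves a certificate whose NotAfter of 2025-08-24T17:21:41Z is months behind the node's clock of 2026-02-02T10:40:14Z. A minimal sketch of the validity-window check the TLS handshake enforces, using only the Go standard library (the certificate path is a placeholder, not a path recorded in this log):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func main() {
        // Placeholder path: point this at the webhook's serving certificate.
        data, err := os.ReadFile("/path/to/serving-cert.pem")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            fmt.Fprintln(os.Stderr, "no PEM block found")
            os.Exit(1)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        now := time.Now()
        fmt.Printf("NotBefore: %s\nNotAfter:  %s\n", cert.NotBefore, cert.NotAfter)
        // The condition the handshake reports as "certificate has expired
        // or is not yet valid".
        if now.Before(cert.NotBefore) || now.After(cert.NotAfter) {
            fmt.Println("certificate is expired or not yet valid")
        }
    }

The check is purely clock-based, so a node whose time sits far past the certificate's lifetime, as here, rejects an otherwise intact webhook; openssl x509 -noout -dates against the same file would print the identical window.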
Feb 02 10:40:14 crc kubenswrapper[4782]: I0202 10:40:14.702384 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:40:14 crc kubenswrapper[4782]: I0202 10:40:14.702437 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:40:14 crc kubenswrapper[4782]: I0202 10:40:14.702450 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:40:14 crc kubenswrapper[4782]: I0202 10:40:14.702470 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:40:14 crc kubenswrapper[4782]: I0202 10:40:14.702482 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:14Z","lastTransitionTime":"2026-02-02T10:40:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:40:14 crc kubenswrapper[4782]: I0202 10:40:14.805462 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:40:14 crc kubenswrapper[4782]: I0202 10:40:14.805720 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:40:14 crc kubenswrapper[4782]: I0202 10:40:14.805802 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:40:14 crc kubenswrapper[4782]: I0202 10:40:14.805908 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:40:14 crc kubenswrapper[4782]: I0202 10:40:14.805979 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:14Z","lastTransitionTime":"2026-02-02T10:40:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:40:14 crc kubenswrapper[4782]: I0202 10:40:14.821887 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 10:40:14 crc kubenswrapper[4782]: E0202 10:40:14.822022 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 02 10:40:14 crc kubenswrapper[4782]: I0202 10:40:14.822093 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-tv4xc" Feb 02 10:40:14 crc kubenswrapper[4782]: E0202 10:40:14.822141 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tv4xc" podUID="4e23db96-3af7-4c29-b00f-5920a9431f01" Feb 02 10:40:14 crc kubenswrapper[4782]: I0202 10:40:14.822337 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:40:14 crc kubenswrapper[4782]: E0202 10:40:14.822405 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:40:14 crc kubenswrapper[4782]: I0202 10:40:14.824726 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 04:46:56.953021749 +0000 UTC Feb 02 10:40:14 crc kubenswrapper[4782]: I0202 10:40:14.907902 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:14 crc kubenswrapper[4782]: I0202 10:40:14.907958 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:14 crc kubenswrapper[4782]: I0202 10:40:14.907969 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:14 crc kubenswrapper[4782]: I0202 10:40:14.907983 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:14 crc kubenswrapper[4782]: I0202 10:40:14.907994 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:14Z","lastTransitionTime":"2026-02-02T10:40:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:40:15 crc kubenswrapper[4782]: I0202 10:40:15.010774 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:15 crc kubenswrapper[4782]: I0202 10:40:15.010817 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:15 crc kubenswrapper[4782]: I0202 10:40:15.010829 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:15 crc kubenswrapper[4782]: I0202 10:40:15.010846 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:15 crc kubenswrapper[4782]: I0202 10:40:15.010858 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:15Z","lastTransitionTime":"2026-02-02T10:40:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:15 crc kubenswrapper[4782]: I0202 10:40:15.113762 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:15 crc kubenswrapper[4782]: I0202 10:40:15.113813 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:15 crc kubenswrapper[4782]: I0202 10:40:15.113823 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:15 crc kubenswrapper[4782]: I0202 10:40:15.113844 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:15 crc kubenswrapper[4782]: I0202 10:40:15.113858 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:15Z","lastTransitionTime":"2026-02-02T10:40:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:15 crc kubenswrapper[4782]: I0202 10:40:15.218181 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:15 crc kubenswrapper[4782]: I0202 10:40:15.218527 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:15 crc kubenswrapper[4782]: I0202 10:40:15.218686 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:15 crc kubenswrapper[4782]: I0202 10:40:15.218890 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:15 crc kubenswrapper[4782]: I0202 10:40:15.219014 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:15Z","lastTransitionTime":"2026-02-02T10:40:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:40:15 crc kubenswrapper[4782]: I0202 10:40:15.321522 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:15 crc kubenswrapper[4782]: I0202 10:40:15.321560 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:15 crc kubenswrapper[4782]: I0202 10:40:15.321572 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:15 crc kubenswrapper[4782]: I0202 10:40:15.321586 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:15 crc kubenswrapper[4782]: I0202 10:40:15.321598 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:15Z","lastTransitionTime":"2026-02-02T10:40:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:15 crc kubenswrapper[4782]: I0202 10:40:15.423813 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:15 crc kubenswrapper[4782]: I0202 10:40:15.423849 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:15 crc kubenswrapper[4782]: I0202 10:40:15.423859 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:15 crc kubenswrapper[4782]: I0202 10:40:15.423872 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:15 crc kubenswrapper[4782]: I0202 10:40:15.423881 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:15Z","lastTransitionTime":"2026-02-02T10:40:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:15 crc kubenswrapper[4782]: I0202 10:40:15.581098 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:15 crc kubenswrapper[4782]: I0202 10:40:15.581131 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:15 crc kubenswrapper[4782]: I0202 10:40:15.581143 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:15 crc kubenswrapper[4782]: I0202 10:40:15.581159 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:15 crc kubenswrapper[4782]: I0202 10:40:15.581171 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:15Z","lastTransitionTime":"2026-02-02T10:40:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
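Has your network provider started?"}

Alongside the webhook failures, the kubelet republishes Ready=False every cycle because the container runtime reports no CNI configuration under /etc/kubernetes/cni/net.d/. A rough reproduction of that readiness probe is to scan the directory for network config files the way libcni-style loaders do; the extension list below (.conf, .conflist, .json) is an assumption for illustration, not something taken from this log:

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func main() {
        // Directory name taken from the log message itself.
        dir := "/etc/kubernetes/cni/net.d"
        entries, err := os.ReadDir(dir)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        found := 0
        for _, e := range entries {
            if e.IsDir() {
                continue
            }
            // Assumed extension list; real loaders may differ slightly.
            switch filepath.Ext(e.Name()) {
            case ".conf", ".conflist", ".json":
                fmt.Println("CNI config:", filepath.Join(dir, e.Name()))
                found++
            }
        }
        if found == 0 {
            // The state this log keeps reporting: NetworkReady=false.
            fmt.Println("no CNI configuration found; network plugin not ready")
        }
    }

Until the network operator writes a config file into that directory, every pod that still needs a sandbox (the util.go:30 and pod_workers.go:1301 pairs in this log) stays in the same retry loop.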
Feb 02 10:40:15 crc kubenswrapper[4782]: I0202 10:40:15.683352 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:40:15 crc kubenswrapper[4782]: I0202 10:40:15.683617 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:40:15 crc kubenswrapper[4782]: I0202 10:40:15.683719 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:40:15 crc kubenswrapper[4782]: I0202 10:40:15.683794 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:40:15 crc kubenswrapper[4782]: I0202 10:40:15.683874 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:15Z","lastTransitionTime":"2026-02-02T10:40:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:40:15 crc kubenswrapper[4782]: I0202 10:40:15.785894 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:40:15 crc kubenswrapper[4782]: I0202 10:40:15.785931 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:40:15 crc kubenswrapper[4782]: I0202 10:40:15.785939 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:40:15 crc kubenswrapper[4782]: I0202 10:40:15.785952 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:40:15 crc kubenswrapper[4782]: I0202 10:40:15.785966 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:15Z","lastTransitionTime":"2026-02-02T10:40:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:40:15 crc kubenswrapper[4782]: I0202 10:40:15.820609 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 10:40:15 crc kubenswrapper[4782]: E0202 10:40:15.820763 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:40:15 crc kubenswrapper[4782]: I0202 10:40:15.825776 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 00:40:55.111770736 +0000 UTC Feb 02 10:40:15 crc kubenswrapper[4782]: I0202 10:40:15.888133 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:15 crc kubenswrapper[4782]: I0202 10:40:15.888195 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:15 crc kubenswrapper[4782]: I0202 10:40:15.888207 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:15 crc kubenswrapper[4782]: I0202 10:40:15.888246 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:15 crc kubenswrapper[4782]: I0202 10:40:15.888258 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:15Z","lastTransitionTime":"2026-02-02T10:40:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:15 crc kubenswrapper[4782]: I0202 10:40:15.992114 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:15 crc kubenswrapper[4782]: I0202 10:40:15.992149 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:15 crc kubenswrapper[4782]: I0202 10:40:15.992158 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:15 crc kubenswrapper[4782]: I0202 10:40:15.992174 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:15 crc kubenswrapper[4782]: I0202 10:40:15.992183 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:15Z","lastTransitionTime":"2026-02-02T10:40:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:40:16 crc kubenswrapper[4782]: I0202 10:40:16.096051 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:16 crc kubenswrapper[4782]: I0202 10:40:16.096103 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:16 crc kubenswrapper[4782]: I0202 10:40:16.096114 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:16 crc kubenswrapper[4782]: I0202 10:40:16.096136 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:16 crc kubenswrapper[4782]: I0202 10:40:16.096149 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:16Z","lastTransitionTime":"2026-02-02T10:40:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:16 crc kubenswrapper[4782]: I0202 10:40:16.198525 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:16 crc kubenswrapper[4782]: I0202 10:40:16.198568 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:16 crc kubenswrapper[4782]: I0202 10:40:16.198579 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:16 crc kubenswrapper[4782]: I0202 10:40:16.198595 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:16 crc kubenswrapper[4782]: I0202 10:40:16.198606 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:16Z","lastTransitionTime":"2026-02-02T10:40:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:16 crc kubenswrapper[4782]: I0202 10:40:16.304552 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:16 crc kubenswrapper[4782]: I0202 10:40:16.304605 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:16 crc kubenswrapper[4782]: I0202 10:40:16.304618 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:16 crc kubenswrapper[4782]: I0202 10:40:16.304660 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:16 crc kubenswrapper[4782]: I0202 10:40:16.304674 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:16Z","lastTransitionTime":"2026-02-02T10:40:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:40:16 crc kubenswrapper[4782]: I0202 10:40:16.407967 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:16 crc kubenswrapper[4782]: I0202 10:40:16.408021 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:16 crc kubenswrapper[4782]: I0202 10:40:16.408034 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:16 crc kubenswrapper[4782]: I0202 10:40:16.408057 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:16 crc kubenswrapper[4782]: I0202 10:40:16.408074 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:16Z","lastTransitionTime":"2026-02-02T10:40:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:16 crc kubenswrapper[4782]: I0202 10:40:16.510371 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:16 crc kubenswrapper[4782]: I0202 10:40:16.510444 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:16 crc kubenswrapper[4782]: I0202 10:40:16.510458 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:16 crc kubenswrapper[4782]: I0202 10:40:16.510478 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:16 crc kubenswrapper[4782]: I0202 10:40:16.510493 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:16Z","lastTransitionTime":"2026-02-02T10:40:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:16 crc kubenswrapper[4782]: I0202 10:40:16.614389 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:16 crc kubenswrapper[4782]: I0202 10:40:16.614444 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:16 crc kubenswrapper[4782]: I0202 10:40:16.614457 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:16 crc kubenswrapper[4782]: I0202 10:40:16.614477 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:16 crc kubenswrapper[4782]: I0202 10:40:16.614490 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:16Z","lastTransitionTime":"2026-02-02T10:40:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:40:16 crc kubenswrapper[4782]: I0202 10:40:16.718188 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:16 crc kubenswrapper[4782]: I0202 10:40:16.718236 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:16 crc kubenswrapper[4782]: I0202 10:40:16.718247 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:16 crc kubenswrapper[4782]: I0202 10:40:16.718268 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:16 crc kubenswrapper[4782]: I0202 10:40:16.718284 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:16Z","lastTransitionTime":"2026-02-02T10:40:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:16 crc kubenswrapper[4782]: I0202 10:40:16.820865 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tv4xc" Feb 02 10:40:16 crc kubenswrapper[4782]: I0202 10:40:16.820927 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:40:16 crc kubenswrapper[4782]: I0202 10:40:16.820865 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:40:16 crc kubenswrapper[4782]: E0202 10:40:16.821122 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:40:16 crc kubenswrapper[4782]: E0202 10:40:16.821122 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tv4xc" podUID="4e23db96-3af7-4c29-b00f-5920a9431f01" Feb 02 10:40:16 crc kubenswrapper[4782]: E0202 10:40:16.821188 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:40:16 crc kubenswrapper[4782]: I0202 10:40:16.821494 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:16 crc kubenswrapper[4782]: I0202 10:40:16.821524 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:16 crc kubenswrapper[4782]: I0202 10:40:16.821533 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:16 crc kubenswrapper[4782]: I0202 10:40:16.821553 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:16 crc kubenswrapper[4782]: I0202 10:40:16.821564 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:16Z","lastTransitionTime":"2026-02-02T10:40:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:16 crc kubenswrapper[4782]: I0202 10:40:16.826439 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 10:29:03.705135415 +0000 UTC Feb 02 10:40:16 crc kubenswrapper[4782]: I0202 10:40:16.924323 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:16 crc kubenswrapper[4782]: I0202 10:40:16.924364 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:16 crc kubenswrapper[4782]: I0202 10:40:16.924374 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:16 crc kubenswrapper[4782]: I0202 10:40:16.924391 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:16 crc kubenswrapper[4782]: I0202 10:40:16.924406 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:16Z","lastTransitionTime":"2026-02-02T10:40:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:40:17 crc kubenswrapper[4782]: I0202 10:40:17.027757 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:17 crc kubenswrapper[4782]: I0202 10:40:17.027815 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:17 crc kubenswrapper[4782]: I0202 10:40:17.027856 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:17 crc kubenswrapper[4782]: I0202 10:40:17.027872 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:17 crc kubenswrapper[4782]: I0202 10:40:17.027882 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:17Z","lastTransitionTime":"2026-02-02T10:40:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:17 crc kubenswrapper[4782]: I0202 10:40:17.131428 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:17 crc kubenswrapper[4782]: I0202 10:40:17.131497 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:17 crc kubenswrapper[4782]: I0202 10:40:17.131521 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:17 crc kubenswrapper[4782]: I0202 10:40:17.131548 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:17 crc kubenswrapper[4782]: I0202 10:40:17.131566 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:17Z","lastTransitionTime":"2026-02-02T10:40:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:17 crc kubenswrapper[4782]: I0202 10:40:17.234750 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:17 crc kubenswrapper[4782]: I0202 10:40:17.234804 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:17 crc kubenswrapper[4782]: I0202 10:40:17.234817 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:17 crc kubenswrapper[4782]: I0202 10:40:17.234834 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:17 crc kubenswrapper[4782]: I0202 10:40:17.234846 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:17Z","lastTransitionTime":"2026-02-02T10:40:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:40:17 crc kubenswrapper[4782]: I0202 10:40:17.337322 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:17 crc kubenswrapper[4782]: I0202 10:40:17.337387 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:17 crc kubenswrapper[4782]: I0202 10:40:17.337404 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:17 crc kubenswrapper[4782]: I0202 10:40:17.337428 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:17 crc kubenswrapper[4782]: I0202 10:40:17.337444 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:17Z","lastTransitionTime":"2026-02-02T10:40:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:17 crc kubenswrapper[4782]: I0202 10:40:17.441623 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:17 crc kubenswrapper[4782]: I0202 10:40:17.441710 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:17 crc kubenswrapper[4782]: I0202 10:40:17.441729 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:17 crc kubenswrapper[4782]: I0202 10:40:17.441753 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:17 crc kubenswrapper[4782]: I0202 10:40:17.441776 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:17Z","lastTransitionTime":"2026-02-02T10:40:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:17 crc kubenswrapper[4782]: I0202 10:40:17.544427 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:17 crc kubenswrapper[4782]: I0202 10:40:17.544480 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:17 crc kubenswrapper[4782]: I0202 10:40:17.544492 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:17 crc kubenswrapper[4782]: I0202 10:40:17.544509 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:17 crc kubenswrapper[4782]: I0202 10:40:17.544524 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:17Z","lastTransitionTime":"2026-02-02T10:40:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:40:17 crc kubenswrapper[4782]: I0202 10:40:17.646861 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:17 crc kubenswrapper[4782]: I0202 10:40:17.646906 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:17 crc kubenswrapper[4782]: I0202 10:40:17.646919 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:17 crc kubenswrapper[4782]: I0202 10:40:17.646937 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:17 crc kubenswrapper[4782]: I0202 10:40:17.646950 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:17Z","lastTransitionTime":"2026-02-02T10:40:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:17 crc kubenswrapper[4782]: I0202 10:40:17.749863 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:17 crc kubenswrapper[4782]: I0202 10:40:17.749898 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:17 crc kubenswrapper[4782]: I0202 10:40:17.749909 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:17 crc kubenswrapper[4782]: I0202 10:40:17.749922 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:17 crc kubenswrapper[4782]: I0202 10:40:17.749932 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:17Z","lastTransitionTime":"2026-02-02T10:40:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:17 crc kubenswrapper[4782]: I0202 10:40:17.820823 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:40:17 crc kubenswrapper[4782]: E0202 10:40:17.821031 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:40:17 crc kubenswrapper[4782]: I0202 10:40:17.826752 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 06:11:42.161232532 +0000 UTC Feb 02 10:40:17 crc kubenswrapper[4782]: I0202 10:40:17.852961 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:17 crc kubenswrapper[4782]: I0202 10:40:17.853025 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:17 crc kubenswrapper[4782]: I0202 10:40:17.853045 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:17 crc kubenswrapper[4782]: I0202 10:40:17.853074 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:17 crc kubenswrapper[4782]: I0202 10:40:17.853091 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:17Z","lastTransitionTime":"2026-02-02T10:40:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:17 crc kubenswrapper[4782]: I0202 10:40:17.956891 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:17 crc kubenswrapper[4782]: I0202 10:40:17.956964 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:17 crc kubenswrapper[4782]: I0202 10:40:17.956982 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:17 crc kubenswrapper[4782]: I0202 10:40:17.957006 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:17 crc kubenswrapper[4782]: I0202 10:40:17.957025 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:17Z","lastTransitionTime":"2026-02-02T10:40:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Feb 02 10:40:18 crc kubenswrapper[4782]: I0202 10:40:18.059836 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:40:18 crc kubenswrapper[4782]: I0202 10:40:18.059906 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:40:18 crc kubenswrapper[4782]: I0202 10:40:18.059919 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:40:18 crc kubenswrapper[4782]: I0202 10:40:18.059964 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:40:18 crc kubenswrapper[4782]: I0202 10:40:18.059982 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:18Z","lastTransitionTime":"2026-02-02T10:40:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:40:18 crc kubenswrapper[4782]: I0202 10:40:18.163047 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:40:18 crc kubenswrapper[4782]: I0202 10:40:18.163130 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:40:18 crc kubenswrapper[4782]: I0202 10:40:18.163144 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:40:18 crc kubenswrapper[4782]: I0202 10:40:18.163170 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:40:18 crc kubenswrapper[4782]: I0202 10:40:18.163189 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:18Z","lastTransitionTime":"2026-02-02T10:40:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:40:18 crc kubenswrapper[4782]: I0202 10:40:18.266233 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:40:18 crc kubenswrapper[4782]: I0202 10:40:18.266273 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:40:18 crc kubenswrapper[4782]: I0202 10:40:18.266298 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:40:18 crc kubenswrapper[4782]: I0202 10:40:18.266321 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:40:18 crc kubenswrapper[4782]: I0202 10:40:18.266333 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:18Z","lastTransitionTime":"2026-02-02T10:40:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:40:18 crc kubenswrapper[4782]: I0202 10:40:18.370140 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:18 crc kubenswrapper[4782]: I0202 10:40:18.370593 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:18 crc kubenswrapper[4782]: I0202 10:40:18.370603 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:18 crc kubenswrapper[4782]: I0202 10:40:18.370619 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:18 crc kubenswrapper[4782]: I0202 10:40:18.370631 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:18Z","lastTransitionTime":"2026-02-02T10:40:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:18 crc kubenswrapper[4782]: I0202 10:40:18.475301 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:18 crc kubenswrapper[4782]: I0202 10:40:18.475373 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:18 crc kubenswrapper[4782]: I0202 10:40:18.475391 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:18 crc kubenswrapper[4782]: I0202 10:40:18.475414 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:18 crc kubenswrapper[4782]: I0202 10:40:18.475430 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:18Z","lastTransitionTime":"2026-02-02T10:40:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:18 crc kubenswrapper[4782]: I0202 10:40:18.580275 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:18 crc kubenswrapper[4782]: I0202 10:40:18.580326 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:18 crc kubenswrapper[4782]: I0202 10:40:18.580337 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:18 crc kubenswrapper[4782]: I0202 10:40:18.580364 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:18 crc kubenswrapper[4782]: I0202 10:40:18.580379 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:18Z","lastTransitionTime":"2026-02-02T10:40:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:40:18 crc kubenswrapper[4782]: I0202 10:40:18.684304 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:18 crc kubenswrapper[4782]: I0202 10:40:18.684534 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:18 crc kubenswrapper[4782]: I0202 10:40:18.684560 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:18 crc kubenswrapper[4782]: I0202 10:40:18.684580 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:18 crc kubenswrapper[4782]: I0202 10:40:18.684697 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:18Z","lastTransitionTime":"2026-02-02T10:40:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:18 crc kubenswrapper[4782]: I0202 10:40:18.789325 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:18 crc kubenswrapper[4782]: I0202 10:40:18.789373 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:18 crc kubenswrapper[4782]: I0202 10:40:18.789385 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:18 crc kubenswrapper[4782]: I0202 10:40:18.789403 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:18 crc kubenswrapper[4782]: I0202 10:40:18.789414 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:18Z","lastTransitionTime":"2026-02-02T10:40:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:18 crc kubenswrapper[4782]: I0202 10:40:18.821042 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tv4xc" Feb 02 10:40:18 crc kubenswrapper[4782]: E0202 10:40:18.821224 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tv4xc" podUID="4e23db96-3af7-4c29-b00f-5920a9431f01" Feb 02 10:40:18 crc kubenswrapper[4782]: I0202 10:40:18.821489 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:40:18 crc kubenswrapper[4782]: E0202 10:40:18.821555 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:40:18 crc kubenswrapper[4782]: I0202 10:40:18.821800 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:40:18 crc kubenswrapper[4782]: E0202 10:40:18.821884 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:40:18 crc kubenswrapper[4782]: I0202 10:40:18.827680 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 13:45:33.680861242 +0000 UTC Feb 02 10:40:18 crc kubenswrapper[4782]: I0202 10:40:18.893065 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:18 crc kubenswrapper[4782]: I0202 10:40:18.893125 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:18 crc kubenswrapper[4782]: I0202 10:40:18.893140 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:18 crc kubenswrapper[4782]: I0202 10:40:18.893165 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:18 crc kubenswrapper[4782]: I0202 10:40:18.893177 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:18Z","lastTransitionTime":"2026-02-02T10:40:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:40:18 crc kubenswrapper[4782]: I0202 10:40:18.995750 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:18 crc kubenswrapper[4782]: I0202 10:40:18.995832 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:18 crc kubenswrapper[4782]: I0202 10:40:18.995844 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:18 crc kubenswrapper[4782]: I0202 10:40:18.995866 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:18 crc kubenswrapper[4782]: I0202 10:40:18.995884 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:18Z","lastTransitionTime":"2026-02-02T10:40:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:19 crc kubenswrapper[4782]: I0202 10:40:19.099030 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:19 crc kubenswrapper[4782]: I0202 10:40:19.099084 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:19 crc kubenswrapper[4782]: I0202 10:40:19.099094 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:19 crc kubenswrapper[4782]: I0202 10:40:19.099112 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:19 crc kubenswrapper[4782]: I0202 10:40:19.099124 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:19Z","lastTransitionTime":"2026-02-02T10:40:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:19 crc kubenswrapper[4782]: I0202 10:40:19.202304 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:19 crc kubenswrapper[4782]: I0202 10:40:19.202391 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:19 crc kubenswrapper[4782]: I0202 10:40:19.202402 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:19 crc kubenswrapper[4782]: I0202 10:40:19.202415 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:19 crc kubenswrapper[4782]: I0202 10:40:19.202426 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:19Z","lastTransitionTime":"2026-02-02T10:40:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:40:19 crc kubenswrapper[4782]: I0202 10:40:19.300932 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:19 crc kubenswrapper[4782]: I0202 10:40:19.300985 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:19 crc kubenswrapper[4782]: I0202 10:40:19.301040 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:19 crc kubenswrapper[4782]: I0202 10:40:19.301061 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:19 crc kubenswrapper[4782]: I0202 10:40:19.301075 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:19Z","lastTransitionTime":"2026-02-02T10:40:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:19 crc kubenswrapper[4782]: E0202 10:40:19.318571 4782 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:40:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:40:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:40:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:40:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:40:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:40:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:40:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:40:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9f06aea5-54f4-4b11-8fec-22fbe76ec89b\\\",\\\"systemUUID\\\":\\\"b85e9547-662e-4455-bbaa-2d2f2aaad904\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:19Z is after 
2025-08-24T17:21:41Z" Feb 02 10:40:19 crc kubenswrapper[4782]: I0202 10:40:19.322866 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:19 crc kubenswrapper[4782]: I0202 10:40:19.322902 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:19 crc kubenswrapper[4782]: I0202 10:40:19.322911 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:19 crc kubenswrapper[4782]: I0202 10:40:19.322928 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:19 crc kubenswrapper[4782]: I0202 10:40:19.322942 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:19Z","lastTransitionTime":"2026-02-02T10:40:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:19 crc kubenswrapper[4782]: E0202 10:40:19.339137 4782 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:40:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:40:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:40:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:40:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:40:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:40:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:40:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:40:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9f06aea5-54f4-4b11-8fec-22fbe76ec89b\\\",\\\"systemUUID\\\":\\\"b85e9547-662e-4455-bbaa-2d2f2aaad904\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:19Z is after 
2025-08-24T17:21:41Z" Feb 02 10:40:19 crc kubenswrapper[4782]: I0202 10:40:19.343966 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:19 crc kubenswrapper[4782]: I0202 10:40:19.344011 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:19 crc kubenswrapper[4782]: I0202 10:40:19.344023 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:19 crc kubenswrapper[4782]: I0202 10:40:19.344041 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:19 crc kubenswrapper[4782]: I0202 10:40:19.344056 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:19Z","lastTransitionTime":"2026-02-02T10:40:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:19 crc kubenswrapper[4782]: E0202 10:40:19.358391 4782 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:40:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:40:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:40:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:40:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:40:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:40:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:40:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:40:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9f06aea5-54f4-4b11-8fec-22fbe76ec89b\\\",\\\"systemUUID\\\":\\\"b85e9547-662e-4455-bbaa-2d2f2aaad904\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:19Z is after 
2025-08-24T17:21:41Z" Feb 02 10:40:19 crc kubenswrapper[4782]: I0202 10:40:19.363531 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:19 crc kubenswrapper[4782]: I0202 10:40:19.363592 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:19 crc kubenswrapper[4782]: I0202 10:40:19.363604 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:19 crc kubenswrapper[4782]: I0202 10:40:19.363625 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:19 crc kubenswrapper[4782]: I0202 10:40:19.363650 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:19Z","lastTransitionTime":"2026-02-02T10:40:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:19 crc kubenswrapper[4782]: E0202 10:40:19.376364 4782 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:40:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:40:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:40:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:40:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:40:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:40:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:40:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:40:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9f06aea5-54f4-4b11-8fec-22fbe76ec89b\\\",\\\"systemUUID\\\":\\\"b85e9547-662e-4455-bbaa-2d2f2aaad904\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:19Z is after 
2025-08-24T17:21:41Z" Feb 02 10:40:19 crc kubenswrapper[4782]: I0202 10:40:19.380618 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:19 crc kubenswrapper[4782]: I0202 10:40:19.380698 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:19 crc kubenswrapper[4782]: I0202 10:40:19.380711 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:19 crc kubenswrapper[4782]: I0202 10:40:19.380728 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:19 crc kubenswrapper[4782]: I0202 10:40:19.380758 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:19Z","lastTransitionTime":"2026-02-02T10:40:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:19 crc kubenswrapper[4782]: E0202 10:40:19.392674 4782 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:40:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:40:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:40:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:40:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:40:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:40:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:40:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T10:40:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9f06aea5-54f4-4b11-8fec-22fbe76ec89b\\\",\\\"systemUUID\\\":\\\"b85e9547-662e-4455-bbaa-2d2f2aaad904\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:19Z is after 
2025-08-24T17:21:41Z" Feb 02 10:40:19 crc kubenswrapper[4782]: E0202 10:40:19.392803 4782 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 02 10:40:19 crc kubenswrapper[4782]: I0202 10:40:19.394822 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:19 crc kubenswrapper[4782]: I0202 10:40:19.394856 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:19 crc kubenswrapper[4782]: I0202 10:40:19.394874 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:19 crc kubenswrapper[4782]: I0202 10:40:19.394897 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:19 crc kubenswrapper[4782]: I0202 10:40:19.394918 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:19Z","lastTransitionTime":"2026-02-02T10:40:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:19 crc kubenswrapper[4782]: I0202 10:40:19.497983 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:19 crc kubenswrapper[4782]: I0202 10:40:19.498039 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:19 crc kubenswrapper[4782]: I0202 10:40:19.498055 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:19 crc kubenswrapper[4782]: I0202 10:40:19.498079 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:19 crc kubenswrapper[4782]: I0202 10:40:19.498094 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:19Z","lastTransitionTime":"2026-02-02T10:40:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:40:19 crc kubenswrapper[4782]: I0202 10:40:19.600541 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:19 crc kubenswrapper[4782]: I0202 10:40:19.600579 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:19 crc kubenswrapper[4782]: I0202 10:40:19.600589 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:19 crc kubenswrapper[4782]: I0202 10:40:19.600605 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:19 crc kubenswrapper[4782]: I0202 10:40:19.600616 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:19Z","lastTransitionTime":"2026-02-02T10:40:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:19 crc kubenswrapper[4782]: I0202 10:40:19.703661 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:19 crc kubenswrapper[4782]: I0202 10:40:19.703715 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:19 crc kubenswrapper[4782]: I0202 10:40:19.703728 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:19 crc kubenswrapper[4782]: I0202 10:40:19.703749 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:19 crc kubenswrapper[4782]: I0202 10:40:19.703764 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:19Z","lastTransitionTime":"2026-02-02T10:40:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:19 crc kubenswrapper[4782]: I0202 10:40:19.806920 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:19 crc kubenswrapper[4782]: I0202 10:40:19.806979 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:19 crc kubenswrapper[4782]: I0202 10:40:19.806989 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:19 crc kubenswrapper[4782]: I0202 10:40:19.807012 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:19 crc kubenswrapper[4782]: I0202 10:40:19.807025 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:19Z","lastTransitionTime":"2026-02-02T10:40:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:40:19 crc kubenswrapper[4782]: I0202 10:40:19.820146 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:40:19 crc kubenswrapper[4782]: E0202 10:40:19.820554 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:40:19 crc kubenswrapper[4782]: I0202 10:40:19.828713 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 04:15:06.326244572 +0000 UTC Feb 02 10:40:19 crc kubenswrapper[4782]: I0202 10:40:19.910974 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:19 crc kubenswrapper[4782]: I0202 10:40:19.911052 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:19 crc kubenswrapper[4782]: I0202 10:40:19.911067 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:19 crc kubenswrapper[4782]: I0202 10:40:19.911095 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:19 crc kubenswrapper[4782]: I0202 10:40:19.911110 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:19Z","lastTransitionTime":"2026-02-02T10:40:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:20 crc kubenswrapper[4782]: I0202 10:40:20.014396 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:20 crc kubenswrapper[4782]: I0202 10:40:20.014487 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:20 crc kubenswrapper[4782]: I0202 10:40:20.014496 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:20 crc kubenswrapper[4782]: I0202 10:40:20.014509 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:20 crc kubenswrapper[4782]: I0202 10:40:20.014519 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:20Z","lastTransitionTime":"2026-02-02T10:40:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:40:20 crc kubenswrapper[4782]: I0202 10:40:20.117438 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:20 crc kubenswrapper[4782]: I0202 10:40:20.117493 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:20 crc kubenswrapper[4782]: I0202 10:40:20.117507 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:20 crc kubenswrapper[4782]: I0202 10:40:20.117531 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:20 crc kubenswrapper[4782]: I0202 10:40:20.117541 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:20Z","lastTransitionTime":"2026-02-02T10:40:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:20 crc kubenswrapper[4782]: I0202 10:40:20.220693 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:20 crc kubenswrapper[4782]: I0202 10:40:20.220761 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:20 crc kubenswrapper[4782]: I0202 10:40:20.220772 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:20 crc kubenswrapper[4782]: I0202 10:40:20.220799 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:20 crc kubenswrapper[4782]: I0202 10:40:20.220818 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:20Z","lastTransitionTime":"2026-02-02T10:40:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:20 crc kubenswrapper[4782]: I0202 10:40:20.323367 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:20 crc kubenswrapper[4782]: I0202 10:40:20.323434 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:20 crc kubenswrapper[4782]: I0202 10:40:20.323455 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:20 crc kubenswrapper[4782]: I0202 10:40:20.323481 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:20 crc kubenswrapper[4782]: I0202 10:40:20.323500 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:20Z","lastTransitionTime":"2026-02-02T10:40:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:40:20 crc kubenswrapper[4782]: I0202 10:40:20.427063 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:20 crc kubenswrapper[4782]: I0202 10:40:20.427128 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:20 crc kubenswrapper[4782]: I0202 10:40:20.427148 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:20 crc kubenswrapper[4782]: I0202 10:40:20.427175 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:20 crc kubenswrapper[4782]: I0202 10:40:20.427198 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:20Z","lastTransitionTime":"2026-02-02T10:40:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:20 crc kubenswrapper[4782]: I0202 10:40:20.531510 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:20 crc kubenswrapper[4782]: I0202 10:40:20.531624 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:20 crc kubenswrapper[4782]: I0202 10:40:20.531712 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:20 crc kubenswrapper[4782]: I0202 10:40:20.531929 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:20 crc kubenswrapper[4782]: I0202 10:40:20.531953 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:20Z","lastTransitionTime":"2026-02-02T10:40:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:20 crc kubenswrapper[4782]: I0202 10:40:20.636127 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:20 crc kubenswrapper[4782]: I0202 10:40:20.636181 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:20 crc kubenswrapper[4782]: I0202 10:40:20.636198 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:20 crc kubenswrapper[4782]: I0202 10:40:20.636220 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:20 crc kubenswrapper[4782]: I0202 10:40:20.636235 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:20Z","lastTransitionTime":"2026-02-02T10:40:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:40:20 crc kubenswrapper[4782]: I0202 10:40:20.742006 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:20 crc kubenswrapper[4782]: I0202 10:40:20.742131 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:20 crc kubenswrapper[4782]: I0202 10:40:20.742147 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:20 crc kubenswrapper[4782]: I0202 10:40:20.742237 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:20 crc kubenswrapper[4782]: I0202 10:40:20.742250 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:20Z","lastTransitionTime":"2026-02-02T10:40:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:20 crc kubenswrapper[4782]: I0202 10:40:20.821869 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:40:20 crc kubenswrapper[4782]: I0202 10:40:20.822120 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tv4xc" Feb 02 10:40:20 crc kubenswrapper[4782]: E0202 10:40:20.822238 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:40:20 crc kubenswrapper[4782]: I0202 10:40:20.822257 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:40:20 crc kubenswrapper[4782]: E0202 10:40:20.822402 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tv4xc" podUID="4e23db96-3af7-4c29-b00f-5920a9431f01" Feb 02 10:40:20 crc kubenswrapper[4782]: E0202 10:40:20.822547 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:40:20 crc kubenswrapper[4782]: I0202 10:40:20.828889 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 02:43:56.053142303 +0000 UTC Feb 02 10:40:20 crc kubenswrapper[4782]: I0202 10:40:20.838980 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:20Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:20 crc kubenswrapper[4782]: I0202 10:40:20.845047 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:20 crc kubenswrapper[4782]: I0202 10:40:20.845102 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:20 crc kubenswrapper[4782]: I0202 10:40:20.845114 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:20 crc kubenswrapper[4782]: I0202 10:40:20.845134 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:20 crc kubenswrapper[4782]: I0202 10:40:20.845145 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:20Z","lastTransitionTime":"2026-02-02T10:40:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
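The status_manager entries above and below embed the entire rejected JSON patch as a Go-quoted string inside err="...", which is why every quote in them appears as \\\". When reading such lines offline, one strconv.Unquote pass plus re-indenting recovers the patch; a sketch using a shortened, hypothetical stand-in for the embedded payload:

    package main

    import (
    	"bytes"
    	"encoding/json"
    	"fmt"
    	"strconv"
    )

    func main() {
    	// Shortened, hypothetical stand-in for the escaped patch carried in
    	// the err="failed to patch status \"{...}\"" log lines.
    	raw := `"{\"status\":{\"conditions\":[{\"type\":\"Ready\",\"status\":\"False\",\"reason\":\"KubeletNotReady\"}]}}"`

    	unquoted, err := strconv.Unquote(raw) // strip one level of Go quoting
    	if err != nil {
    		fmt.Println("unquote:", err)
    		return
    	}
    	var out bytes.Buffer
    	if err := json.Indent(&out, []byte(unquoted), "", "  "); err != nil {
    		fmt.Println("indent:", err)
    		return
    	}
    	fmt.Println(out.String())
    }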
Has your network provider started?"} Feb 02 10:40:20 crc kubenswrapper[4782]: I0202 10:40:20.855921 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://959c7dca46e7a68f4472c3aa2104c49d4e47190f0fab5c7ad2225e27e5c4585a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:20Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:20 crc kubenswrapper[4782]: I0202 10:40:20.878076 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2642ee4e-c16a-4e6e-9654-a67666f1bff8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7728a9bcd84ff5e2fad4910e321a0461ab043b453efe05ddc36d018dba38315a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://371a331afaa5bd7d90cd98ba3363865d54d8cac621ba37a40e51f0cddb4eb02b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6886dd3d4ca2fac733bc3cc105d17aff972828ba91abf5373769f272e5454ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://189c170f233964d95aebd085e016304a66076971b44f050d41d755305d11743f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://540e6587db608dea5be7483bfc3a5c4d44da51ecf3e9bfe12550aa8bf5a74fac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7645944a876f5f8139960a66e91d19644df23b8bfbdb64c9d14eec7298ac150\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://697e13df65c6182d51c322accad67b62474eb9c869cb328aa09bc10e419af952\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://697e13df65c6182d51c322accad67b62474eb9c869cb328aa09bc10e419af952\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T10:40:13Z\\\",\\\"message\\\":\\\"mers/factory.go:160\\\\nI0202 10:40:12.792477 6839 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 10:40:12.792747 6839 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 10:40:12.793003 6839 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 10:40:12.793147 6839 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 10:40:12.793807 6839 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 10:40:12.793810 6839 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0202 10:40:12.793956 6839 factory.go:656] Stopping watch factory\\\\nI0202 10:40:12.793991 6839 handler.go:208] Removed *v1.Node event handler 2\\\\nI0202 10:40:12.815740 6839 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0202 10:40:12.815893 6839 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0202 10:40:12.815971 6839 ovnkube.go:599] Stopped ovnkube\\\\nI0202 10:40:12.816011 6839 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0202 10:40:12.816108 6839 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:40:11Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-prbrn_openshift-ovn-kubernetes(2642ee4e-c16a-4e6e-9654-a67666f1bff8)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://344eacf0d6239238cbf13b94f80d88a36589057a177a0f7a3d5629f7975c02d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8flt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-prbrn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:20Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:20 crc kubenswrapper[4782]: I0202 10:40:20.890284 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a20e66f3-5da8-4f1e-97c3-28808caf938b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ada79c9ce4369d59a46cc02abd6cf48de6fdc8fdbe39ce9111864f17c15d7b42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4740cbe92e6575bbdc497589f7ef325d88070385a35c69
f1ddf7ca0865bb2624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4740cbe92e6575bbdc497589f7ef325d88070385a35c69f1ddf7ca0865bb2624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:20Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:20 crc kubenswrapper[4782]: I0202 10:40:20.905257 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfc52d94-656d-4294-b105-0f83d22c9664\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66dc97daee138ae6c8403d57638f6bd3cc1c5a08ce7e1631129017dc7046ebc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9de866cf92cbd982802a4bfa9c7f6b9839c69efb72d01f3f52e65e2355d47ded\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4
f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a81a5434992af16888e8ed5ec9555e57dd4485dd6c396816421ad07c181dae8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd62da31b65707d98011292c190f6f44ab2e60bd1339f47cc289d0b445425b60\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b089d9f482be084f3c2be3bc369e38cef6607d91af0c82b338da5c29883f6057\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"le observer\\\\nW0202 10:39:00.270088 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0202 10:39:00.270415 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 10:39:00.271477 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3666386151/tls.crt::/tmp/serving-cert-3666386151/tls.key\\\\\\\"\\\\nI0202 10:39:00.760414 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 10:39:00.783275 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 10:39:00.783307 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 10:39:00.783330 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 10:39:00.783335 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 10:39:00.810370 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 10:39:00.810474 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810498 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 10:39:00.810519 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 10:39:00.810539 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 10:39:00.810558 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 10:39:00.810577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 10:39:00.810783 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 10:39:00.814783 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26b0186a9067ba0a405e849b58fc2f39e93d443ddf69ebd0d4fd4877f45e805\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:20Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:20 crc kubenswrapper[4782]: I0202 10:40:20.919770 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fsqgq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"04d9744a-e730-45b4-9f0c-bbb5b02cd311\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:40:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:40:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b95cef2b56d3accf4543313f016af02ffe4af02c759d6f688c31f7d9749e0aad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9855ee45d6787f239c2099e99544f50b4939dc9037eb5f4cfe36af9a0bed8937\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T10:40:01Z\\\",\\\"message\\\":\\\"2026-02-02T10:39:14+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_5834fa1a-51a3-468c-a754-b8eb3749cc9c\\\\n2026-02-02T10:39:14+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_5834fa1a-51a3-468c-a754-b8eb3749cc9c to /host/opt/cni/bin/\\\\n2026-02-02T10:39:15Z [verbose] multus-daemon started\\\\n2026-02-02T10:39:15Z [verbose] Readiness Indicator file check\\\\n2026-02-02T10:40:00Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:40:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7nrfs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fsqgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:20Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:20 crc kubenswrapper[4782]: I0202 10:40:20.935211 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:20Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:20 crc kubenswrapper[4782]: I0202 10:40:20.948811 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:20 crc kubenswrapper[4782]: I0202 10:40:20.948858 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:20 crc kubenswrapper[4782]: I0202 10:40:20.948872 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:20 crc kubenswrapper[4782]: I0202 10:40:20.948892 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:20 crc kubenswrapper[4782]: I0202 10:40:20.948906 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:20Z","lastTransitionTime":"2026-02-02T10:40:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:40:20 crc kubenswrapper[4782]: I0202 10:40:20.957530 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:20Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:20 crc kubenswrapper[4782]: I0202 10:40:20.974979 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7919e98f-cc47-4f3c-9c53-6313850ea7b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://550747a1eaba426f3a6b264d3a5227e51b0171729134224fa76ba55d8ad47c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://362bf5c8cbdc4831ca6ceecf9323bad1217467a40b0e8dd491098c1ae4f42810\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bhdgk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:20Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:20 crc kubenswrapper[4782]: I0202 10:40:20.994035 4782 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8lwfx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1edc5703-bb51-4f8a-9b73-68ba48a40ce8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2765f9fa77bc99e4983b0d6883a7156c960f2dce2c80845cd1e0810199c50eac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e2a2f7b1b3c3236d77b4858e19cacf5554526262f0fc5703460a169d3b26f7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e2a2f7b1b3c3236d77b4858e19cacf5554526262f0fc5703460a169d3b26f7d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d41d28339830a5090ce01715a5bd4d985c3081932cb71594b1e928b900dc353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d41d28339830a5090ce01715a5bd4d985c3081932cb71594b1e928b900dc353\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://615aa0c2b172d7e7ab8a484a6860646491265e3ebb25f57857c45bc591d40335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://615aa0c2b172d7e7ab8a484a6860646491265e3ebb25f57857c45bc591d40335\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://143776c340b6d3345f6902124169235fcf74289d9e107b8e6bdb5023f47b73df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://143776c340b6d3345f6902124169235fcf74289d9e107b8e6bdb5023f47b73df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://754a40bbe5b071214dbd49089838b58b26d7586fdd8479a6f8522ef49c03e255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://754a40bbe5b071214dbd49089838b58b26d7586fdd8479a6f8522ef49c03e255\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e2e1801b719e8d922f05eab190ff758e85d0d05f4e26f8b8a9d5812ac5ffe6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e2e1801b719e8d922f05eab190ff758e85d0d05f4e26f8b8a9d5812ac5ffe6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:39:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:39:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdc79\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8lwfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:20Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:21 crc kubenswrapper[4782]: I0202 10:40:21.009990 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x49wn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"324c55ff-8d31-4452-bb4e-2a57fbdb23c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://025c5d7b0067cd9bfd8f87926e7ec57759b83410b2be1bfddc02029f4c8e5f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxkkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e8896767e0b6039745c672852e48a5fceb954162cac8a06257129bcc84efff3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxkkz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-x49wn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:21Z is after 2025-08-24T17:21:41Z" Feb 02 
10:40:21 crc kubenswrapper[4782]: I0202 10:40:21.023095 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-tv4xc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e23db96-3af7-4c29-b00f-5920a9431f01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpm5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpm5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:22Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-tv4xc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:21Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:21 crc kubenswrapper[4782]: I0202 10:40:21.036691 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"83b81d23-4cbf-48f2-a90d-aa5b4cb3ce45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://638971b143e4defe2f04fba48e554aff3023d52863a0eb6f36c29d394de7eeb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18f8ad22dcd04bba0dd895d584affcd359ac6221153a18ee542dee979b9efe25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3c260f298316c830b06f5e3634cd6c09bd10bbaad77b46e427eb4b039eebb53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://745ed5274a247d61c320043043077f9ecb39fa3734db15aefa07a3bbb6225c26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:21Z is after 2025-08-24T17:21:41Z" Feb 02 10:40:21 crc kubenswrapper[4782]: I0202 10:40:21.052725 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:21 crc kubenswrapper[4782]: I0202 10:40:21.052767 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:21 crc kubenswrapper[4782]: I0202 10:40:21.052779 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:21 crc kubenswrapper[4782]: I0202 10:40:21.052797 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:21 crc kubenswrapper[4782]: I0202 10:40:21.052809 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:21Z","lastTransitionTime":"2026-02-02T10:40:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 02 10:40:21 crc kubenswrapper[4782]: I0202 10:40:21.054760 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35774ab2-362c-466b-9f87-5e152d4c8235\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1f97a7bc0ebb9c8dca5e77de93b5ad8744a3ed0a3939e31500e0bb10648b1c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7d3a0cdcdd628fdec78799be1bb9aeab47b7566b765ba0b033b9e925ece0be6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a72caeec33753f69102774c7bb1501dd1c0f304ab8e821616a7d6748b4b6a23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d8715a950ba202dd87b57bd0b7465a0ca0648a865e89ee9bd94848c15675501\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d8715a950ba202dd87b57bd0b7465a0ca0648a865e89ee9bd94848c15675501\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:21Z is after 2025-08-24T17:21:41Z"
Feb 02 10:40:21 crc kubenswrapper[4782]: I0202 10:40:21.074433 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aebafb5717a3d34f83e5f1ba346c51bccea6c08fb6ddc2fadc47d8e1a4df1bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://643eee6982fcc4c02b72fb6daf1a18fc5ae263ebf8ac506d1fa745d6c311b38a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:21Z is after 2025-08-24T17:21:41Z"
Feb 02 10:40:21 crc kubenswrapper[4782]: I0202 10:40:21.088994 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fptzv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa0a3c57-fe47-43dd-8905-00df4cae4fb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb00c5826e8950791c82faf09cb4417f633c04908afcaed816c63b81c05aae48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6np9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fptzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:21Z is after 2025-08-24T17:21:41Z"
Feb 02 10:40:21 crc kubenswrapper[4782]: I0202 10:40:21.101762 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-thvm5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70faa63d-a86d-45aa-b6fd-81fa90436da2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb015f1ff28b0f28114d4c5d3c643fdb9af2c24d6d3c4a3f34c051677c815e3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrknp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:39:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-thvm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:21Z is after 2025-08-24T17:21:41Z"
Feb 02 10:40:21 crc kubenswrapper[4782]: I0202 10:40:21.126914 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59347d2e-80ed-46da-8b2f-87cc48c5b564\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T10:38:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7303c0a57d425d661744c913b418fb647290581f745f973af1507563fa70b860\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10b2c4d81a795c40e715af0855002f75e3018dc14560eb97b551186a48c52744\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://667df661cba8be98b22059905ac9fe87f5d2af093d3184cfead863474441574f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfe76e80657cb875a6dd9b9c977e6120a670fc5cd1dad2e0346b21a467b00f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58cd2e688da72a74340b25a755164b61039f7615ccda8b24b12b2b4025f89584\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:38:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bd082aadbf2ca42c76305a9762162ad147f39d156c72939f3ee2d847a0b1444\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87274c7b992c99d4f117a71894a64f9fdb3fd4a28df88201f6ee2f99bc0d554c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d32e9ff486b30e8c09503f6b368756cd5eab8327d51cd76676e2881ae8b97920\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T10:38:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T10:38:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T10:38:40Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:21Z is after 2025-08-24T17:21:41Z"
Feb 02 10:40:21 crc kubenswrapper[4782]: I0202 10:40:21.143789 4782 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T10:39:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b6dd1dee15020005db2d8655165e09c72a90eaade00a66c0742c0980ccc669e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T10:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T10:40:21Z is after 2025-08-24T17:21:41Z"
Feb 02 10:40:21 crc kubenswrapper[4782]: I0202 10:40:21.156288 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:40:21 crc kubenswrapper[4782]: I0202 10:40:21.156346 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:40:21 crc kubenswrapper[4782]: I0202 10:40:21.156362 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:40:21 crc kubenswrapper[4782]: I0202 10:40:21.156382 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:40:21 crc kubenswrapper[4782]: I0202 10:40:21.156396 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:21Z","lastTransitionTime":"2026-02-02T10:40:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:40:21 crc kubenswrapper[4782]: I0202 10:40:21.259929 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:40:21 crc kubenswrapper[4782]: I0202 10:40:21.259985 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:40:21 crc kubenswrapper[4782]: I0202 10:40:21.259998 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:40:21 crc kubenswrapper[4782]: I0202 10:40:21.260017 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:40:21 crc kubenswrapper[4782]: I0202 10:40:21.260030 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:21Z","lastTransitionTime":"2026-02-02T10:40:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:40:21 crc kubenswrapper[4782]: I0202 10:40:21.362388 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:40:21 crc kubenswrapper[4782]: I0202 10:40:21.362445 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:40:21 crc kubenswrapper[4782]: I0202 10:40:21.362459 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:40:21 crc kubenswrapper[4782]: I0202 10:40:21.362481 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:40:21 crc kubenswrapper[4782]: I0202 10:40:21.362495 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:21Z","lastTransitionTime":"2026-02-02T10:40:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:40:21 crc kubenswrapper[4782]: I0202 10:40:21.466717 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:40:21 crc kubenswrapper[4782]: I0202 10:40:21.466785 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:40:21 crc kubenswrapper[4782]: I0202 10:40:21.466800 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:40:21 crc kubenswrapper[4782]: I0202 10:40:21.466826 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:40:21 crc kubenswrapper[4782]: I0202 10:40:21.466844 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:21Z","lastTransitionTime":"2026-02-02T10:40:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:40:21 crc kubenswrapper[4782]: I0202 10:40:21.570234 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:40:21 crc kubenswrapper[4782]: I0202 10:40:21.570524 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:40:21 crc kubenswrapper[4782]: I0202 10:40:21.570594 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:40:21 crc kubenswrapper[4782]: I0202 10:40:21.570699 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:40:21 crc kubenswrapper[4782]: I0202 10:40:21.570926 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:21Z","lastTransitionTime":"2026-02-02T10:40:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:40:21 crc kubenswrapper[4782]: I0202 10:40:21.674741 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:40:21 crc kubenswrapper[4782]: I0202 10:40:21.674989 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:40:21 crc kubenswrapper[4782]: I0202 10:40:21.675003 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:40:21 crc kubenswrapper[4782]: I0202 10:40:21.675025 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:40:21 crc kubenswrapper[4782]: I0202 10:40:21.675040 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:21Z","lastTransitionTime":"2026-02-02T10:40:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:40:21 crc kubenswrapper[4782]: I0202 10:40:21.778297 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:40:21 crc kubenswrapper[4782]: I0202 10:40:21.778345 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:40:21 crc kubenswrapper[4782]: I0202 10:40:21.778360 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:40:21 crc kubenswrapper[4782]: I0202 10:40:21.778383 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:40:21 crc kubenswrapper[4782]: I0202 10:40:21.778399 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:21Z","lastTransitionTime":"2026-02-02T10:40:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:40:21 crc kubenswrapper[4782]: I0202 10:40:21.820881 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 10:40:21 crc kubenswrapper[4782]: E0202 10:40:21.821061 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 02 10:40:21 crc kubenswrapper[4782]: I0202 10:40:21.829880 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 08:06:34.729653964 +0000 UTC
Feb 02 10:40:21 crc kubenswrapper[4782]: I0202 10:40:21.882983 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:40:21 crc kubenswrapper[4782]: I0202 10:40:21.883058 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:40:21 crc kubenswrapper[4782]: I0202 10:40:21.883074 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:40:21 crc kubenswrapper[4782]: I0202 10:40:21.883098 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:40:21 crc kubenswrapper[4782]: I0202 10:40:21.883115 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:21Z","lastTransitionTime":"2026-02-02T10:40:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:40:21 crc kubenswrapper[4782]: I0202 10:40:21.986612 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:40:21 crc kubenswrapper[4782]: I0202 10:40:21.986683 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:40:21 crc kubenswrapper[4782]: I0202 10:40:21.986695 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:40:21 crc kubenswrapper[4782]: I0202 10:40:21.986720 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:40:21 crc kubenswrapper[4782]: I0202 10:40:21.986734 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:21Z","lastTransitionTime":"2026-02-02T10:40:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:40:22 crc kubenswrapper[4782]: I0202 10:40:22.091287 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:40:22 crc kubenswrapper[4782]: I0202 10:40:22.091332 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:40:22 crc kubenswrapper[4782]: I0202 10:40:22.091347 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:40:22 crc kubenswrapper[4782]: I0202 10:40:22.091369 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:40:22 crc kubenswrapper[4782]: I0202 10:40:22.091386 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:22Z","lastTransitionTime":"2026-02-02T10:40:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:40:22 crc kubenswrapper[4782]: I0202 10:40:22.194472 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:40:22 crc kubenswrapper[4782]: I0202 10:40:22.194555 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:40:22 crc kubenswrapper[4782]: I0202 10:40:22.194579 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:40:22 crc kubenswrapper[4782]: I0202 10:40:22.194611 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:40:22 crc kubenswrapper[4782]: I0202 10:40:22.194726 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:22Z","lastTransitionTime":"2026-02-02T10:40:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:40:22 crc kubenswrapper[4782]: I0202 10:40:22.298017 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:40:22 crc kubenswrapper[4782]: I0202 10:40:22.298066 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:40:22 crc kubenswrapper[4782]: I0202 10:40:22.298081 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:40:22 crc kubenswrapper[4782]: I0202 10:40:22.298101 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:40:22 crc kubenswrapper[4782]: I0202 10:40:22.298115 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:22Z","lastTransitionTime":"2026-02-02T10:40:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:40:22 crc kubenswrapper[4782]: I0202 10:40:22.400238 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:40:22 crc kubenswrapper[4782]: I0202 10:40:22.400272 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:40:22 crc kubenswrapper[4782]: I0202 10:40:22.400282 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:40:22 crc kubenswrapper[4782]: I0202 10:40:22.400298 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:40:22 crc kubenswrapper[4782]: I0202 10:40:22.400308 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:22Z","lastTransitionTime":"2026-02-02T10:40:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:40:22 crc kubenswrapper[4782]: I0202 10:40:22.502981 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:40:22 crc kubenswrapper[4782]: I0202 10:40:22.503059 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:40:22 crc kubenswrapper[4782]: I0202 10:40:22.503085 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:40:22 crc kubenswrapper[4782]: I0202 10:40:22.503115 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:40:22 crc kubenswrapper[4782]: I0202 10:40:22.503137 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:22Z","lastTransitionTime":"2026-02-02T10:40:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:40:22 crc kubenswrapper[4782]: I0202 10:40:22.605409 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:40:22 crc kubenswrapper[4782]: I0202 10:40:22.605459 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:40:22 crc kubenswrapper[4782]: I0202 10:40:22.605468 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:40:22 crc kubenswrapper[4782]: I0202 10:40:22.605482 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:40:22 crc kubenswrapper[4782]: I0202 10:40:22.605493 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:22Z","lastTransitionTime":"2026-02-02T10:40:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:40:22 crc kubenswrapper[4782]: I0202 10:40:22.708557 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:40:22 crc kubenswrapper[4782]: I0202 10:40:22.708593 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:40:22 crc kubenswrapper[4782]: I0202 10:40:22.708609 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:40:22 crc kubenswrapper[4782]: I0202 10:40:22.708630 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:40:22 crc kubenswrapper[4782]: I0202 10:40:22.708681 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:22Z","lastTransitionTime":"2026-02-02T10:40:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:40:22 crc kubenswrapper[4782]: I0202 10:40:22.811913 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:40:22 crc kubenswrapper[4782]: I0202 10:40:22.812209 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:40:22 crc kubenswrapper[4782]: I0202 10:40:22.812284 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:40:22 crc kubenswrapper[4782]: I0202 10:40:22.812357 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:40:22 crc kubenswrapper[4782]: I0202 10:40:22.812425 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:22Z","lastTransitionTime":"2026-02-02T10:40:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:40:22 crc kubenswrapper[4782]: I0202 10:40:22.820775 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 10:40:22 crc kubenswrapper[4782]: I0202 10:40:22.820881 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tv4xc"
Feb 02 10:40:22 crc kubenswrapper[4782]: E0202 10:40:22.820945 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 02 10:40:22 crc kubenswrapper[4782]: E0202 10:40:22.821008 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tv4xc" podUID="4e23db96-3af7-4c29-b00f-5920a9431f01"
Feb 02 10:40:22 crc kubenswrapper[4782]: I0202 10:40:22.821480 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 10:40:22 crc kubenswrapper[4782]: E0202 10:40:22.821840 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 02 10:40:22 crc kubenswrapper[4782]: I0202 10:40:22.830698 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 20:53:23.132359836 +0000 UTC
Feb 02 10:40:22 crc kubenswrapper[4782]: I0202 10:40:22.915920 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:40:22 crc kubenswrapper[4782]: I0202 10:40:22.916009 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:40:22 crc kubenswrapper[4782]: I0202 10:40:22.916033 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:40:22 crc kubenswrapper[4782]: I0202 10:40:22.916473 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:40:22 crc kubenswrapper[4782]: I0202 10:40:22.916502 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:22Z","lastTransitionTime":"2026-02-02T10:40:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:40:23 crc kubenswrapper[4782]: I0202 10:40:23.019440 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:40:23 crc kubenswrapper[4782]: I0202 10:40:23.019500 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:40:23 crc kubenswrapper[4782]: I0202 10:40:23.019523 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:40:23 crc kubenswrapper[4782]: I0202 10:40:23.019551 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:40:23 crc kubenswrapper[4782]: I0202 10:40:23.019575 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:23Z","lastTransitionTime":"2026-02-02T10:40:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:40:23 crc kubenswrapper[4782]: I0202 10:40:23.123410 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:40:23 crc kubenswrapper[4782]: I0202 10:40:23.123472 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:40:23 crc kubenswrapper[4782]: I0202 10:40:23.123491 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:40:23 crc kubenswrapper[4782]: I0202 10:40:23.123514 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:40:23 crc kubenswrapper[4782]: I0202 10:40:23.123534 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:23Z","lastTransitionTime":"2026-02-02T10:40:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:40:23 crc kubenswrapper[4782]: I0202 10:40:23.232247 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:40:23 crc kubenswrapper[4782]: I0202 10:40:23.232302 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:40:23 crc kubenswrapper[4782]: I0202 10:40:23.232315 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:40:23 crc kubenswrapper[4782]: I0202 10:40:23.232338 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:40:23 crc kubenswrapper[4782]: I0202 10:40:23.232361 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:23Z","lastTransitionTime":"2026-02-02T10:40:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:40:23 crc kubenswrapper[4782]: I0202 10:40:23.335764 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:40:23 crc kubenswrapper[4782]: I0202 10:40:23.336182 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:40:23 crc kubenswrapper[4782]: I0202 10:40:23.336290 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:40:23 crc kubenswrapper[4782]: I0202 10:40:23.336419 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:40:23 crc kubenswrapper[4782]: I0202 10:40:23.336569 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:23Z","lastTransitionTime":"2026-02-02T10:40:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:40:23 crc kubenswrapper[4782]: I0202 10:40:23.441394 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:40:23 crc kubenswrapper[4782]: I0202 10:40:23.441447 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:40:23 crc kubenswrapper[4782]: I0202 10:40:23.441456 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:40:23 crc kubenswrapper[4782]: I0202 10:40:23.441473 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:40:23 crc kubenswrapper[4782]: I0202 10:40:23.441485 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:23Z","lastTransitionTime":"2026-02-02T10:40:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:40:23 crc kubenswrapper[4782]: I0202 10:40:23.544599 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:40:23 crc kubenswrapper[4782]: I0202 10:40:23.545173 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:40:23 crc kubenswrapper[4782]: I0202 10:40:23.545302 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:40:23 crc kubenswrapper[4782]: I0202 10:40:23.545413 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:40:23 crc kubenswrapper[4782]: I0202 10:40:23.545500 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:23Z","lastTransitionTime":"2026-02-02T10:40:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:40:23 crc kubenswrapper[4782]: I0202 10:40:23.648757 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:40:23 crc kubenswrapper[4782]: I0202 10:40:23.649215 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:40:23 crc kubenswrapper[4782]: I0202 10:40:23.649400 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:40:23 crc kubenswrapper[4782]: I0202 10:40:23.649558 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:40:23 crc kubenswrapper[4782]: I0202 10:40:23.649712 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:23Z","lastTransitionTime":"2026-02-02T10:40:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:40:23 crc kubenswrapper[4782]: I0202 10:40:23.753776 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:40:23 crc kubenswrapper[4782]: I0202 10:40:23.753876 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:40:23 crc kubenswrapper[4782]: I0202 10:40:23.753903 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:40:23 crc kubenswrapper[4782]: I0202 10:40:23.753943 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:40:23 crc kubenswrapper[4782]: I0202 10:40:23.753971 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:23Z","lastTransitionTime":"2026-02-02T10:40:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:40:23 crc kubenswrapper[4782]: I0202 10:40:23.820918 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 10:40:23 crc kubenswrapper[4782]: E0202 10:40:23.821439 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 02 10:40:23 crc kubenswrapper[4782]: I0202 10:40:23.831893 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 22:22:57.310375393 +0000 UTC
Feb 02 10:40:23 crc kubenswrapper[4782]: I0202 10:40:23.858828 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:40:23 crc kubenswrapper[4782]: I0202 10:40:23.858899 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:40:23 crc kubenswrapper[4782]: I0202 10:40:23.858916 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:40:23 crc kubenswrapper[4782]: I0202 10:40:23.858945 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:40:23 crc kubenswrapper[4782]: I0202 10:40:23.858964 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:23Z","lastTransitionTime":"2026-02-02T10:40:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:40:23 crc kubenswrapper[4782]: I0202 10:40:23.963025 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:40:23 crc kubenswrapper[4782]: I0202 10:40:23.963087 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:40:23 crc kubenswrapper[4782]: I0202 10:40:23.963112 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:40:23 crc kubenswrapper[4782]: I0202 10:40:23.963137 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:40:23 crc kubenswrapper[4782]: I0202 10:40:23.963156 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:23Z","lastTransitionTime":"2026-02-02T10:40:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:40:24 crc kubenswrapper[4782]: I0202 10:40:24.067313 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:40:24 crc kubenswrapper[4782]: I0202 10:40:24.067368 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:40:24 crc kubenswrapper[4782]: I0202 10:40:24.067378 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:40:24 crc kubenswrapper[4782]: I0202 10:40:24.067398 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:40:24 crc kubenswrapper[4782]: I0202 10:40:24.067410 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:24Z","lastTransitionTime":"2026-02-02T10:40:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:40:24 crc kubenswrapper[4782]: I0202 10:40:24.172356 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:40:24 crc kubenswrapper[4782]: I0202 10:40:24.172416 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:40:24 crc kubenswrapper[4782]: I0202 10:40:24.172434 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:40:24 crc kubenswrapper[4782]: I0202 10:40:24.172499 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:40:24 crc kubenswrapper[4782]: I0202 10:40:24.172515 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:24Z","lastTransitionTime":"2026-02-02T10:40:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:40:24 crc kubenswrapper[4782]: I0202 10:40:24.282171 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:40:24 crc kubenswrapper[4782]: I0202 10:40:24.282245 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:40:24 crc kubenswrapper[4782]: I0202 10:40:24.282263 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:40:24 crc kubenswrapper[4782]: I0202 10:40:24.282291 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:40:24 crc kubenswrapper[4782]: I0202 10:40:24.282310 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:24Z","lastTransitionTime":"2026-02-02T10:40:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:40:24 crc kubenswrapper[4782]: I0202 10:40:24.385212 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:40:24 crc kubenswrapper[4782]: I0202 10:40:24.385288 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:40:24 crc kubenswrapper[4782]: I0202 10:40:24.385302 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:40:24 crc kubenswrapper[4782]: I0202 10:40:24.385327 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:40:24 crc kubenswrapper[4782]: I0202 10:40:24.385342 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:24Z","lastTransitionTime":"2026-02-02T10:40:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:40:24 crc kubenswrapper[4782]: I0202 10:40:24.489127 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:40:24 crc kubenswrapper[4782]: I0202 10:40:24.489202 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:40:24 crc kubenswrapper[4782]: I0202 10:40:24.489227 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:40:24 crc kubenswrapper[4782]: I0202 10:40:24.489254 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:40:24 crc kubenswrapper[4782]: I0202 10:40:24.489271 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:24Z","lastTransitionTime":"2026-02-02T10:40:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 10:40:24 crc kubenswrapper[4782]: I0202 10:40:24.592166 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 10:40:24 crc kubenswrapper[4782]: I0202 10:40:24.592207 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 10:40:24 crc kubenswrapper[4782]: I0202 10:40:24.592217 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 10:40:24 crc kubenswrapper[4782]: I0202 10:40:24.592233 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 10:40:24 crc kubenswrapper[4782]: I0202 10:40:24.592244 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:24Z","lastTransitionTime":"2026-02-02T10:40:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:40:24 crc kubenswrapper[4782]: I0202 10:40:24.694393 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:24 crc kubenswrapper[4782]: I0202 10:40:24.694473 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:24 crc kubenswrapper[4782]: I0202 10:40:24.694497 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:24 crc kubenswrapper[4782]: I0202 10:40:24.694531 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:24 crc kubenswrapper[4782]: I0202 10:40:24.694554 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:24Z","lastTransitionTime":"2026-02-02T10:40:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:24 crc kubenswrapper[4782]: I0202 10:40:24.797469 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:24 crc kubenswrapper[4782]: I0202 10:40:24.797518 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:24 crc kubenswrapper[4782]: I0202 10:40:24.797532 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:24 crc kubenswrapper[4782]: I0202 10:40:24.797553 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:24 crc kubenswrapper[4782]: I0202 10:40:24.797568 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:24Z","lastTransitionTime":"2026-02-02T10:40:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:24 crc kubenswrapper[4782]: I0202 10:40:24.821073 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:40:24 crc kubenswrapper[4782]: I0202 10:40:24.821117 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:40:24 crc kubenswrapper[4782]: I0202 10:40:24.821080 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tv4xc" Feb 02 10:40:24 crc kubenswrapper[4782]: E0202 10:40:24.821221 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:40:24 crc kubenswrapper[4782]: E0202 10:40:24.821331 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:40:24 crc kubenswrapper[4782]: E0202 10:40:24.821383 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tv4xc" podUID="4e23db96-3af7-4c29-b00f-5920a9431f01" Feb 02 10:40:24 crc kubenswrapper[4782]: I0202 10:40:24.833201 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 17:28:35.289084501 +0000 UTC Feb 02 10:40:24 crc kubenswrapper[4782]: I0202 10:40:24.900291 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:24 crc kubenswrapper[4782]: I0202 10:40:24.900342 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:24 crc kubenswrapper[4782]: I0202 10:40:24.900353 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:24 crc kubenswrapper[4782]: I0202 10:40:24.900371 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:24 crc kubenswrapper[4782]: I0202 10:40:24.900385 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:24Z","lastTransitionTime":"2026-02-02T10:40:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:40:25 crc kubenswrapper[4782]: I0202 10:40:25.003568 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:25 crc kubenswrapper[4782]: I0202 10:40:25.004026 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:25 crc kubenswrapper[4782]: I0202 10:40:25.004140 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:25 crc kubenswrapper[4782]: I0202 10:40:25.004544 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:25 crc kubenswrapper[4782]: I0202 10:40:25.004560 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:25Z","lastTransitionTime":"2026-02-02T10:40:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:25 crc kubenswrapper[4782]: I0202 10:40:25.107500 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:25 crc kubenswrapper[4782]: I0202 10:40:25.107545 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:25 crc kubenswrapper[4782]: I0202 10:40:25.107553 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:25 crc kubenswrapper[4782]: I0202 10:40:25.107585 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:25 crc kubenswrapper[4782]: I0202 10:40:25.107597 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:25Z","lastTransitionTime":"2026-02-02T10:40:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:25 crc kubenswrapper[4782]: I0202 10:40:25.211057 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:25 crc kubenswrapper[4782]: I0202 10:40:25.211132 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:25 crc kubenswrapper[4782]: I0202 10:40:25.211158 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:25 crc kubenswrapper[4782]: I0202 10:40:25.211188 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:25 crc kubenswrapper[4782]: I0202 10:40:25.211213 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:25Z","lastTransitionTime":"2026-02-02T10:40:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:40:25 crc kubenswrapper[4782]: I0202 10:40:25.314239 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:25 crc kubenswrapper[4782]: I0202 10:40:25.314324 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:25 crc kubenswrapper[4782]: I0202 10:40:25.314360 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:25 crc kubenswrapper[4782]: I0202 10:40:25.314394 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:25 crc kubenswrapper[4782]: I0202 10:40:25.314418 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:25Z","lastTransitionTime":"2026-02-02T10:40:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:25 crc kubenswrapper[4782]: I0202 10:40:25.417240 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:25 crc kubenswrapper[4782]: I0202 10:40:25.417283 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:25 crc kubenswrapper[4782]: I0202 10:40:25.417294 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:25 crc kubenswrapper[4782]: I0202 10:40:25.417308 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:25 crc kubenswrapper[4782]: I0202 10:40:25.417321 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:25Z","lastTransitionTime":"2026-02-02T10:40:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:25 crc kubenswrapper[4782]: I0202 10:40:25.519856 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:25 crc kubenswrapper[4782]: I0202 10:40:25.519922 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:25 crc kubenswrapper[4782]: I0202 10:40:25.519937 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:25 crc kubenswrapper[4782]: I0202 10:40:25.519955 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:25 crc kubenswrapper[4782]: I0202 10:40:25.519968 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:25Z","lastTransitionTime":"2026-02-02T10:40:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:40:25 crc kubenswrapper[4782]: I0202 10:40:25.622338 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:25 crc kubenswrapper[4782]: I0202 10:40:25.622382 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:25 crc kubenswrapper[4782]: I0202 10:40:25.622394 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:25 crc kubenswrapper[4782]: I0202 10:40:25.622415 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:25 crc kubenswrapper[4782]: I0202 10:40:25.622428 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:25Z","lastTransitionTime":"2026-02-02T10:40:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:25 crc kubenswrapper[4782]: I0202 10:40:25.725381 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:25 crc kubenswrapper[4782]: I0202 10:40:25.725445 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:25 crc kubenswrapper[4782]: I0202 10:40:25.725471 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:25 crc kubenswrapper[4782]: I0202 10:40:25.725499 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:25 crc kubenswrapper[4782]: I0202 10:40:25.725516 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:25Z","lastTransitionTime":"2026-02-02T10:40:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:25 crc kubenswrapper[4782]: I0202 10:40:25.820517 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:40:25 crc kubenswrapper[4782]: E0202 10:40:25.820762 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:40:25 crc kubenswrapper[4782]: I0202 10:40:25.829524 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:25 crc kubenswrapper[4782]: I0202 10:40:25.829613 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:25 crc kubenswrapper[4782]: I0202 10:40:25.829634 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:25 crc kubenswrapper[4782]: I0202 10:40:25.829724 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:25 crc kubenswrapper[4782]: I0202 10:40:25.829755 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:25Z","lastTransitionTime":"2026-02-02T10:40:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:25 crc kubenswrapper[4782]: I0202 10:40:25.834015 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 13:45:59.448927555 +0000 UTC Feb 02 10:40:25 crc kubenswrapper[4782]: I0202 10:40:25.932573 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:25 crc kubenswrapper[4782]: I0202 10:40:25.932618 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:25 crc kubenswrapper[4782]: I0202 10:40:25.932627 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:25 crc kubenswrapper[4782]: I0202 10:40:25.932662 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:25 crc kubenswrapper[4782]: I0202 10:40:25.932674 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:25Z","lastTransitionTime":"2026-02-02T10:40:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:40:26 crc kubenswrapper[4782]: I0202 10:40:26.037120 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:26 crc kubenswrapper[4782]: I0202 10:40:26.037222 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:26 crc kubenswrapper[4782]: I0202 10:40:26.037241 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:26 crc kubenswrapper[4782]: I0202 10:40:26.037271 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:26 crc kubenswrapper[4782]: I0202 10:40:26.037290 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:26Z","lastTransitionTime":"2026-02-02T10:40:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:26 crc kubenswrapper[4782]: I0202 10:40:26.141967 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:26 crc kubenswrapper[4782]: I0202 10:40:26.142019 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:26 crc kubenswrapper[4782]: I0202 10:40:26.142030 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:26 crc kubenswrapper[4782]: I0202 10:40:26.142053 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:26 crc kubenswrapper[4782]: I0202 10:40:26.142064 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:26Z","lastTransitionTime":"2026-02-02T10:40:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:26 crc kubenswrapper[4782]: I0202 10:40:26.245447 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:26 crc kubenswrapper[4782]: I0202 10:40:26.245508 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:26 crc kubenswrapper[4782]: I0202 10:40:26.245521 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:26 crc kubenswrapper[4782]: I0202 10:40:26.245544 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:26 crc kubenswrapper[4782]: I0202 10:40:26.245560 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:26Z","lastTransitionTime":"2026-02-02T10:40:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:40:26 crc kubenswrapper[4782]: I0202 10:40:26.349793 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:26 crc kubenswrapper[4782]: I0202 10:40:26.349848 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:26 crc kubenswrapper[4782]: I0202 10:40:26.349860 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:26 crc kubenswrapper[4782]: I0202 10:40:26.349880 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:26 crc kubenswrapper[4782]: I0202 10:40:26.349894 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:26Z","lastTransitionTime":"2026-02-02T10:40:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:26 crc kubenswrapper[4782]: I0202 10:40:26.452775 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:26 crc kubenswrapper[4782]: I0202 10:40:26.452820 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:26 crc kubenswrapper[4782]: I0202 10:40:26.452834 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:26 crc kubenswrapper[4782]: I0202 10:40:26.452856 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:26 crc kubenswrapper[4782]: I0202 10:40:26.452868 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:26Z","lastTransitionTime":"2026-02-02T10:40:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:26 crc kubenswrapper[4782]: I0202 10:40:26.533445 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4e23db96-3af7-4c29-b00f-5920a9431f01-metrics-certs\") pod \"network-metrics-daemon-tv4xc\" (UID: \"4e23db96-3af7-4c29-b00f-5920a9431f01\") " pod="openshift-multus/network-metrics-daemon-tv4xc" Feb 02 10:40:26 crc kubenswrapper[4782]: E0202 10:40:26.533869 4782 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 10:40:26 crc kubenswrapper[4782]: E0202 10:40:26.534030 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4e23db96-3af7-4c29-b00f-5920a9431f01-metrics-certs podName:4e23db96-3af7-4c29-b00f-5920a9431f01 nodeName:}" failed. No retries permitted until 2026-02-02 10:41:30.533995332 +0000 UTC m=+170.418188078 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4e23db96-3af7-4c29-b00f-5920a9431f01-metrics-certs") pod "network-metrics-daemon-tv4xc" (UID: "4e23db96-3af7-4c29-b00f-5920a9431f01") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 10:40:26 crc kubenswrapper[4782]: I0202 10:40:26.556866 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:26 crc kubenswrapper[4782]: I0202 10:40:26.556914 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:26 crc kubenswrapper[4782]: I0202 10:40:26.556924 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:26 crc kubenswrapper[4782]: I0202 10:40:26.556942 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:26 crc kubenswrapper[4782]: I0202 10:40:26.556953 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:26Z","lastTransitionTime":"2026-02-02T10:40:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:26 crc kubenswrapper[4782]: I0202 10:40:26.661226 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:26 crc kubenswrapper[4782]: I0202 10:40:26.661328 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:26 crc kubenswrapper[4782]: I0202 10:40:26.661350 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:26 crc kubenswrapper[4782]: I0202 10:40:26.661380 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:26 crc kubenswrapper[4782]: I0202 10:40:26.661400 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:26Z","lastTransitionTime":"2026-02-02T10:40:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:40:26 crc kubenswrapper[4782]: I0202 10:40:26.765900 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:26 crc kubenswrapper[4782]: I0202 10:40:26.765955 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:26 crc kubenswrapper[4782]: I0202 10:40:26.765968 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:26 crc kubenswrapper[4782]: I0202 10:40:26.765990 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:26 crc kubenswrapper[4782]: I0202 10:40:26.766004 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:26Z","lastTransitionTime":"2026-02-02T10:40:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:26 crc kubenswrapper[4782]: I0202 10:40:26.820558 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:40:26 crc kubenswrapper[4782]: I0202 10:40:26.820858 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:40:26 crc kubenswrapper[4782]: E0202 10:40:26.820858 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:40:26 crc kubenswrapper[4782]: E0202 10:40:26.820946 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:40:26 crc kubenswrapper[4782]: I0202 10:40:26.821165 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tv4xc" Feb 02 10:40:26 crc kubenswrapper[4782]: E0202 10:40:26.821867 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-tv4xc" podUID="4e23db96-3af7-4c29-b00f-5920a9431f01" Feb 02 10:40:26 crc kubenswrapper[4782]: I0202 10:40:26.822633 4782 scope.go:117] "RemoveContainer" containerID="697e13df65c6182d51c322accad67b62474eb9c869cb328aa09bc10e419af952" Feb 02 10:40:26 crc kubenswrapper[4782]: E0202 10:40:26.823012 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-prbrn_openshift-ovn-kubernetes(2642ee4e-c16a-4e6e-9654-a67666f1bff8)\"" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" podUID="2642ee4e-c16a-4e6e-9654-a67666f1bff8" Feb 02 10:40:26 crc kubenswrapper[4782]: I0202 10:40:26.834467 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 23:32:55.724144434 +0000 UTC Feb 02 10:40:26 crc kubenswrapper[4782]: I0202 10:40:26.868795 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:26 crc kubenswrapper[4782]: I0202 10:40:26.868829 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:26 crc kubenswrapper[4782]: I0202 10:40:26.868837 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:26 crc kubenswrapper[4782]: I0202 10:40:26.868850 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:26 crc kubenswrapper[4782]: I0202 10:40:26.868862 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:26Z","lastTransitionTime":"2026-02-02T10:40:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:26 crc kubenswrapper[4782]: I0202 10:40:26.971932 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:26 crc kubenswrapper[4782]: I0202 10:40:26.971980 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:26 crc kubenswrapper[4782]: I0202 10:40:26.971990 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:26 crc kubenswrapper[4782]: I0202 10:40:26.972006 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:26 crc kubenswrapper[4782]: I0202 10:40:26.972016 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:26Z","lastTransitionTime":"2026-02-02T10:40:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:40:27 crc kubenswrapper[4782]: I0202 10:40:27.075037 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:27 crc kubenswrapper[4782]: I0202 10:40:27.075077 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:27 crc kubenswrapper[4782]: I0202 10:40:27.075111 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:27 crc kubenswrapper[4782]: I0202 10:40:27.075126 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:27 crc kubenswrapper[4782]: I0202 10:40:27.075137 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:27Z","lastTransitionTime":"2026-02-02T10:40:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:27 crc kubenswrapper[4782]: I0202 10:40:27.177434 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:27 crc kubenswrapper[4782]: I0202 10:40:27.177503 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:27 crc kubenswrapper[4782]: I0202 10:40:27.177514 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:27 crc kubenswrapper[4782]: I0202 10:40:27.177532 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:27 crc kubenswrapper[4782]: I0202 10:40:27.177544 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:27Z","lastTransitionTime":"2026-02-02T10:40:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:27 crc kubenswrapper[4782]: I0202 10:40:27.279847 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:27 crc kubenswrapper[4782]: I0202 10:40:27.279902 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:27 crc kubenswrapper[4782]: I0202 10:40:27.279914 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:27 crc kubenswrapper[4782]: I0202 10:40:27.279940 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:27 crc kubenswrapper[4782]: I0202 10:40:27.279954 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:27Z","lastTransitionTime":"2026-02-02T10:40:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:40:27 crc kubenswrapper[4782]: I0202 10:40:27.382446 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:27 crc kubenswrapper[4782]: I0202 10:40:27.382609 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:27 crc kubenswrapper[4782]: I0202 10:40:27.382796 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:27 crc kubenswrapper[4782]: I0202 10:40:27.382824 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:27 crc kubenswrapper[4782]: I0202 10:40:27.382841 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:27Z","lastTransitionTime":"2026-02-02T10:40:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:27 crc kubenswrapper[4782]: I0202 10:40:27.485713 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:27 crc kubenswrapper[4782]: I0202 10:40:27.485764 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:27 crc kubenswrapper[4782]: I0202 10:40:27.485778 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:27 crc kubenswrapper[4782]: I0202 10:40:27.485795 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:27 crc kubenswrapper[4782]: I0202 10:40:27.485807 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:27Z","lastTransitionTime":"2026-02-02T10:40:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:27 crc kubenswrapper[4782]: I0202 10:40:27.588875 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:27 crc kubenswrapper[4782]: I0202 10:40:27.588919 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:27 crc kubenswrapper[4782]: I0202 10:40:27.588955 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:27 crc kubenswrapper[4782]: I0202 10:40:27.588970 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:27 crc kubenswrapper[4782]: I0202 10:40:27.588981 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:27Z","lastTransitionTime":"2026-02-02T10:40:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:40:27 crc kubenswrapper[4782]: I0202 10:40:27.691861 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:27 crc kubenswrapper[4782]: I0202 10:40:27.691945 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:27 crc kubenswrapper[4782]: I0202 10:40:27.691962 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:27 crc kubenswrapper[4782]: I0202 10:40:27.691985 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:27 crc kubenswrapper[4782]: I0202 10:40:27.692001 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:27Z","lastTransitionTime":"2026-02-02T10:40:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:27 crc kubenswrapper[4782]: I0202 10:40:27.795026 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:27 crc kubenswrapper[4782]: I0202 10:40:27.795097 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:27 crc kubenswrapper[4782]: I0202 10:40:27.795120 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:27 crc kubenswrapper[4782]: I0202 10:40:27.795148 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:27 crc kubenswrapper[4782]: I0202 10:40:27.795169 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:27Z","lastTransitionTime":"2026-02-02T10:40:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:27 crc kubenswrapper[4782]: I0202 10:40:27.820136 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:40:27 crc kubenswrapper[4782]: E0202 10:40:27.820326 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:40:27 crc kubenswrapper[4782]: I0202 10:40:27.835209 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 02:34:53.674641704 +0000 UTC Feb 02 10:40:27 crc kubenswrapper[4782]: I0202 10:40:27.897595 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:27 crc kubenswrapper[4782]: I0202 10:40:27.897666 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:27 crc kubenswrapper[4782]: I0202 10:40:27.897681 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:27 crc kubenswrapper[4782]: I0202 10:40:27.897702 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:27 crc kubenswrapper[4782]: I0202 10:40:27.897716 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:27Z","lastTransitionTime":"2026-02-02T10:40:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:28 crc kubenswrapper[4782]: I0202 10:40:28.000263 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:28 crc kubenswrapper[4782]: I0202 10:40:28.000314 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:28 crc kubenswrapper[4782]: I0202 10:40:28.000328 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:28 crc kubenswrapper[4782]: I0202 10:40:28.000346 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:28 crc kubenswrapper[4782]: I0202 10:40:28.000358 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:28Z","lastTransitionTime":"2026-02-02T10:40:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:40:28 crc kubenswrapper[4782]: I0202 10:40:28.103031 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:28 crc kubenswrapper[4782]: I0202 10:40:28.103097 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:28 crc kubenswrapper[4782]: I0202 10:40:28.103126 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:28 crc kubenswrapper[4782]: I0202 10:40:28.103154 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:28 crc kubenswrapper[4782]: I0202 10:40:28.103171 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:28Z","lastTransitionTime":"2026-02-02T10:40:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:28 crc kubenswrapper[4782]: I0202 10:40:28.206289 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:28 crc kubenswrapper[4782]: I0202 10:40:28.206360 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:28 crc kubenswrapper[4782]: I0202 10:40:28.206381 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:28 crc kubenswrapper[4782]: I0202 10:40:28.206404 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:28 crc kubenswrapper[4782]: I0202 10:40:28.206420 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:28Z","lastTransitionTime":"2026-02-02T10:40:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:28 crc kubenswrapper[4782]: I0202 10:40:28.309160 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:28 crc kubenswrapper[4782]: I0202 10:40:28.309210 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:28 crc kubenswrapper[4782]: I0202 10:40:28.309225 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:28 crc kubenswrapper[4782]: I0202 10:40:28.309242 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:28 crc kubenswrapper[4782]: I0202 10:40:28.309253 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:28Z","lastTransitionTime":"2026-02-02T10:40:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:40:28 crc kubenswrapper[4782]: I0202 10:40:28.411430 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:28 crc kubenswrapper[4782]: I0202 10:40:28.411471 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:28 crc kubenswrapper[4782]: I0202 10:40:28.411485 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:28 crc kubenswrapper[4782]: I0202 10:40:28.411502 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:28 crc kubenswrapper[4782]: I0202 10:40:28.411511 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:28Z","lastTransitionTime":"2026-02-02T10:40:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:28 crc kubenswrapper[4782]: I0202 10:40:28.514151 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:28 crc kubenswrapper[4782]: I0202 10:40:28.514210 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:28 crc kubenswrapper[4782]: I0202 10:40:28.514219 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:28 crc kubenswrapper[4782]: I0202 10:40:28.514233 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:28 crc kubenswrapper[4782]: I0202 10:40:28.514242 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:28Z","lastTransitionTime":"2026-02-02T10:40:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:28 crc kubenswrapper[4782]: I0202 10:40:28.617044 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:28 crc kubenswrapper[4782]: I0202 10:40:28.617121 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:28 crc kubenswrapper[4782]: I0202 10:40:28.617137 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:28 crc kubenswrapper[4782]: I0202 10:40:28.617160 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:28 crc kubenswrapper[4782]: I0202 10:40:28.617176 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:28Z","lastTransitionTime":"2026-02-02T10:40:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:40:28 crc kubenswrapper[4782]: I0202 10:40:28.719906 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:28 crc kubenswrapper[4782]: I0202 10:40:28.719946 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:28 crc kubenswrapper[4782]: I0202 10:40:28.719956 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:28 crc kubenswrapper[4782]: I0202 10:40:28.719974 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:28 crc kubenswrapper[4782]: I0202 10:40:28.719986 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:28Z","lastTransitionTime":"2026-02-02T10:40:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:28 crc kubenswrapper[4782]: I0202 10:40:28.820139 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tv4xc" Feb 02 10:40:28 crc kubenswrapper[4782]: I0202 10:40:28.820199 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:40:28 crc kubenswrapper[4782]: E0202 10:40:28.820303 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tv4xc" podUID="4e23db96-3af7-4c29-b00f-5920a9431f01" Feb 02 10:40:28 crc kubenswrapper[4782]: I0202 10:40:28.820503 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:40:28 crc kubenswrapper[4782]: E0202 10:40:28.820550 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:40:28 crc kubenswrapper[4782]: E0202 10:40:28.820677 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:40:28 crc kubenswrapper[4782]: I0202 10:40:28.821595 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:28 crc kubenswrapper[4782]: I0202 10:40:28.821650 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:28 crc kubenswrapper[4782]: I0202 10:40:28.821659 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:28 crc kubenswrapper[4782]: I0202 10:40:28.821674 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:28 crc kubenswrapper[4782]: I0202 10:40:28.821683 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:28Z","lastTransitionTime":"2026-02-02T10:40:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:28 crc kubenswrapper[4782]: I0202 10:40:28.835618 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 01:23:02.808481367 +0000 UTC Feb 02 10:40:28 crc kubenswrapper[4782]: I0202 10:40:28.924029 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:28 crc kubenswrapper[4782]: I0202 10:40:28.924090 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:28 crc kubenswrapper[4782]: I0202 10:40:28.924106 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:28 crc kubenswrapper[4782]: I0202 10:40:28.924128 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:28 crc kubenswrapper[4782]: I0202 10:40:28.924146 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:28Z","lastTransitionTime":"2026-02-02T10:40:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:40:29 crc kubenswrapper[4782]: I0202 10:40:29.036725 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:29 crc kubenswrapper[4782]: I0202 10:40:29.036769 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:29 crc kubenswrapper[4782]: I0202 10:40:29.036781 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:29 crc kubenswrapper[4782]: I0202 10:40:29.036798 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:29 crc kubenswrapper[4782]: I0202 10:40:29.036811 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:29Z","lastTransitionTime":"2026-02-02T10:40:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:29 crc kubenswrapper[4782]: I0202 10:40:29.140537 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:29 crc kubenswrapper[4782]: I0202 10:40:29.140581 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:29 crc kubenswrapper[4782]: I0202 10:40:29.140596 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:29 crc kubenswrapper[4782]: I0202 10:40:29.140614 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:29 crc kubenswrapper[4782]: I0202 10:40:29.140626 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:29Z","lastTransitionTime":"2026-02-02T10:40:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:29 crc kubenswrapper[4782]: I0202 10:40:29.243469 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:29 crc kubenswrapper[4782]: I0202 10:40:29.243516 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:29 crc kubenswrapper[4782]: I0202 10:40:29.243886 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:29 crc kubenswrapper[4782]: I0202 10:40:29.243931 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:29 crc kubenswrapper[4782]: I0202 10:40:29.244081 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:29Z","lastTransitionTime":"2026-02-02T10:40:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:40:29 crc kubenswrapper[4782]: I0202 10:40:29.347179 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:29 crc kubenswrapper[4782]: I0202 10:40:29.347463 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:29 crc kubenswrapper[4782]: I0202 10:40:29.347475 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:29 crc kubenswrapper[4782]: I0202 10:40:29.347491 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:29 crc kubenswrapper[4782]: I0202 10:40:29.347519 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:29Z","lastTransitionTime":"2026-02-02T10:40:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:29 crc kubenswrapper[4782]: I0202 10:40:29.450371 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:29 crc kubenswrapper[4782]: I0202 10:40:29.450434 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:29 crc kubenswrapper[4782]: I0202 10:40:29.450457 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:29 crc kubenswrapper[4782]: I0202 10:40:29.450484 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:29 crc kubenswrapper[4782]: I0202 10:40:29.450506 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:29Z","lastTransitionTime":"2026-02-02T10:40:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:29 crc kubenswrapper[4782]: I0202 10:40:29.553360 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:29 crc kubenswrapper[4782]: I0202 10:40:29.553400 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:29 crc kubenswrapper[4782]: I0202 10:40:29.553413 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:29 crc kubenswrapper[4782]: I0202 10:40:29.553432 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:29 crc kubenswrapper[4782]: I0202 10:40:29.553445 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:29Z","lastTransitionTime":"2026-02-02T10:40:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 10:40:29 crc kubenswrapper[4782]: I0202 10:40:29.657414 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:29 crc kubenswrapper[4782]: I0202 10:40:29.657453 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:29 crc kubenswrapper[4782]: I0202 10:40:29.657461 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:29 crc kubenswrapper[4782]: I0202 10:40:29.657474 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:29 crc kubenswrapper[4782]: I0202 10:40:29.657486 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:29Z","lastTransitionTime":"2026-02-02T10:40:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:29 crc kubenswrapper[4782]: I0202 10:40:29.728054 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 10:40:29 crc kubenswrapper[4782]: I0202 10:40:29.728101 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 10:40:29 crc kubenswrapper[4782]: I0202 10:40:29.728113 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 10:40:29 crc kubenswrapper[4782]: I0202 10:40:29.728129 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 10:40:29 crc kubenswrapper[4782]: I0202 10:40:29.728142 4782 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T10:40:29Z","lastTransitionTime":"2026-02-02T10:40:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 10:40:29 crc kubenswrapper[4782]: I0202 10:40:29.770900 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-8cq4r"] Feb 02 10:40:29 crc kubenswrapper[4782]: I0202 10:40:29.771321 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8cq4r" Feb 02 10:40:29 crc kubenswrapper[4782]: I0202 10:40:29.772983 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 02 10:40:29 crc kubenswrapper[4782]: I0202 10:40:29.774138 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 02 10:40:29 crc kubenswrapper[4782]: I0202 10:40:29.774217 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 02 10:40:29 crc kubenswrapper[4782]: I0202 10:40:29.775060 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 02 10:40:29 crc kubenswrapper[4782]: I0202 10:40:29.820949 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:40:29 crc kubenswrapper[4782]: E0202 10:40:29.821144 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:40:29 crc kubenswrapper[4782]: I0202 10:40:29.826266 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-fsqgq" podStartSLOduration=81.826248062 podStartE2EDuration="1m21.826248062s" podCreationTimestamp="2026-02-02 10:39:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:40:29.826204051 +0000 UTC m=+109.710396767" watchObservedRunningTime="2026-02-02 10:40:29.826248062 +0000 UTC m=+109.710440798" Feb 02 10:40:29 crc kubenswrapper[4782]: I0202 10:40:29.836501 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 13:52:52.704495709 +0000 UTC Feb 02 10:40:29 crc kubenswrapper[4782]: I0202 10:40:29.836560 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Feb 02 10:40:29 crc kubenswrapper[4782]: I0202 10:40:29.845074 4782 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 02 10:40:29 crc kubenswrapper[4782]: I0202 10:40:29.873250 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/f69040ea-c39f-4efe-8d3a-8f1fc06d7652-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-8cq4r\" (UID: \"f69040ea-c39f-4efe-8d3a-8f1fc06d7652\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8cq4r" Feb 02 10:40:29 crc kubenswrapper[4782]: I0202 10:40:29.873286 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f69040ea-c39f-4efe-8d3a-8f1fc06d7652-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-8cq4r\" (UID: \"f69040ea-c39f-4efe-8d3a-8f1fc06d7652\") " 
pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8cq4r" Feb 02 10:40:29 crc kubenswrapper[4782]: I0202 10:40:29.873338 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f69040ea-c39f-4efe-8d3a-8f1fc06d7652-service-ca\") pod \"cluster-version-operator-5c965bbfc6-8cq4r\" (UID: \"f69040ea-c39f-4efe-8d3a-8f1fc06d7652\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8cq4r" Feb 02 10:40:29 crc kubenswrapper[4782]: I0202 10:40:29.873406 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/f69040ea-c39f-4efe-8d3a-8f1fc06d7652-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-8cq4r\" (UID: \"f69040ea-c39f-4efe-8d3a-8f1fc06d7652\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8cq4r" Feb 02 10:40:29 crc kubenswrapper[4782]: I0202 10:40:29.873470 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f69040ea-c39f-4efe-8d3a-8f1fc06d7652-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-8cq4r\" (UID: \"f69040ea-c39f-4efe-8d3a-8f1fc06d7652\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8cq4r" Feb 02 10:40:29 crc kubenswrapper[4782]: I0202 10:40:29.883687 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=85.883613141 podStartE2EDuration="1m25.883613141s" podCreationTimestamp="2026-02-02 10:39:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:40:29.865188158 +0000 UTC m=+109.749380904" watchObservedRunningTime="2026-02-02 10:40:29.883613141 +0000 UTC m=+109.767805857" Feb 02 10:40:29 crc kubenswrapper[4782]: I0202 10:40:29.905994 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=58.905974755 podStartE2EDuration="58.905974755s" podCreationTimestamp="2026-02-02 10:39:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:40:29.884153516 +0000 UTC m=+109.768346242" watchObservedRunningTime="2026-02-02 10:40:29.905974755 +0000 UTC m=+109.790167471" Feb 02 10:40:29 crc kubenswrapper[4782]: I0202 10:40:29.930510 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podStartSLOduration=82.930492991 podStartE2EDuration="1m22.930492991s" podCreationTimestamp="2026-02-02 10:39:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:40:29.905819581 +0000 UTC m=+109.790012297" watchObservedRunningTime="2026-02-02 10:40:29.930492991 +0000 UTC m=+109.814685717" Feb 02 10:40:29 crc kubenswrapper[4782]: I0202 10:40:29.930614 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-8lwfx" podStartSLOduration=81.930609815 podStartE2EDuration="1m21.930609815s" podCreationTimestamp="2026-02-02 10:39:08 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:40:29.930029378 +0000 UTC m=+109.814222094" watchObservedRunningTime="2026-02-02 10:40:29.930609815 +0000 UTC m=+109.814802531" Feb 02 10:40:29 crc kubenswrapper[4782]: I0202 10:40:29.973946 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f69040ea-c39f-4efe-8d3a-8f1fc06d7652-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-8cq4r\" (UID: \"f69040ea-c39f-4efe-8d3a-8f1fc06d7652\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8cq4r" Feb 02 10:40:29 crc kubenswrapper[4782]: I0202 10:40:29.974203 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/f69040ea-c39f-4efe-8d3a-8f1fc06d7652-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-8cq4r\" (UID: \"f69040ea-c39f-4efe-8d3a-8f1fc06d7652\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8cq4r" Feb 02 10:40:29 crc kubenswrapper[4782]: I0202 10:40:29.974279 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f69040ea-c39f-4efe-8d3a-8f1fc06d7652-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-8cq4r\" (UID: \"f69040ea-c39f-4efe-8d3a-8f1fc06d7652\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8cq4r" Feb 02 10:40:29 crc kubenswrapper[4782]: I0202 10:40:29.974379 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f69040ea-c39f-4efe-8d3a-8f1fc06d7652-service-ca\") pod \"cluster-version-operator-5c965bbfc6-8cq4r\" (UID: \"f69040ea-c39f-4efe-8d3a-8f1fc06d7652\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8cq4r" Feb 02 10:40:29 crc kubenswrapper[4782]: I0202 10:40:29.974458 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/f69040ea-c39f-4efe-8d3a-8f1fc06d7652-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-8cq4r\" (UID: \"f69040ea-c39f-4efe-8d3a-8f1fc06d7652\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8cq4r" Feb 02 10:40:29 crc kubenswrapper[4782]: I0202 10:40:29.974319 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/f69040ea-c39f-4efe-8d3a-8f1fc06d7652-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-8cq4r\" (UID: \"f69040ea-c39f-4efe-8d3a-8f1fc06d7652\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8cq4r" Feb 02 10:40:29 crc kubenswrapper[4782]: I0202 10:40:29.974552 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/f69040ea-c39f-4efe-8d3a-8f1fc06d7652-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-8cq4r\" (UID: \"f69040ea-c39f-4efe-8d3a-8f1fc06d7652\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8cq4r" Feb 02 10:40:29 crc kubenswrapper[4782]: I0202 10:40:29.975381 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f69040ea-c39f-4efe-8d3a-8f1fc06d7652-service-ca\") pod 
\"cluster-version-operator-5c965bbfc6-8cq4r\" (UID: \"f69040ea-c39f-4efe-8d3a-8f1fc06d7652\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8cq4r" Feb 02 10:40:29 crc kubenswrapper[4782]: I0202 10:40:29.979807 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f69040ea-c39f-4efe-8d3a-8f1fc06d7652-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-8cq4r\" (UID: \"f69040ea-c39f-4efe-8d3a-8f1fc06d7652\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8cq4r" Feb 02 10:40:29 crc kubenswrapper[4782]: I0202 10:40:29.983852 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=85.983833595 podStartE2EDuration="1m25.983833595s" podCreationTimestamp="2026-02-02 10:39:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:40:29.980690596 +0000 UTC m=+109.864883312" watchObservedRunningTime="2026-02-02 10:40:29.983833595 +0000 UTC m=+109.868026311" Feb 02 10:40:29 crc kubenswrapper[4782]: I0202 10:40:29.984309 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-x49wn" podStartSLOduration=81.984299149 podStartE2EDuration="1m21.984299149s" podCreationTimestamp="2026-02-02 10:39:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:40:29.949263594 +0000 UTC m=+109.833456310" watchObservedRunningTime="2026-02-02 10:40:29.984299149 +0000 UTC m=+109.868491865" Feb 02 10:40:29 crc kubenswrapper[4782]: I0202 10:40:29.991244 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f69040ea-c39f-4efe-8d3a-8f1fc06d7652-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-8cq4r\" (UID: \"f69040ea-c39f-4efe-8d3a-8f1fc06d7652\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8cq4r" Feb 02 10:40:30 crc kubenswrapper[4782]: I0202 10:40:30.033774 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-fptzv" podStartSLOduration=83.033755913 podStartE2EDuration="1m23.033755913s" podCreationTimestamp="2026-02-02 10:39:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:40:30.03296553 +0000 UTC m=+109.917158256" watchObservedRunningTime="2026-02-02 10:40:30.033755913 +0000 UTC m=+109.917948649" Feb 02 10:40:30 crc kubenswrapper[4782]: I0202 10:40:30.042604 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-thvm5" podStartSLOduration=83.042587483 podStartE2EDuration="1m23.042587483s" podCreationTimestamp="2026-02-02 10:39:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:40:30.042233253 +0000 UTC m=+109.926425969" watchObservedRunningTime="2026-02-02 10:40:30.042587483 +0000 UTC m=+109.926780199" Feb 02 10:40:30 crc kubenswrapper[4782]: I0202 10:40:30.051796 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=44.051778864 
podStartE2EDuration="44.051778864s" podCreationTimestamp="2026-02-02 10:39:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:40:30.050888139 +0000 UTC m=+109.935080855" watchObservedRunningTime="2026-02-02 10:40:30.051778864 +0000 UTC m=+109.935971580" Feb 02 10:40:30 crc kubenswrapper[4782]: I0202 10:40:30.064302 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=89.064287569 podStartE2EDuration="1m29.064287569s" podCreationTimestamp="2026-02-02 10:39:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:40:30.063893088 +0000 UTC m=+109.948085804" watchObservedRunningTime="2026-02-02 10:40:30.064287569 +0000 UTC m=+109.948480285" Feb 02 10:40:30 crc kubenswrapper[4782]: I0202 10:40:30.094105 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8cq4r" Feb 02 10:40:30 crc kubenswrapper[4782]: I0202 10:40:30.386235 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8cq4r" event={"ID":"f69040ea-c39f-4efe-8d3a-8f1fc06d7652","Type":"ContainerStarted","Data":"c3d093a874f66d9dcad7c58c590cba8445470e229ae98667fe451e55e36c30de"} Feb 02 10:40:30 crc kubenswrapper[4782]: I0202 10:40:30.820708 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:40:30 crc kubenswrapper[4782]: E0202 10:40:30.821692 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:40:30 crc kubenswrapper[4782]: I0202 10:40:30.821863 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tv4xc" Feb 02 10:40:30 crc kubenswrapper[4782]: E0202 10:40:30.821928 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tv4xc" podUID="4e23db96-3af7-4c29-b00f-5920a9431f01" Feb 02 10:40:30 crc kubenswrapper[4782]: I0202 10:40:30.822013 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:40:30 crc kubenswrapper[4782]: E0202 10:40:30.822132 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:40:31 crc kubenswrapper[4782]: I0202 10:40:31.395189 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8cq4r" event={"ID":"f69040ea-c39f-4efe-8d3a-8f1fc06d7652","Type":"ContainerStarted","Data":"5f68cc8e5fb8e066a1514b75c86a87c935900172bef6bf00384642e7ea5e9a0e"} Feb 02 10:40:31 crc kubenswrapper[4782]: I0202 10:40:31.409414 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8cq4r" podStartSLOduration=83.409399241 podStartE2EDuration="1m23.409399241s" podCreationTimestamp="2026-02-02 10:39:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:40:31.408742332 +0000 UTC m=+111.292935058" watchObservedRunningTime="2026-02-02 10:40:31.409399241 +0000 UTC m=+111.293591957" Feb 02 10:40:31 crc kubenswrapper[4782]: I0202 10:40:31.821025 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:40:31 crc kubenswrapper[4782]: E0202 10:40:31.821141 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:40:32 crc kubenswrapper[4782]: I0202 10:40:32.820969 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tv4xc" Feb 02 10:40:32 crc kubenswrapper[4782]: I0202 10:40:32.820979 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:40:32 crc kubenswrapper[4782]: I0202 10:40:32.820993 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:40:32 crc kubenswrapper[4782]: E0202 10:40:32.821166 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tv4xc" podUID="4e23db96-3af7-4c29-b00f-5920a9431f01" Feb 02 10:40:32 crc kubenswrapper[4782]: E0202 10:40:32.821355 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:40:32 crc kubenswrapper[4782]: E0202 10:40:32.821464 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:40:33 crc kubenswrapper[4782]: I0202 10:40:33.828968 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:40:33 crc kubenswrapper[4782]: E0202 10:40:33.829183 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:40:34 crc kubenswrapper[4782]: I0202 10:40:34.821875 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:40:34 crc kubenswrapper[4782]: I0202 10:40:34.821986 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:40:34 crc kubenswrapper[4782]: E0202 10:40:34.822082 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:40:34 crc kubenswrapper[4782]: E0202 10:40:34.822218 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:40:34 crc kubenswrapper[4782]: I0202 10:40:34.822017 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tv4xc" Feb 02 10:40:34 crc kubenswrapper[4782]: E0202 10:40:34.822374 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tv4xc" podUID="4e23db96-3af7-4c29-b00f-5920a9431f01" Feb 02 10:40:35 crc kubenswrapper[4782]: I0202 10:40:35.820522 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:40:35 crc kubenswrapper[4782]: E0202 10:40:35.820800 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:40:36 crc kubenswrapper[4782]: I0202 10:40:36.820858 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tv4xc" Feb 02 10:40:36 crc kubenswrapper[4782]: I0202 10:40:36.820925 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:40:36 crc kubenswrapper[4782]: I0202 10:40:36.821012 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:40:36 crc kubenswrapper[4782]: E0202 10:40:36.821132 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tv4xc" podUID="4e23db96-3af7-4c29-b00f-5920a9431f01" Feb 02 10:40:36 crc kubenswrapper[4782]: E0202 10:40:36.821290 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:40:36 crc kubenswrapper[4782]: E0202 10:40:36.821370 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:40:37 crc kubenswrapper[4782]: I0202 10:40:37.820902 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:40:37 crc kubenswrapper[4782]: E0202 10:40:37.821096 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:40:38 crc kubenswrapper[4782]: I0202 10:40:38.821005 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-tv4xc" Feb 02 10:40:38 crc kubenswrapper[4782]: I0202 10:40:38.821095 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:40:38 crc kubenswrapper[4782]: E0202 10:40:38.821242 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tv4xc" podUID="4e23db96-3af7-4c29-b00f-5920a9431f01" Feb 02 10:40:38 crc kubenswrapper[4782]: I0202 10:40:38.821025 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:40:38 crc kubenswrapper[4782]: E0202 10:40:38.821375 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:40:38 crc kubenswrapper[4782]: E0202 10:40:38.821435 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:40:39 crc kubenswrapper[4782]: I0202 10:40:39.821168 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:40:39 crc kubenswrapper[4782]: E0202 10:40:39.822118 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:40:40 crc kubenswrapper[4782]: I0202 10:40:40.821083 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:40:40 crc kubenswrapper[4782]: I0202 10:40:40.821148 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:40:40 crc kubenswrapper[4782]: E0202 10:40:40.821693 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:40:40 crc kubenswrapper[4782]: I0202 10:40:40.822093 4782 scope.go:117] "RemoveContainer" containerID="697e13df65c6182d51c322accad67b62474eb9c869cb328aa09bc10e419af952" Feb 02 10:40:40 crc kubenswrapper[4782]: E0202 10:40:40.822211 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:40:40 crc kubenswrapper[4782]: E0202 10:40:40.822378 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-prbrn_openshift-ovn-kubernetes(2642ee4e-c16a-4e6e-9654-a67666f1bff8)\"" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" podUID="2642ee4e-c16a-4e6e-9654-a67666f1bff8" Feb 02 10:40:40 crc kubenswrapper[4782]: I0202 10:40:40.822498 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tv4xc" Feb 02 10:40:40 crc kubenswrapper[4782]: E0202 10:40:40.822583 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tv4xc" podUID="4e23db96-3af7-4c29-b00f-5920a9431f01" Feb 02 10:40:40 crc kubenswrapper[4782]: E0202 10:40:40.856605 4782 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Feb 02 10:40:40 crc kubenswrapper[4782]: E0202 10:40:40.980264 4782 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 02 10:40:41 crc kubenswrapper[4782]: I0202 10:40:41.845256 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:40:41 crc kubenswrapper[4782]: E0202 10:40:41.845473 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:40:42 crc kubenswrapper[4782]: I0202 10:40:42.820524 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tv4xc" Feb 02 10:40:42 crc kubenswrapper[4782]: E0202 10:40:42.820781 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tv4xc" podUID="4e23db96-3af7-4c29-b00f-5920a9431f01" Feb 02 10:40:42 crc kubenswrapper[4782]: I0202 10:40:42.820524 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:40:42 crc kubenswrapper[4782]: E0202 10:40:42.820910 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:40:42 crc kubenswrapper[4782]: I0202 10:40:42.820981 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:40:42 crc kubenswrapper[4782]: E0202 10:40:42.821072 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:40:43 crc kubenswrapper[4782]: I0202 10:40:43.821070 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:40:43 crc kubenswrapper[4782]: E0202 10:40:43.821287 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:40:44 crc kubenswrapper[4782]: I0202 10:40:44.820851 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:40:44 crc kubenswrapper[4782]: I0202 10:40:44.820909 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:40:44 crc kubenswrapper[4782]: I0202 10:40:44.820854 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tv4xc" Feb 02 10:40:44 crc kubenswrapper[4782]: E0202 10:40:44.821098 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:40:44 crc kubenswrapper[4782]: E0202 10:40:44.821255 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:40:44 crc kubenswrapper[4782]: E0202 10:40:44.821328 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tv4xc" podUID="4e23db96-3af7-4c29-b00f-5920a9431f01" Feb 02 10:40:45 crc kubenswrapper[4782]: I0202 10:40:45.820483 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:40:45 crc kubenswrapper[4782]: E0202 10:40:45.820750 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:40:45 crc kubenswrapper[4782]: E0202 10:40:45.981560 4782 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 02 10:40:46 crc kubenswrapper[4782]: I0202 10:40:46.821003 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tv4xc" Feb 02 10:40:46 crc kubenswrapper[4782]: I0202 10:40:46.821615 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:40:46 crc kubenswrapper[4782]: E0202 10:40:46.821674 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tv4xc" podUID="4e23db96-3af7-4c29-b00f-5920a9431f01" Feb 02 10:40:46 crc kubenswrapper[4782]: E0202 10:40:46.821840 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:40:46 crc kubenswrapper[4782]: I0202 10:40:46.821141 4782 util.go:30] "No sandbox for pod can be found. 
Feb 02 10:40:46 crc kubenswrapper[4782]: E0202 10:40:46.821946 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 02 10:40:47 crc kubenswrapper[4782]: I0202 10:40:47.457920 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-fsqgq_04d9744a-e730-45b4-9f0c-bbb5b02cd311/kube-multus/1.log"
Feb 02 10:40:47 crc kubenswrapper[4782]: I0202 10:40:47.458620 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-fsqgq_04d9744a-e730-45b4-9f0c-bbb5b02cd311/kube-multus/0.log"
Feb 02 10:40:47 crc kubenswrapper[4782]: I0202 10:40:47.458691 4782 generic.go:334] "Generic (PLEG): container finished" podID="04d9744a-e730-45b4-9f0c-bbb5b02cd311" containerID="b95cef2b56d3accf4543313f016af02ffe4af02c759d6f688c31f7d9749e0aad" exitCode=1
Feb 02 10:40:47 crc kubenswrapper[4782]: I0202 10:40:47.458738 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-fsqgq" event={"ID":"04d9744a-e730-45b4-9f0c-bbb5b02cd311","Type":"ContainerDied","Data":"b95cef2b56d3accf4543313f016af02ffe4af02c759d6f688c31f7d9749e0aad"}
Feb 02 10:40:47 crc kubenswrapper[4782]: I0202 10:40:47.458783 4782 scope.go:117] "RemoveContainer" containerID="9855ee45d6787f239c2099e99544f50b4939dc9037eb5f4cfe36af9a0bed8937"
Feb 02 10:40:47 crc kubenswrapper[4782]: I0202 10:40:47.464426 4782 scope.go:117] "RemoveContainer" containerID="b95cef2b56d3accf4543313f016af02ffe4af02c759d6f688c31f7d9749e0aad"
Feb 02 10:40:47 crc kubenswrapper[4782]: E0202 10:40:47.465441 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-fsqgq_openshift-multus(04d9744a-e730-45b4-9f0c-bbb5b02cd311)\"" pod="openshift-multus/multus-fsqgq" podUID="04d9744a-e730-45b4-9f0c-bbb5b02cd311"
Feb 02 10:40:47 crc kubenswrapper[4782]: I0202 10:40:47.820820 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 10:40:47 crc kubenswrapper[4782]: E0202 10:40:47.820994 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 02 10:40:48 crc kubenswrapper[4782]: I0202 10:40:48.464308 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-fsqgq_04d9744a-e730-45b4-9f0c-bbb5b02cd311/kube-multus/1.log"
Feb 02 10:40:48 crc kubenswrapper[4782]: I0202 10:40:48.820954 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tv4xc"
Feb 02 10:40:48 crc kubenswrapper[4782]: I0202 10:40:48.820975 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
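The multus-fsqgq lines above are one full crash-loop turn: PLEG notices the container exited with exitCode=1, the previous dead instance is pruned (RemoveContainer), and the restart is refused with CrashLoopBackOff "back-off 10s". That 10s is the first step of kubelet's per-container restart back-off, which doubles after every crash up to a cap; later in the log the container is retried once the window expires. A sketch of the schedule, assuming the commonly cited kubelet defaults of a 10s initial delay and a 5m ceiling (the exact constants vary by version):

// Reproduce the crash-loop back-off schedule behind "back-off 10s
// restarting failed container=...": delay doubles per crash, capped.
package main

import (
	"fmt"
	"time"
)

// crashLoopDelays returns the wait before each of the first n restarts.
func crashLoopDelays(initial, max time.Duration, n int) []time.Duration {
	delays := make([]time.Duration, 0, n)
	d := initial
	for i := 0; i < n; i++ {
		delays = append(delays, d)
		d *= 2
		if d > max {
			d = max
		}
	}
	return delays
}

func main() {
	// Prints: [10s 20s 40s 1m20s 2m40s 5m0s 5m0s 5m0s]
	fmt.Println(crashLoopDelays(10*time.Second, 5*time.Minute, 8))
}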
Feb 02 10:40:48 crc kubenswrapper[4782]: I0202 10:40:48.821110 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 10:40:48 crc kubenswrapper[4782]: E0202 10:40:48.821202 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tv4xc" podUID="4e23db96-3af7-4c29-b00f-5920a9431f01"
Feb 02 10:40:48 crc kubenswrapper[4782]: E0202 10:40:48.821308 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 02 10:40:48 crc kubenswrapper[4782]: E0202 10:40:48.821504 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 02 10:40:49 crc kubenswrapper[4782]: I0202 10:40:49.820620 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 10:40:49 crc kubenswrapper[4782]: E0202 10:40:49.821074 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 02 10:40:50 crc kubenswrapper[4782]: I0202 10:40:50.820198 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 10:40:50 crc kubenswrapper[4782]: I0202 10:40:50.820196 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 10:40:50 crc kubenswrapper[4782]: I0202 10:40:50.820265 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tv4xc"
Feb 02 10:40:50 crc kubenswrapper[4782]: E0202 10:40:50.821172 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:40:50 crc kubenswrapper[4782]: E0202 10:40:50.821279 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:40:50 crc kubenswrapper[4782]: E0202 10:40:50.821407 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tv4xc" podUID="4e23db96-3af7-4c29-b00f-5920a9431f01" Feb 02 10:40:50 crc kubenswrapper[4782]: E0202 10:40:50.982333 4782 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 02 10:40:51 crc kubenswrapper[4782]: I0202 10:40:51.820762 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:40:51 crc kubenswrapper[4782]: E0202 10:40:51.820963 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:40:52 crc kubenswrapper[4782]: I0202 10:40:52.821133 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tv4xc" Feb 02 10:40:52 crc kubenswrapper[4782]: I0202 10:40:52.821231 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:40:52 crc kubenswrapper[4782]: E0202 10:40:52.821275 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tv4xc" podUID="4e23db96-3af7-4c29-b00f-5920a9431f01" Feb 02 10:40:52 crc kubenswrapper[4782]: I0202 10:40:52.821154 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:40:52 crc kubenswrapper[4782]: E0202 10:40:52.821425 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:40:52 crc kubenswrapper[4782]: E0202 10:40:52.821512 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:40:53 crc kubenswrapper[4782]: I0202 10:40:53.820473 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:40:53 crc kubenswrapper[4782]: E0202 10:40:53.820752 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:40:53 crc kubenswrapper[4782]: I0202 10:40:53.821892 4782 scope.go:117] "RemoveContainer" containerID="697e13df65c6182d51c322accad67b62474eb9c869cb328aa09bc10e419af952" Feb 02 10:40:54 crc kubenswrapper[4782]: I0202 10:40:54.492748 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-prbrn_2642ee4e-c16a-4e6e-9654-a67666f1bff8/ovnkube-controller/3.log" Feb 02 10:40:54 crc kubenswrapper[4782]: I0202 10:40:54.496203 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" event={"ID":"2642ee4e-c16a-4e6e-9654-a67666f1bff8","Type":"ContainerStarted","Data":"81049d5e41dffab57f45208f4ffca5c6ef978d399f1eb8cf944ec8e64e71bc5b"} Feb 02 10:40:54 crc kubenswrapper[4782]: I0202 10:40:54.496814 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" Feb 02 10:40:54 crc kubenswrapper[4782]: I0202 10:40:54.530237 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" podStartSLOduration=106.530197389 podStartE2EDuration="1m46.530197389s" podCreationTimestamp="2026-02-02 10:39:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:40:54.527957704 +0000 UTC m=+134.412150440" watchObservedRunningTime="2026-02-02 10:40:54.530197389 +0000 UTC m=+134.414390145" Feb 02 10:40:54 crc kubenswrapper[4782]: I0202 10:40:54.724386 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-tv4xc"] Feb 02 10:40:54 crc kubenswrapper[4782]: I0202 10:40:54.724567 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tv4xc" Feb 02 10:40:54 crc kubenswrapper[4782]: E0202 10:40:54.724755 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-tv4xc" podUID="4e23db96-3af7-4c29-b00f-5920a9431f01" Feb 02 10:40:54 crc kubenswrapper[4782]: I0202 10:40:54.820797 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:40:54 crc kubenswrapper[4782]: I0202 10:40:54.820798 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:40:54 crc kubenswrapper[4782]: E0202 10:40:54.820947 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:40:54 crc kubenswrapper[4782]: E0202 10:40:54.821038 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:40:55 crc kubenswrapper[4782]: I0202 10:40:55.820604 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tv4xc" Feb 02 10:40:55 crc kubenswrapper[4782]: I0202 10:40:55.820634 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:40:55 crc kubenswrapper[4782]: E0202 10:40:55.820794 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tv4xc" podUID="4e23db96-3af7-4c29-b00f-5920a9431f01" Feb 02 10:40:55 crc kubenswrapper[4782]: E0202 10:40:55.820838 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:40:55 crc kubenswrapper[4782]: E0202 10:40:55.983399 4782 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 02 10:40:56 crc kubenswrapper[4782]: I0202 10:40:56.820912 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:40:56 crc kubenswrapper[4782]: I0202 10:40:56.820969 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:40:56 crc kubenswrapper[4782]: E0202 10:40:56.822018 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:40:56 crc kubenswrapper[4782]: E0202 10:40:56.822413 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:40:57 crc kubenswrapper[4782]: I0202 10:40:57.820454 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:40:57 crc kubenswrapper[4782]: I0202 10:40:57.820525 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tv4xc" Feb 02 10:40:57 crc kubenswrapper[4782]: E0202 10:40:57.820712 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:40:57 crc kubenswrapper[4782]: E0202 10:40:57.820846 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tv4xc" podUID="4e23db96-3af7-4c29-b00f-5920a9431f01" Feb 02 10:40:58 crc kubenswrapper[4782]: I0202 10:40:58.820410 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:40:58 crc kubenswrapper[4782]: I0202 10:40:58.820591 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:40:58 crc kubenswrapper[4782]: E0202 10:40:58.821977 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:40:58 crc kubenswrapper[4782]: E0202 10:40:58.822087 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:40:59 crc kubenswrapper[4782]: I0202 10:40:59.820611 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:40:59 crc kubenswrapper[4782]: I0202 10:40:59.821269 4782 scope.go:117] "RemoveContainer" containerID="b95cef2b56d3accf4543313f016af02ffe4af02c759d6f688c31f7d9749e0aad" Feb 02 10:40:59 crc kubenswrapper[4782]: I0202 10:40:59.820611 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tv4xc" Feb 02 10:40:59 crc kubenswrapper[4782]: E0202 10:40:59.821539 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:40:59 crc kubenswrapper[4782]: E0202 10:40:59.821683 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tv4xc" podUID="4e23db96-3af7-4c29-b00f-5920a9431f01" Feb 02 10:41:00 crc kubenswrapper[4782]: I0202 10:41:00.520885 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-fsqgq_04d9744a-e730-45b4-9f0c-bbb5b02cd311/kube-multus/1.log" Feb 02 10:41:00 crc kubenswrapper[4782]: I0202 10:41:00.520956 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-fsqgq" event={"ID":"04d9744a-e730-45b4-9f0c-bbb5b02cd311","Type":"ContainerStarted","Data":"4fde6ad054eb082a082a2907b3951afa7c993e3cd3c0464f51b2ceec9802143d"} Feb 02 10:41:00 crc kubenswrapper[4782]: I0202 10:41:00.812858 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" Feb 02 10:41:00 crc kubenswrapper[4782]: I0202 10:41:00.821893 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:41:00 crc kubenswrapper[4782]: I0202 10:41:00.821958 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:41:00 crc kubenswrapper[4782]: E0202 10:41:00.822120 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:41:00 crc kubenswrapper[4782]: E0202 10:41:00.822274 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:41:00 crc kubenswrapper[4782]: E0202 10:41:00.985560 4782 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 02 10:41:01 crc kubenswrapper[4782]: I0202 10:41:01.821001 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:41:01 crc kubenswrapper[4782]: E0202 10:41:01.821481 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:41:01 crc kubenswrapper[4782]: I0202 10:41:01.820995 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tv4xc" Feb 02 10:41:01 crc kubenswrapper[4782]: E0202 10:41:01.821788 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tv4xc" podUID="4e23db96-3af7-4c29-b00f-5920a9431f01" Feb 02 10:41:02 crc kubenswrapper[4782]: I0202 10:41:02.820731 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:41:02 crc kubenswrapper[4782]: I0202 10:41:02.820748 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:41:02 crc kubenswrapper[4782]: E0202 10:41:02.820967 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:41:02 crc kubenswrapper[4782]: E0202 10:41:02.821054 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:41:03 crc kubenswrapper[4782]: I0202 10:41:03.820567 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 10:41:03 crc kubenswrapper[4782]: I0202 10:41:03.820566 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tv4xc" Feb 02 10:41:03 crc kubenswrapper[4782]: E0202 10:41:03.821372 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 10:41:03 crc kubenswrapper[4782]: E0202 10:41:03.821573 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tv4xc" podUID="4e23db96-3af7-4c29-b00f-5920a9431f01" Feb 02 10:41:04 crc kubenswrapper[4782]: I0202 10:41:04.821011 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 10:41:04 crc kubenswrapper[4782]: E0202 10:41:04.821299 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 10:41:04 crc kubenswrapper[4782]: I0202 10:41:04.821679 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:41:04 crc kubenswrapper[4782]: E0202 10:41:04.821823 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 10:41:05 crc kubenswrapper[4782]: I0202 10:41:05.821018 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-tv4xc"
Feb 02 10:41:05 crc kubenswrapper[4782]: E0202 10:41:05.821353 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tv4xc" podUID="4e23db96-3af7-4c29-b00f-5920a9431f01"
Feb 02 10:41:05 crc kubenswrapper[4782]: I0202 10:41:05.821623 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 10:41:05 crc kubenswrapper[4782]: E0202 10:41:05.821939 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 02 10:41:06 crc kubenswrapper[4782]: I0202 10:41:06.821912 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 10:41:06 crc kubenswrapper[4782]: I0202 10:41:06.821913 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 10:41:06 crc kubenswrapper[4782]: I0202 10:41:06.825720 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Feb 02 10:41:06 crc kubenswrapper[4782]: I0202 10:41:06.826243 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Feb 02 10:41:06 crc kubenswrapper[4782]: I0202 10:41:06.825978 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Feb 02 10:41:06 crc kubenswrapper[4782]: I0202 10:41:06.826610 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Feb 02 10:41:07 crc kubenswrapper[4782]: I0202 10:41:07.820765 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tv4xc"
Feb 02 10:41:07 crc kubenswrapper[4782]: I0202 10:41:07.820832 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
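The reflector.go:368 "Caches populated" lines show kubelet starting a dedicated, single-object watch for each ConfigMap and Secret that the pending pods reference; the volume mounts below wait on these caches. Roughly the same scoped ListWatch can be built with client-go. A sketch, assuming in-cluster credentials and using the object named in the first of those log lines:

// Build a watch cache for exactly one ConfigMap, the way kubelet scopes
// one reflector per object a pod references, and wait for it to sync.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg, err := rest.InClusterConfig() // assumes running inside the cluster
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Restrict the informer to the single object from the log line.
	factory := informers.NewSharedInformerFactoryWithOptions(client, 0,
		informers.WithNamespace("openshift-network-diagnostics"),
		informers.WithTweakListOptions(func(o *metav1.ListOptions) {
			o.FieldSelector = "metadata.name=kube-root-ca.crt"
		}))
	informer := factory.Core().V1().ConfigMaps().Informer()
	_ = informer // registered with the factory by the call above

	ctx, cancel := context.WithCancel(context.Background())
	defer cancel()
	factory.Start(ctx.Done())
	// The moment WaitForCacheSync returns true corresponds to the
	// "Caches populated for *v1.ConfigMap ..." log line.
	for typ, ok := range factory.WaitForCacheSync(ctx.Done()) {
		fmt.Println(typ, "synced:", ok)
	}
}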
Feb 02 10:41:07 crc kubenswrapper[4782]: I0202 10:41:07.824623 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c"
Feb 02 10:41:07 crc kubenswrapper[4782]: I0202 10:41:07.824909 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Feb 02 10:41:08 crc kubenswrapper[4782]: I0202 10:41:08.907954 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 10:41:08 crc kubenswrapper[4782]: I0202 10:41:08.908126 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 10:41:08 crc kubenswrapper[4782]: I0202 10:41:08.908174 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 10:41:08 crc kubenswrapper[4782]: I0202 10:41:08.908217 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 10:41:08 crc kubenswrapper[4782]: E0202 10:41:08.908273 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:43:10.908216135 +0000 UTC m=+270.792408861 (durationBeforeRetry 2m2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
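Two separate problems are packed into that one failure record: the TearDown itself fails because the kubevirt.io.hostpath-provisioner CSI driver is not currently registered with kubelet, and the retry is deferred by the volume manager's per-operation exponential backoff, whose ceiling is exactly the "durationBeforeRetry 2m2s" shown (10:41:08 + 2m2s = 10:43:10). A sketch of that schedule, assuming the upstream defaults of a 500ms initial delay doubling up to a 2m2s cap (constants are assumptions, not values read from this build):

// Show how repeated failures of one volume operation walk the backoff
// up to the 2m2s ceiling seen in the nestedpendingoperations log line.
package main

import (
	"fmt"
	"time"
)

func main() {
	const (
		initialDurationBeforeRetry = 500 * time.Millisecond
		maxDurationBeforeRetry     = 2*time.Minute + 2*time.Second
	)
	d := initialDurationBeforeRetry
	for attempt := 1; attempt <= 10; attempt++ {
		fmt.Printf("attempt %2d: wait %v before retry\n", attempt, d)
		d *= 2
		if d > maxDurationBeforeRetry {
			d = maxDurationBeforeRetry // the log shows an op pinned here
		}
	}
}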
Feb 02 10:41:08 crc kubenswrapper[4782]: I0202 10:41:08.908349 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 10:41:08 crc kubenswrapper[4782]: I0202 10:41:08.912724 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 10:41:08 crc kubenswrapper[4782]: I0202 10:41:08.914698 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 10:41:08 crc kubenswrapper[4782]: I0202 10:41:08.916489 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 10:41:08 crc kubenswrapper[4782]: I0202 10:41:08.924912 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 10:41:08 crc kubenswrapper[4782]: I0202 10:41:08.939626 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 10:41:08 crc kubenswrapper[4782]: I0202 10:41:08.944459 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 10:41:09 crc kubenswrapper[4782]: I0202 10:41:09.048554 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
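The "driver name ... not found in the list of registered CSI drivers" error from the previous entry clears only once the driver's node plugin comes back and re-registers: kubelet discovers CSI drivers through registration sockets dropped into its plugin registry directory. A quick check of that directory, assuming kubelet's default path (an assumption for this host):

// List the plugin-registration sockets kubelet currently sees; a driver
// absent here is exactly what produces the TearDownAt failure above.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	const registry = "/var/lib/kubelet/plugins_registry" // assumed default
	entries, err := os.ReadDir(registry)
	if err != nil {
		fmt.Println("cannot read plugin registry:", err)
		return
	}
	for _, e := range entries {
		// Each registered driver exposes a registration socket here.
		fmt.Println(filepath.Join(registry, e.Name()))
	}
}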
Feb 02 10:41:09 crc kubenswrapper[4782]: W0202 10:41:09.193157 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-b968474029565dafc027d95578ce5f8b251b1a9e5170404e3ec44d182870a1d3 WatchSource:0}: Error finding container b968474029565dafc027d95578ce5f8b251b1a9e5170404e3ec44d182870a1d3: Status 404 returned error can't find the container with id b968474029565dafc027d95578ce5f8b251b1a9e5170404e3ec44d182870a1d3
Feb 02 10:41:09 crc kubenswrapper[4782]: W0202 10:41:09.326386 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-7682795dd43cf066fc97997f6f60658588d0a5cb6413c377852dde42fda6eec5 WatchSource:0}: Error finding container 7682795dd43cf066fc97997f6f60658588d0a5cb6413c377852dde42fda6eec5: Status 404 returned error can't find the container with id 7682795dd43cf066fc97997f6f60658588d0a5cb6413c377852dde42fda6eec5
Feb 02 10:41:09 crc kubenswrapper[4782]: I0202 10:41:09.559018 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"51a938cf68dd53116ad972c650431693f3cb6ec5aa1ed0fa2a55dbba6adfbd93"}
Feb 02 10:41:09 crc kubenswrapper[4782]: I0202 10:41:09.559112 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"b968474029565dafc027d95578ce5f8b251b1a9e5170404e3ec44d182870a1d3"}
Feb 02 10:41:09 crc kubenswrapper[4782]: I0202 10:41:09.559354 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 10:41:09 crc kubenswrapper[4782]: I0202 10:41:09.560566 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"49c545f6456a1a5041e4d9a29bca5adff910f19a4469f4d698a26a0b626ec70b"}
Feb 02 10:41:09 crc kubenswrapper[4782]: I0202 10:41:09.560611 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"5aed476e808f92c6201ee00ddc26483607cecd066b34d514b049289b3e95254c"}
Feb 02 10:41:09 crc kubenswrapper[4782]: I0202 10:41:09.562028 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"7898e7b0b477028cb13ab2465d17f8f7dbef9a14c976c8dfe4d7a9098082f242"}
Feb 02 10:41:09 crc kubenswrapper[4782]: I0202 10:41:09.562065 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"7682795dd43cf066fc97997f6f60658588d0a5cb6413c377852dde42fda6eec5"}
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.553481 4782 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady"
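The NodeReady event at 10:41:10 is the hinge of this log: the CNI configuration finally appeared, the runtime flipped NetworkReady to true, kubelet recorded the node as Ready, and the scheduler immediately responds with the burst of "SyncLoop ADD" pods that follows. The condition kubelet just recorded can be cross-checked from the API side; a sketch assuming a local kubeconfig and the node name "crc" from the log:

// Read the NodeReady condition for node "crc" via the Kubernetes API.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes credentials in the default kubeconfig location.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	node, err := client.CoreV1().Nodes().Get(context.Background(), "crc", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			// Before 10:41:10 this condition was False while the
			// container runtime network was not ready.
			fmt.Printf("Ready=%s reason=%s message=%s\n", c.Status, c.Reason, c.Message)
		}
	}
}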
node="crc" event="NodeReady" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.609295 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-nrz8z"] Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.610420 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-fwkht"] Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.610657 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nrz8z" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.612179 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jftgj"] Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.612354 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fwkht" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.615011 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-96t4g"] Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.615165 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jftgj" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.620493 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-96t4g" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.621189 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.625492 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-7v92z"] Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.635548 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m55nt"] Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.635845 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-5br4b"] Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.621191 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.621276 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.636199 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-4b45h"] Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.621382 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.636432 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-b98c7"] Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.622105 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 02 10:41:10 crc 
kubenswrapper[4782]: I0202 10:41:10.636583 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m55nt" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.636753 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-5br4b" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.637069 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-7v92z" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.637163 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-4b45h" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.622247 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.622355 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.622418 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.622576 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.638418 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6pzt4"] Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.622803 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.622974 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.623049 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.625480 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.625608 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.625724 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.625796 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.639039 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-sf9m8"] Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.625860 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.625944 4782 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.628914 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.639434 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2ff7x"] Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.639838 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9s279"] Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.640006 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-b98c7" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.640281 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9s279" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.640850 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6pzt4" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.642398 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-sf9m8" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.642725 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2ff7x" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.644917 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/063dd8d0-356e-4c11-96fd-6ecee1f28da8-config\") pod \"machine-api-operator-5694c8668f-5br4b\" (UID: \"063dd8d0-356e-4c11-96fd-6ecee1f28da8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5br4b" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.644943 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc1f149e-4ec4-423a-b94e-bf0923a75bdf-config\") pod \"kube-apiserver-operator-766d6c64bb-9s279\" (UID: \"cc1f149e-4ec4-423a-b94e-bf0923a75bdf\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9s279" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.644960 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/872ffab3-f760-45e2-a5c8-aa1055f9ab2d-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-jftgj\" (UID: \"872ffab3-f760-45e2-a5c8-aa1055f9ab2d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jftgj" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.644980 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/76afda26-696c-4996-bc58-1c928e4fa92a-oauth-serving-cert\") pod \"console-f9d7485db-sf9m8\" (UID: \"76afda26-696c-4996-bc58-1c928e4fa92a\") " pod="openshift-console/console-f9d7485db-sf9m8" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.644995 4782 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59a1b37a-9035-459b-a485-280325d33264-config\") pod \"route-controller-manager-6576b87f9c-96t4g\" (UID: \"59a1b37a-9035-459b-a485-280325d33264\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-96t4g"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.645010 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5466b6b9-d1d5-471e-90e7-75f07078f8dc-config\") pod \"authentication-operator-69f744f599-b98c7\" (UID: \"5466b6b9-d1d5-471e-90e7-75f07078f8dc\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-b98c7"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.645026 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5466b6b9-d1d5-471e-90e7-75f07078f8dc-serving-cert\") pod \"authentication-operator-69f744f599-b98c7\" (UID: \"5466b6b9-d1d5-471e-90e7-75f07078f8dc\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-b98c7"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.645040 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bklv5\" (UniqueName: \"kubernetes.io/projected/76afda26-696c-4996-bc58-1c928e4fa92a-kube-api-access-bklv5\") pod \"console-f9d7485db-sf9m8\" (UID: \"76afda26-696c-4996-bc58-1c928e4fa92a\") " pod="openshift-console/console-f9d7485db-sf9m8"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.645055 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xvws\" (UniqueName: \"kubernetes.io/projected/063dd8d0-356e-4c11-96fd-6ecee1f28da8-kube-api-access-6xvws\") pod \"machine-api-operator-5694c8668f-5br4b\" (UID: \"063dd8d0-356e-4c11-96fd-6ecee1f28da8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5br4b"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.645074 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/3e5362ac-062f-4bf2-a0dc-e96b2750ab52-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-6pzt4\" (UID: \"3e5362ac-062f-4bf2-a0dc-e96b2750ab52\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6pzt4"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.645090 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84lv6\" (UniqueName: \"kubernetes.io/projected/3e5362ac-062f-4bf2-a0dc-e96b2750ab52-kube-api-access-84lv6\") pod \"cluster-samples-operator-665b6dd947-6pzt4\" (UID: \"3e5362ac-062f-4bf2-a0dc-e96b2750ab52\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6pzt4"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.645107 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9f11a2b3-15a4-4358-8604-bf4e6a0d22fe-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-2ff7x\" (UID: \"9f11a2b3-15a4-4358-8604-bf4e6a0d22fe\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2ff7x"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.645126 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/76afda26-696c-4996-bc58-1c928e4fa92a-service-ca\") pod \"console-f9d7485db-sf9m8\" (UID: \"76afda26-696c-4996-bc58-1c928e4fa92a\") " pod="openshift-console/console-f9d7485db-sf9m8"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.645142 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5466b6b9-d1d5-471e-90e7-75f07078f8dc-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-b98c7\" (UID: \"5466b6b9-d1d5-471e-90e7-75f07078f8dc\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-b98c7"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.645157 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/9f11a2b3-15a4-4358-8604-bf4e6a0d22fe-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-2ff7x\" (UID: \"9f11a2b3-15a4-4358-8604-bf4e6a0d22fe\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2ff7x"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.645176 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ccfe41db-4509-43cc-a95c-9ac09e6c9390-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-m55nt\" (UID: \"ccfe41db-4509-43cc-a95c-9ac09e6c9390\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m55nt"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.645192 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ccfe41db-4509-43cc-a95c-9ac09e6c9390-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-m55nt\" (UID: \"ccfe41db-4509-43cc-a95c-9ac09e6c9390\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m55nt"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.645208 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/76afda26-696c-4996-bc58-1c928e4fa92a-console-oauth-config\") pod \"console-f9d7485db-sf9m8\" (UID: \"76afda26-696c-4996-bc58-1c928e4fa92a\") " pod="openshift-console/console-f9d7485db-sf9m8"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.645227 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5r4w\" (UniqueName: \"kubernetes.io/projected/5466b6b9-d1d5-471e-90e7-75f07078f8dc-kube-api-access-k5r4w\") pod \"authentication-operator-69f744f599-b98c7\" (UID: \"5466b6b9-d1d5-471e-90e7-75f07078f8dc\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-b98c7"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.645242 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/76afda26-696c-4996-bc58-1c928e4fa92a-console-serving-cert\") pod \"console-f9d7485db-sf9m8\" (UID: \"76afda26-696c-4996-bc58-1c928e4fa92a\") " pod="openshift-console/console-f9d7485db-sf9m8"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.645259 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/59a1b37a-9035-459b-a485-280325d33264-serving-cert\") pod \"route-controller-manager-6576b87f9c-96t4g\" (UID: \"59a1b37a-9035-459b-a485-280325d33264\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-96t4g"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.645275 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9tz5\" (UniqueName: \"kubernetes.io/projected/59a1b37a-9035-459b-a485-280325d33264-kube-api-access-p9tz5\") pod \"route-controller-manager-6576b87f9c-96t4g\" (UID: \"59a1b37a-9035-459b-a485-280325d33264\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-96t4g"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.645291 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/082079e0-8d5a-4d2e-959e-0366e4787bd5-etcd-client\") pod \"apiserver-7bbb656c7d-fwkht\" (UID: \"082079e0-8d5a-4d2e-959e-0366e4787bd5\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fwkht"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.645307 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/1ca4c1e0-8b33-49fe-9f13-22feb88fd1ce-machine-approver-tls\") pod \"machine-approver-56656f9798-nrz8z\" (UID: \"1ca4c1e0-8b33-49fe-9f13-22feb88fd1ce\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nrz8z"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.645337 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/082079e0-8d5a-4d2e-959e-0366e4787bd5-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-fwkht\" (UID: \"082079e0-8d5a-4d2e-959e-0366e4787bd5\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fwkht"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.645353 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/acfa5788-ab19-4e50-bc93-31b7a5069b32-serving-cert\") pod \"openshift-config-operator-7777fb866f-7v92z\" (UID: \"acfa5788-ab19-4e50-bc93-31b7a5069b32\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-7v92z"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.645369 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/acfa5788-ab19-4e50-bc93-31b7a5069b32-available-featuregates\") pod \"openshift-config-operator-7777fb866f-7v92z\" (UID: \"acfa5788-ab19-4e50-bc93-31b7a5069b32\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-7v92z"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.645385 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mr74\" (UniqueName: \"kubernetes.io/projected/e74c7e17-c70b-4637-ad47-58e1e192c52e-kube-api-access-5mr74\") pod \"downloads-7954f5f757-4b45h\" (UID: \"e74c7e17-c70b-4637-ad47-58e1e192c52e\") " pod="openshift-console/downloads-7954f5f757-4b45h"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.645400 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9f11a2b3-15a4-4358-8604-bf4e6a0d22fe-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-2ff7x\" (UID: \"9f11a2b3-15a4-4358-8604-bf4e6a0d22fe\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2ff7x"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.645418 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdbjg\" (UniqueName: \"kubernetes.io/projected/9f11a2b3-15a4-4358-8604-bf4e6a0d22fe-kube-api-access-pdbjg\") pod \"cluster-image-registry-operator-dc59b4c8b-2ff7x\" (UID: \"9f11a2b3-15a4-4358-8604-bf4e6a0d22fe\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2ff7x"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.645444 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/082079e0-8d5a-4d2e-959e-0366e4787bd5-encryption-config\") pod \"apiserver-7bbb656c7d-fwkht\" (UID: \"082079e0-8d5a-4d2e-959e-0366e4787bd5\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fwkht"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.645463 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/063dd8d0-356e-4c11-96fd-6ecee1f28da8-images\") pod \"machine-api-operator-5694c8668f-5br4b\" (UID: \"063dd8d0-356e-4c11-96fd-6ecee1f28da8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5br4b"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.645479 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lr8vc\" (UniqueName: \"kubernetes.io/projected/ccfe41db-4509-43cc-a95c-9ac09e6c9390-kube-api-access-lr8vc\") pod \"openshift-controller-manager-operator-756b6f6bc6-m55nt\" (UID: \"ccfe41db-4509-43cc-a95c-9ac09e6c9390\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m55nt"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.645496 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7stzk\" (UniqueName: \"kubernetes.io/projected/872ffab3-f760-45e2-a5c8-aa1055f9ab2d-kube-api-access-7stzk\") pod \"openshift-apiserver-operator-796bbdcf4f-jftgj\" (UID: \"872ffab3-f760-45e2-a5c8-aa1055f9ab2d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jftgj"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.645512 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/59a1b37a-9035-459b-a485-280325d33264-client-ca\") pod \"route-controller-manager-6576b87f9c-96t4g\" (UID: \"59a1b37a-9035-459b-a485-280325d33264\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-96t4g"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.645529 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/082079e0-8d5a-4d2e-959e-0366e4787bd5-serving-cert\") pod \"apiserver-7bbb656c7d-fwkht\" (UID: \"082079e0-8d5a-4d2e-959e-0366e4787bd5\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fwkht"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.645547 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/082079e0-8d5a-4d2e-959e-0366e4787bd5-audit-dir\") pod \"apiserver-7bbb656c7d-fwkht\" (UID: \"082079e0-8d5a-4d2e-959e-0366e4787bd5\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fwkht"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.645564 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/872ffab3-f760-45e2-a5c8-aa1055f9ab2d-config\") pod \"openshift-apiserver-operator-796bbdcf4f-jftgj\" (UID: \"872ffab3-f760-45e2-a5c8-aa1055f9ab2d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jftgj"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.645582 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8n6ww\" (UniqueName: \"kubernetes.io/projected/082079e0-8d5a-4d2e-959e-0366e4787bd5-kube-api-access-8n6ww\") pod \"apiserver-7bbb656c7d-fwkht\" (UID: \"082079e0-8d5a-4d2e-959e-0366e4787bd5\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fwkht"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.645799 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/082079e0-8d5a-4d2e-959e-0366e4787bd5-audit-policies\") pod \"apiserver-7bbb656c7d-fwkht\" (UID: \"082079e0-8d5a-4d2e-959e-0366e4787bd5\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fwkht"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.645851 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1ca4c1e0-8b33-49fe-9f13-22feb88fd1ce-auth-proxy-config\") pod \"machine-approver-56656f9798-nrz8z\" (UID: \"1ca4c1e0-8b33-49fe-9f13-22feb88fd1ce\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nrz8z"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.645930 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5466b6b9-d1d5-471e-90e7-75f07078f8dc-service-ca-bundle\") pod \"authentication-operator-69f744f599-b98c7\" (UID: \"5466b6b9-d1d5-471e-90e7-75f07078f8dc\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-b98c7"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.645956 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/76afda26-696c-4996-bc58-1c928e4fa92a-trusted-ca-bundle\") pod \"console-f9d7485db-sf9m8\" (UID: \"76afda26-696c-4996-bc58-1c928e4fa92a\") " pod="openshift-console/console-f9d7485db-sf9m8"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.645985 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/082079e0-8d5a-4d2e-959e-0366e4787bd5-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-fwkht\" (UID: \"082079e0-8d5a-4d2e-959e-0366e4787bd5\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fwkht"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.646016 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/76afda26-696c-4996-bc58-1c928e4fa92a-console-config\") pod \"console-f9d7485db-sf9m8\" (UID: \"76afda26-696c-4996-bc58-1c928e4fa92a\") " pod="openshift-console/console-f9d7485db-sf9m8"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.646067 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cc1f149e-4ec4-423a-b94e-bf0923a75bdf-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-9s279\" (UID: \"cc1f149e-4ec4-423a-b94e-bf0923a75bdf\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9s279"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.646093 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/063dd8d0-356e-4c11-96fd-6ecee1f28da8-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-5br4b\" (UID: \"063dd8d0-356e-4c11-96fd-6ecee1f28da8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5br4b"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.646121 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cc1f149e-4ec4-423a-b94e-bf0923a75bdf-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-9s279\" (UID: \"cc1f149e-4ec4-423a-b94e-bf0923a75bdf\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9s279"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.646142 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fl8sc\" (UniqueName: \"kubernetes.io/projected/1ca4c1e0-8b33-49fe-9f13-22feb88fd1ce-kube-api-access-fl8sc\") pod \"machine-approver-56656f9798-nrz8z\" (UID: \"1ca4c1e0-8b33-49fe-9f13-22feb88fd1ce\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nrz8z"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.646175 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77znk\" (UniqueName: \"kubernetes.io/projected/acfa5788-ab19-4e50-bc93-31b7a5069b32-kube-api-access-77znk\") pod \"openshift-config-operator-7777fb866f-7v92z\" (UID: \"acfa5788-ab19-4e50-bc93-31b7a5069b32\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-7v92z"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.646200 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ca4c1e0-8b33-49fe-9f13-22feb88fd1ce-config\") pod \"machine-approver-56656f9798-nrz8z\" (UID: \"1ca4c1e0-8b33-49fe-9f13-22feb88fd1ce\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nrz8z"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.651764 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-z8gmg"]
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.653057 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-z8gmg"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.663185 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-sc7kt"]
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.664537 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-sc7kt"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.667373 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.667589 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.675167 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.675803 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.679396 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-5n294"]
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.695427 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5n294"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.696380 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.696921 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.697211 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.704980 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.705388 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.715396 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.716088 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.733870 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-l2hps"]
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.734495 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pvghb"]
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.734807 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-29qjf"]
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.735277 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-29qjf"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.736053 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-l2hps"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.736351 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pvghb"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.738953 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.740557 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.740994 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.741302 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.741734 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-c8k6k"]
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.741916 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.742219 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.742536 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4pb5d"]
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.742940 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-qjf8d"]
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.743146 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.743419 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.743575 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-qjf8d"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.743791 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.743974 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-c8k6k"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.743459 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-bpzbh"]
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.744255 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4pb5d"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.744371 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.744622 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.744760 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.744876 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.744986 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.745474 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.745604 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.746887 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.747188 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.747308 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.747430 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.747550 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.747709 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.747846 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.747972 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.748093 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.748206 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.748383 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.748517 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.748718 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.748839 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.749013 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.749157 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.749269 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.749388 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.749579 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.749947 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.750124 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.750292 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.750492 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-bpzbh"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.750560 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.750286 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-wqm6f"]
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.751611 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-wqm6f"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.751976 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/082079e0-8d5a-4d2e-959e-0366e4787bd5-encryption-config\") pod \"apiserver-7bbb656c7d-fwkht\" (UID: \"082079e0-8d5a-4d2e-959e-0366e4787bd5\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fwkht"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.752025 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/063dd8d0-356e-4c11-96fd-6ecee1f28da8-images\") pod \"machine-api-operator-5694c8668f-5br4b\" (UID: \"063dd8d0-356e-4c11-96fd-6ecee1f28da8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5br4b"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.752065 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vh2jt\" (UniqueName: \"kubernetes.io/projected/03d47200-aed2-431d-89fd-c27cdd91564f-kube-api-access-vh2jt\") pod \"oauth-openshift-558db77b4-sc7kt\" (UID: \"03d47200-aed2-431d-89fd-c27cdd91564f\") " pod="openshift-authentication/oauth-openshift-558db77b4-sc7kt"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.752097 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1f4f42b8-506a-4922-b7c4-7f77afbb238c-audit\") pod \"apiserver-76f77b778f-z8gmg\" (UID: \"1f4f42b8-506a-4922-b7c4-7f77afbb238c\") " pod="openshift-apiserver/apiserver-76f77b778f-z8gmg"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.752125 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2hdn\" (UniqueName: \"kubernetes.io/projected/fc962b97-f5d3-4673-9a39-8fbf6bc2424f-kube-api-access-w2hdn\") pod \"router-default-5444994796-29qjf\" (UID: \"fc962b97-f5d3-4673-9a39-8fbf6bc2424f\") " pod="openshift-ingress/router-default-5444994796-29qjf"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.752148 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6b03c59-eb07-4d99-beb5-04e1eb19c7bc-config\") pod \"controller-manager-879f6c89f-l2hps\" (UID: \"d6b03c59-eb07-4d99-beb5-04e1eb19c7bc\") " pod="openshift-controller-manager/controller-manager-879f6c89f-l2hps"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.752175 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d6b03c59-eb07-4d99-beb5-04e1eb19c7bc-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-l2hps\" (UID: \"d6b03c59-eb07-4d99-beb5-04e1eb19c7bc\") " pod="openshift-controller-manager/controller-manager-879f6c89f-l2hps"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.752204 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lr8vc\" (UniqueName: \"kubernetes.io/projected/ccfe41db-4509-43cc-a95c-9ac09e6c9390-kube-api-access-lr8vc\") pod \"openshift-controller-manager-operator-756b6f6bc6-m55nt\" (UID: \"ccfe41db-4509-43cc-a95c-9ac09e6c9390\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m55nt"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.752225 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/03d47200-aed2-431d-89fd-c27cdd91564f-audit-policies\") pod \"oauth-openshift-558db77b4-sc7kt\" (UID: \"03d47200-aed2-431d-89fd-c27cdd91564f\") " pod="openshift-authentication/oauth-openshift-558db77b4-sc7kt"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.752243 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/03d47200-aed2-431d-89fd-c27cdd91564f-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-sc7kt\" (UID: \"03d47200-aed2-431d-89fd-c27cdd91564f\") " pod="openshift-authentication/oauth-openshift-558db77b4-sc7kt"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.752263 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1136cbad-9e47-49e0-a890-83d86d325537-metrics-tls\") pod \"ingress-operator-5b745b69d9-5n294\" (UID: \"1136cbad-9e47-49e0-a890-83d86d325537\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5n294"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.752279 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1e5e11c7-6a7f-466b-8d59-674bb931db4c-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-c8k6k\" (UID: \"1e5e11c7-6a7f-466b-8d59-674bb931db4c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-c8k6k"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.752302 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7stzk\" (UniqueName: \"kubernetes.io/projected/872ffab3-f760-45e2-a5c8-aa1055f9ab2d-kube-api-access-7stzk\") pod \"openshift-apiserver-operator-796bbdcf4f-jftgj\" (UID: \"872ffab3-f760-45e2-a5c8-aa1055f9ab2d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jftgj"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.752323 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1f4f42b8-506a-4922-b7c4-7f77afbb238c-audit-dir\") pod \"apiserver-76f77b778f-z8gmg\" (UID: \"1f4f42b8-506a-4922-b7c4-7f77afbb238c\") " pod="openshift-apiserver/apiserver-76f77b778f-z8gmg"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.752341 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8224e4c0-380b-489a-98d8-ee1b15c1637a-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-qjf8d\" (UID: \"8224e4c0-380b-489a-98d8-ee1b15c1637a\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-qjf8d"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.752357 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xtfk5\" (UniqueName: \"kubernetes.io/projected/1f4f42b8-506a-4922-b7c4-7f77afbb238c-kube-api-access-xtfk5\") pod \"apiserver-76f77b778f-z8gmg\" (UID: \"1f4f42b8-506a-4922-b7c4-7f77afbb238c\") " pod="openshift-apiserver/apiserver-76f77b778f-z8gmg"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.752380 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/59a1b37a-9035-459b-a485-280325d33264-client-ca\") pod \"route-controller-manager-6576b87f9c-96t4g\" (UID: \"59a1b37a-9035-459b-a485-280325d33264\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-96t4g"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.752407 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/082079e0-8d5a-4d2e-959e-0366e4787bd5-serving-cert\") pod \"apiserver-7bbb656c7d-fwkht\" (UID: \"082079e0-8d5a-4d2e-959e-0366e4787bd5\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fwkht"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.752433 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/082079e0-8d5a-4d2e-959e-0366e4787bd5-audit-dir\") pod \"apiserver-7bbb656c7d-fwkht\" (UID: \"082079e0-8d5a-4d2e-959e-0366e4787bd5\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fwkht"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.752453 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/872ffab3-f760-45e2-a5c8-aa1055f9ab2d-config\") pod \"openshift-apiserver-operator-796bbdcf4f-jftgj\" (UID: \"872ffab3-f760-45e2-a5c8-aa1055f9ab2d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jftgj"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.752479 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/03d47200-aed2-431d-89fd-c27cdd91564f-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-sc7kt\" (UID: \"03d47200-aed2-431d-89fd-c27cdd91564f\") " pod="openshift-authentication/oauth-openshift-558db77b4-sc7kt"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.752504 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1136cbad-9e47-49e0-a890-83d86d325537-bound-sa-token\") pod \"ingress-operator-5b745b69d9-5n294\" (UID: \"1136cbad-9e47-49e0-a890-83d86d325537\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5n294"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.752530 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1f4f42b8-506a-4922-b7c4-7f77afbb238c-image-import-ca\") pod \"apiserver-76f77b778f-z8gmg\" (UID: \"1f4f42b8-506a-4922-b7c4-7f77afbb238c\") " pod="openshift-apiserver/apiserver-76f77b778f-z8gmg"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.752551 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1f4f42b8-506a-4922-b7c4-7f77afbb238c-serving-cert\") pod \"apiserver-76f77b778f-z8gmg\" (UID: \"1f4f42b8-506a-4922-b7c4-7f77afbb238c\") " pod="openshift-apiserver/apiserver-76f77b778f-z8gmg"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.752580 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8n6ww\" (UniqueName: \"kubernetes.io/projected/082079e0-8d5a-4d2e-959e-0366e4787bd5-kube-api-access-8n6ww\") pod \"apiserver-7bbb656c7d-fwkht\" (UID: \"082079e0-8d5a-4d2e-959e-0366e4787bd5\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fwkht"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.752603 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/082079e0-8d5a-4d2e-959e-0366e4787bd5-audit-policies\") pod \"apiserver-7bbb656c7d-fwkht\" (UID: \"082079e0-8d5a-4d2e-959e-0366e4787bd5\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fwkht"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.752629 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/1f4f42b8-506a-4922-b7c4-7f77afbb238c-node-pullsecrets\") pod \"apiserver-76f77b778f-z8gmg\" (UID: \"1f4f42b8-506a-4922-b7c4-7f77afbb238c\") " pod="openshift-apiserver/apiserver-76f77b778f-z8gmg"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.752676 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1ca4c1e0-8b33-49fe-9f13-22feb88fd1ce-auth-proxy-config\") pod \"machine-approver-56656f9798-nrz8z\" (UID: \"1ca4c1e0-8b33-49fe-9f13-22feb88fd1ce\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nrz8z"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.752701 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5466b6b9-d1d5-471e-90e7-75f07078f8dc-service-ca-bundle\") pod \"authentication-operator-69f744f599-b98c7\" (UID: \"5466b6b9-d1d5-471e-90e7-75f07078f8dc\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-b98c7"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.752728 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d6b03c59-eb07-4d99-beb5-04e1eb19c7bc-serving-cert\") pod \"controller-manager-879f6c89f-l2hps\" (UID: \"d6b03c59-eb07-4d99-beb5-04e1eb19c7bc\") " pod="openshift-controller-manager/controller-manager-879f6c89f-l2hps"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.752754 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e5e11c7-6a7f-466b-8d59-674bb931db4c-config\") pod \"kube-controller-manager-operator-78b949d7b-c8k6k\" (UID: \"1e5e11c7-6a7f-466b-8d59-674bb931db4c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-c8k6k"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.752775 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1f4f42b8-506a-4922-b7c4-7f77afbb238c-etcd-client\") pod \"apiserver-76f77b778f-z8gmg\" (UID: \"1f4f42b8-506a-4922-b7c4-7f77afbb238c\") " pod="openshift-apiserver/apiserver-76f77b778f-z8gmg"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.752794 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1f4f42b8-506a-4922-b7c4-7f77afbb238c-encryption-config\") pod \"apiserver-76f77b778f-z8gmg\" (UID: \"1f4f42b8-506a-4922-b7c4-7f77afbb238c\") " pod="openshift-apiserver/apiserver-76f77b778f-z8gmg"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.752814 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/76afda26-696c-4996-bc58-1c928e4fa92a-trusted-ca-bundle\") pod \"console-f9d7485db-sf9m8\" (UID: \"76afda26-696c-4996-bc58-1c928e4fa92a\") " pod="openshift-console/console-f9d7485db-sf9m8"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.752831 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/082079e0-8d5a-4d2e-959e-0366e4787bd5-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-fwkht\" (UID: \"082079e0-8d5a-4d2e-959e-0366e4787bd5\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fwkht"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.752853 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/671fcd5f-c44a-46e7-840f-d204d2464822-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-4pb5d\" (UID: \"671fcd5f-c44a-46e7-840f-d204d2464822\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4pb5d"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.752874 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/671fcd5f-c44a-46e7-840f-d204d2464822-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-4pb5d\" (UID: \"671fcd5f-c44a-46e7-840f-d204d2464822\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4pb5d"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.752907 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/76afda26-696c-4996-bc58-1c928e4fa92a-console-config\") pod \"console-f9d7485db-sf9m8\" (UID: \"76afda26-696c-4996-bc58-1c928e4fa92a\") " pod="openshift-console/console-f9d7485db-sf9m8"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.752929 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cc1f149e-4ec4-423a-b94e-bf0923a75bdf-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-9s279\" (UID: \"cc1f149e-4ec4-423a-b94e-bf0923a75bdf\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9s279"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.752947 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/063dd8d0-356e-4c11-96fd-6ecee1f28da8-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-5br4b\" (UID: \"063dd8d0-356e-4c11-96fd-6ecee1f28da8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5br4b"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.752967 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/03d47200-aed2-431d-89fd-c27cdd91564f-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-sc7kt\" (UID: \"03d47200-aed2-431d-89fd-c27cdd91564f\") " pod="openshift-authentication/oauth-openshift-558db77b4-sc7kt"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.752989 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cc1f149e-4ec4-423a-b94e-bf0923a75bdf-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-9s279\" (UID: \"cc1f149e-4ec4-423a-b94e-bf0923a75bdf\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9s279"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.753011 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fl8sc\" (UniqueName: \"kubernetes.io/projected/1ca4c1e0-8b33-49fe-9f13-22feb88fd1ce-kube-api-access-fl8sc\") pod \"machine-approver-56656f9798-nrz8z\" (UID: \"1ca4c1e0-8b33-49fe-9f13-22feb88fd1ce\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nrz8z"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.753032 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/fc962b97-f5d3-4673-9a39-8fbf6bc2424f-stats-auth\") pod \"router-default-5444994796-29qjf\" (UID: \"fc962b97-f5d3-4673-9a39-8fbf6bc2424f\") " pod="openshift-ingress/router-default-5444994796-29qjf"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.753058 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-77znk\" (UniqueName: \"kubernetes.io/projected/acfa5788-ab19-4e50-bc93-31b7a5069b32-kube-api-access-77znk\") pod \"openshift-config-operator-7777fb866f-7v92z\" (UID: \"acfa5788-ab19-4e50-bc93-31b7a5069b32\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-7v92z"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.753083 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ca4c1e0-8b33-49fe-9f13-22feb88fd1ce-config\") pod \"machine-approver-56656f9798-nrz8z\" (UID: \"1ca4c1e0-8b33-49fe-9f13-22feb88fd1ce\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nrz8z"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.753104 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1e5e11c7-6a7f-466b-8d59-674bb931db4c-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-c8k6k\" (UID: \"1e5e11c7-6a7f-466b-8d59-674bb931db4c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-c8k6k"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.753124 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/86721216-38a9-4b44-8e34-d01a33c39e82-srv-cert\") pod \"olm-operator-6b444d44fb-pvghb\" (UID: \"86721216-38a9-4b44-8e34-d01a33c39e82\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pvghb"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.753150 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9tpw2\" (UniqueName: \"kubernetes.io/projected/86721216-38a9-4b44-8e34-d01a33c39e82-kube-api-access-9tpw2\") pod \"olm-operator-6b444d44fb-pvghb\" (UID: \"86721216-38a9-4b44-8e34-d01a33c39e82\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pvghb"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.753177 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/063dd8d0-356e-4c11-96fd-6ecee1f28da8-config\") pod \"machine-api-operator-5694c8668f-5br4b\" (UID: \"063dd8d0-356e-4c11-96fd-6ecee1f28da8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5br4b"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.753198 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc1f149e-4ec4-423a-b94e-bf0923a75bdf-config\") pod \"kube-apiserver-operator-766d6c64bb-9s279\" (UID: \"cc1f149e-4ec4-423a-b94e-bf0923a75bdf\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9s279"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.753221 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/872ffab3-f760-45e2-a5c8-aa1055f9ab2d-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-jftgj\" (UID: \"872ffab3-f760-45e2-a5c8-aa1055f9ab2d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jftgj"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.753246 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/03d47200-aed2-431d-89fd-c27cdd91564f-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-sc7kt\" (UID: \"03d47200-aed2-431d-89fd-c27cdd91564f\") " pod="openshift-authentication/oauth-openshift-558db77b4-sc7kt"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.753270 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5466b6b9-d1d5-471e-90e7-75f07078f8dc-config\") pod \"authentication-operator-69f744f599-b98c7\" (UID: \"5466b6b9-d1d5-471e-90e7-75f07078f8dc\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-b98c7"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.753290 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5466b6b9-d1d5-471e-90e7-75f07078f8dc-serving-cert\") pod \"authentication-operator-69f744f599-b98c7\" (UID: \"5466b6b9-d1d5-471e-90e7-75f07078f8dc\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-b98c7"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.753312 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/76afda26-696c-4996-bc58-1c928e4fa92a-oauth-serving-cert\") pod \"console-f9d7485db-sf9m8\" (UID: \"76afda26-696c-4996-bc58-1c928e4fa92a\") " pod="openshift-console/console-f9d7485db-sf9m8"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.753332 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59a1b37a-9035-459b-a485-280325d33264-config\") pod \"route-controller-manager-6576b87f9c-96t4g\" (UID: \"59a1b37a-9035-459b-a485-280325d33264\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-96t4g"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.753357 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/03d47200-aed2-431d-89fd-c27cdd91564f-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-sc7kt\" (UID: \"03d47200-aed2-431d-89fd-c27cdd91564f\") " pod="openshift-authentication/oauth-openshift-558db77b4-sc7kt"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.753382 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1136cbad-9e47-49e0-a890-83d86d325537-trusted-ca\") pod \"ingress-operator-5b745b69d9-5n294\" (UID: \"1136cbad-9e47-49e0-a890-83d86d325537\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5n294"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.753404 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f4f42b8-506a-4922-b7c4-7f77afbb238c-config\") pod \"apiserver-76f77b778f-z8gmg\" (UID: \"1f4f42b8-506a-4922-b7c4-7f77afbb238c\") " pod="openshift-apiserver/apiserver-76f77b778f-z8gmg"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.753427 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bklv5\" (UniqueName: \"kubernetes.io/projected/76afda26-696c-4996-bc58-1c928e4fa92a-kube-api-access-bklv5\") pod \"console-f9d7485db-sf9m8\" (UID: \"76afda26-696c-4996-bc58-1c928e4fa92a\") " pod="openshift-console/console-f9d7485db-sf9m8"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.753452 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6xvws\" (UniqueName: \"kubernetes.io/projected/063dd8d0-356e-4c11-96fd-6ecee1f28da8-kube-api-access-6xvws\") pod \"machine-api-operator-5694c8668f-5br4b\" (UID: \"063dd8d0-356e-4c11-96fd-6ecee1f28da8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5br4b"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.753472 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/03d47200-aed2-431d-89fd-c27cdd91564f-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-sc7kt\" (UID: \"03d47200-aed2-431d-89fd-c27cdd91564f\") " pod="openshift-authentication/oauth-openshift-558db77b4-sc7kt"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.753492 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/3e5362ac-062f-4bf2-a0dc-e96b2750ab52-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-6pzt4\" (UID: \"3e5362ac-062f-4bf2-a0dc-e96b2750ab52\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6pzt4"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.753526 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-84lv6\" (UniqueName: \"kubernetes.io/projected/3e5362ac-062f-4bf2-a0dc-e96b2750ab52-kube-api-access-84lv6\") pod \"cluster-samples-operator-665b6dd947-6pzt4\" (UID: \"3e5362ac-062f-4bf2-a0dc-e96b2750ab52\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6pzt4"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.753549 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9f11a2b3-15a4-4358-8604-bf4e6a0d22fe-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-2ff7x\" (UID: \"9f11a2b3-15a4-4358-8604-bf4e6a0d22fe\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2ff7x"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.753567 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/fc962b97-f5d3-4673-9a39-8fbf6bc2424f-default-certificate\") pod \"router-default-5444994796-29qjf\" (UID: \"fc962b97-f5d3-4673-9a39-8fbf6bc2424f\") " pod="openshift-ingress/router-default-5444994796-29qjf"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.753587 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/9f11a2b3-15a4-4358-8604-bf4e6a0d22fe-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-2ff7x\" (UID: \"9f11a2b3-15a4-4358-8604-bf4e6a0d22fe\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2ff7x"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.753606 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ccfe41db-4509-43cc-a95c-9ac09e6c9390-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-m55nt\" (UID: \"ccfe41db-4509-43cc-a95c-9ac09e6c9390\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m55nt"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.753627 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/76afda26-696c-4996-bc58-1c928e4fa92a-service-ca\") pod \"console-f9d7485db-sf9m8\" (UID: \"76afda26-696c-4996-bc58-1c928e4fa92a\") " pod="openshift-console/console-f9d7485db-sf9m8"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.753670 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5466b6b9-d1d5-471e-90e7-75f07078f8dc-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-b98c7\" (UID: \"5466b6b9-d1d5-471e-90e7-75f07078f8dc\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-b98c7"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.753692 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ccfe41db-4509-43cc-a95c-9ac09e6c9390-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-m55nt\" (UID: \"ccfe41db-4509-43cc-a95c-9ac09e6c9390\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m55nt"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.753714 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/671fcd5f-c44a-46e7-840f-d204d2464822-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-4pb5d\" (UID: \"671fcd5f-c44a-46e7-840f-d204d2464822\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4pb5d"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.753735 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1f4f42b8-506a-4922-b7c4-7f77afbb238c-trusted-ca-bundle\") pod \"apiserver-76f77b778f-z8gmg\" (UID: \"1f4f42b8-506a-4922-b7c4-7f77afbb238c\") " pod="openshift-apiserver/apiserver-76f77b778f-z8gmg"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.753757 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/76afda26-696c-4996-bc58-1c928e4fa92a-console-oauth-config\") pod \"console-f9d7485db-sf9m8\" (UID: \"76afda26-696c-4996-bc58-1c928e4fa92a\") " pod="openshift-console/console-f9d7485db-sf9m8"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.753777 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k5r4w\" (UniqueName: \"kubernetes.io/projected/5466b6b9-d1d5-471e-90e7-75f07078f8dc-kube-api-access-k5r4w\") pod \"authentication-operator-69f744f599-b98c7\" (UID: \"5466b6b9-d1d5-471e-90e7-75f07078f8dc\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-b98c7"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.753798 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d6b03c59-eb07-4d99-beb5-04e1eb19c7bc-client-ca\") pod \"controller-manager-879f6c89f-l2hps\" (UID: \"d6b03c59-eb07-4d99-beb5-04e1eb19c7bc\") " pod="openshift-controller-manager/controller-manager-879f6c89f-l2hps"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.753821 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1f4f42b8-506a-4922-b7c4-7f77afbb238c-etcd-serving-ca\") pod \"apiserver-76f77b778f-z8gmg\" (UID: \"1f4f42b8-506a-4922-b7c4-7f77afbb238c\") " pod="openshift-apiserver/apiserver-76f77b778f-z8gmg"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.753845 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fc962b97-f5d3-4673-9a39-8fbf6bc2424f-metrics-certs\") pod \"router-default-5444994796-29qjf\" (UID: \"fc962b97-f5d3-4673-9a39-8fbf6bc2424f\") " pod="openshift-ingress/router-default-5444994796-29qjf"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.753868 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/03d47200-aed2-431d-89fd-c27cdd91564f-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-sc7kt\" (UID: \"03d47200-aed2-431d-89fd-c27cdd91564f\") " pod="openshift-authentication/oauth-openshift-558db77b4-sc7kt"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.753894 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/76afda26-696c-4996-bc58-1c928e4fa92a-console-serving-cert\") pod \"console-f9d7485db-sf9m8\" (UID: \"76afda26-696c-4996-bc58-1c928e4fa92a\") " pod="openshift-console/console-f9d7485db-sf9m8"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.753916 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/59a1b37a-9035-459b-a485-280325d33264-serving-cert\") pod \"route-controller-manager-6576b87f9c-96t4g\" (UID: \"59a1b37a-9035-459b-a485-280325d33264\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-96t4g"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.753940 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/03d47200-aed2-431d-89fd-c27cdd91564f-audit-dir\") pod \"oauth-openshift-558db77b4-sc7kt\" (UID: \"03d47200-aed2-431d-89fd-c27cdd91564f\") " pod="openshift-authentication/oauth-openshift-558db77b4-sc7kt"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.753962 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/03d47200-aed2-431d-89fd-c27cdd91564f-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-sc7kt\" (UID: \"03d47200-aed2-431d-89fd-c27cdd91564f\") " pod="openshift-authentication/oauth-openshift-558db77b4-sc7kt"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.753986 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p9tz5\" (UniqueName: \"kubernetes.io/projected/59a1b37a-9035-459b-a485-280325d33264-kube-api-access-p9tz5\") pod \"route-controller-manager-6576b87f9c-96t4g\" (UID: \"59a1b37a-9035-459b-a485-280325d33264\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-96t4g"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.754011 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/082079e0-8d5a-4d2e-959e-0366e4787bd5-etcd-client\") pod \"apiserver-7bbb656c7d-fwkht\" (UID: \"082079e0-8d5a-4d2e-959e-0366e4787bd5\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fwkht"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.754035 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/1ca4c1e0-8b33-49fe-9f13-22feb88fd1ce-machine-approver-tls\") pod \"machine-approver-56656f9798-nrz8z\" (UID: \"1ca4c1e0-8b33-49fe-9f13-22feb88fd1ce\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nrz8z"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.754061 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/03d47200-aed2-431d-89fd-c27cdd91564f-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-sc7kt\" (UID: \"03d47200-aed2-431d-89fd-c27cdd91564f\") " pod="openshift-authentication/oauth-openshift-558db77b4-sc7kt"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.754086 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmqdm\" (UniqueName: \"kubernetes.io/projected/1136cbad-9e47-49e0-a890-83d86d325537-kube-api-access-gmqdm\") pod \"ingress-operator-5b745b69d9-5n294\" (UID: \"1136cbad-9e47-49e0-a890-83d86d325537\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5n294"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.754115 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5mr74\" (UniqueName: \"kubernetes.io/projected/e74c7e17-c70b-4637-ad47-58e1e192c52e-kube-api-access-5mr74\") pod \"downloads-7954f5f757-4b45h\" (UID: 
\"e74c7e17-c70b-4637-ad47-58e1e192c52e\") " pod="openshift-console/downloads-7954f5f757-4b45h" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.754139 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7dqmf\" (UniqueName: \"kubernetes.io/projected/d6b03c59-eb07-4d99-beb5-04e1eb19c7bc-kube-api-access-7dqmf\") pod \"controller-manager-879f6c89f-l2hps\" (UID: \"d6b03c59-eb07-4d99-beb5-04e1eb19c7bc\") " pod="openshift-controller-manager/controller-manager-879f6c89f-l2hps" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.754168 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/082079e0-8d5a-4d2e-959e-0366e4787bd5-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-fwkht\" (UID: \"082079e0-8d5a-4d2e-959e-0366e4787bd5\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fwkht" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.754193 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/acfa5788-ab19-4e50-bc93-31b7a5069b32-serving-cert\") pod \"openshift-config-operator-7777fb866f-7v92z\" (UID: \"acfa5788-ab19-4e50-bc93-31b7a5069b32\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-7v92z" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.754220 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/acfa5788-ab19-4e50-bc93-31b7a5069b32-available-featuregates\") pod \"openshift-config-operator-7777fb866f-7v92z\" (UID: \"acfa5788-ab19-4e50-bc93-31b7a5069b32\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-7v92z" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.754262 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/03d47200-aed2-431d-89fd-c27cdd91564f-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-sc7kt\" (UID: \"03d47200-aed2-431d-89fd-c27cdd91564f\") " pod="openshift-authentication/oauth-openshift-558db77b4-sc7kt" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.754288 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dch5s\" (UniqueName: \"kubernetes.io/projected/8224e4c0-380b-489a-98d8-ee1b15c1637a-kube-api-access-dch5s\") pod \"multus-admission-controller-857f4d67dd-qjf8d\" (UID: \"8224e4c0-380b-489a-98d8-ee1b15c1637a\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-qjf8d" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.754317 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/86721216-38a9-4b44-8e34-d01a33c39e82-profile-collector-cert\") pod \"olm-operator-6b444d44fb-pvghb\" (UID: \"86721216-38a9-4b44-8e34-d01a33c39e82\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pvghb" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.754348 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9f11a2b3-15a4-4358-8604-bf4e6a0d22fe-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-2ff7x\" (UID: 
\"9f11a2b3-15a4-4358-8604-bf4e6a0d22fe\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2ff7x" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.754376 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pdbjg\" (UniqueName: \"kubernetes.io/projected/9f11a2b3-15a4-4358-8604-bf4e6a0d22fe-kube-api-access-pdbjg\") pod \"cluster-image-registry-operator-dc59b4c8b-2ff7x\" (UID: \"9f11a2b3-15a4-4358-8604-bf4e6a0d22fe\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2ff7x" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.754405 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/03d47200-aed2-431d-89fd-c27cdd91564f-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-sc7kt\" (UID: \"03d47200-aed2-431d-89fd-c27cdd91564f\") " pod="openshift-authentication/oauth-openshift-558db77b4-sc7kt" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.754431 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fc962b97-f5d3-4673-9a39-8fbf6bc2424f-service-ca-bundle\") pod \"router-default-5444994796-29qjf\" (UID: \"fc962b97-f5d3-4673-9a39-8fbf6bc2424f\") " pod="openshift-ingress/router-default-5444994796-29qjf" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.755064 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.755337 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.755497 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.755694 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.755888 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.756045 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.756228 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.756403 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.756543 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.756686 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.756958 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 
10:41:10.757086 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.757118 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.757323 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.757339 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.758186 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/082079e0-8d5a-4d2e-959e-0366e4787bd5-audit-policies\") pod \"apiserver-7bbb656c7d-fwkht\" (UID: \"082079e0-8d5a-4d2e-959e-0366e4787bd5\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fwkht" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.758992 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/063dd8d0-356e-4c11-96fd-6ecee1f28da8-images\") pod \"machine-api-operator-5694c8668f-5br4b\" (UID: \"063dd8d0-356e-4c11-96fd-6ecee1f28da8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5br4b" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.759262 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-59ctg"] Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.760086 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-59ctg" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.760577 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.762099 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.769664 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-l59tj"] Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.770422 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-dsb8s"] Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.770927 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-dsb8s" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.771161 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5466b6b9-d1d5-471e-90e7-75f07078f8dc-config\") pod \"authentication-operator-69f744f599-b98c7\" (UID: \"5466b6b9-d1d5-471e-90e7-75f07078f8dc\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-b98c7" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.771268 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-l59tj" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.772127 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/59a1b37a-9035-459b-a485-280325d33264-client-ca\") pod \"route-controller-manager-6576b87f9c-96t4g\" (UID: \"59a1b37a-9035-459b-a485-280325d33264\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-96t4g" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.772450 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.772756 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.772910 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/76afda26-696c-4996-bc58-1c928e4fa92a-console-config\") pod \"console-f9d7485db-sf9m8\" (UID: \"76afda26-696c-4996-bc58-1c928e4fa92a\") " pod="openshift-console/console-f9d7485db-sf9m8" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.773213 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.773398 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.773485 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/082079e0-8d5a-4d2e-959e-0366e4787bd5-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-fwkht\" (UID: \"082079e0-8d5a-4d2e-959e-0366e4787bd5\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fwkht" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.773819 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.773997 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.777358 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/76afda26-696c-4996-bc58-1c928e4fa92a-oauth-serving-cert\") pod \"console-f9d7485db-sf9m8\" (UID: \"76afda26-696c-4996-bc58-1c928e4fa92a\") " pod="openshift-console/console-f9d7485db-sf9m8" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.778261 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59a1b37a-9035-459b-a485-280325d33264-config\") pod \"route-controller-manager-6576b87f9c-96t4g\" (UID: \"59a1b37a-9035-459b-a485-280325d33264\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-96t4g" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.778730 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5466b6b9-d1d5-471e-90e7-75f07078f8dc-serving-cert\") pod \"authentication-operator-69f744f599-b98c7\" (UID: 
\"5466b6b9-d1d5-471e-90e7-75f07078f8dc\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-b98c7" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.779818 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/acfa5788-ab19-4e50-bc93-31b7a5069b32-serving-cert\") pod \"openshift-config-operator-7777fb866f-7v92z\" (UID: \"acfa5788-ab19-4e50-bc93-31b7a5069b32\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-7v92z" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.780775 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/acfa5788-ab19-4e50-bc93-31b7a5069b32-available-featuregates\") pod \"openshift-config-operator-7777fb866f-7v92z\" (UID: \"acfa5788-ab19-4e50-bc93-31b7a5069b32\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-7v92z" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.773629 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1ca4c1e0-8b33-49fe-9f13-22feb88fd1ce-auth-proxy-config\") pod \"machine-approver-56656f9798-nrz8z\" (UID: \"1ca4c1e0-8b33-49fe-9f13-22feb88fd1ce\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nrz8z" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.781954 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/082079e0-8d5a-4d2e-959e-0366e4787bd5-audit-dir\") pod \"apiserver-7bbb656c7d-fwkht\" (UID: \"082079e0-8d5a-4d2e-959e-0366e4787bd5\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fwkht" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.782209 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.785258 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5466b6b9-d1d5-471e-90e7-75f07078f8dc-service-ca-bundle\") pod \"authentication-operator-69f744f599-b98c7\" (UID: \"5466b6b9-d1d5-471e-90e7-75f07078f8dc\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-b98c7" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.782587 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/872ffab3-f760-45e2-a5c8-aa1055f9ab2d-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-jftgj\" (UID: \"872ffab3-f760-45e2-a5c8-aa1055f9ab2d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jftgj" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.783288 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/76afda26-696c-4996-bc58-1c928e4fa92a-service-ca\") pod \"console-f9d7485db-sf9m8\" (UID: \"76afda26-696c-4996-bc58-1c928e4fa92a\") " pod="openshift-console/console-f9d7485db-sf9m8" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.783428 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ccfe41db-4509-43cc-a95c-9ac09e6c9390-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-m55nt\" (UID: \"ccfe41db-4509-43cc-a95c-9ac09e6c9390\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m55nt" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.785866 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/082079e0-8d5a-4d2e-959e-0366e4787bd5-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-fwkht\" (UID: \"082079e0-8d5a-4d2e-959e-0366e4787bd5\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fwkht" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.782496 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/872ffab3-f760-45e2-a5c8-aa1055f9ab2d-config\") pod \"openshift-apiserver-operator-796bbdcf4f-jftgj\" (UID: \"872ffab3-f760-45e2-a5c8-aa1055f9ab2d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jftgj" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.783669 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.793014 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/1ca4c1e0-8b33-49fe-9f13-22feb88fd1ce-machine-approver-tls\") pod \"machine-approver-56656f9798-nrz8z\" (UID: \"1ca4c1e0-8b33-49fe-9f13-22feb88fd1ce\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nrz8z" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.793476 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/063dd8d0-356e-4c11-96fd-6ecee1f28da8-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-5br4b\" (UID: \"063dd8d0-356e-4c11-96fd-6ecee1f28da8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5br4b" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.793971 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/76afda26-696c-4996-bc58-1c928e4fa92a-console-serving-cert\") pod \"console-f9d7485db-sf9m8\" (UID: \"76afda26-696c-4996-bc58-1c928e4fa92a\") " pod="openshift-console/console-f9d7485db-sf9m8" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.795058 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.795682 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/063dd8d0-356e-4c11-96fd-6ecee1f28da8-config\") pod \"machine-api-operator-5694c8668f-5br4b\" (UID: \"063dd8d0-356e-4c11-96fd-6ecee1f28da8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5br4b" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.796023 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/59a1b37a-9035-459b-a485-280325d33264-serving-cert\") pod \"route-controller-manager-6576b87f9c-96t4g\" (UID: \"59a1b37a-9035-459b-a485-280325d33264\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-96t4g" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.801165 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 02 10:41:10 crc 
kubenswrapper[4782]: I0202 10:41:10.804697 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/3e5362ac-062f-4bf2-a0dc-e96b2750ab52-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-6pzt4\" (UID: \"3e5362ac-062f-4bf2-a0dc-e96b2750ab52\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6pzt4" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.804794 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.804899 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/082079e0-8d5a-4d2e-959e-0366e4787bd5-encryption-config\") pod \"apiserver-7bbb656c7d-fwkht\" (UID: \"082079e0-8d5a-4d2e-959e-0366e4787bd5\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fwkht" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.805194 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/082079e0-8d5a-4d2e-959e-0366e4787bd5-etcd-client\") pod \"apiserver-7bbb656c7d-fwkht\" (UID: \"082079e0-8d5a-4d2e-959e-0366e4787bd5\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fwkht" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.805366 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/76afda26-696c-4996-bc58-1c928e4fa92a-console-oauth-config\") pod \"console-f9d7485db-sf9m8\" (UID: \"76afda26-696c-4996-bc58-1c928e4fa92a\") " pod="openshift-console/console-f9d7485db-sf9m8" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.806054 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/9f11a2b3-15a4-4358-8604-bf4e6a0d22fe-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-2ff7x\" (UID: \"9f11a2b3-15a4-4358-8604-bf4e6a0d22fe\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2ff7x" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.820167 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.820734 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ccfe41db-4509-43cc-a95c-9ac09e6c9390-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-m55nt\" (UID: \"ccfe41db-4509-43cc-a95c-9ac09e6c9390\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m55nt" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.849922 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.852205 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.860886 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/082079e0-8d5a-4d2e-959e-0366e4787bd5-serving-cert\") pod \"apiserver-7bbb656c7d-fwkht\" (UID: 
\"082079e0-8d5a-4d2e-959e-0366e4787bd5\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fwkht" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.874101 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5466b6b9-d1d5-471e-90e7-75f07078f8dc-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-b98c7\" (UID: \"5466b6b9-d1d5-471e-90e7-75f07078f8dc\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-b98c7" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.882866 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w2hdn\" (UniqueName: \"kubernetes.io/projected/fc962b97-f5d3-4673-9a39-8fbf6bc2424f-kube-api-access-w2hdn\") pod \"router-default-5444994796-29qjf\" (UID: \"fc962b97-f5d3-4673-9a39-8fbf6bc2424f\") " pod="openshift-ingress/router-default-5444994796-29qjf" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.882930 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vh2jt\" (UniqueName: \"kubernetes.io/projected/03d47200-aed2-431d-89fd-c27cdd91564f-kube-api-access-vh2jt\") pod \"oauth-openshift-558db77b4-sc7kt\" (UID: \"03d47200-aed2-431d-89fd-c27cdd91564f\") " pod="openshift-authentication/oauth-openshift-558db77b4-sc7kt" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.882953 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1f4f42b8-506a-4922-b7c4-7f77afbb238c-audit\") pod \"apiserver-76f77b778f-z8gmg\" (UID: \"1f4f42b8-506a-4922-b7c4-7f77afbb238c\") " pod="openshift-apiserver/apiserver-76f77b778f-z8gmg" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.883891 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1f4f42b8-506a-4922-b7c4-7f77afbb238c-audit\") pod \"apiserver-76f77b778f-z8gmg\" (UID: \"1f4f42b8-506a-4922-b7c4-7f77afbb238c\") " pod="openshift-apiserver/apiserver-76f77b778f-z8gmg" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.884549 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.885173 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1136cbad-9e47-49e0-a890-83d86d325537-metrics-tls\") pod \"ingress-operator-5b745b69d9-5n294\" (UID: \"1136cbad-9e47-49e0-a890-83d86d325537\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5n294" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.885204 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6b03c59-eb07-4d99-beb5-04e1eb19c7bc-config\") pod \"controller-manager-879f6c89f-l2hps\" (UID: \"d6b03c59-eb07-4d99-beb5-04e1eb19c7bc\") " pod="openshift-controller-manager/controller-manager-879f6c89f-l2hps" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.885223 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d6b03c59-eb07-4d99-beb5-04e1eb19c7bc-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-l2hps\" (UID: \"d6b03c59-eb07-4d99-beb5-04e1eb19c7bc\") " pod="openshift-controller-manager/controller-manager-879f6c89f-l2hps" Feb 02 10:41:10 crc 
kubenswrapper[4782]: I0202 10:41:10.885252 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/03d47200-aed2-431d-89fd-c27cdd91564f-audit-policies\") pod \"oauth-openshift-558db77b4-sc7kt\" (UID: \"03d47200-aed2-431d-89fd-c27cdd91564f\") " pod="openshift-authentication/oauth-openshift-558db77b4-sc7kt" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.885269 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/03d47200-aed2-431d-89fd-c27cdd91564f-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-sc7kt\" (UID: \"03d47200-aed2-431d-89fd-c27cdd91564f\") " pod="openshift-authentication/oauth-openshift-558db77b4-sc7kt" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.885289 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1e5e11c7-6a7f-466b-8d59-674bb931db4c-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-c8k6k\" (UID: \"1e5e11c7-6a7f-466b-8d59-674bb931db4c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-c8k6k" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.885324 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1f4f42b8-506a-4922-b7c4-7f77afbb238c-audit-dir\") pod \"apiserver-76f77b778f-z8gmg\" (UID: \"1f4f42b8-506a-4922-b7c4-7f77afbb238c\") " pod="openshift-apiserver/apiserver-76f77b778f-z8gmg" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.885345 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8224e4c0-380b-489a-98d8-ee1b15c1637a-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-qjf8d\" (UID: \"8224e4c0-380b-489a-98d8-ee1b15c1637a\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-qjf8d" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.885364 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xtfk5\" (UniqueName: \"kubernetes.io/projected/1f4f42b8-506a-4922-b7c4-7f77afbb238c-kube-api-access-xtfk5\") pod \"apiserver-76f77b778f-z8gmg\" (UID: \"1f4f42b8-506a-4922-b7c4-7f77afbb238c\") " pod="openshift-apiserver/apiserver-76f77b778f-z8gmg" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.885393 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1136cbad-9e47-49e0-a890-83d86d325537-bound-sa-token\") pod \"ingress-operator-5b745b69d9-5n294\" (UID: \"1136cbad-9e47-49e0-a890-83d86d325537\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5n294" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.885421 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/03d47200-aed2-431d-89fd-c27cdd91564f-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-sc7kt\" (UID: \"03d47200-aed2-431d-89fd-c27cdd91564f\") " pod="openshift-authentication/oauth-openshift-558db77b4-sc7kt" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.885455 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: 
\"kubernetes.io/configmap/1f4f42b8-506a-4922-b7c4-7f77afbb238c-image-import-ca\") pod \"apiserver-76f77b778f-z8gmg\" (UID: \"1f4f42b8-506a-4922-b7c4-7f77afbb238c\") " pod="openshift-apiserver/apiserver-76f77b778f-z8gmg" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.885475 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1f4f42b8-506a-4922-b7c4-7f77afbb238c-serving-cert\") pod \"apiserver-76f77b778f-z8gmg\" (UID: \"1f4f42b8-506a-4922-b7c4-7f77afbb238c\") " pod="openshift-apiserver/apiserver-76f77b778f-z8gmg" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.885511 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/1f4f42b8-506a-4922-b7c4-7f77afbb238c-node-pullsecrets\") pod \"apiserver-76f77b778f-z8gmg\" (UID: \"1f4f42b8-506a-4922-b7c4-7f77afbb238c\") " pod="openshift-apiserver/apiserver-76f77b778f-z8gmg" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.885535 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c29fd\" (UniqueName: \"kubernetes.io/projected/ed24c96e-c389-443d-bdcf-b6fd727d472e-kube-api-access-c29fd\") pod \"machine-config-controller-84d6567774-59ctg\" (UID: \"ed24c96e-c389-443d-bdcf-b6fd727d472e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-59ctg" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.886245 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d6b03c59-eb07-4d99-beb5-04e1eb19c7bc-serving-cert\") pod \"controller-manager-879f6c89f-l2hps\" (UID: \"d6b03c59-eb07-4d99-beb5-04e1eb19c7bc\") " pod="openshift-controller-manager/controller-manager-879f6c89f-l2hps" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.886736 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/03d47200-aed2-431d-89fd-c27cdd91564f-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-sc7kt\" (UID: \"03d47200-aed2-431d-89fd-c27cdd91564f\") " pod="openshift-authentication/oauth-openshift-558db77b4-sc7kt" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.886851 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1f4f42b8-506a-4922-b7c4-7f77afbb238c-audit-dir\") pod \"apiserver-76f77b778f-z8gmg\" (UID: \"1f4f42b8-506a-4922-b7c4-7f77afbb238c\") " pod="openshift-apiserver/apiserver-76f77b778f-z8gmg" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.886926 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/03d47200-aed2-431d-89fd-c27cdd91564f-audit-policies\") pod \"oauth-openshift-558db77b4-sc7kt\" (UID: \"03d47200-aed2-431d-89fd-c27cdd91564f\") " pod="openshift-authentication/oauth-openshift-558db77b4-sc7kt" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.886974 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.888151 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/1f4f42b8-506a-4922-b7c4-7f77afbb238c-node-pullsecrets\") pod 
\"apiserver-76f77b778f-z8gmg\" (UID: \"1f4f42b8-506a-4922-b7c4-7f77afbb238c\") " pod="openshift-apiserver/apiserver-76f77b778f-z8gmg" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.888269 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.888477 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1f4f42b8-506a-4922-b7c4-7f77afbb238c-image-import-ca\") pod \"apiserver-76f77b778f-z8gmg\" (UID: \"1f4f42b8-506a-4922-b7c4-7f77afbb238c\") " pod="openshift-apiserver/apiserver-76f77b778f-z8gmg" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.888598 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e5e11c7-6a7f-466b-8d59-674bb931db4c-config\") pod \"kube-controller-manager-operator-78b949d7b-c8k6k\" (UID: \"1e5e11c7-6a7f-466b-8d59-674bb931db4c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-c8k6k" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.888672 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1f4f42b8-506a-4922-b7c4-7f77afbb238c-etcd-client\") pod \"apiserver-76f77b778f-z8gmg\" (UID: \"1f4f42b8-506a-4922-b7c4-7f77afbb238c\") " pod="openshift-apiserver/apiserver-76f77b778f-z8gmg" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.888809 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1f4f42b8-506a-4922-b7c4-7f77afbb238c-encryption-config\") pod \"apiserver-76f77b778f-z8gmg\" (UID: \"1f4f42b8-506a-4922-b7c4-7f77afbb238c\") " pod="openshift-apiserver/apiserver-76f77b778f-z8gmg" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.889374 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.889991 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-f8xts"] Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.890849 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1136cbad-9e47-49e0-a890-83d86d325537-metrics-tls\") pod \"ingress-operator-5b745b69d9-5n294\" (UID: \"1136cbad-9e47-49e0-a890-83d86d325537\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5n294" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.890856 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-bhjwh"] Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.890984 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-f8xts" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.891692 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-84dzp"] Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.892066 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-x2mbg"] Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.892473 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-x2mbg" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.892724 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-bhjwh" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.892895 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-84dzp" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.896954 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d6b03c59-eb07-4d99-beb5-04e1eb19c7bc-serving-cert\") pod \"controller-manager-879f6c89f-l2hps\" (UID: \"d6b03c59-eb07-4d99-beb5-04e1eb19c7bc\") " pod="openshift-controller-manager/controller-manager-879f6c89f-l2hps" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.897188 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1f4f42b8-506a-4922-b7c4-7f77afbb238c-serving-cert\") pod \"apiserver-76f77b778f-z8gmg\" (UID: \"1f4f42b8-506a-4922-b7c4-7f77afbb238c\") " pod="openshift-apiserver/apiserver-76f77b778f-z8gmg" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.898018 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.898169 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.900022 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/76afda26-696c-4996-bc58-1c928e4fa92a-trusted-ca-bundle\") pod \"console-f9d7485db-sf9m8\" (UID: \"76afda26-696c-4996-bc58-1c928e4fa92a\") " pod="openshift-console/console-f9d7485db-sf9m8" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.900350 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500470-wxc6r"] Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.903064 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6b03c59-eb07-4d99-beb5-04e1eb19c7bc-config\") pod \"controller-manager-879f6c89f-l2hps\" (UID: \"d6b03c59-eb07-4d99-beb5-04e1eb19c7bc\") " pod="openshift-controller-manager/controller-manager-879f6c89f-l2hps" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.903504 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1f4f42b8-506a-4922-b7c4-7f77afbb238c-etcd-client\") pod \"apiserver-76f77b778f-z8gmg\" (UID: \"1f4f42b8-506a-4922-b7c4-7f77afbb238c\") " 
pod="openshift-apiserver/apiserver-76f77b778f-z8gmg" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.903812 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6x4zp"] Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.904280 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-jxz27"] Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.904764 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-jxz27" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.904926 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500470-wxc6r" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.905206 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1f4f42b8-506a-4922-b7c4-7f77afbb238c-encryption-config\") pod \"apiserver-76f77b778f-z8gmg\" (UID: \"1f4f42b8-506a-4922-b7c4-7f77afbb238c\") " pod="openshift-apiserver/apiserver-76f77b778f-z8gmg" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.905602 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/671fcd5f-c44a-46e7-840f-d204d2464822-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-4pb5d\" (UID: \"671fcd5f-c44a-46e7-840f-d204d2464822\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4pb5d" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.905721 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/671fcd5f-c44a-46e7-840f-d204d2464822-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-4pb5d\" (UID: \"671fcd5f-c44a-46e7-840f-d204d2464822\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4pb5d" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.905798 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/03d47200-aed2-431d-89fd-c27cdd91564f-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-sc7kt\" (UID: \"03d47200-aed2-431d-89fd-c27cdd91564f\") " pod="openshift-authentication/oauth-openshift-558db77b4-sc7kt" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.906874 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/86721216-38a9-4b44-8e34-d01a33c39e82-srv-cert\") pod \"olm-operator-6b444d44fb-pvghb\" (UID: \"86721216-38a9-4b44-8e34-d01a33c39e82\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pvghb" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.906955 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/fc962b97-f5d3-4673-9a39-8fbf6bc2424f-stats-auth\") pod \"router-default-5444994796-29qjf\" (UID: \"fc962b97-f5d3-4673-9a39-8fbf6bc2424f\") " pod="openshift-ingress/router-default-5444994796-29qjf" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.907101 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/1e5e11c7-6a7f-466b-8d59-674bb931db4c-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-c8k6k\" (UID: \"1e5e11c7-6a7f-466b-8d59-674bb931db4c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-c8k6k" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.907132 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9tpw2\" (UniqueName: \"kubernetes.io/projected/86721216-38a9-4b44-8e34-d01a33c39e82-kube-api-access-9tpw2\") pod \"olm-operator-6b444d44fb-pvghb\" (UID: \"86721216-38a9-4b44-8e34-d01a33c39e82\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pvghb" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.907189 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ed24c96e-c389-443d-bdcf-b6fd727d472e-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-59ctg\" (UID: \"ed24c96e-c389-443d-bdcf-b6fd727d472e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-59ctg" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.907225 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ed24c96e-c389-443d-bdcf-b6fd727d472e-proxy-tls\") pod \"machine-config-controller-84d6567774-59ctg\" (UID: \"ed24c96e-c389-443d-bdcf-b6fd727d472e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-59ctg" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.907293 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/03d47200-aed2-431d-89fd-c27cdd91564f-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-sc7kt\" (UID: \"03d47200-aed2-431d-89fd-c27cdd91564f\") " pod="openshift-authentication/oauth-openshift-558db77b4-sc7kt" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.907372 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f4f42b8-506a-4922-b7c4-7f77afbb238c-config\") pod \"apiserver-76f77b778f-z8gmg\" (UID: \"1f4f42b8-506a-4922-b7c4-7f77afbb238c\") " pod="openshift-apiserver/apiserver-76f77b778f-z8gmg" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.907405 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/03d47200-aed2-431d-89fd-c27cdd91564f-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-sc7kt\" (UID: \"03d47200-aed2-431d-89fd-c27cdd91564f\") " pod="openshift-authentication/oauth-openshift-558db77b4-sc7kt" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.908115 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1136cbad-9e47-49e0-a890-83d86d325537-trusted-ca\") pod \"ingress-operator-5b745b69d9-5n294\" (UID: \"1136cbad-9e47-49e0-a890-83d86d325537\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5n294" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.908250 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: 
\"kubernetes.io/secret/03d47200-aed2-431d-89fd-c27cdd91564f-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-sc7kt\" (UID: \"03d47200-aed2-431d-89fd-c27cdd91564f\") " pod="openshift-authentication/oauth-openshift-558db77b4-sc7kt" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.908330 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/fc962b97-f5d3-4673-9a39-8fbf6bc2424f-default-certificate\") pod \"router-default-5444994796-29qjf\" (UID: \"fc962b97-f5d3-4673-9a39-8fbf6bc2424f\") " pod="openshift-ingress/router-default-5444994796-29qjf" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.908368 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/671fcd5f-c44a-46e7-840f-d204d2464822-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-4pb5d\" (UID: \"671fcd5f-c44a-46e7-840f-d204d2464822\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4pb5d" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.908393 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1f4f42b8-506a-4922-b7c4-7f77afbb238c-trusted-ca-bundle\") pod \"apiserver-76f77b778f-z8gmg\" (UID: \"1f4f42b8-506a-4922-b7c4-7f77afbb238c\") " pod="openshift-apiserver/apiserver-76f77b778f-z8gmg" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.908451 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fc962b97-f5d3-4673-9a39-8fbf6bc2424f-metrics-certs\") pod \"router-default-5444994796-29qjf\" (UID: \"fc962b97-f5d3-4673-9a39-8fbf6bc2424f\") " pod="openshift-ingress/router-default-5444994796-29qjf" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.908488 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d6b03c59-eb07-4d99-beb5-04e1eb19c7bc-client-ca\") pod \"controller-manager-879f6c89f-l2hps\" (UID: \"d6b03c59-eb07-4d99-beb5-04e1eb19c7bc\") " pod="openshift-controller-manager/controller-manager-879f6c89f-l2hps" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.908514 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1f4f42b8-506a-4922-b7c4-7f77afbb238c-etcd-serving-ca\") pod \"apiserver-76f77b778f-z8gmg\" (UID: \"1f4f42b8-506a-4922-b7c4-7f77afbb238c\") " pod="openshift-apiserver/apiserver-76f77b778f-z8gmg" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.908540 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/03d47200-aed2-431d-89fd-c27cdd91564f-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-sc7kt\" (UID: \"03d47200-aed2-431d-89fd-c27cdd91564f\") " pod="openshift-authentication/oauth-openshift-558db77b4-sc7kt" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.908593 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/03d47200-aed2-431d-89fd-c27cdd91564f-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-sc7kt\" (UID: \"03d47200-aed2-431d-89fd-c27cdd91564f\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-sc7kt" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.908626 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/03d47200-aed2-431d-89fd-c27cdd91564f-audit-dir\") pod \"oauth-openshift-558db77b4-sc7kt\" (UID: \"03d47200-aed2-431d-89fd-c27cdd91564f\") " pod="openshift-authentication/oauth-openshift-558db77b4-sc7kt" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.908676 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/03d47200-aed2-431d-89fd-c27cdd91564f-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-sc7kt\" (UID: \"03d47200-aed2-431d-89fd-c27cdd91564f\") " pod="openshift-authentication/oauth-openshift-558db77b4-sc7kt" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.908703 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gmqdm\" (UniqueName: \"kubernetes.io/projected/1136cbad-9e47-49e0-a890-83d86d325537-kube-api-access-gmqdm\") pod \"ingress-operator-5b745b69d9-5n294\" (UID: \"1136cbad-9e47-49e0-a890-83d86d325537\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5n294" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.908738 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7dqmf\" (UniqueName: \"kubernetes.io/projected/d6b03c59-eb07-4d99-beb5-04e1eb19c7bc-kube-api-access-7dqmf\") pod \"controller-manager-879f6c89f-l2hps\" (UID: \"d6b03c59-eb07-4d99-beb5-04e1eb19c7bc\") " pod="openshift-controller-manager/controller-manager-879f6c89f-l2hps" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.908799 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/03d47200-aed2-431d-89fd-c27cdd91564f-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-sc7kt\" (UID: \"03d47200-aed2-431d-89fd-c27cdd91564f\") " pod="openshift-authentication/oauth-openshift-558db77b4-sc7kt" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.908835 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dch5s\" (UniqueName: \"kubernetes.io/projected/8224e4c0-380b-489a-98d8-ee1b15c1637a-kube-api-access-dch5s\") pod \"multus-admission-controller-857f4d67dd-qjf8d\" (UID: \"8224e4c0-380b-489a-98d8-ee1b15c1637a\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-qjf8d" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.908860 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/86721216-38a9-4b44-8e34-d01a33c39e82-profile-collector-cert\") pod \"olm-operator-6b444d44fb-pvghb\" (UID: \"86721216-38a9-4b44-8e34-d01a33c39e82\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pvghb" Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.908894 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/03d47200-aed2-431d-89fd-c27cdd91564f-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-sc7kt\" (UID: \"03d47200-aed2-431d-89fd-c27cdd91564f\") " pod="openshift-authentication/oauth-openshift-558db77b4-sc7kt" Feb 02 10:41:10 
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.910015 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/03d47200-aed2-431d-89fd-c27cdd91564f-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-sc7kt\" (UID: \"03d47200-aed2-431d-89fd-c27cdd91564f\") " pod="openshift-authentication/oauth-openshift-558db77b4-sc7kt"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.910189 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.910789 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.912476 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f4f42b8-506a-4922-b7c4-7f77afbb238c-config\") pod \"apiserver-76f77b778f-z8gmg\" (UID: \"1f4f42b8-506a-4922-b7c4-7f77afbb238c\") " pod="openshift-apiserver/apiserver-76f77b778f-z8gmg"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.914586 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.916137 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1f4f42b8-506a-4922-b7c4-7f77afbb238c-trusted-ca-bundle\") pod \"apiserver-76f77b778f-z8gmg\" (UID: \"1f4f42b8-506a-4922-b7c4-7f77afbb238c\") " pod="openshift-apiserver/apiserver-76f77b778f-z8gmg"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.916348 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/03d47200-aed2-431d-89fd-c27cdd91564f-audit-dir\") pod \"oauth-openshift-558db77b4-sc7kt\" (UID: \"03d47200-aed2-431d-89fd-c27cdd91564f\") " pod="openshift-authentication/oauth-openshift-558db77b4-sc7kt"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.916871 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5ttkr"]
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.917303 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/03d47200-aed2-431d-89fd-c27cdd91564f-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-sc7kt\" (UID: \"03d47200-aed2-431d-89fd-c27cdd91564f\") " pod="openshift-authentication/oauth-openshift-558db77b4-sc7kt"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.917379 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6x4zp"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.918041 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-lbq6z"]
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.918256 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5ttkr"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.918991 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1f4f42b8-506a-4922-b7c4-7f77afbb238c-etcd-serving-ca\") pod \"apiserver-76f77b778f-z8gmg\" (UID: \"1f4f42b8-506a-4922-b7c4-7f77afbb238c\") " pod="openshift-apiserver/apiserver-76f77b778f-z8gmg"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.919554 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-lbq6z"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.919579 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d6b03c59-eb07-4d99-beb5-04e1eb19c7bc-client-ca\") pod \"controller-manager-879f6c89f-l2hps\" (UID: \"d6b03c59-eb07-4d99-beb5-04e1eb19c7bc\") " pod="openshift-controller-manager/controller-manager-879f6c89f-l2hps"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.925051 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1136cbad-9e47-49e0-a890-83d86d325537-trusted-ca\") pod \"ingress-operator-5b745b69d9-5n294\" (UID: \"1136cbad-9e47-49e0-a890-83d86d325537\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5n294"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.926086 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/03d47200-aed2-431d-89fd-c27cdd91564f-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-sc7kt\" (UID: \"03d47200-aed2-431d-89fd-c27cdd91564f\") " pod="openshift-authentication/oauth-openshift-558db77b4-sc7kt"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.937398 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.938029 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.938373 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.939049 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.940272 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9f11a2b3-15a4-4358-8604-bf4e6a0d22fe-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-2ff7x\" (UID: \"9f11a2b3-15a4-4358-8604-bf4e6a0d22fe\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2ff7x"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.941680 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-qsnhv"]
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.942622 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-qsnhv"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.957400 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/03d47200-aed2-431d-89fd-c27cdd91564f-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-sc7kt\" (UID: \"03d47200-aed2-431d-89fd-c27cdd91564f\") " pod="openshift-authentication/oauth-openshift-558db77b4-sc7kt"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.963843 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d6b03c59-eb07-4d99-beb5-04e1eb19c7bc-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-l2hps\" (UID: \"d6b03c59-eb07-4d99-beb5-04e1eb19c7bc\") " pod="openshift-controller-manager/controller-manager-879f6c89f-l2hps"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.964484 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.980984 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/03d47200-aed2-431d-89fd-c27cdd91564f-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-sc7kt\" (UID: \"03d47200-aed2-431d-89fd-c27cdd91564f\") " pod="openshift-authentication/oauth-openshift-558db77b4-sc7kt"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.981760 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/03d47200-aed2-431d-89fd-c27cdd91564f-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-sc7kt\" (UID: \"03d47200-aed2-431d-89fd-c27cdd91564f\") " pod="openshift-authentication/oauth-openshift-558db77b4-sc7kt"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.982723 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-vtkbj"]
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.984027 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jftgj"]
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.984135 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-vtkbj"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.985392 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/86721216-38a9-4b44-8e34-d01a33c39e82-profile-collector-cert\") pod \"olm-operator-6b444d44fb-pvghb\" (UID: \"86721216-38a9-4b44-8e34-d01a33c39e82\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pvghb"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.986261 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/fc962b97-f5d3-4673-9a39-8fbf6bc2424f-default-certificate\") pod \"router-default-5444994796-29qjf\" (UID: \"fc962b97-f5d3-4673-9a39-8fbf6bc2424f\") " pod="openshift-ingress/router-default-5444994796-29qjf"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.988891 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ca4c1e0-8b33-49fe-9f13-22feb88fd1ce-config\") pod \"machine-approver-56656f9798-nrz8z\" (UID: \"1ca4c1e0-8b33-49fe-9f13-22feb88fd1ce\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nrz8z"
Feb 02 10:41:10 crc kubenswrapper[4782]: I0202 10:41:10.997231 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/03d47200-aed2-431d-89fd-c27cdd91564f-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-sc7kt\" (UID: \"03d47200-aed2-431d-89fd-c27cdd91564f\") " pod="openshift-authentication/oauth-openshift-558db77b4-sc7kt"
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.001600 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/03d47200-aed2-431d-89fd-c27cdd91564f-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-sc7kt\" (UID: \"03d47200-aed2-431d-89fd-c27cdd91564f\") " pod="openshift-authentication/oauth-openshift-558db77b4-sc7kt"
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.002898 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.003039 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc1f149e-4ec4-423a-b94e-bf0923a75bdf-config\") pod \"kube-apiserver-operator-766d6c64bb-9s279\" (UID: \"cc1f149e-4ec4-423a-b94e-bf0923a75bdf\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9s279"
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.003153 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/86721216-38a9-4b44-8e34-d01a33c39e82-srv-cert\") pod \"olm-operator-6b444d44fb-pvghb\" (UID: \"86721216-38a9-4b44-8e34-d01a33c39e82\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pvghb"
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.003299 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m55nt"]
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.004924 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-7v92z"]
pods=["openshift-config-operator/openshift-config-operator-7777fb866f-7v92z"] Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.009052 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cc1f149e-4ec4-423a-b94e-bf0923a75bdf-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-9s279\" (UID: \"cc1f149e-4ec4-423a-b94e-bf0923a75bdf\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9s279" Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.010531 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.010265 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvlbc\" (UniqueName: \"kubernetes.io/projected/1e893e98-9670-49d0-8312-d78c86a14ba4-kube-api-access-qvlbc\") pod \"service-ca-operator-777779d784-vtkbj\" (UID: \"1e893e98-9670-49d0-8312-d78c86a14ba4\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-vtkbj" Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.010688 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9ee676ac-a60f-4855-949f-d3210f9314f5-auth-proxy-config\") pod \"machine-config-operator-74547568cd-l59tj\" (UID: \"9ee676ac-a60f-4855-949f-d3210f9314f5\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-l59tj" Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.010760 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/6e2ef5a7-dbcc-4ba6-ae0a-fc7a7146af7a-signing-cabundle\") pod \"service-ca-9c57cc56f-lbq6z\" (UID: \"6e2ef5a7-dbcc-4ba6-ae0a-fc7a7146af7a\") " pod="openshift-service-ca/service-ca-9c57cc56f-lbq6z" Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.010797 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhxr7\" (UniqueName: \"kubernetes.io/projected/4e9af173-4335-4ebd-9b11-dfb4180e968b-kube-api-access-zhxr7\") pod \"dns-operator-744455d44c-bhjwh\" (UID: \"4e9af173-4335-4ebd-9b11-dfb4180e968b\") " pod="openshift-dns-operator/dns-operator-744455d44c-bhjwh" Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.010854 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9832aa65-d498-4a21-b53a-ebc591328a00-config-volume\") pod \"collect-profiles-29500470-wxc6r\" (UID: \"9832aa65-d498-4a21-b53a-ebc591328a00\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500470-wxc6r" Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.011052 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6pzt4"] Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.011237 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgqdc\" (UniqueName: \"kubernetes.io/projected/b925d6b9-8b5c-4407-bd7b-9ddcbc62d78d-kube-api-access-hgqdc\") pod \"migrator-59844c95c7-bpzbh\" (UID: \"b925d6b9-8b5c-4407-bd7b-9ddcbc62d78d\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-bpzbh" 
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.011307 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c29fd\" (UniqueName: \"kubernetes.io/projected/ed24c96e-c389-443d-bdcf-b6fd727d472e-kube-api-access-c29fd\") pod \"machine-config-controller-84d6567774-59ctg\" (UID: \"ed24c96e-c389-443d-bdcf-b6fd727d472e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-59ctg"
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.011338 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e893e98-9670-49d0-8312-d78c86a14ba4-config\") pod \"service-ca-operator-777779d784-vtkbj\" (UID: \"1e893e98-9670-49d0-8312-d78c86a14ba4\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-vtkbj"
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.011388 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cv849\" (UniqueName: \"kubernetes.io/projected/83c24a27-fdbe-468f-b4cf-780c87b598ae-kube-api-access-cv849\") pod \"marketplace-operator-79b997595-dsb8s\" (UID: \"83c24a27-fdbe-468f-b4cf-780c87b598ae\") " pod="openshift-marketplace/marketplace-operator-79b997595-dsb8s"
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.011420 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxp9b\" (UniqueName: \"kubernetes.io/projected/9ee676ac-a60f-4855-949f-d3210f9314f5-kube-api-access-lxp9b\") pod \"machine-config-operator-74547568cd-l59tj\" (UID: \"9ee676ac-a60f-4855-949f-d3210f9314f5\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-l59tj"
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.011502 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kzgdp\" (UniqueName: \"kubernetes.io/projected/9832aa65-d498-4a21-b53a-ebc591328a00-kube-api-access-kzgdp\") pod \"collect-profiles-29500470-wxc6r\" (UID: \"9832aa65-d498-4a21-b53a-ebc591328a00\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500470-wxc6r"
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.011943 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ed24c96e-c389-443d-bdcf-b6fd727d472e-proxy-tls\") pod \"machine-config-controller-84d6567774-59ctg\" (UID: \"ed24c96e-c389-443d-bdcf-b6fd727d472e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-59ctg"
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.012008 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ed24c96e-c389-443d-bdcf-b6fd727d472e-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-59ctg\" (UID: \"ed24c96e-c389-443d-bdcf-b6fd727d472e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-59ctg"
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.012048 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1e893e98-9670-49d0-8312-d78c86a14ba4-serving-cert\") pod \"service-ca-operator-777779d784-vtkbj\" (UID: \"1e893e98-9670-49d0-8312-d78c86a14ba4\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-vtkbj"
pod="openshift-service-ca-operator/service-ca-operator-777779d784-vtkbj" Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.012116 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/9ee676ac-a60f-4855-949f-d3210f9314f5-images\") pod \"machine-config-operator-74547568cd-l59tj\" (UID: \"9ee676ac-a60f-4855-949f-d3210f9314f5\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-l59tj" Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.012154 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9s279"] Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.012591 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bdff8\" (UniqueName: \"kubernetes.io/projected/2fdb9068-c8eb-4a1d-b4ab-c3f2ed70e4c1-kube-api-access-bdff8\") pod \"control-plane-machine-set-operator-78cbb6b69f-wqm6f\" (UID: \"2fdb9068-c8eb-4a1d-b4ab-c3f2ed70e4c1\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-wqm6f" Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.012704 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/2fdb9068-c8eb-4a1d-b4ab-c3f2ed70e4c1-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-wqm6f\" (UID: \"2fdb9068-c8eb-4a1d-b4ab-c3f2ed70e4c1\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-wqm6f" Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.013016 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/9ee676ac-a60f-4855-949f-d3210f9314f5-proxy-tls\") pod \"machine-config-operator-74547568cd-l59tj\" (UID: \"9ee676ac-a60f-4855-949f-d3210f9314f5\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-l59tj" Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.013087 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/83c24a27-fdbe-468f-b4cf-780c87b598ae-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-dsb8s\" (UID: \"83c24a27-fdbe-468f-b4cf-780c87b598ae\") " pod="openshift-marketplace/marketplace-operator-79b997595-dsb8s" Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.013116 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9832aa65-d498-4a21-b53a-ebc591328a00-secret-volume\") pod \"collect-profiles-29500470-wxc6r\" (UID: \"9832aa65-d498-4a21-b53a-ebc591328a00\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500470-wxc6r" Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.013151 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2ml7\" (UniqueName: \"kubernetes.io/projected/6e2ef5a7-dbcc-4ba6-ae0a-fc7a7146af7a-kube-api-access-l2ml7\") pod \"service-ca-9c57cc56f-lbq6z\" (UID: \"6e2ef5a7-dbcc-4ba6-ae0a-fc7a7146af7a\") " pod="openshift-service-ca/service-ca-9c57cc56f-lbq6z" Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.014245 4782 
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.014487 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.017721 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fc962b97-f5d3-4673-9a39-8fbf6bc2424f-metrics-certs\") pod \"router-default-5444994796-29qjf\" (UID: \"fc962b97-f5d3-4673-9a39-8fbf6bc2424f\") " pod="openshift-ingress/router-default-5444994796-29qjf"
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.018850 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/fc962b97-f5d3-4673-9a39-8fbf6bc2424f-stats-auth\") pod \"router-default-5444994796-29qjf\" (UID: \"fc962b97-f5d3-4673-9a39-8fbf6bc2424f\") " pod="openshift-ingress/router-default-5444994796-29qjf"
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.026260 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/03d47200-aed2-431d-89fd-c27cdd91564f-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-sc7kt\" (UID: \"03d47200-aed2-431d-89fd-c27cdd91564f\") " pod="openshift-authentication/oauth-openshift-558db77b4-sc7kt"
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.027180 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/83c24a27-fdbe-468f-b4cf-780c87b598ae-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-dsb8s\" (UID: \"83c24a27-fdbe-468f-b4cf-780c87b598ae\") " pod="openshift-marketplace/marketplace-operator-79b997595-dsb8s"
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.027297 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4e9af173-4335-4ebd-9b11-dfb4180e968b-metrics-tls\") pod \"dns-operator-744455d44c-bhjwh\" (UID: \"4e9af173-4335-4ebd-9b11-dfb4180e968b\") " pod="openshift-dns-operator/dns-operator-744455d44c-bhjwh"
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.027393 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/6e2ef5a7-dbcc-4ba6-ae0a-fc7a7146af7a-signing-key\") pod \"service-ca-9c57cc56f-lbq6z\" (UID: \"6e2ef5a7-dbcc-4ba6-ae0a-fc7a7146af7a\") " pod="openshift-service-ca/service-ca-9c57cc56f-lbq6z"
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.030723 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/03d47200-aed2-431d-89fd-c27cdd91564f-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-sc7kt\" (UID: \"03d47200-aed2-431d-89fd-c27cdd91564f\") " pod="openshift-authentication/oauth-openshift-558db77b4-sc7kt"
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.031237 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.032767 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-b98c7"]
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.040211 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2ff7x"]
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.041233 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fc962b97-f5d3-4673-9a39-8fbf6bc2424f-service-ca-bundle\") pod \"router-default-5444994796-29qjf\" (UID: \"fc962b97-f5d3-4673-9a39-8fbf6bc2424f\") " pod="openshift-ingress/router-default-5444994796-29qjf"
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.043714 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-jvpsj"]
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.048070 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-5br4b"]
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.048521 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-jvpsj"
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.053089 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.054301 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-rqqwp"]
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.055792 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-rqqwp"
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.057404 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-x2mbg"]
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.058926 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-qjf8d"]
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.059833 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-l2hps"]
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.062239 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-bpzbh"]
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.063859 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-fwkht"]
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.068296 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-96t4g"]
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.071810 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-r7j2r"]
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.071880 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.073464 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500470-wxc6r"]
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.073611 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-r7j2r"
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.074947 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-84dzp"]
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.076804 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-59ctg"]
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.079295 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-l59tj"]
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.081667 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pvghb"]
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.082125 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-sf9m8"]
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.083286 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-4b45h"]
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.084796 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8224e4c0-380b-489a-98d8-ee1b15c1637a-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-qjf8d\" (UID: \"8224e4c0-380b-489a-98d8-ee1b15c1637a\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-qjf8d"
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.085003 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-sc7kt"]
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.085983 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-5n294"]
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.087355 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-z8gmg"]
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.088584 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-f8xts"]
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.089813 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-rx8sj"]
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.090987 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.091407 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-dsb8s"]
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.091576 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-rx8sj"
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.093075 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-c8k6k"]
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.094255 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-vtkbj"]
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.095556 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-qsnhv"]
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.096680 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4pb5d"]
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.098457 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-wqm6f"]
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.099012 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-r7j2r"]
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.100207 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5ttkr"]
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.101028 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-rqqwp"]
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.102161 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-bhjwh"]
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.103419 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-rx8sj"]
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.104922 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-lbq6z"]
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.106805 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6x4zp"]
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.108175 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-jxz27"]
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.110993 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw"
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.129344 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kzgdp\" (UniqueName: \"kubernetes.io/projected/9832aa65-d498-4a21-b53a-ebc591328a00-kube-api-access-kzgdp\") pod \"collect-profiles-29500470-wxc6r\" (UID: \"9832aa65-d498-4a21-b53a-ebc591328a00\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500470-wxc6r"
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.129411 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1e893e98-9670-49d0-8312-d78c86a14ba4-serving-cert\") pod \"service-ca-operator-777779d784-vtkbj\" (UID: \"1e893e98-9670-49d0-8312-d78c86a14ba4\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-vtkbj"
pod="openshift-service-ca-operator/service-ca-operator-777779d784-vtkbj" Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.129504 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/9ee676ac-a60f-4855-949f-d3210f9314f5-images\") pod \"machine-config-operator-74547568cd-l59tj\" (UID: \"9ee676ac-a60f-4855-949f-d3210f9314f5\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-l59tj" Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.129549 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bdff8\" (UniqueName: \"kubernetes.io/projected/2fdb9068-c8eb-4a1d-b4ab-c3f2ed70e4c1-kube-api-access-bdff8\") pod \"control-plane-machine-set-operator-78cbb6b69f-wqm6f\" (UID: \"2fdb9068-c8eb-4a1d-b4ab-c3f2ed70e4c1\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-wqm6f" Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.129592 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/2fdb9068-c8eb-4a1d-b4ab-c3f2ed70e4c1-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-wqm6f\" (UID: \"2fdb9068-c8eb-4a1d-b4ab-c3f2ed70e4c1\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-wqm6f" Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.129624 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/9ee676ac-a60f-4855-949f-d3210f9314f5-proxy-tls\") pod \"machine-config-operator-74547568cd-l59tj\" (UID: \"9ee676ac-a60f-4855-949f-d3210f9314f5\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-l59tj" Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.129680 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/83c24a27-fdbe-468f-b4cf-780c87b598ae-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-dsb8s\" (UID: \"83c24a27-fdbe-468f-b4cf-780c87b598ae\") " pod="openshift-marketplace/marketplace-operator-79b997595-dsb8s" Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.129709 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9832aa65-d498-4a21-b53a-ebc591328a00-secret-volume\") pod \"collect-profiles-29500470-wxc6r\" (UID: \"9832aa65-d498-4a21-b53a-ebc591328a00\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500470-wxc6r" Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.129741 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l2ml7\" (UniqueName: \"kubernetes.io/projected/6e2ef5a7-dbcc-4ba6-ae0a-fc7a7146af7a-kube-api-access-l2ml7\") pod \"service-ca-9c57cc56f-lbq6z\" (UID: \"6e2ef5a7-dbcc-4ba6-ae0a-fc7a7146af7a\") " pod="openshift-service-ca/service-ca-9c57cc56f-lbq6z" Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.129806 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/83c24a27-fdbe-468f-b4cf-780c87b598ae-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-dsb8s\" (UID: \"83c24a27-fdbe-468f-b4cf-780c87b598ae\") " 
pod="openshift-marketplace/marketplace-operator-79b997595-dsb8s" Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.129870 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4e9af173-4335-4ebd-9b11-dfb4180e968b-metrics-tls\") pod \"dns-operator-744455d44c-bhjwh\" (UID: \"4e9af173-4335-4ebd-9b11-dfb4180e968b\") " pod="openshift-dns-operator/dns-operator-744455d44c-bhjwh" Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.129916 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/6e2ef5a7-dbcc-4ba6-ae0a-fc7a7146af7a-signing-key\") pod \"service-ca-9c57cc56f-lbq6z\" (UID: \"6e2ef5a7-dbcc-4ba6-ae0a-fc7a7146af7a\") " pod="openshift-service-ca/service-ca-9c57cc56f-lbq6z" Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.130013 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qvlbc\" (UniqueName: \"kubernetes.io/projected/1e893e98-9670-49d0-8312-d78c86a14ba4-kube-api-access-qvlbc\") pod \"service-ca-operator-777779d784-vtkbj\" (UID: \"1e893e98-9670-49d0-8312-d78c86a14ba4\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-vtkbj" Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.130086 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9ee676ac-a60f-4855-949f-d3210f9314f5-auth-proxy-config\") pod \"machine-config-operator-74547568cd-l59tj\" (UID: \"9ee676ac-a60f-4855-949f-d3210f9314f5\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-l59tj" Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.130120 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/6e2ef5a7-dbcc-4ba6-ae0a-fc7a7146af7a-signing-cabundle\") pod \"service-ca-9c57cc56f-lbq6z\" (UID: \"6e2ef5a7-dbcc-4ba6-ae0a-fc7a7146af7a\") " pod="openshift-service-ca/service-ca-9c57cc56f-lbq6z" Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.130150 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zhxr7\" (UniqueName: \"kubernetes.io/projected/4e9af173-4335-4ebd-9b11-dfb4180e968b-kube-api-access-zhxr7\") pod \"dns-operator-744455d44c-bhjwh\" (UID: \"4e9af173-4335-4ebd-9b11-dfb4180e968b\") " pod="openshift-dns-operator/dns-operator-744455d44c-bhjwh" Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.130192 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9832aa65-d498-4a21-b53a-ebc591328a00-config-volume\") pod \"collect-profiles-29500470-wxc6r\" (UID: \"9832aa65-d498-4a21-b53a-ebc591328a00\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500470-wxc6r" Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.130270 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hgqdc\" (UniqueName: \"kubernetes.io/projected/b925d6b9-8b5c-4407-bd7b-9ddcbc62d78d-kube-api-access-hgqdc\") pod \"migrator-59844c95c7-bpzbh\" (UID: \"b925d6b9-8b5c-4407-bd7b-9ddcbc62d78d\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-bpzbh" Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.130320 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/1e893e98-9670-49d0-8312-d78c86a14ba4-config\") pod \"service-ca-operator-777779d784-vtkbj\" (UID: \"1e893e98-9670-49d0-8312-d78c86a14ba4\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-vtkbj" Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.130356 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cv849\" (UniqueName: \"kubernetes.io/projected/83c24a27-fdbe-468f-b4cf-780c87b598ae-kube-api-access-cv849\") pod \"marketplace-operator-79b997595-dsb8s\" (UID: \"83c24a27-fdbe-468f-b4cf-780c87b598ae\") " pod="openshift-marketplace/marketplace-operator-79b997595-dsb8s" Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.130382 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxp9b\" (UniqueName: \"kubernetes.io/projected/9ee676ac-a60f-4855-949f-d3210f9314f5-kube-api-access-lxp9b\") pod \"machine-config-operator-74547568cd-l59tj\" (UID: \"9ee676ac-a60f-4855-949f-d3210f9314f5\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-l59tj" Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.131074 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9ee676ac-a60f-4855-949f-d3210f9314f5-auth-proxy-config\") pod \"machine-config-operator-74547568cd-l59tj\" (UID: \"9ee676ac-a60f-4855-949f-d3210f9314f5\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-l59tj" Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.132420 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.142615 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1e5e11c7-6a7f-466b-8d59-674bb931db4c-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-c8k6k\" (UID: \"1e5e11c7-6a7f-466b-8d59-674bb931db4c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-c8k6k" Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.151616 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.158926 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9832aa65-d498-4a21-b53a-ebc591328a00-secret-volume\") pod \"collect-profiles-29500470-wxc6r\" (UID: \"9832aa65-d498-4a21-b53a-ebc591328a00\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500470-wxc6r" Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.159460 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e5e11c7-6a7f-466b-8d59-674bb931db4c-config\") pod \"kube-controller-manager-operator-78b949d7b-c8k6k\" (UID: \"1e5e11c7-6a7f-466b-8d59-674bb931db4c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-c8k6k" Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.170612 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.191167 4782 
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.197377 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/671fcd5f-c44a-46e7-840f-d204d2464822-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-4pb5d\" (UID: \"671fcd5f-c44a-46e7-840f-d204d2464822\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4pb5d"
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.210255 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r"
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.230954 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.244582 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/671fcd5f-c44a-46e7-840f-d204d2464822-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-4pb5d\" (UID: \"671fcd5f-c44a-46e7-840f-d204d2464822\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4pb5d"
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.250732 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.272950 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg"
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.291027 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.310772 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.330963 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.343900 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/2fdb9068-c8eb-4a1d-b4ab-c3f2ed70e4c1-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-wqm6f\" (UID: \"2fdb9068-c8eb-4a1d-b4ab-c3f2ed70e4c1\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-wqm6f"
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.351330 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt"
Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.406270 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lr8vc\" (UniqueName: \"kubernetes.io/projected/ccfe41db-4509-43cc-a95c-9ac09e6c9390-kube-api-access-lr8vc\") pod \"openshift-controller-manager-operator-756b6f6bc6-m55nt\" (UID: \"ccfe41db-4509-43cc-a95c-9ac09e6c9390\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m55nt"
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m55nt" Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.426087 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7stzk\" (UniqueName: \"kubernetes.io/projected/872ffab3-f760-45e2-a5c8-aa1055f9ab2d-kube-api-access-7stzk\") pod \"openshift-apiserver-operator-796bbdcf4f-jftgj\" (UID: \"872ffab3-f760-45e2-a5c8-aa1055f9ab2d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jftgj" Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.445579 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5r4w\" (UniqueName: \"kubernetes.io/projected/5466b6b9-d1d5-471e-90e7-75f07078f8dc-kube-api-access-k5r4w\") pod \"authentication-operator-69f744f599-b98c7\" (UID: \"5466b6b9-d1d5-471e-90e7-75f07078f8dc\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-b98c7" Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.451798 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.470591 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.480869 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/9ee676ac-a60f-4855-949f-d3210f9314f5-images\") pod \"machine-config-operator-74547568cd-l59tj\" (UID: \"9ee676ac-a60f-4855-949f-d3210f9314f5\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-l59tj" Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.490330 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.511187 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.526421 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/83c24a27-fdbe-468f-b4cf-780c87b598ae-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-dsb8s\" (UID: \"83c24a27-fdbe-468f-b4cf-780c87b598ae\") " pod="openshift-marketplace/marketplace-operator-79b997595-dsb8s" Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.535715 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.542332 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/83c24a27-fdbe-468f-b4cf-780c87b598ae-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-dsb8s\" (UID: \"83c24a27-fdbe-468f-b4cf-780c87b598ae\") " pod="openshift-marketplace/marketplace-operator-79b997595-dsb8s" Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.551039 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.570912 4782 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.584990 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jftgj" Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.591524 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.603391 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/9ee676ac-a60f-4855-949f-d3210f9314f5-proxy-tls\") pod \"machine-config-operator-74547568cd-l59tj\" (UID: \"9ee676ac-a60f-4855-949f-d3210f9314f5\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-l59tj" Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.627325 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p9tz5\" (UniqueName: \"kubernetes.io/projected/59a1b37a-9035-459b-a485-280325d33264-kube-api-access-p9tz5\") pod \"route-controller-manager-6576b87f9c-96t4g\" (UID: \"59a1b37a-9035-459b-a485-280325d33264\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-96t4g" Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.632524 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-b98c7" Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.638473 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m55nt" Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.652841 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5mr74\" (UniqueName: \"kubernetes.io/projected/e74c7e17-c70b-4637-ad47-58e1e192c52e-kube-api-access-5mr74\") pod \"downloads-7954f5f757-4b45h\" (UID: \"e74c7e17-c70b-4637-ad47-58e1e192c52e\") " pod="openshift-console/downloads-7954f5f757-4b45h" Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.665337 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cc1f149e-4ec4-423a-b94e-bf0923a75bdf-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-9s279\" (UID: \"cc1f149e-4ec4-423a-b94e-bf0923a75bdf\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9s279" Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.708544 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-84lv6\" (UniqueName: \"kubernetes.io/projected/3e5362ac-062f-4bf2-a0dc-e96b2750ab52-kube-api-access-84lv6\") pod \"cluster-samples-operator-665b6dd947-6pzt4\" (UID: \"3e5362ac-062f-4bf2-a0dc-e96b2750ab52\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6pzt4" Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.712928 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bklv5\" (UniqueName: \"kubernetes.io/projected/76afda26-696c-4996-bc58-1c928e4fa92a-kube-api-access-bklv5\") pod \"console-f9d7485db-sf9m8\" (UID: \"76afda26-696c-4996-bc58-1c928e4fa92a\") " pod="openshift-console/console-f9d7485db-sf9m8" Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.729388 4782 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6xvws\" (UniqueName: \"kubernetes.io/projected/063dd8d0-356e-4c11-96fd-6ecee1f28da8-kube-api-access-6xvws\") pod \"machine-api-operator-5694c8668f-5br4b\" (UID: \"063dd8d0-356e-4c11-96fd-6ecee1f28da8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5br4b" Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.765081 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9f11a2b3-15a4-4358-8604-bf4e6a0d22fe-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-2ff7x\" (UID: \"9f11a2b3-15a4-4358-8604-bf4e6a0d22fe\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2ff7x" Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.776918 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pdbjg\" (UniqueName: \"kubernetes.io/projected/9f11a2b3-15a4-4358-8604-bf4e6a0d22fe-kube-api-access-pdbjg\") pod \"cluster-image-registry-operator-dc59b4c8b-2ff7x\" (UID: \"9f11a2b3-15a4-4358-8604-bf4e6a0d22fe\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2ff7x" Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.778562 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.789574 4782 request.go:700] Waited for 1.007236279s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmcc-proxy-tls&limit=500&resourceVersion=0 Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.797411 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.800985 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-5br4b" Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.813107 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ed24c96e-c389-443d-bdcf-b6fd727d472e-proxy-tls\") pod \"machine-config-controller-84d6567774-59ctg\" (UID: \"ed24c96e-c389-443d-bdcf-b6fd727d472e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-59ctg" Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.827008 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-4b45h" Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.836446 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8n6ww\" (UniqueName: \"kubernetes.io/projected/082079e0-8d5a-4d2e-959e-0366e4787bd5-kube-api-access-8n6ww\") pod \"apiserver-7bbb656c7d-fwkht\" (UID: \"082079e0-8d5a-4d2e-959e-0366e4787bd5\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fwkht" Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.854152 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-77znk\" (UniqueName: \"kubernetes.io/projected/acfa5788-ab19-4e50-bc93-31b7a5069b32-kube-api-access-77znk\") pod \"openshift-config-operator-7777fb866f-7v92z\" (UID: \"acfa5788-ab19-4e50-bc93-31b7a5069b32\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-7v92z" Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.863878 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fwkht" Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.865447 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fl8sc\" (UniqueName: \"kubernetes.io/projected/1ca4c1e0-8b33-49fe-9f13-22feb88fd1ce-kube-api-access-fl8sc\") pod \"machine-approver-56656f9798-nrz8z\" (UID: \"1ca4c1e0-8b33-49fe-9f13-22feb88fd1ce\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nrz8z" Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.896531 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jftgj"] Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.896928 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w2hdn\" (UniqueName: \"kubernetes.io/projected/fc962b97-f5d3-4673-9a39-8fbf6bc2424f-kube-api-access-w2hdn\") pod \"router-default-5444994796-29qjf\" (UID: \"fc962b97-f5d3-4673-9a39-8fbf6bc2424f\") " pod="openshift-ingress/router-default-5444994796-29qjf" Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.921327 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-7v92z" Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.922464 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vh2jt\" (UniqueName: \"kubernetes.io/projected/03d47200-aed2-431d-89fd-c27cdd91564f-kube-api-access-vh2jt\") pod \"oauth-openshift-558db77b4-sc7kt\" (UID: \"03d47200-aed2-431d-89fd-c27cdd91564f\") " pod="openshift-authentication/oauth-openshift-558db77b4-sc7kt" Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.922468 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-96t4g" Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.924398 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m55nt"] Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.931330 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xtfk5\" (UniqueName: \"kubernetes.io/projected/1f4f42b8-506a-4922-b7c4-7f77afbb238c-kube-api-access-xtfk5\") pod \"apiserver-76f77b778f-z8gmg\" (UID: \"1f4f42b8-506a-4922-b7c4-7f77afbb238c\") " pod="openshift-apiserver/apiserver-76f77b778f-z8gmg" Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.947993 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9s279" Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.952670 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1136cbad-9e47-49e0-a890-83d86d325537-bound-sa-token\") pod \"ingress-operator-5b745b69d9-5n294\" (UID: \"1136cbad-9e47-49e0-a890-83d86d325537\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5n294" Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.952818 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.956785 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6pzt4" Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.969907 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-sf9m8" Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.972370 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.981933 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2ff7x" Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.990956 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-z8gmg" Feb 02 10:41:11 crc kubenswrapper[4782]: I0202 10:41:11.991730 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 02 10:41:12 crc kubenswrapper[4782]: I0202 10:41:12.005481 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-sc7kt" Feb 02 10:41:12 crc kubenswrapper[4782]: I0202 10:41:12.021175 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 02 10:41:12 crc kubenswrapper[4782]: I0202 10:41:12.033255 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 02 10:41:12 crc kubenswrapper[4782]: I0202 10:41:12.036288 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-29qjf" Feb 02 10:41:12 crc kubenswrapper[4782]: I0202 10:41:12.065434 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 02 10:41:12 crc kubenswrapper[4782]: I0202 10:41:12.072851 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 02 10:41:12 crc kubenswrapper[4782]: I0202 10:41:12.107422 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-b98c7"] Feb 02 10:41:12 crc kubenswrapper[4782]: I0202 10:41:12.109433 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 02 10:41:12 crc kubenswrapper[4782]: I0202 10:41:12.117201 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4e9af173-4335-4ebd-9b11-dfb4180e968b-metrics-tls\") pod \"dns-operator-744455d44c-bhjwh\" (UID: \"4e9af173-4335-4ebd-9b11-dfb4180e968b\") " pod="openshift-dns-operator/dns-operator-744455d44c-bhjwh" Feb 02 10:41:12 crc kubenswrapper[4782]: I0202 10:41:12.129872 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 02 10:41:12 crc kubenswrapper[4782]: E0202 10:41:12.130098 4782 secret.go:188] Couldn't get secret openshift-service-ca-operator/serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 02 10:41:12 crc kubenswrapper[4782]: E0202 10:41:12.130178 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1e893e98-9670-49d0-8312-d78c86a14ba4-serving-cert podName:1e893e98-9670-49d0-8312-d78c86a14ba4 nodeName:}" failed. No retries permitted until 2026-02-02 10:41:12.63014722 +0000 UTC m=+152.514339936 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/1e893e98-9670-49d0-8312-d78c86a14ba4-serving-cert") pod "service-ca-operator-777779d784-vtkbj" (UID: "1e893e98-9670-49d0-8312-d78c86a14ba4") : failed to sync secret cache: timed out waiting for the condition Feb 02 10:41:12 crc kubenswrapper[4782]: E0202 10:41:12.130302 4782 secret.go:188] Couldn't get secret openshift-service-ca/signing-key: failed to sync secret cache: timed out waiting for the condition Feb 02 10:41:12 crc kubenswrapper[4782]: E0202 10:41:12.130391 4782 configmap.go:193] Couldn't get configMap openshift-service-ca/signing-cabundle: failed to sync configmap cache: timed out waiting for the condition Feb 02 10:41:12 crc kubenswrapper[4782]: E0202 10:41:12.130431 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6e2ef5a7-dbcc-4ba6-ae0a-fc7a7146af7a-signing-key podName:6e2ef5a7-dbcc-4ba6-ae0a-fc7a7146af7a nodeName:}" failed. No retries permitted until 2026-02-02 10:41:12.630386327 +0000 UTC m=+152.514579043 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "signing-key" (UniqueName: "kubernetes.io/secret/6e2ef5a7-dbcc-4ba6-ae0a-fc7a7146af7a-signing-key") pod "service-ca-9c57cc56f-lbq6z" (UID: "6e2ef5a7-dbcc-4ba6-ae0a-fc7a7146af7a") : failed to sync secret cache: timed out waiting for the condition Feb 02 10:41:12 crc kubenswrapper[4782]: E0202 10:41:12.130454 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6e2ef5a7-dbcc-4ba6-ae0a-fc7a7146af7a-signing-cabundle podName:6e2ef5a7-dbcc-4ba6-ae0a-fc7a7146af7a nodeName:}" failed. No retries permitted until 2026-02-02 10:41:12.630445179 +0000 UTC m=+152.514637895 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "signing-cabundle" (UniqueName: "kubernetes.io/configmap/6e2ef5a7-dbcc-4ba6-ae0a-fc7a7146af7a-signing-cabundle") pod "service-ca-9c57cc56f-lbq6z" (UID: "6e2ef5a7-dbcc-4ba6-ae0a-fc7a7146af7a") : failed to sync configmap cache: timed out waiting for the condition Feb 02 10:41:12 crc kubenswrapper[4782]: E0202 10:41:12.130518 4782 configmap.go:193] Couldn't get configMap openshift-service-ca-operator/service-ca-operator-config: failed to sync configmap cache: timed out waiting for the condition Feb 02 10:41:12 crc kubenswrapper[4782]: E0202 10:41:12.130555 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1e893e98-9670-49d0-8312-d78c86a14ba4-config podName:1e893e98-9670-49d0-8312-d78c86a14ba4 nodeName:}" failed. No retries permitted until 2026-02-02 10:41:12.630549502 +0000 UTC m=+152.514742218 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/1e893e98-9670-49d0-8312-d78c86a14ba4-config") pod "service-ca-operator-777779d784-vtkbj" (UID: "1e893e98-9670-49d0-8312-d78c86a14ba4") : failed to sync configmap cache: timed out waiting for the condition Feb 02 10:41:12 crc kubenswrapper[4782]: E0202 10:41:12.130606 4782 configmap.go:193] Couldn't get configMap openshift-operator-lifecycle-manager/collect-profiles-config: failed to sync configmap cache: timed out waiting for the condition Feb 02 10:41:12 crc kubenswrapper[4782]: E0202 10:41:12.130635 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9832aa65-d498-4a21-b53a-ebc591328a00-config-volume podName:9832aa65-d498-4a21-b53a-ebc591328a00 nodeName:}" failed. No retries permitted until 2026-02-02 10:41:12.630623804 +0000 UTC m=+152.514816520 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/9832aa65-d498-4a21-b53a-ebc591328a00-config-volume") pod "collect-profiles-29500470-wxc6r" (UID: "9832aa65-d498-4a21-b53a-ebc591328a00") : failed to sync configmap cache: timed out waiting for the condition Feb 02 10:41:12 crc kubenswrapper[4782]: I0202 10:41:12.131198 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 02 10:41:12 crc kubenswrapper[4782]: I0202 10:41:12.153807 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nrz8z" Feb 02 10:41:12 crc kubenswrapper[4782]: I0202 10:41:12.159074 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 02 10:41:12 crc kubenswrapper[4782]: I0202 10:41:12.171325 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 02 10:41:12 crc kubenswrapper[4782]: I0202 10:41:12.205067 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 02 10:41:12 crc kubenswrapper[4782]: I0202 10:41:12.227722 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 02 10:41:12 crc kubenswrapper[4782]: I0202 10:41:12.265753 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/671fcd5f-c44a-46e7-840f-d204d2464822-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-4pb5d\" (UID: \"671fcd5f-c44a-46e7-840f-d204d2464822\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4pb5d" Feb 02 10:41:12 crc kubenswrapper[4782]: I0202 10:41:12.267114 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-5br4b"] Feb 02 10:41:12 crc kubenswrapper[4782]: I0202 10:41:12.291889 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1e5e11c7-6a7f-466b-8d59-674bb931db4c-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-c8k6k\" (UID: \"1e5e11c7-6a7f-466b-8d59-674bb931db4c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-c8k6k" Feb 02 10:41:12 crc kubenswrapper[4782]: I0202 10:41:12.320385 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9tpw2\" (UniqueName: \"kubernetes.io/projected/86721216-38a9-4b44-8e34-d01a33c39e82-kube-api-access-9tpw2\") pod \"olm-operator-6b444d44fb-pvghb\" (UID: \"86721216-38a9-4b44-8e34-d01a33c39e82\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pvghb" Feb 02 10:41:12 crc kubenswrapper[4782]: I0202 10:41:12.332071 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 02 10:41:12 crc kubenswrapper[4782]: I0202 10:41:12.338124 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gmqdm\" (UniqueName: \"kubernetes.io/projected/1136cbad-9e47-49e0-a890-83d86d325537-kube-api-access-gmqdm\") pod \"ingress-operator-5b745b69d9-5n294\" (UID: \"1136cbad-9e47-49e0-a890-83d86d325537\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5n294" Feb 02 10:41:12 crc kubenswrapper[4782]: I0202 10:41:12.339991 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7dqmf\" (UniqueName: \"kubernetes.io/projected/d6b03c59-eb07-4d99-beb5-04e1eb19c7bc-kube-api-access-7dqmf\") pod \"controller-manager-879f6c89f-l2hps\" (UID: \"d6b03c59-eb07-4d99-beb5-04e1eb19c7bc\") " pod="openshift-controller-manager/controller-manager-879f6c89f-l2hps" Feb 02 10:41:12 crc kubenswrapper[4782]: I0202 10:41:12.343127 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-l2hps" Feb 02 10:41:12 crc kubenswrapper[4782]: W0202 10:41:12.350633 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod063dd8d0_356e_4c11_96fd_6ecee1f28da8.slice/crio-c8c178d9c4a0e7ac543160c2976a898eefe65bf9dec8b46ceade5a6ae8e7975c WatchSource:0}: Error finding container c8c178d9c4a0e7ac543160c2976a898eefe65bf9dec8b46ceade5a6ae8e7975c: Status 404 returned error can't find the container with id c8c178d9c4a0e7ac543160c2976a898eefe65bf9dec8b46ceade5a6ae8e7975c Feb 02 10:41:12 crc kubenswrapper[4782]: I0202 10:41:12.354134 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pvghb" Feb 02 10:41:12 crc kubenswrapper[4782]: I0202 10:41:12.371042 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-c8k6k" Feb 02 10:41:12 crc kubenswrapper[4782]: I0202 10:41:12.371756 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Feb 02 10:41:12 crc kubenswrapper[4782]: I0202 10:41:12.379391 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dch5s\" (UniqueName: \"kubernetes.io/projected/8224e4c0-380b-489a-98d8-ee1b15c1637a-kube-api-access-dch5s\") pod \"multus-admission-controller-857f4d67dd-qjf8d\" (UID: \"8224e4c0-380b-489a-98d8-ee1b15c1637a\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-qjf8d" Feb 02 10:41:12 crc kubenswrapper[4782]: I0202 10:41:12.379951 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4pb5d" Feb 02 10:41:12 crc kubenswrapper[4782]: I0202 10:41:12.393756 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 02 10:41:12 crc kubenswrapper[4782]: I0202 10:41:12.411268 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 02 10:41:12 crc kubenswrapper[4782]: W0202 10:41:12.414134 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfc962b97_f5d3_4673_9a39_8fbf6bc2424f.slice/crio-2e820ee4fe2f628794643f76da4ce0ba7698a67d8be228c293acae563df8cf14 WatchSource:0}: Error finding container 2e820ee4fe2f628794643f76da4ce0ba7698a67d8be228c293acae563df8cf14: Status 404 returned error can't find the container with id 2e820ee4fe2f628794643f76da4ce0ba7698a67d8be228c293acae563df8cf14 Feb 02 10:41:12 crc kubenswrapper[4782]: I0202 10:41:12.439069 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 02 10:41:12 crc kubenswrapper[4782]: I0202 10:41:12.451727 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 02 10:41:12 crc kubenswrapper[4782]: I0202 10:41:12.473315 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 02 10:41:12 crc kubenswrapper[4782]: I0202 10:41:12.491005 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 02 10:41:12 crc kubenswrapper[4782]: I0202 10:41:12.511115 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 02 10:41:12 crc kubenswrapper[4782]: I0202 10:41:12.534273 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 02 10:41:12 crc kubenswrapper[4782]: I0202 10:41:12.554589 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 02 10:41:12 crc kubenswrapper[4782]: I0202 10:41:12.572081 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 02 10:41:12 crc kubenswrapper[4782]: I0202 10:41:12.594041 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 02 10:41:12 crc kubenswrapper[4782]: I0202 10:41:12.618367 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 02 10:41:12 crc kubenswrapper[4782]: I0202 10:41:12.626181 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5n294" Feb 02 10:41:12 crc kubenswrapper[4782]: I0202 10:41:12.631712 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 02 10:41:12 crc kubenswrapper[4782]: I0202 10:41:12.652179 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 02 10:41:12 crc kubenswrapper[4782]: I0202 10:41:12.652531 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m55nt" event={"ID":"ccfe41db-4509-43cc-a95c-9ac09e6c9390","Type":"ContainerStarted","Data":"23597bd2ae135850271080eb84a45c5c170651bf1de07614c9fcc3d265e2d209"} Feb 02 10:41:12 crc kubenswrapper[4782]: I0202 10:41:12.654424 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-4b45h"] Feb 02 10:41:12 crc kubenswrapper[4782]: I0202 10:41:12.661773 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-b98c7" event={"ID":"5466b6b9-d1d5-471e-90e7-75f07078f8dc","Type":"ContainerStarted","Data":"7ef745798ffd0832e9cf5479f64c3f88c407496b0b9deabd7245fee7b0e25744"} Feb 02 10:41:12 crc kubenswrapper[4782]: I0202 10:41:12.662114 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-qjf8d" Feb 02 10:41:12 crc kubenswrapper[4782]: I0202 10:41:12.664957 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/6e2ef5a7-dbcc-4ba6-ae0a-fc7a7146af7a-signing-key\") pod \"service-ca-9c57cc56f-lbq6z\" (UID: \"6e2ef5a7-dbcc-4ba6-ae0a-fc7a7146af7a\") " pod="openshift-service-ca/service-ca-9c57cc56f-lbq6z" Feb 02 10:41:12 crc kubenswrapper[4782]: I0202 10:41:12.665021 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/6e2ef5a7-dbcc-4ba6-ae0a-fc7a7146af7a-signing-cabundle\") pod \"service-ca-9c57cc56f-lbq6z\" (UID: \"6e2ef5a7-dbcc-4ba6-ae0a-fc7a7146af7a\") " pod="openshift-service-ca/service-ca-9c57cc56f-lbq6z" Feb 02 10:41:12 crc kubenswrapper[4782]: I0202 10:41:12.665047 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9832aa65-d498-4a21-b53a-ebc591328a00-config-volume\") pod \"collect-profiles-29500470-wxc6r\" (UID: \"9832aa65-d498-4a21-b53a-ebc591328a00\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500470-wxc6r" Feb 02 10:41:12 crc kubenswrapper[4782]: I0202 10:41:12.665088 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e893e98-9670-49d0-8312-d78c86a14ba4-config\") pod \"service-ca-operator-777779d784-vtkbj\" (UID: \"1e893e98-9670-49d0-8312-d78c86a14ba4\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-vtkbj" Feb 02 10:41:12 crc kubenswrapper[4782]: I0202 10:41:12.665130 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1e893e98-9670-49d0-8312-d78c86a14ba4-serving-cert\") pod \"service-ca-operator-777779d784-vtkbj\" (UID: \"1e893e98-9670-49d0-8312-d78c86a14ba4\") " 
pod="openshift-service-ca-operator/service-ca-operator-777779d784-vtkbj" Feb 02 10:41:12 crc kubenswrapper[4782]: I0202 10:41:12.669894 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9832aa65-d498-4a21-b53a-ebc591328a00-config-volume\") pod \"collect-profiles-29500470-wxc6r\" (UID: \"9832aa65-d498-4a21-b53a-ebc591328a00\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500470-wxc6r" Feb 02 10:41:12 crc kubenswrapper[4782]: I0202 10:41:12.673237 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/6e2ef5a7-dbcc-4ba6-ae0a-fc7a7146af7a-signing-cabundle\") pod \"service-ca-9c57cc56f-lbq6z\" (UID: \"6e2ef5a7-dbcc-4ba6-ae0a-fc7a7146af7a\") " pod="openshift-service-ca/service-ca-9c57cc56f-lbq6z" Feb 02 10:41:12 crc kubenswrapper[4782]: I0202 10:41:12.681378 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 02 10:41:12 crc kubenswrapper[4782]: I0202 10:41:12.692224 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 02 10:41:12 crc kubenswrapper[4782]: I0202 10:41:12.697949 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jftgj" event={"ID":"872ffab3-f760-45e2-a5c8-aa1055f9ab2d","Type":"ContainerStarted","Data":"6271911877c5d27fdf7f39db98cb85a87486aa11c3c3dda949bcb89bb1ceccf1"} Feb 02 10:41:12 crc kubenswrapper[4782]: I0202 10:41:12.702384 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/6e2ef5a7-dbcc-4ba6-ae0a-fc7a7146af7a-signing-key\") pod \"service-ca-9c57cc56f-lbq6z\" (UID: \"6e2ef5a7-dbcc-4ba6-ae0a-fc7a7146af7a\") " pod="openshift-service-ca/service-ca-9c57cc56f-lbq6z" Feb 02 10:41:12 crc kubenswrapper[4782]: I0202 10:41:12.705320 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nrz8z" event={"ID":"1ca4c1e0-8b33-49fe-9f13-22feb88fd1ce","Type":"ContainerStarted","Data":"734dc99486d2fe1cd2e38327b0aede28b14f3088279e4d36af48a315424ebef0"} Feb 02 10:41:12 crc kubenswrapper[4782]: I0202 10:41:12.713249 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 02 10:41:12 crc kubenswrapper[4782]: I0202 10:41:12.716132 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-29qjf" event={"ID":"fc962b97-f5d3-4673-9a39-8fbf6bc2424f","Type":"ContainerStarted","Data":"2e820ee4fe2f628794643f76da4ce0ba7698a67d8be228c293acae563df8cf14"} Feb 02 10:41:12 crc kubenswrapper[4782]: I0202 10:41:12.726892 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-5br4b" event={"ID":"063dd8d0-356e-4c11-96fd-6ecee1f28da8","Type":"ContainerStarted","Data":"c8c178d9c4a0e7ac543160c2976a898eefe65bf9dec8b46ceade5a6ae8e7975c"} Feb 02 10:41:12 crc kubenswrapper[4782]: I0202 10:41:12.732077 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 02 10:41:12 crc kubenswrapper[4782]: I0202 10:41:12.758803 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 02 
10:41:12 crc kubenswrapper[4782]: I0202 10:41:12.771751 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 02 10:41:12 crc kubenswrapper[4782]: I0202 10:41:12.791452 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 02 10:41:12 crc kubenswrapper[4782]: I0202 10:41:12.811768 4782 request.go:700] Waited for 1.824582114s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca-operator/secrets?fieldSelector=metadata.name%3Dservice-ca-operator-dockercfg-rg9jl&limit=500&resourceVersion=0 Feb 02 10:41:12 crc kubenswrapper[4782]: I0202 10:41:12.816242 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Feb 02 10:41:12 crc kubenswrapper[4782]: I0202 10:41:12.831303 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 02 10:41:12 crc kubenswrapper[4782]: I0202 10:41:12.843192 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1e893e98-9670-49d0-8312-d78c86a14ba4-serving-cert\") pod \"service-ca-operator-777779d784-vtkbj\" (UID: \"1e893e98-9670-49d0-8312-d78c86a14ba4\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-vtkbj" Feb 02 10:41:12 crc kubenswrapper[4782]: I0202 10:41:12.853796 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 02 10:41:12 crc kubenswrapper[4782]: I0202 10:41:12.858963 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-7v92z"] Feb 02 10:41:12 crc kubenswrapper[4782]: I0202 10:41:12.862006 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e893e98-9670-49d0-8312-d78c86a14ba4-config\") pod \"service-ca-operator-777779d784-vtkbj\" (UID: \"1e893e98-9670-49d0-8312-d78c86a14ba4\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-vtkbj" Feb 02 10:41:12 crc kubenswrapper[4782]: I0202 10:41:12.895727 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 02 10:41:12 crc kubenswrapper[4782]: W0202 10:41:12.909218 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podacfa5788_ab19_4e50_bc93_31b7a5069b32.slice/crio-0461ba5d89b8053e3c7c65d9d2a7d102c503b6d372a26d6649ab118ecaa97b9b WatchSource:0}: Error finding container 0461ba5d89b8053e3c7c65d9d2a7d102c503b6d372a26d6649ab118ecaa97b9b: Status 404 returned error can't find the container with id 0461ba5d89b8053e3c7c65d9d2a7d102c503b6d372a26d6649ab118ecaa97b9b Feb 02 10:41:12 crc kubenswrapper[4782]: I0202 10:41:12.923460 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6pzt4"] Feb 02 10:41:12 crc kubenswrapper[4782]: I0202 10:41:12.940396 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 02 10:41:12 crc kubenswrapper[4782]: I0202 10:41:12.943987 4782 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-c29fd\" (UniqueName: \"kubernetes.io/projected/ed24c96e-c389-443d-bdcf-b6fd727d472e-kube-api-access-c29fd\") pod \"machine-config-controller-84d6567774-59ctg\" (UID: \"ed24c96e-c389-443d-bdcf-b6fd727d472e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-59ctg" Feb 02 10:41:12 crc kubenswrapper[4782]: I0202 10:41:12.953449 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 02 10:41:12 crc kubenswrapper[4782]: I0202 10:41:12.971933 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-fwkht"] Feb 02 10:41:12 crc kubenswrapper[4782]: I0202 10:41:12.972263 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 02 10:41:12 crc kubenswrapper[4782]: I0202 10:41:12.993245 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.012119 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.021517 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-96t4g"] Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.032259 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-59ctg" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.035449 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.058914 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.094558 4782 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.094961 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.113980 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.130362 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.136211 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9s279"] Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.157398 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.167322 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-sc7kt"] Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.169516 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2ff7x"] Feb 02 10:41:13 crc 
kubenswrapper[4782]: I0202 10:41:13.180012 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-z8gmg"] Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.180313 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.214686 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-sf9m8"] Feb 02 10:41:13 crc kubenswrapper[4782]: W0202 10:41:13.218721 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9f11a2b3_15a4_4358_8604_bf4e6a0d22fe.slice/crio-687fb2d6f7baa7a9fd23d52bd43550811e7856da390ff3aad42ac715126bb37f WatchSource:0}: Error finding container 687fb2d6f7baa7a9fd23d52bd43550811e7856da390ff3aad42ac715126bb37f: Status 404 returned error can't find the container with id 687fb2d6f7baa7a9fd23d52bd43550811e7856da390ff3aad42ac715126bb37f Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.219613 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kzgdp\" (UniqueName: \"kubernetes.io/projected/9832aa65-d498-4a21-b53a-ebc591328a00-kube-api-access-kzgdp\") pod \"collect-profiles-29500470-wxc6r\" (UID: \"9832aa65-d498-4a21-b53a-ebc591328a00\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500470-wxc6r" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.244458 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bdff8\" (UniqueName: \"kubernetes.io/projected/2fdb9068-c8eb-4a1d-b4ab-c3f2ed70e4c1-kube-api-access-bdff8\") pod \"control-plane-machine-set-operator-78cbb6b69f-wqm6f\" (UID: \"2fdb9068-c8eb-4a1d-b4ab-c3f2ed70e4c1\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-wqm6f" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.256038 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l2ml7\" (UniqueName: \"kubernetes.io/projected/6e2ef5a7-dbcc-4ba6-ae0a-fc7a7146af7a-kube-api-access-l2ml7\") pod \"service-ca-9c57cc56f-lbq6z\" (UID: \"6e2ef5a7-dbcc-4ba6-ae0a-fc7a7146af7a\") " pod="openshift-service-ca/service-ca-9c57cc56f-lbq6z" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.278242 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qvlbc\" (UniqueName: \"kubernetes.io/projected/1e893e98-9670-49d0-8312-d78c86a14ba4-kube-api-access-qvlbc\") pod \"service-ca-operator-777779d784-vtkbj\" (UID: \"1e893e98-9670-49d0-8312-d78c86a14ba4\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-vtkbj" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.296292 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zhxr7\" (UniqueName: \"kubernetes.io/projected/4e9af173-4335-4ebd-9b11-dfb4180e968b-kube-api-access-zhxr7\") pod \"dns-operator-744455d44c-bhjwh\" (UID: \"4e9af173-4335-4ebd-9b11-dfb4180e968b\") " pod="openshift-dns-operator/dns-operator-744455d44c-bhjwh" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.300767 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-c8k6k"] Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.306795 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-wqm6f" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.313290 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4pb5d"] Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.334006 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hgqdc\" (UniqueName: \"kubernetes.io/projected/b925d6b9-8b5c-4407-bd7b-9ddcbc62d78d-kube-api-access-hgqdc\") pod \"migrator-59844c95c7-bpzbh\" (UID: \"b925d6b9-8b5c-4407-bd7b-9ddcbc62d78d\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-bpzbh" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.337726 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cv849\" (UniqueName: \"kubernetes.io/projected/83c24a27-fdbe-468f-b4cf-780c87b598ae-kube-api-access-cv849\") pod \"marketplace-operator-79b997595-dsb8s\" (UID: \"83c24a27-fdbe-468f-b4cf-780c87b598ae\") " pod="openshift-marketplace/marketplace-operator-79b997595-dsb8s" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.349115 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lxp9b\" (UniqueName: \"kubernetes.io/projected/9ee676ac-a60f-4855-949f-d3210f9314f5-kube-api-access-lxp9b\") pod \"machine-config-operator-74547568cd-l59tj\" (UID: \"9ee676ac-a60f-4855-949f-d3210f9314f5\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-l59tj" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.361126 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-bhjwh" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.385376 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pvghb"] Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.385531 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-l2hps"] Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.388706 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500470-wxc6r" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.405329 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-lbq6z" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.408082 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2mgs\" (UniqueName: \"kubernetes.io/projected/04bfeb66-d53c-4263-a149-e7e1d705f9d1-kube-api-access-f2mgs\") pod \"packageserver-d55dfcdfc-6x4zp\" (UID: \"04bfeb66-d53c-4263-a149-e7e1d705f9d1\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6x4zp" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.408120 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4471bb99-24c2-45b0-bb05-3f3d59191e12-serving-cert\") pod \"console-operator-58897d9998-qsnhv\" (UID: \"4471bb99-24c2-45b0-bb05-3f3d59191e12\") " pod="openshift-console-operator/console-operator-58897d9998-qsnhv" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.408180 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/e457712f-8cc5-4167-b074-cd8713eb9989-profile-collector-cert\") pod \"catalog-operator-68c6474976-x2mbg\" (UID: \"e457712f-8cc5-4167-b074-cd8713eb9989\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-x2mbg" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.408222 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/4877e80d-a6fe-4503-a64c-398815efa1e0-registry-certificates\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.408251 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7391fe7f-58d0-4947-b2e3-32b1cd1cb01d-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-5ttkr\" (UID: \"7391fe7f-58d0-4947-b2e3-32b1cd1cb01d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5ttkr" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.408298 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.408317 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/40828191-5926-42ba-b84d-5737181b97e5-serving-cert\") pod \"etcd-operator-b45778765-84dzp\" (UID: \"40828191-5926-42ba-b84d-5737181b97e5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-84dzp" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.408389 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40828191-5926-42ba-b84d-5737181b97e5-config\") pod \"etcd-operator-b45778765-84dzp\" (UID: 
\"40828191-5926-42ba-b84d-5737181b97e5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-84dzp" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.408481 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/4877e80d-a6fe-4503-a64c-398815efa1e0-installation-pull-secrets\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.408502 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5khp\" (UniqueName: \"kubernetes.io/projected/4877e80d-a6fe-4503-a64c-398815efa1e0-kube-api-access-f5khp\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.408518 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qp5w5\" (UniqueName: \"kubernetes.io/projected/e457712f-8cc5-4167-b074-cd8713eb9989-kube-api-access-qp5w5\") pod \"catalog-operator-68c6474976-x2mbg\" (UID: \"e457712f-8cc5-4167-b074-cd8713eb9989\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-x2mbg" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.408585 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/4877e80d-a6fe-4503-a64c-398815efa1e0-ca-trust-extracted\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.408615 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4877e80d-a6fe-4503-a64c-398815efa1e0-bound-sa-token\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.408687 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4471bb99-24c2-45b0-bb05-3f3d59191e12-trusted-ca\") pod \"console-operator-58897d9998-qsnhv\" (UID: \"4471bb99-24c2-45b0-bb05-3f3d59191e12\") " pod="openshift-console-operator/console-operator-58897d9998-qsnhv" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.408714 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7391fe7f-58d0-4947-b2e3-32b1cd1cb01d-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-5ttkr\" (UID: \"7391fe7f-58d0-4947-b2e3-32b1cd1cb01d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5ttkr" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.408771 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/4877e80d-a6fe-4503-a64c-398815efa1e0-registry-tls\") pod \"image-registry-697d97f7c8-jxz27\" 
(UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.408806 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/40828191-5926-42ba-b84d-5737181b97e5-etcd-service-ca\") pod \"etcd-operator-b45778765-84dzp\" (UID: \"40828191-5926-42ba-b84d-5737181b97e5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-84dzp" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.408838 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxmhg\" (UniqueName: \"kubernetes.io/projected/1180fe74-10a3-4aa0-b205-7f47597ef9b3-kube-api-access-nxmhg\") pod \"package-server-manager-789f6589d5-f8xts\" (UID: \"1180fe74-10a3-4aa0-b205-7f47597ef9b3\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-f8xts" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.408868 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/1180fe74-10a3-4aa0-b205-7f47597ef9b3-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-f8xts\" (UID: \"1180fe74-10a3-4aa0-b205-7f47597ef9b3\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-f8xts" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.408930 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/04bfeb66-d53c-4263-a149-e7e1d705f9d1-apiservice-cert\") pod \"packageserver-d55dfcdfc-6x4zp\" (UID: \"04bfeb66-d53c-4263-a149-e7e1d705f9d1\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6x4zp" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.409042 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/40828191-5926-42ba-b84d-5737181b97e5-etcd-ca\") pod \"etcd-operator-b45778765-84dzp\" (UID: \"40828191-5926-42ba-b84d-5737181b97e5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-84dzp" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.409061 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4zpv\" (UniqueName: \"kubernetes.io/projected/40828191-5926-42ba-b84d-5737181b97e5-kube-api-access-w4zpv\") pod \"etcd-operator-b45778765-84dzp\" (UID: \"40828191-5926-42ba-b84d-5737181b97e5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-84dzp" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.409104 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/40828191-5926-42ba-b84d-5737181b97e5-etcd-client\") pod \"etcd-operator-b45778765-84dzp\" (UID: \"40828191-5926-42ba-b84d-5737181b97e5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-84dzp" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.409133 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdtvs\" (UniqueName: \"kubernetes.io/projected/4471bb99-24c2-45b0-bb05-3f3d59191e12-kube-api-access-hdtvs\") pod \"console-operator-58897d9998-qsnhv\" (UID: 
\"4471bb99-24c2-45b0-bb05-3f3d59191e12\") " pod="openshift-console-operator/console-operator-58897d9998-qsnhv" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.409162 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kk59v\" (UniqueName: \"kubernetes.io/projected/7391fe7f-58d0-4947-b2e3-32b1cd1cb01d-kube-api-access-kk59v\") pod \"kube-storage-version-migrator-operator-b67b599dd-5ttkr\" (UID: \"7391fe7f-58d0-4947-b2e3-32b1cd1cb01d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5ttkr" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.409211 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4471bb99-24c2-45b0-bb05-3f3d59191e12-config\") pod \"console-operator-58897d9998-qsnhv\" (UID: \"4471bb99-24c2-45b0-bb05-3f3d59191e12\") " pod="openshift-console-operator/console-operator-58897d9998-qsnhv" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.409292 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/04bfeb66-d53c-4263-a149-e7e1d705f9d1-webhook-cert\") pod \"packageserver-d55dfcdfc-6x4zp\" (UID: \"04bfeb66-d53c-4263-a149-e7e1d705f9d1\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6x4zp" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.409308 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4877e80d-a6fe-4503-a64c-398815efa1e0-trusted-ca\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.409359 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/e457712f-8cc5-4167-b074-cd8713eb9989-srv-cert\") pod \"catalog-operator-68c6474976-x2mbg\" (UID: \"e457712f-8cc5-4167-b074-cd8713eb9989\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-x2mbg" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.409377 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/04bfeb66-d53c-4263-a149-e7e1d705f9d1-tmpfs\") pod \"packageserver-d55dfcdfc-6x4zp\" (UID: \"04bfeb66-d53c-4263-a149-e7e1d705f9d1\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6x4zp" Feb 02 10:41:13 crc kubenswrapper[4782]: E0202 10:41:13.414970 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:41:13.914319223 +0000 UTC m=+153.798511939 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jxz27" (UID: "4877e80d-a6fe-4503-a64c-398815efa1e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.442235 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-vtkbj" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.490304 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-59ctg"] Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.511250 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.511587 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/db132fa2-cd84-4b44-b523-48b1af9f6f73-registration-dir\") pod \"csi-hostpathplugin-r7j2r\" (UID: \"db132fa2-cd84-4b44-b523-48b1af9f6f73\") " pod="hostpath-provisioner/csi-hostpathplugin-r7j2r" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.511683 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7391fe7f-58d0-4947-b2e3-32b1cd1cb01d-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-5ttkr\" (UID: \"7391fe7f-58d0-4947-b2e3-32b1cd1cb01d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5ttkr" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.511726 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/40828191-5926-42ba-b84d-5737181b97e5-serving-cert\") pod \"etcd-operator-b45778765-84dzp\" (UID: \"40828191-5926-42ba-b84d-5737181b97e5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-84dzp" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.511786 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6wtfq\" (UniqueName: \"kubernetes.io/projected/db132fa2-cd84-4b44-b523-48b1af9f6f73-kube-api-access-6wtfq\") pod \"csi-hostpathplugin-r7j2r\" (UID: \"db132fa2-cd84-4b44-b523-48b1af9f6f73\") " pod="hostpath-provisioner/csi-hostpathplugin-r7j2r" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.511815 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40828191-5926-42ba-b84d-5737181b97e5-config\") pod \"etcd-operator-b45778765-84dzp\" (UID: \"40828191-5926-42ba-b84d-5737181b97e5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-84dzp" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.511849 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: 
\"kubernetes.io/secret/4877e80d-a6fe-4503-a64c-398815efa1e0-installation-pull-secrets\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.511879 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f5khp\" (UniqueName: \"kubernetes.io/projected/4877e80d-a6fe-4503-a64c-398815efa1e0-kube-api-access-f5khp\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.511905 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qp5w5\" (UniqueName: \"kubernetes.io/projected/e457712f-8cc5-4167-b074-cd8713eb9989-kube-api-access-qp5w5\") pod \"catalog-operator-68c6474976-x2mbg\" (UID: \"e457712f-8cc5-4167-b074-cd8713eb9989\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-x2mbg" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.511947 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/4877e80d-a6fe-4503-a64c-398815efa1e0-ca-trust-extracted\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.511975 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4877e80d-a6fe-4503-a64c-398815efa1e0-bound-sa-token\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.511998 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/d11d8c73-fe90-48c3-be77-b066aa57cacc-certs\") pod \"machine-config-server-jvpsj\" (UID: \"d11d8c73-fe90-48c3-be77-b066aa57cacc\") " pod="openshift-machine-config-operator/machine-config-server-jvpsj" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.512044 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/db132fa2-cd84-4b44-b523-48b1af9f6f73-csi-data-dir\") pod \"csi-hostpathplugin-r7j2r\" (UID: \"db132fa2-cd84-4b44-b523-48b1af9f6f73\") " pod="hostpath-provisioner/csi-hostpathplugin-r7j2r" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.512083 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4471bb99-24c2-45b0-bb05-3f3d59191e12-trusted-ca\") pod \"console-operator-58897d9998-qsnhv\" (UID: \"4471bb99-24c2-45b0-bb05-3f3d59191e12\") " pod="openshift-console-operator/console-operator-58897d9998-qsnhv" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.512107 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7391fe7f-58d0-4947-b2e3-32b1cd1cb01d-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-5ttkr\" (UID: \"7391fe7f-58d0-4947-b2e3-32b1cd1cb01d\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5ttkr" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.512175 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/4877e80d-a6fe-4503-a64c-398815efa1e0-registry-tls\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.512205 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/40828191-5926-42ba-b84d-5737181b97e5-etcd-service-ca\") pod \"etcd-operator-b45778765-84dzp\" (UID: \"40828191-5926-42ba-b84d-5737181b97e5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-84dzp" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.512237 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4kfxf\" (UniqueName: \"kubernetes.io/projected/e3487bc1-e5e6-4f19-9d79-86176f5b9689-kube-api-access-4kfxf\") pod \"ingress-canary-rqqwp\" (UID: \"e3487bc1-e5e6-4f19-9d79-86176f5b9689\") " pod="openshift-ingress-canary/ingress-canary-rqqwp" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.512263 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d9fd7961-c147-4fad-b4b7-75f5567976f2-config-volume\") pod \"dns-default-rx8sj\" (UID: \"d9fd7961-c147-4fad-b4b7-75f5567976f2\") " pod="openshift-dns/dns-default-rx8sj" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.512304 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nxmhg\" (UniqueName: \"kubernetes.io/projected/1180fe74-10a3-4aa0-b205-7f47597ef9b3-kube-api-access-nxmhg\") pod \"package-server-manager-789f6589d5-f8xts\" (UID: \"1180fe74-10a3-4aa0-b205-7f47597ef9b3\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-f8xts" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.512331 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v54pr\" (UniqueName: \"kubernetes.io/projected/d11d8c73-fe90-48c3-be77-b066aa57cacc-kube-api-access-v54pr\") pod \"machine-config-server-jvpsj\" (UID: \"d11d8c73-fe90-48c3-be77-b066aa57cacc\") " pod="openshift-machine-config-operator/machine-config-server-jvpsj" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.512375 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/1180fe74-10a3-4aa0-b205-7f47597ef9b3-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-f8xts\" (UID: \"1180fe74-10a3-4aa0-b205-7f47597ef9b3\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-f8xts" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.512408 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/04bfeb66-d53c-4263-a149-e7e1d705f9d1-apiservice-cert\") pod \"packageserver-d55dfcdfc-6x4zp\" (UID: \"04bfeb66-d53c-4263-a149-e7e1d705f9d1\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6x4zp" Feb 02 10:41:13 crc 
kubenswrapper[4782]: I0202 10:41:13.512488 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e3487bc1-e5e6-4f19-9d79-86176f5b9689-cert\") pod \"ingress-canary-rqqwp\" (UID: \"e3487bc1-e5e6-4f19-9d79-86176f5b9689\") " pod="openshift-ingress-canary/ingress-canary-rqqwp" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.512520 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/40828191-5926-42ba-b84d-5737181b97e5-etcd-ca\") pod \"etcd-operator-b45778765-84dzp\" (UID: \"40828191-5926-42ba-b84d-5737181b97e5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-84dzp" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.512548 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w4zpv\" (UniqueName: \"kubernetes.io/projected/40828191-5926-42ba-b84d-5737181b97e5-kube-api-access-w4zpv\") pod \"etcd-operator-b45778765-84dzp\" (UID: \"40828191-5926-42ba-b84d-5737181b97e5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-84dzp" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.512576 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/d11d8c73-fe90-48c3-be77-b066aa57cacc-node-bootstrap-token\") pod \"machine-config-server-jvpsj\" (UID: \"d11d8c73-fe90-48c3-be77-b066aa57cacc\") " pod="openshift-machine-config-operator/machine-config-server-jvpsj" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.512602 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/40828191-5926-42ba-b84d-5737181b97e5-etcd-client\") pod \"etcd-operator-b45778765-84dzp\" (UID: \"40828191-5926-42ba-b84d-5737181b97e5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-84dzp" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.512628 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hdtvs\" (UniqueName: \"kubernetes.io/projected/4471bb99-24c2-45b0-bb05-3f3d59191e12-kube-api-access-hdtvs\") pod \"console-operator-58897d9998-qsnhv\" (UID: \"4471bb99-24c2-45b0-bb05-3f3d59191e12\") " pod="openshift-console-operator/console-operator-58897d9998-qsnhv" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.512670 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d9fd7961-c147-4fad-b4b7-75f5567976f2-metrics-tls\") pod \"dns-default-rx8sj\" (UID: \"d9fd7961-c147-4fad-b4b7-75f5567976f2\") " pod="openshift-dns/dns-default-rx8sj" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.512697 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kk59v\" (UniqueName: \"kubernetes.io/projected/7391fe7f-58d0-4947-b2e3-32b1cd1cb01d-kube-api-access-kk59v\") pod \"kube-storage-version-migrator-operator-b67b599dd-5ttkr\" (UID: \"7391fe7f-58d0-4947-b2e3-32b1cd1cb01d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5ttkr" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.512737 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4471bb99-24c2-45b0-bb05-3f3d59191e12-config\") 
pod \"console-operator-58897d9998-qsnhv\" (UID: \"4471bb99-24c2-45b0-bb05-3f3d59191e12\") " pod="openshift-console-operator/console-operator-58897d9998-qsnhv" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.512757 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/db132fa2-cd84-4b44-b523-48b1af9f6f73-plugins-dir\") pod \"csi-hostpathplugin-r7j2r\" (UID: \"db132fa2-cd84-4b44-b523-48b1af9f6f73\") " pod="hostpath-provisioner/csi-hostpathplugin-r7j2r" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.512849 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/04bfeb66-d53c-4263-a149-e7e1d705f9d1-webhook-cert\") pod \"packageserver-d55dfcdfc-6x4zp\" (UID: \"04bfeb66-d53c-4263-a149-e7e1d705f9d1\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6x4zp" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.512880 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4877e80d-a6fe-4503-a64c-398815efa1e0-trusted-ca\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.512997 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/db132fa2-cd84-4b44-b523-48b1af9f6f73-mountpoint-dir\") pod \"csi-hostpathplugin-r7j2r\" (UID: \"db132fa2-cd84-4b44-b523-48b1af9f6f73\") " pod="hostpath-provisioner/csi-hostpathplugin-r7j2r" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.513043 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/e457712f-8cc5-4167-b074-cd8713eb9989-srv-cert\") pod \"catalog-operator-68c6474976-x2mbg\" (UID: \"e457712f-8cc5-4167-b074-cd8713eb9989\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-x2mbg" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.513068 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/db132fa2-cd84-4b44-b523-48b1af9f6f73-socket-dir\") pod \"csi-hostpathplugin-r7j2r\" (UID: \"db132fa2-cd84-4b44-b523-48b1af9f6f73\") " pod="hostpath-provisioner/csi-hostpathplugin-r7j2r" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.513094 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/04bfeb66-d53c-4263-a149-e7e1d705f9d1-tmpfs\") pod \"packageserver-d55dfcdfc-6x4zp\" (UID: \"04bfeb66-d53c-4263-a149-e7e1d705f9d1\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6x4zp" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.513157 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f2mgs\" (UniqueName: \"kubernetes.io/projected/04bfeb66-d53c-4263-a149-e7e1d705f9d1-kube-api-access-f2mgs\") pod \"packageserver-d55dfcdfc-6x4zp\" (UID: \"04bfeb66-d53c-4263-a149-e7e1d705f9d1\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6x4zp" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.513184 4782 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8gn4\" (UniqueName: \"kubernetes.io/projected/d9fd7961-c147-4fad-b4b7-75f5567976f2-kube-api-access-v8gn4\") pod \"dns-default-rx8sj\" (UID: \"d9fd7961-c147-4fad-b4b7-75f5567976f2\") " pod="openshift-dns/dns-default-rx8sj" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.513211 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4471bb99-24c2-45b0-bb05-3f3d59191e12-serving-cert\") pod \"console-operator-58897d9998-qsnhv\" (UID: \"4471bb99-24c2-45b0-bb05-3f3d59191e12\") " pod="openshift-console-operator/console-operator-58897d9998-qsnhv" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.515316 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/4877e80d-a6fe-4503-a64c-398815efa1e0-ca-trust-extracted\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.518800 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/04bfeb66-d53c-4263-a149-e7e1d705f9d1-tmpfs\") pod \"packageserver-d55dfcdfc-6x4zp\" (UID: \"04bfeb66-d53c-4263-a149-e7e1d705f9d1\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6x4zp" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.519405 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/40828191-5926-42ba-b84d-5737181b97e5-etcd-ca\") pod \"etcd-operator-b45778765-84dzp\" (UID: \"40828191-5926-42ba-b84d-5737181b97e5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-84dzp" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.520478 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4877e80d-a6fe-4503-a64c-398815efa1e0-trusted-ca\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.520566 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/e457712f-8cc5-4167-b074-cd8713eb9989-profile-collector-cert\") pod \"catalog-operator-68c6474976-x2mbg\" (UID: \"e457712f-8cc5-4167-b074-cd8713eb9989\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-x2mbg" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.520602 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/4877e80d-a6fe-4503-a64c-398815efa1e0-registry-certificates\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.521165 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4471bb99-24c2-45b0-bb05-3f3d59191e12-trusted-ca\") pod \"console-operator-58897d9998-qsnhv\" (UID: \"4471bb99-24c2-45b0-bb05-3f3d59191e12\") " 
pod="openshift-console-operator/console-operator-58897d9998-qsnhv" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.521338 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/40828191-5926-42ba-b84d-5737181b97e5-etcd-service-ca\") pod \"etcd-operator-b45778765-84dzp\" (UID: \"40828191-5926-42ba-b84d-5737181b97e5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-84dzp" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.524918 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40828191-5926-42ba-b84d-5737181b97e5-config\") pod \"etcd-operator-b45778765-84dzp\" (UID: \"40828191-5926-42ba-b84d-5737181b97e5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-84dzp" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.535410 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/40828191-5926-42ba-b84d-5737181b97e5-serving-cert\") pod \"etcd-operator-b45778765-84dzp\" (UID: \"40828191-5926-42ba-b84d-5737181b97e5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-84dzp" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.540745 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/e457712f-8cc5-4167-b074-cd8713eb9989-srv-cert\") pod \"catalog-operator-68c6474976-x2mbg\" (UID: \"e457712f-8cc5-4167-b074-cd8713eb9989\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-x2mbg" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.541512 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/4877e80d-a6fe-4503-a64c-398815efa1e0-installation-pull-secrets\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.541712 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4471bb99-24c2-45b0-bb05-3f3d59191e12-config\") pod \"console-operator-58897d9998-qsnhv\" (UID: \"4471bb99-24c2-45b0-bb05-3f3d59191e12\") " pod="openshift-console-operator/console-operator-58897d9998-qsnhv" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.541774 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/4877e80d-a6fe-4503-a64c-398815efa1e0-registry-certificates\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.542122 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/40828191-5926-42ba-b84d-5737181b97e5-etcd-client\") pod \"etcd-operator-b45778765-84dzp\" (UID: \"40828191-5926-42ba-b84d-5737181b97e5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-84dzp" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.542724 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/4877e80d-a6fe-4503-a64c-398815efa1e0-registry-tls\") pod \"image-registry-697d97f7c8-jxz27\" (UID: 
\"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.547279 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4471bb99-24c2-45b0-bb05-3f3d59191e12-serving-cert\") pod \"console-operator-58897d9998-qsnhv\" (UID: \"4471bb99-24c2-45b0-bb05-3f3d59191e12\") " pod="openshift-console-operator/console-operator-58897d9998-qsnhv" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.550771 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7391fe7f-58d0-4947-b2e3-32b1cd1cb01d-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-5ttkr\" (UID: \"7391fe7f-58d0-4947-b2e3-32b1cd1cb01d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5ttkr" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.551001 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/04bfeb66-d53c-4263-a149-e7e1d705f9d1-webhook-cert\") pod \"packageserver-d55dfcdfc-6x4zp\" (UID: \"04bfeb66-d53c-4263-a149-e7e1d705f9d1\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6x4zp" Feb 02 10:41:13 crc kubenswrapper[4782]: E0202 10:41:13.551450 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:41:14.051371229 +0000 UTC m=+153.935563945 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.551675 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/04bfeb66-d53c-4263-a149-e7e1d705f9d1-apiservice-cert\") pod \"packageserver-d55dfcdfc-6x4zp\" (UID: \"04bfeb66-d53c-4263-a149-e7e1d705f9d1\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6x4zp" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.557944 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/e457712f-8cc5-4167-b074-cd8713eb9989-profile-collector-cert\") pod \"catalog-operator-68c6474976-x2mbg\" (UID: \"e457712f-8cc5-4167-b074-cd8713eb9989\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-x2mbg" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.567351 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7391fe7f-58d0-4947-b2e3-32b1cd1cb01d-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-5ttkr\" (UID: \"7391fe7f-58d0-4947-b2e3-32b1cd1cb01d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5ttkr" Feb 02 10:41:13 crc 
kubenswrapper[4782]: I0202 10:41:13.569337 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-5n294"] Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.569910 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/1180fe74-10a3-4aa0-b205-7f47597ef9b3-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-f8xts\" (UID: \"1180fe74-10a3-4aa0-b205-7f47597ef9b3\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-f8xts" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.592536 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4zpv\" (UniqueName: \"kubernetes.io/projected/40828191-5926-42ba-b84d-5737181b97e5-kube-api-access-w4zpv\") pod \"etcd-operator-b45778765-84dzp\" (UID: \"40828191-5926-42ba-b84d-5737181b97e5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-84dzp" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.592616 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-bpzbh" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.593081 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4877e80d-a6fe-4503-a64c-398815efa1e0-bound-sa-token\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.601444 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kk59v\" (UniqueName: \"kubernetes.io/projected/7391fe7f-58d0-4947-b2e3-32b1cd1cb01d-kube-api-access-kk59v\") pod \"kube-storage-version-migrator-operator-b67b599dd-5ttkr\" (UID: \"7391fe7f-58d0-4947-b2e3-32b1cd1cb01d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5ttkr" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.603004 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-qjf8d"] Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.613208 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-dsb8s" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.621825 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/db132fa2-cd84-4b44-b523-48b1af9f6f73-registration-dir\") pod \"csi-hostpathplugin-r7j2r\" (UID: \"db132fa2-cd84-4b44-b523-48b1af9f6f73\") " pod="hostpath-provisioner/csi-hostpathplugin-r7j2r" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.621895 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.621930 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6wtfq\" (UniqueName: \"kubernetes.io/projected/db132fa2-cd84-4b44-b523-48b1af9f6f73-kube-api-access-6wtfq\") pod \"csi-hostpathplugin-r7j2r\" (UID: \"db132fa2-cd84-4b44-b523-48b1af9f6f73\") " pod="hostpath-provisioner/csi-hostpathplugin-r7j2r" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.621973 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/d11d8c73-fe90-48c3-be77-b066aa57cacc-certs\") pod \"machine-config-server-jvpsj\" (UID: \"d11d8c73-fe90-48c3-be77-b066aa57cacc\") " pod="openshift-machine-config-operator/machine-config-server-jvpsj" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.621998 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/db132fa2-cd84-4b44-b523-48b1af9f6f73-csi-data-dir\") pod \"csi-hostpathplugin-r7j2r\" (UID: \"db132fa2-cd84-4b44-b523-48b1af9f6f73\") " pod="hostpath-provisioner/csi-hostpathplugin-r7j2r" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.622047 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4kfxf\" (UniqueName: \"kubernetes.io/projected/e3487bc1-e5e6-4f19-9d79-86176f5b9689-kube-api-access-4kfxf\") pod \"ingress-canary-rqqwp\" (UID: \"e3487bc1-e5e6-4f19-9d79-86176f5b9689\") " pod="openshift-ingress-canary/ingress-canary-rqqwp" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.622069 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d9fd7961-c147-4fad-b4b7-75f5567976f2-config-volume\") pod \"dns-default-rx8sj\" (UID: \"d9fd7961-c147-4fad-b4b7-75f5567976f2\") " pod="openshift-dns/dns-default-rx8sj" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.622098 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v54pr\" (UniqueName: \"kubernetes.io/projected/d11d8c73-fe90-48c3-be77-b066aa57cacc-kube-api-access-v54pr\") pod \"machine-config-server-jvpsj\" (UID: \"d11d8c73-fe90-48c3-be77-b066aa57cacc\") " pod="openshift-machine-config-operator/machine-config-server-jvpsj" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.622121 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e3487bc1-e5e6-4f19-9d79-86176f5b9689-cert\") 
pod \"ingress-canary-rqqwp\" (UID: \"e3487bc1-e5e6-4f19-9d79-86176f5b9689\") " pod="openshift-ingress-canary/ingress-canary-rqqwp" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.622141 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/d11d8c73-fe90-48c3-be77-b066aa57cacc-node-bootstrap-token\") pod \"machine-config-server-jvpsj\" (UID: \"d11d8c73-fe90-48c3-be77-b066aa57cacc\") " pod="openshift-machine-config-operator/machine-config-server-jvpsj" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.622162 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d9fd7961-c147-4fad-b4b7-75f5567976f2-metrics-tls\") pod \"dns-default-rx8sj\" (UID: \"d9fd7961-c147-4fad-b4b7-75f5567976f2\") " pod="openshift-dns/dns-default-rx8sj" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.622178 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/db132fa2-cd84-4b44-b523-48b1af9f6f73-plugins-dir\") pod \"csi-hostpathplugin-r7j2r\" (UID: \"db132fa2-cd84-4b44-b523-48b1af9f6f73\") " pod="hostpath-provisioner/csi-hostpathplugin-r7j2r" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.622200 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/db132fa2-cd84-4b44-b523-48b1af9f6f73-mountpoint-dir\") pod \"csi-hostpathplugin-r7j2r\" (UID: \"db132fa2-cd84-4b44-b523-48b1af9f6f73\") " pod="hostpath-provisioner/csi-hostpathplugin-r7j2r" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.622221 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/db132fa2-cd84-4b44-b523-48b1af9f6f73-socket-dir\") pod \"csi-hostpathplugin-r7j2r\" (UID: \"db132fa2-cd84-4b44-b523-48b1af9f6f73\") " pod="hostpath-provisioner/csi-hostpathplugin-r7j2r" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.622243 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v8gn4\" (UniqueName: \"kubernetes.io/projected/d9fd7961-c147-4fad-b4b7-75f5567976f2-kube-api-access-v8gn4\") pod \"dns-default-rx8sj\" (UID: \"d9fd7961-c147-4fad-b4b7-75f5567976f2\") " pod="openshift-dns/dns-default-rx8sj" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.622746 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/db132fa2-cd84-4b44-b523-48b1af9f6f73-registration-dir\") pod \"csi-hostpathplugin-r7j2r\" (UID: \"db132fa2-cd84-4b44-b523-48b1af9f6f73\") " pod="hostpath-provisioner/csi-hostpathplugin-r7j2r" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.623389 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/db132fa2-cd84-4b44-b523-48b1af9f6f73-csi-data-dir\") pod \"csi-hostpathplugin-r7j2r\" (UID: \"db132fa2-cd84-4b44-b523-48b1af9f6f73\") " pod="hostpath-provisioner/csi-hostpathplugin-r7j2r" Feb 02 10:41:13 crc kubenswrapper[4782]: E0202 10:41:13.623425 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-02 10:41:14.123406259 +0000 UTC m=+154.007598975 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jxz27" (UID: "4877e80d-a6fe-4503-a64c-398815efa1e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.623862 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/db132fa2-cd84-4b44-b523-48b1af9f6f73-mountpoint-dir\") pod \"csi-hostpathplugin-r7j2r\" (UID: \"db132fa2-cd84-4b44-b523-48b1af9f6f73\") " pod="hostpath-provisioner/csi-hostpathplugin-r7j2r" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.623867 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/db132fa2-cd84-4b44-b523-48b1af9f6f73-plugins-dir\") pod \"csi-hostpathplugin-r7j2r\" (UID: \"db132fa2-cd84-4b44-b523-48b1af9f6f73\") " pod="hostpath-provisioner/csi-hostpathplugin-r7j2r" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.624062 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/db132fa2-cd84-4b44-b523-48b1af9f6f73-socket-dir\") pod \"csi-hostpathplugin-r7j2r\" (UID: \"db132fa2-cd84-4b44-b523-48b1af9f6f73\") " pod="hostpath-provisioner/csi-hostpathplugin-r7j2r" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.624214 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f2mgs\" (UniqueName: \"kubernetes.io/projected/04bfeb66-d53c-4263-a149-e7e1d705f9d1-kube-api-access-f2mgs\") pod \"packageserver-d55dfcdfc-6x4zp\" (UID: \"04bfeb66-d53c-4263-a149-e7e1d705f9d1\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6x4zp" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.624919 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d9fd7961-c147-4fad-b4b7-75f5567976f2-config-volume\") pod \"dns-default-rx8sj\" (UID: \"d9fd7961-c147-4fad-b4b7-75f5567976f2\") " pod="openshift-dns/dns-default-rx8sj" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.628142 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-l59tj" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.630670 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/d11d8c73-fe90-48c3-be77-b066aa57cacc-node-bootstrap-token\") pod \"machine-config-server-jvpsj\" (UID: \"d11d8c73-fe90-48c3-be77-b066aa57cacc\") " pod="openshift-machine-config-operator/machine-config-server-jvpsj" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.631086 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d9fd7961-c147-4fad-b4b7-75f5567976f2-metrics-tls\") pod \"dns-default-rx8sj\" (UID: \"d9fd7961-c147-4fad-b4b7-75f5567976f2\") " pod="openshift-dns/dns-default-rx8sj" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.633282 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e3487bc1-e5e6-4f19-9d79-86176f5b9689-cert\") pod \"ingress-canary-rqqwp\" (UID: \"e3487bc1-e5e6-4f19-9d79-86176f5b9689\") " pod="openshift-ingress-canary/ingress-canary-rqqwp" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.633811 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/d11d8c73-fe90-48c3-be77-b066aa57cacc-certs\") pod \"machine-config-server-jvpsj\" (UID: \"d11d8c73-fe90-48c3-be77-b066aa57cacc\") " pod="openshift-machine-config-operator/machine-config-server-jvpsj" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.640721 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f5khp\" (UniqueName: \"kubernetes.io/projected/4877e80d-a6fe-4503-a64c-398815efa1e0-kube-api-access-f5khp\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.651750 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qp5w5\" (UniqueName: \"kubernetes.io/projected/e457712f-8cc5-4167-b074-cd8713eb9989-kube-api-access-qp5w5\") pod \"catalog-operator-68c6474976-x2mbg\" (UID: \"e457712f-8cc5-4167-b074-cd8713eb9989\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-x2mbg" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.673432 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-84dzp" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.690983 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nxmhg\" (UniqueName: \"kubernetes.io/projected/1180fe74-10a3-4aa0-b205-7f47597ef9b3-kube-api-access-nxmhg\") pod \"package-server-manager-789f6589d5-f8xts\" (UID: \"1180fe74-10a3-4aa0-b205-7f47597ef9b3\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-f8xts" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.698892 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6x4zp" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.709901 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hdtvs\" (UniqueName: \"kubernetes.io/projected/4471bb99-24c2-45b0-bb05-3f3d59191e12-kube-api-access-hdtvs\") pod \"console-operator-58897d9998-qsnhv\" (UID: \"4471bb99-24c2-45b0-bb05-3f3d59191e12\") " pod="openshift-console-operator/console-operator-58897d9998-qsnhv" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.715197 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5ttkr" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.725763 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.726308 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v8gn4\" (UniqueName: \"kubernetes.io/projected/d9fd7961-c147-4fad-b4b7-75f5567976f2-kube-api-access-v8gn4\") pod \"dns-default-rx8sj\" (UID: \"d9fd7961-c147-4fad-b4b7-75f5567976f2\") " pod="openshift-dns/dns-default-rx8sj" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.726530 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-qsnhv" Feb 02 10:41:13 crc kubenswrapper[4782]: E0202 10:41:13.726623 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:41:14.226582708 +0000 UTC m=+154.110775424 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.740372 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-qjf8d" event={"ID":"8224e4c0-380b-489a-98d8-ee1b15c1637a","Type":"ContainerStarted","Data":"d5567bbd4eeb150c519d43cab06c78ecb484659f0ed4fc3c5811bd160745dafc"} Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.746307 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9s279" event={"ID":"cc1f149e-4ec4-423a-b94e-bf0923a75bdf","Type":"ContainerStarted","Data":"1fb96a86c69b5b50f9cceae102b5c65fadf412c9575f5bc2d83c29442bb3c286"} Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.747248 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pvghb" event={"ID":"86721216-38a9-4b44-8e34-d01a33c39e82","Type":"ContainerStarted","Data":"3628b137d2d83efa460ef5539b666645db067ea4425c2e7efb22b4f0ec56fcc5"} Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.752141 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v54pr\" (UniqueName: \"kubernetes.io/projected/d11d8c73-fe90-48c3-be77-b066aa57cacc-kube-api-access-v54pr\") pod \"machine-config-server-jvpsj\" (UID: \"d11d8c73-fe90-48c3-be77-b066aa57cacc\") " pod="openshift-machine-config-operator/machine-config-server-jvpsj" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.755036 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-b98c7" event={"ID":"5466b6b9-d1d5-471e-90e7-75f07078f8dc","Type":"ContainerStarted","Data":"b87f2c5487c1eeb777a70064742d269f6bd2ef8e07f38b420c016c73d76393c8"} Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.758333 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fwkht" event={"ID":"082079e0-8d5a-4d2e-959e-0366e4787bd5","Type":"ContainerStarted","Data":"b9464c36501b143c6fdc51ced7bea02d503e0b02a5314a8a2e3ef93e0c79e5a1"} Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.759950 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-96t4g" event={"ID":"59a1b37a-9035-459b-a485-280325d33264","Type":"ContainerStarted","Data":"43da730602ca37219a75d1347b35a8488feb8647bfa755b3e8e2deac39ad1b1b"} Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.759988 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-96t4g" event={"ID":"59a1b37a-9035-459b-a485-280325d33264","Type":"ContainerStarted","Data":"1bed6a14af1c27e28bfaf20957b4ab6debdecb60fbd87716abbb4a3205ddb87a"} Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.763678 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-5br4b" 
event={"ID":"063dd8d0-356e-4c11-96fd-6ecee1f28da8","Type":"ContainerStarted","Data":"817dc238f91cf819a8009c1a6eff870c565f9e77876230a572f763d46f20197e"} Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.763742 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-5br4b" event={"ID":"063dd8d0-356e-4c11-96fd-6ecee1f28da8","Type":"ContainerStarted","Data":"daa0bdca97ae3b6c22432214d3a6fff56d4ae3e6ae6e05e0805afbd87d88c210"} Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.768316 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6wtfq\" (UniqueName: \"kubernetes.io/projected/db132fa2-cd84-4b44-b523-48b1af9f6f73-kube-api-access-6wtfq\") pod \"csi-hostpathplugin-r7j2r\" (UID: \"db132fa2-cd84-4b44-b523-48b1af9f6f73\") " pod="hostpath-provisioner/csi-hostpathplugin-r7j2r" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.769060 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2ff7x" event={"ID":"9f11a2b3-15a4-4358-8604-bf4e6a0d22fe","Type":"ContainerStarted","Data":"687fb2d6f7baa7a9fd23d52bd43550811e7856da390ff3aad42ac715126bb37f"} Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.773297 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-r7j2r" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.785901 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jftgj" event={"ID":"872ffab3-f760-45e2-a5c8-aa1055f9ab2d","Type":"ContainerStarted","Data":"d72307d69f602e1ff26c8bbc8c21c16eb7edeb1b416c53a98c63744c491fce60"} Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.786951 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4kfxf\" (UniqueName: \"kubernetes.io/projected/e3487bc1-e5e6-4f19-9d79-86176f5b9689-kube-api-access-4kfxf\") pod \"ingress-canary-rqqwp\" (UID: \"e3487bc1-e5e6-4f19-9d79-86176f5b9689\") " pod="openshift-ingress-canary/ingress-canary-rqqwp" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.793859 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-l2hps" event={"ID":"d6b03c59-eb07-4d99-beb5-04e1eb19c7bc","Type":"ContainerStarted","Data":"e1d3c6b879e919d9a8eeb6fc928bb73b1f6f10789e409d2d56a047e2a54eac9e"} Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.809512 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-rx8sj" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.817790 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-4b45h" event={"ID":"e74c7e17-c70b-4637-ad47-58e1e192c52e","Type":"ContainerStarted","Data":"10f247f0ec7d89e86c0b592dc814ce67e3e07cafde8483a429f5e7c5f241e65d"} Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.817852 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-4b45h" event={"ID":"e74c7e17-c70b-4637-ad47-58e1e192c52e","Type":"ContainerStarted","Data":"866c9e4ea060d3590180ce1cec057596aa8bcbf4e01b778a768a521ce79fceb7"} Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.818562 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-4b45h" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.820031 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-z8gmg" event={"ID":"1f4f42b8-506a-4922-b7c4-7f77afbb238c","Type":"ContainerStarted","Data":"6cb750d8f6e27611724e0ea1f017a2846617e985d57d9fe4f1da057656e28495"} Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.820929 4782 patch_prober.go:28] interesting pod/downloads-7954f5f757-4b45h container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" start-of-body= Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.820987 4782 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-4b45h" podUID="e74c7e17-c70b-4637-ad47-58e1e192c52e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.830580 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27" Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.832287 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nrz8z" event={"ID":"1ca4c1e0-8b33-49fe-9f13-22feb88fd1ce","Type":"ContainerStarted","Data":"86f8a20613675160f704acae6ec62fb56aeac8a22285b87dc22bd713e9f4fd9c"} Feb 02 10:41:13 crc kubenswrapper[4782]: E0202 10:41:13.832716 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:41:14.332697021 +0000 UTC m=+154.216889737 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jxz27" (UID: "4877e80d-a6fe-4503-a64c-398815efa1e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.835041 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m55nt" event={"ID":"ccfe41db-4509-43cc-a95c-9ac09e6c9390","Type":"ContainerStarted","Data":"abd0c83cc403dcb1d0f98f3306260bba6570eb49eb377b7c92717f14fd267e4f"} Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.837441 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-59ctg" event={"ID":"ed24c96e-c389-443d-bdcf-b6fd727d472e","Type":"ContainerStarted","Data":"e6bcb3c3a4317261a64251a90479a74e7b78a949462b94a6651e31463ec332dc"} Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.860196 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6pzt4" event={"ID":"3e5362ac-062f-4bf2-a0dc-e96b2750ab52","Type":"ContainerStarted","Data":"ee9de80d00973c840fc5b964ea74298a58d60b1e718bafe4377156baab611d49"} Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.864848 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-sc7kt" event={"ID":"03d47200-aed2-431d-89fd-c27cdd91564f","Type":"ContainerStarted","Data":"3ff1f99d47a76aef7148a44cb594fe9fccb90137af935d28016f31e0538f0f1c"} Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.886984 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-c8k6k" event={"ID":"1e5e11c7-6a7f-466b-8d59-674bb931db4c","Type":"ContainerStarted","Data":"28e6524c0e66d885bbe200b6dfb3f02338992324346cef2b6fa2bd6754e0330c"} Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.897675 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4pb5d" event={"ID":"671fcd5f-c44a-46e7-840f-d204d2464822","Type":"ContainerStarted","Data":"a1a4a81ae79b8b2d14e735f0023db45c9b575a45ef2a77f622fb384d724b48b9"} Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.932271 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:41:13 crc kubenswrapper[4782]: E0202 10:41:13.933973 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:41:14.433933574 +0000 UTC m=+154.318126330 (durationBeforeRetry 500ms). 
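[annotation] The MountDevice/TearDown failures above recur throughout this window: the image-registry PVC is backed by the kubevirt.io.hostpath-provisioner CSI driver, whose node plugin pod (csi-hostpathplugin-r7j2r) is itself still waiting for a sandbox, so the kubelet has no registered driver to hand the operation to and requeues it with a 500ms backoff. A minimal sketch of how one might confirm which CSI drivers the kubelet has registered on the node, assuming client-go and a reachable kubeconfig (the kubeconfig path is hypothetical; the node name "crc" is taken from the log hostname); this is illustrative, not part of the captured job:

// csidrivers.go: list the per-node CSI driver registrations that the
// "not found in the list of registered CSI drivers" error consults.
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical kubeconfig path; adjust for the environment.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/core/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	// CSINode objects mirror the kubelet's per-node driver registrations.
	csiNode, err := cs.StorageV1().CSINodes().Get(context.TODO(), "crc", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, d := range csiNode.Spec.Drivers {
		// Once the hostpath plugin registers, kubevirt.io.hostpath-provisioner
		// appears here and the mount retries in the log start succeeding.
		fmt.Println(d.Name)
	}
}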
Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.953055 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-x2mbg"
Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.953923 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-f8xts"
Feb 02 10:41:13 crc kubenswrapper[4782]: I0202 10:41:13.977527 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-sf9m8" event={"ID":"76afda26-696c-4996-bc58-1c928e4fa92a","Type":"ContainerStarted","Data":"a512fcceae6cfaeaad197794b5b6c708f15cf79898b3102381c39333768e348a"}
Feb 02 10:41:14 crc kubenswrapper[4782]: I0202 10:41:14.000060 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-7v92z" event={"ID":"acfa5788-ab19-4e50-bc93-31b7a5069b32","Type":"ContainerStarted","Data":"921233a570c55bc883c3564dff261aca39ab0ae30aac10af39e867fb0a53b2da"}
Feb 02 10:41:14 crc kubenswrapper[4782]: I0202 10:41:14.000128 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-7v92z" event={"ID":"acfa5788-ab19-4e50-bc93-31b7a5069b32","Type":"ContainerStarted","Data":"0461ba5d89b8053e3c7c65d9d2a7d102c503b6d372a26d6649ab118ecaa97b9b"}
Feb 02 10:41:14 crc kubenswrapper[4782]: I0202 10:41:14.006310 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-29qjf" event={"ID":"fc962b97-f5d3-4673-9a39-8fbf6bc2424f","Type":"ContainerStarted","Data":"7d308460ab001244221375cbec47c4b889163b2f3295f1c1f6a02f8a9fbe57eb"}
Feb 02 10:41:14 crc kubenswrapper[4782]: I0202 10:41:14.016279 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5n294" event={"ID":"1136cbad-9e47-49e0-a890-83d86d325537","Type":"ContainerStarted","Data":"049e08b1c9c5f8aae7d828e291ec4f9d2fdce967f94545a1fa74bd6467d9dd95"}
Feb 02 10:41:14 crc kubenswrapper[4782]: I0202 10:41:14.033873 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27"
Feb 02 10:41:14 crc kubenswrapper[4782]: E0202 10:41:14.034333 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:41:14.534316792 +0000 UTC m=+154.418509508 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jxz27" (UID: "4877e80d-a6fe-4503-a64c-398815efa1e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 10:41:14 crc kubenswrapper[4782]: I0202 10:41:14.036897 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-29qjf"
Feb 02 10:41:14 crc kubenswrapper[4782]: I0202 10:41:14.039881 4782 patch_prober.go:28] interesting pod/router-default-5444994796-29qjf container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body=
Feb 02 10:41:14 crc kubenswrapper[4782]: I0202 10:41:14.039983 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-29qjf" podUID="fc962b97-f5d3-4673-9a39-8fbf6bc2424f" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused"
Feb 02 10:41:14 crc kubenswrapper[4782]: I0202 10:41:14.045105 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-jvpsj"
Feb 02 10:41:14 crc kubenswrapper[4782]: I0202 10:41:14.051062 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-rqqwp"
Feb 02 10:41:14 crc kubenswrapper[4782]: I0202 10:41:14.074962 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-bhjwh"]
Feb 02 10:41:14 crc kubenswrapper[4782]: I0202 10:41:14.135563 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 10:41:14 crc kubenswrapper[4782]: E0202 10:41:14.135804 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:41:14.63575989 +0000 UTC m=+154.519952616 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 10:41:14 crc kubenswrapper[4782]: I0202 10:41:14.135883 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27"
Feb 02 10:41:14 crc kubenswrapper[4782]: E0202 10:41:14.138358 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:41:14.638325654 +0000 UTC m=+154.522518540 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jxz27" (UID: "4877e80d-a6fe-4503-a64c-398815efa1e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 10:41:14 crc kubenswrapper[4782]: I0202 10:41:14.155767 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500470-wxc6r"]
Feb 02 10:41:14 crc kubenswrapper[4782]: I0202 10:41:14.207584 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-5br4b" podStartSLOduration=126.207544503 podStartE2EDuration="2m6.207544503s" podCreationTimestamp="2026-02-02 10:39:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:41:14.20572041 +0000 UTC m=+154.089913126" watchObservedRunningTime="2026-02-02 10:41:14.207544503 +0000 UTC m=+154.091737219"
Feb 02 10:41:14 crc kubenswrapper[4782]: I0202 10:41:14.237606 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 10:41:14 crc kubenswrapper[4782]: E0202 10:41:14.238169 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:41:14.738141196 +0000 UTC m=+154.622333912 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 10:41:14 crc kubenswrapper[4782]: I0202 10:41:14.238504 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-vtkbj"]
Feb 02 10:41:14 crc kubenswrapper[4782]: I0202 10:41:14.296451 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-lbq6z"]
Feb 02 10:41:14 crc kubenswrapper[4782]: I0202 10:41:14.307376 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-dsb8s"]
Feb 02 10:41:14 crc kubenswrapper[4782]: I0202 10:41:14.313772 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-wqm6f"]
Feb 02 10:41:14 crc kubenswrapper[4782]: I0202 10:41:14.354955 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27"
Feb 02 10:41:14 crc kubenswrapper[4782]: E0202 10:41:14.355456 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:41:14.855435212 +0000 UTC m=+154.739627928 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jxz27" (UID: "4877e80d-a6fe-4503-a64c-398815efa1e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 10:41:14 crc kubenswrapper[4782]: I0202 10:41:14.370523 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m55nt" podStartSLOduration=126.370497557 podStartE2EDuration="2m6.370497557s" podCreationTimestamp="2026-02-02 10:39:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:41:14.36921909 +0000 UTC m=+154.253411806" watchObservedRunningTime="2026-02-02 10:41:14.370497557 +0000 UTC m=+154.254690273"
Feb 02 10:41:14 crc kubenswrapper[4782]: W0202 10:41:14.427121 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1e893e98_9670_49d0_8312_d78c86a14ba4.slice/crio-809ced7a26c02c28b5525a5b5a64796c9293444511209c47a3b495bd53ca823a WatchSource:0}: Error finding container 809ced7a26c02c28b5525a5b5a64796c9293444511209c47a3b495bd53ca823a: Status 404 returned error can't find the container with id 809ced7a26c02c28b5525a5b5a64796c9293444511209c47a3b495bd53ca823a
Feb 02 10:41:14 crc kubenswrapper[4782]: I0202 10:41:14.456278 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 10:41:14 crc kubenswrapper[4782]: E0202 10:41:14.456476 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:41:14.956442848 +0000 UTC m=+154.840635574 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 10:41:14 crc kubenswrapper[4782]: I0202 10:41:14.456587 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27"
Feb 02 10:41:14 crc kubenswrapper[4782]: E0202 10:41:14.457099 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:41:14.957079197 +0000 UTC m=+154.841271913 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jxz27" (UID: "4877e80d-a6fe-4503-a64c-398815efa1e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 10:41:14 crc kubenswrapper[4782]: I0202 10:41:14.504322 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-bpzbh"]
Feb 02 10:41:14 crc kubenswrapper[4782]: I0202 10:41:14.558564 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 10:41:14 crc kubenswrapper[4782]: E0202 10:41:14.559405 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:41:15.05936102 +0000 UTC m=+154.943553736 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 10:41:14 crc kubenswrapper[4782]: W0202 10:41:14.609247 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2fdb9068_c8eb_4a1d_b4ab_c3f2ed70e4c1.slice/crio-7916cc388dcb22b6be0d597f62adf1c812e9093f673dd8609e03b65cbf05318a WatchSource:0}: Error finding container 7916cc388dcb22b6be0d597f62adf1c812e9093f673dd8609e03b65cbf05318a: Status 404 returned error can't find the container with id 7916cc388dcb22b6be0d597f62adf1c812e9093f673dd8609e03b65cbf05318a
Feb 02 10:41:14 crc kubenswrapper[4782]: I0202 10:41:14.660423 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27"
Feb 02 10:41:14 crc kubenswrapper[4782]: E0202 10:41:14.660915 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:41:15.160892561 +0000 UTC m=+155.045085277 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jxz27" (UID: "4877e80d-a6fe-4503-a64c-398815efa1e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 10:41:14 crc kubenswrapper[4782]: W0202 10:41:14.702101 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb925d6b9_8b5c_4407_bd7b_9ddcbc62d78d.slice/crio-3c914bce15811dddff10b6d68d66129a18313510803ead0a043bdc7959aa2d89 WatchSource:0}: Error finding container 3c914bce15811dddff10b6d68d66129a18313510803ead0a043bdc7959aa2d89: Status 404 returned error can't find the container with id 3c914bce15811dddff10b6d68d66129a18313510803ead0a043bdc7959aa2d89
Feb 02 10:41:14 crc kubenswrapper[4782]: I0202 10:41:14.761194 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 10:41:14 crc kubenswrapper[4782]: E0202 10:41:14.761743 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:41:15.261720041 +0000 UTC m=+155.145912757 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 10:41:14 crc kubenswrapper[4782]: I0202 10:41:14.862780 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27"
Feb 02 10:41:14 crc kubenswrapper[4782]: E0202 10:41:14.863971 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:41:15.363953693 +0000 UTC m=+155.248146409 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jxz27" (UID: "4877e80d-a6fe-4503-a64c-398815efa1e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 10:41:14 crc kubenswrapper[4782]: I0202 10:41:14.967518 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 10:41:14 crc kubenswrapper[4782]: E0202 10:41:14.968090 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:41:15.468067719 +0000 UTC m=+155.352260435 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 10:41:14 crc kubenswrapper[4782]: I0202 10:41:14.968700 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27"
Feb 02 10:41:14 crc kubenswrapper[4782]: E0202 10:41:14.969217 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:41:15.469198221 +0000 UTC m=+155.353390937 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jxz27" (UID: "4877e80d-a6fe-4503-a64c-398815efa1e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 10:41:14 crc kubenswrapper[4782]: I0202 10:41:14.997460 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-b98c7" podStartSLOduration=127.997412856 podStartE2EDuration="2m7.997412856s" podCreationTimestamp="2026-02-02 10:39:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:41:14.95634353 +0000 UTC m=+154.840536246" watchObservedRunningTime="2026-02-02 10:41:14.997412856 +0000 UTC m=+154.881605572"
Feb 02 10:41:15 crc kubenswrapper[4782]: I0202 10:41:15.021108 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-4b45h" podStartSLOduration=127.021065689 podStartE2EDuration="2m7.021065689s" podCreationTimestamp="2026-02-02 10:39:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:41:15.018838114 +0000 UTC m=+154.903030830" watchObservedRunningTime="2026-02-02 10:41:15.021065689 +0000 UTC m=+154.905258405"
Feb 02 10:41:15 crc kubenswrapper[4782]: I0202 10:41:15.042700 4782 patch_prober.go:28] interesting pod/router-default-5444994796-29qjf container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body=
Feb 02 10:41:15 crc kubenswrapper[4782]: I0202 10:41:15.043064 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-29qjf" podUID="fc962b97-f5d3-4673-9a39-8fbf6bc2424f" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused"
Feb 02 10:41:15 crc kubenswrapper[4782]: I0202 10:41:15.062565 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jftgj" podStartSLOduration=128.062532046 podStartE2EDuration="2m8.062532046s" podCreationTimestamp="2026-02-02 10:39:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:41:15.054319219 +0000 UTC m=+154.938511935" watchObservedRunningTime="2026-02-02 10:41:15.062532046 +0000 UTC m=+154.946724762"
Feb 02 10:41:15 crc kubenswrapper[4782]: I0202 10:41:15.073097 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 10:41:15 crc kubenswrapper[4782]: I0202 10:41:15.080964 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-lbq6z" event={"ID":"6e2ef5a7-dbcc-4ba6-ae0a-fc7a7146af7a","Type":"ContainerStarted","Data":"a8320cde34b625168e4692cde9b3454dccaeabc829174b47d64085d2f5b45873"}
Feb 02 10:41:15 crc kubenswrapper[4782]: E0202 10:41:15.101300 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:41:15.601243893 +0000 UTC m=+155.485436609 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 10:41:15 crc kubenswrapper[4782]: I0202 10:41:15.128908 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-bpzbh" event={"ID":"b925d6b9-8b5c-4407-bd7b-9ddcbc62d78d","Type":"ContainerStarted","Data":"3c914bce15811dddff10b6d68d66129a18313510803ead0a043bdc7959aa2d89"}
Feb 02 10:41:15 crc kubenswrapper[4782]: I0202 10:41:15.131062 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-dsb8s" event={"ID":"83c24a27-fdbe-468f-b4cf-780c87b598ae","Type":"ContainerStarted","Data":"01e32062d069a57210dfb3c4675630b56cd608a941ddd39a5505e8107646b05b"}
Feb 02 10:41:15 crc kubenswrapper[4782]: I0202 10:41:15.151337 4782 generic.go:334] "Generic (PLEG): container finished" podID="acfa5788-ab19-4e50-bc93-31b7a5069b32" containerID="921233a570c55bc883c3564dff261aca39ab0ae30aac10af39e867fb0a53b2da" exitCode=0
Feb 02 10:41:15 crc kubenswrapper[4782]: I0202 10:41:15.151487 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-7v92z" event={"ID":"acfa5788-ab19-4e50-bc93-31b7a5069b32","Type":"ContainerDied","Data":"921233a570c55bc883c3564dff261aca39ab0ae30aac10af39e867fb0a53b2da"}
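[annotation] The "Probe failed" lines in this window are ordinary HTTP GETs issued by the kubelet prober: while a container's server socket is not yet listening they fail with "connect: connection refused", and later in the log the router returns a 500 with per-check detail once the socket is open but the backend has not synced. A minimal sketch reproducing that kind of check, assuming it is run on the node itself against the router readiness URL quoted above; this is illustrative, not the kubelet's prober code:

// probecheck.go: issue the same style of HTTP check the prober logs above.
package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	// Kubelet probes use short timeouts; one second is a reasonable stand-in.
	client := &http.Client{Timeout: 1 * time.Second}
	resp, err := client.Get("http://localhost:1936/healthz/ready")
	if err != nil {
		// While the router is still starting this prints
		// "connect: connection refused", matching the log output.
		fmt.Println("probe failure:", err)
		return
	}
	defer resp.Body.Close()
	// Any non-success status is also a probe failure; the log later shows
	// a 500 whose body lists "[-]backend-http failed" and "[-]has-synced failed".
	fmt.Println("probe status:", resp.StatusCode)
}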
Feb 02 10:41:15 crc kubenswrapper[4782]: I0202 10:41:15.160056 4782 generic.go:334] "Generic (PLEG): container finished" podID="082079e0-8d5a-4d2e-959e-0366e4787bd5" containerID="ae210efe35521adeb523a0ad70993b1caf4264aaadaf47536448cef5635fce63" exitCode=0
Feb 02 10:41:15 crc kubenswrapper[4782]: I0202 10:41:15.160162 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fwkht" event={"ID":"082079e0-8d5a-4d2e-959e-0366e4787bd5","Type":"ContainerDied","Data":"ae210efe35521adeb523a0ad70993b1caf4264aaadaf47536448cef5635fce63"}
Feb 02 10:41:15 crc kubenswrapper[4782]: I0202 10:41:15.162672 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2ff7x" event={"ID":"9f11a2b3-15a4-4358-8604-bf4e6a0d22fe","Type":"ContainerStarted","Data":"0c77fbef295a9ce71e979d858238a9e4d89ec3bbb50cfdba6e4a858df0d23e23"}
Feb 02 10:41:15 crc kubenswrapper[4782]: I0202 10:41:15.207107 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-bhjwh" event={"ID":"4e9af173-4335-4ebd-9b11-dfb4180e968b","Type":"ContainerStarted","Data":"0b432bd836bc2fca5309bd46ebb2c520232f7e3ae9b4aba53e4b7c48b69410ea"}
Feb 02 10:41:15 crc kubenswrapper[4782]: E0202 10:41:15.229234 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:41:15.729194587 +0000 UTC m=+155.613387293 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jxz27" (UID: "4877e80d-a6fe-4503-a64c-398815efa1e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 10:41:15 crc kubenswrapper[4782]: I0202 10:41:15.229707 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27"
Feb 02 10:41:15 crc kubenswrapper[4782]: I0202 10:41:15.252416 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-84dzp"]
Feb 02 10:41:15 crc kubenswrapper[4782]: I0202 10:41:15.331418 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 10:41:15 crc kubenswrapper[4782]: E0202 10:41:15.331870 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:41:15.831842521 +0000 UTC m=+155.716035227 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 10:41:15 crc kubenswrapper[4782]: I0202 10:41:15.332085 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27"
Feb 02 10:41:15 crc kubenswrapper[4782]: E0202 10:41:15.336326 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:41:15.836299909 +0000 UTC m=+155.720492625 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jxz27" (UID: "4877e80d-a6fe-4503-a64c-398815efa1e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 10:41:15 crc kubenswrapper[4782]: I0202 10:41:15.349263 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9s279" event={"ID":"cc1f149e-4ec4-423a-b94e-bf0923a75bdf","Type":"ContainerStarted","Data":"675acde6c4f65e65a92a61e36467a7b8e1bcf1c0911ce0ce962b3a6a63fd17e5"}
Feb 02 10:41:15 crc kubenswrapper[4782]: I0202 10:41:15.419538 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500470-wxc6r" event={"ID":"9832aa65-d498-4a21-b53a-ebc591328a00","Type":"ContainerStarted","Data":"366cbdbf60f3a0bf6cbbfb63bc5e1d0a80ef38264552b527daf3339d1fbe1798"}
Feb 02 10:41:15 crc kubenswrapper[4782]: I0202 10:41:15.432945 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 10:41:15 crc kubenswrapper[4782]: E0202 10:41:15.435218 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:41:15.935185564 +0000 UTC m=+155.819378280 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 10:41:15 crc kubenswrapper[4782]: I0202 10:41:15.436485 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5ttkr"]
Feb 02 10:41:15 crc kubenswrapper[4782]: I0202 10:41:15.446083 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-29qjf" podStartSLOduration=127.446023747 podStartE2EDuration="2m7.446023747s" podCreationTimestamp="2026-02-02 10:39:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:41:15.428076219 +0000 UTC m=+155.312268935" watchObservedRunningTime="2026-02-02 10:41:15.446023747 +0000 UTC m=+155.330216463"
Feb 02 10:41:15 crc kubenswrapper[4782]: I0202 10:41:15.449006 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-f8xts"]
Feb 02 10:41:15 crc kubenswrapper[4782]: I0202 10:41:15.553472 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27"
Feb 02 10:41:15 crc kubenswrapper[4782]: I0202 10:41:15.554917 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-qsnhv"]
Feb 02 10:41:15 crc kubenswrapper[4782]: E0202 10:41:15.555404 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:41:16.055381434 +0000 UTC m=+155.939574230 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jxz27" (UID: "4877e80d-a6fe-4503-a64c-398815efa1e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 10:41:15 crc kubenswrapper[4782]: I0202 10:41:15.564245 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-wqm6f" event={"ID":"2fdb9068-c8eb-4a1d-b4ab-c3f2ed70e4c1","Type":"ContainerStarted","Data":"7916cc388dcb22b6be0d597f62adf1c812e9093f673dd8609e03b65cbf05318a"}
Feb 02 10:41:15 crc kubenswrapper[4782]: I0202 10:41:15.580895 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-vtkbj" event={"ID":"1e893e98-9670-49d0-8312-d78c86a14ba4","Type":"ContainerStarted","Data":"809ced7a26c02c28b5525a5b5a64796c9293444511209c47a3b495bd53ca823a"}
Feb 02 10:41:15 crc kubenswrapper[4782]: I0202 10:41:15.581594 4782 patch_prober.go:28] interesting pod/downloads-7954f5f757-4b45h container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" start-of-body=
Feb 02 10:41:15 crc kubenswrapper[4782]: I0202 10:41:15.590216 4782 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-4b45h" podUID="e74c7e17-c70b-4637-ad47-58e1e192c52e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused"
Feb 02 10:41:15 crc kubenswrapper[4782]: I0202 10:41:15.659010 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 10:41:15 crc kubenswrapper[4782]: I0202 10:41:15.660849 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4pb5d" podStartSLOduration=127.660826628 podStartE2EDuration="2m7.660826628s" podCreationTimestamp="2026-02-02 10:39:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:41:15.646781913 +0000 UTC m=+155.530974629" watchObservedRunningTime="2026-02-02 10:41:15.660826628 +0000 UTC m=+155.545019344"
Feb 02 10:41:15 crc kubenswrapper[4782]: E0202 10:41:15.661601 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:41:16.16158329 +0000 UTC m=+156.045776006 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 10:41:15 crc kubenswrapper[4782]: W0202 10:41:15.674935 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1180fe74_10a3_4aa0_b205_7f47597ef9b3.slice/crio-609183786e2387bef677fcfe3211e6a646ccb844ae0f5b2d1b485aee70e57e15 WatchSource:0}: Error finding container 609183786e2387bef677fcfe3211e6a646ccb844ae0f5b2d1b485aee70e57e15: Status 404 returned error can't find the container with id 609183786e2387bef677fcfe3211e6a646ccb844ae0f5b2d1b485aee70e57e15
Feb 02 10:41:15 crc kubenswrapper[4782]: I0202 10:41:15.695959 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2ff7x" podStartSLOduration=127.695931712 podStartE2EDuration="2m7.695931712s" podCreationTimestamp="2026-02-02 10:39:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:41:15.694396237 +0000 UTC m=+155.578588953" watchObservedRunningTime="2026-02-02 10:41:15.695931712 +0000 UTC m=+155.580124418"
Feb 02 10:41:15 crc kubenswrapper[4782]: I0202 10:41:15.760251 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9s279" podStartSLOduration=127.760224568 podStartE2EDuration="2m7.760224568s" podCreationTimestamp="2026-02-02 10:39:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:41:15.756618384 +0000 UTC m=+155.640811100" watchObservedRunningTime="2026-02-02 10:41:15.760224568 +0000 UTC m=+155.644417284"
Feb 02 10:41:15 crc kubenswrapper[4782]: I0202 10:41:15.763749 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27"
Feb 02 10:41:15 crc kubenswrapper[4782]: E0202 10:41:15.764163 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:41:16.26414532 +0000 UTC m=+156.148338046 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jxz27" (UID: "4877e80d-a6fe-4503-a64c-398815efa1e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 10:41:15 crc kubenswrapper[4782]: I0202 10:41:15.796578 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-r7j2r"]
Feb 02 10:41:15 crc kubenswrapper[4782]: I0202 10:41:15.861485 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-rx8sj"]
Feb 02 10:41:15 crc kubenswrapper[4782]: I0202 10:41:15.865392 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 10:41:15 crc kubenswrapper[4782]: E0202 10:41:15.865878 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:41:16.365856256 +0000 UTC m=+156.250048972 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 10:41:15 crc kubenswrapper[4782]: I0202 10:41:15.882522 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-96t4g" podStartSLOduration=127.882485636 podStartE2EDuration="2m7.882485636s" podCreationTimestamp="2026-02-02 10:39:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:41:15.85213801 +0000 UTC m=+155.736330726" watchObservedRunningTime="2026-02-02 10:41:15.882485636 +0000 UTC m=+155.766678352"
Feb 02 10:41:15 crc kubenswrapper[4782]: I0202 10:41:15.887511 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-l59tj"]
Feb 02 10:41:15 crc kubenswrapper[4782]: I0202 10:41:15.920193 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6x4zp"]
Feb 02 10:41:15 crc kubenswrapper[4782]: I0202 10:41:15.967839 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27"
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:41:16.468289314 +0000 UTC m=+156.352482030 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jxz27" (UID: "4877e80d-a6fe-4503-a64c-398815efa1e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:16 crc kubenswrapper[4782]: I0202 10:41:16.071386 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:41:16 crc kubenswrapper[4782]: E0202 10:41:16.071981 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:41:16.571957136 +0000 UTC m=+156.456149852 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:16 crc kubenswrapper[4782]: I0202 10:41:16.188131 4782 patch_prober.go:28] interesting pod/router-default-5444994796-29qjf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 10:41:16 crc kubenswrapper[4782]: [-]has-synced failed: reason withheld Feb 02 10:41:16 crc kubenswrapper[4782]: [+]process-running ok Feb 02 10:41:16 crc kubenswrapper[4782]: healthz check failed Feb 02 10:41:16 crc kubenswrapper[4782]: I0202 10:41:16.188611 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-29qjf" podUID="fc962b97-f5d3-4673-9a39-8fbf6bc2424f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 10:41:16 crc kubenswrapper[4782]: I0202 10:41:16.209148 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27" Feb 02 10:41:16 crc kubenswrapper[4782]: E0202 10:41:16.209759 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-02 10:41:16.709737974 +0000 UTC m=+156.593930690 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jxz27" (UID: "4877e80d-a6fe-4503-a64c-398815efa1e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:16 crc kubenswrapper[4782]: I0202 10:41:16.314367 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:41:16 crc kubenswrapper[4782]: E0202 10:41:16.314760 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:41:16.814716125 +0000 UTC m=+156.698908831 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:16 crc kubenswrapper[4782]: I0202 10:41:16.319206 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27" Feb 02 10:41:16 crc kubenswrapper[4782]: E0202 10:41:16.319691 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:41:16.819670788 +0000 UTC m=+156.703863504 (durationBeforeRetry 500ms). 
Feb 02 10:41:16 crc kubenswrapper[4782]: I0202 10:41:16.341315 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-rqqwp"]
Feb 02 10:41:16 crc kubenswrapper[4782]: I0202 10:41:16.432360 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 10:41:16 crc kubenswrapper[4782]: E0202 10:41:16.432795 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:41:16.932770903 +0000 UTC m=+156.816963619 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 10:41:16 crc kubenswrapper[4782]: I0202 10:41:16.432915 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27"
Feb 02 10:41:16 crc kubenswrapper[4782]: E0202 10:41:16.433385 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:41:16.93337554 +0000 UTC m=+156.817568256 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jxz27" (UID: "4877e80d-a6fe-4503-a64c-398815efa1e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 10:41:16 crc kubenswrapper[4782]: I0202 10:41:16.518102 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-x2mbg"]
Feb 02 10:41:16 crc kubenswrapper[4782]: I0202 10:41:16.539273 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 10:41:16 crc kubenswrapper[4782]: E0202 10:41:16.540007 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:41:17.039968568 +0000 UTC m=+156.924161284 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 10:41:16 crc kubenswrapper[4782]: I0202 10:41:16.624825 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-84dzp" event={"ID":"40828191-5926-42ba-b84d-5737181b97e5","Type":"ContainerStarted","Data":"6714b6b7956b6309c079a11da7e25c5269bbe368016ee25c48bf02265e924da4"}
Feb 02 10:41:16 crc kubenswrapper[4782]: I0202 10:41:16.643722 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27"
Feb 02 10:41:16 crc kubenswrapper[4782]: E0202 10:41:16.644247 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:41:17.144229978 +0000 UTC m=+157.028422694 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jxz27" (UID: "4877e80d-a6fe-4503-a64c-398815efa1e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 10:41:16 crc kubenswrapper[4782]: I0202 10:41:16.653181 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-sf9m8" event={"ID":"76afda26-696c-4996-bc58-1c928e4fa92a","Type":"ContainerStarted","Data":"8561d8543b7dc1a4f75138ec4a65ca5430bac9f43d26c49205d0ba1b811aacee"}
Feb 02 10:41:16 crc kubenswrapper[4782]: I0202 10:41:16.694357 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-sf9m8" podStartSLOduration=128.694324564 podStartE2EDuration="2m8.694324564s" podCreationTimestamp="2026-02-02 10:39:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:41:16.693951053 +0000 UTC m=+156.578143769" watchObservedRunningTime="2026-02-02 10:41:16.694324564 +0000 UTC m=+156.578517280"
Feb 02 10:41:16 crc kubenswrapper[4782]: I0202 10:41:16.732461 4782 generic.go:334] "Generic (PLEG): container finished" podID="1f4f42b8-506a-4922-b7c4-7f77afbb238c" containerID="34e7f8d57e8c9375f8c9398e6d4cd370ed725dc5ed5f6555c8a512aa9090cea5" exitCode=0
Feb 02 10:41:16 crc kubenswrapper[4782]: I0202 10:41:16.732824 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-z8gmg" event={"ID":"1f4f42b8-506a-4922-b7c4-7f77afbb238c","Type":"ContainerDied","Data":"34e7f8d57e8c9375f8c9398e6d4cd370ed725dc5ed5f6555c8a512aa9090cea5"}
Feb 02 10:41:16 crc kubenswrapper[4782]: I0202 10:41:16.751443 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 10:41:16 crc kubenswrapper[4782]: E0202 10:41:16.754743 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:41:17.254700137 +0000 UTC m=+157.138892853 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 10:41:16 crc kubenswrapper[4782]: I0202 10:41:16.755924 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27"
Feb 02 10:41:16 crc kubenswrapper[4782]: E0202 10:41:16.758000 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:41:17.257983552 +0000 UTC m=+157.142176348 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jxz27" (UID: "4877e80d-a6fe-4503-a64c-398815efa1e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 10:41:16 crc kubenswrapper[4782]: I0202 10:41:16.809775 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nrz8z" event={"ID":"1ca4c1e0-8b33-49fe-9f13-22feb88fd1ce","Type":"ContainerStarted","Data":"2e679f2a123f3ea3c808d877619f195af22f10478050a25694f4c7bd93e60016"}
Feb 02 10:41:16 crc kubenswrapper[4782]: I0202 10:41:16.855304 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-jvpsj" event={"ID":"d11d8c73-fe90-48c3-be77-b066aa57cacc","Type":"ContainerStarted","Data":"2ace9e1bf16756d2ae740c0dadc00576d3529dd94d54e5892dd754bf57288489"}
Feb 02 10:41:16 crc kubenswrapper[4782]: I0202 10:41:16.855792 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-l59tj" event={"ID":"9ee676ac-a60f-4855-949f-d3210f9314f5","Type":"ContainerStarted","Data":"aa584d7d950044cd4ffbd586e9a45da0355e4894887e2a530ac6cc37db172038"}
Feb 02 10:41:16 crc kubenswrapper[4782]: I0202 10:41:16.855806 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-f8xts" event={"ID":"1180fe74-10a3-4aa0-b205-7f47597ef9b3","Type":"ContainerStarted","Data":"609183786e2387bef677fcfe3211e6a646ccb844ae0f5b2d1b485aee70e57e15"}
Feb 02 10:41:16 crc kubenswrapper[4782]: I0202 10:41:16.861088 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 10:41:16 crc kubenswrapper[4782]: E0202 10:41:16.864366 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:41:17.364332012 +0000 UTC m=+157.248524728 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 10:41:16 crc kubenswrapper[4782]: I0202 10:41:16.913393 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-c8k6k" event={"ID":"1e5e11c7-6a7f-466b-8d59-674bb931db4c","Type":"ContainerStarted","Data":"4c5f44d3fab84c230a91f9e50e8922318b2da22446b29707de66df8a793b9b0f"}
Feb 02 10:41:16 crc kubenswrapper[4782]: I0202 10:41:16.939567 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pvghb" event={"ID":"86721216-38a9-4b44-8e34-d01a33c39e82","Type":"ContainerStarted","Data":"c784413eb79fdf358b73df71b8841a880b3a4fbb7d789d856ce2403a59d79fc9"}
Feb 02 10:41:16 crc kubenswrapper[4782]: I0202 10:41:16.940268 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pvghb"
Feb 02 10:41:16 crc kubenswrapper[4782]: I0202 10:41:16.942880 4782 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-pvghb container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.19:8443/healthz\": dial tcp 10.217.0.19:8443: connect: connection refused" start-of-body=
Feb 02 10:41:16 crc kubenswrapper[4782]: I0202 10:41:16.942919 4782 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pvghb" podUID="86721216-38a9-4b44-8e34-d01a33c39e82" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.19:8443/healthz\": dial tcp 10.217.0.19:8443: connect: connection refused"
Feb 02 10:41:16 crc kubenswrapper[4782]: I0202 10:41:16.950784 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-lbq6z" event={"ID":"6e2ef5a7-dbcc-4ba6-ae0a-fc7a7146af7a","Type":"ContainerStarted","Data":"3c0cd563542e52112f85015a41ab2b892d7ecd47f9d6571f9e5bdc03673d82e2"}
Feb 02 10:41:16 crc kubenswrapper[4782]: I0202 10:41:16.965923 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27"
Feb 02 10:41:16 crc kubenswrapper[4782]: E0202 10:41:16.966307 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:41:17.466267575 +0000 UTC m=+157.350460471 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jxz27" (UID: "4877e80d-a6fe-4503-a64c-398815efa1e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 10:41:16 crc kubenswrapper[4782]: I0202 10:41:16.977218 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-rx8sj" event={"ID":"d9fd7961-c147-4fad-b4b7-75f5567976f2","Type":"ContainerStarted","Data":"d0b1d1249ae95523d0bb7cd985db2b5c7d0b50a7db66d727ea69904fee5552df"}
Feb 02 10:41:16 crc kubenswrapper[4782]: I0202 10:41:16.979821 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nrz8z" podStartSLOduration=129.979795455 podStartE2EDuration="2m9.979795455s" podCreationTimestamp="2026-02-02 10:39:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:41:16.854027914 +0000 UTC m=+156.738220630" watchObservedRunningTime="2026-02-02 10:41:16.979795455 +0000 UTC m=+156.863988171"
Feb 02 10:41:16 crc kubenswrapper[4782]: I0202 10:41:16.980462 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-c8k6k" podStartSLOduration=128.980454934 podStartE2EDuration="2m8.980454934s" podCreationTimestamp="2026-02-02 10:39:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:41:16.977776287 +0000 UTC m=+156.861969023" watchObservedRunningTime="2026-02-02 10:41:16.980454934 +0000 UTC m=+156.864647670"
Feb 02 10:41:17 crc kubenswrapper[4782]: I0202 10:41:16.999913 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-dsb8s" event={"ID":"83c24a27-fdbe-468f-b4cf-780c87b598ae","Type":"ContainerStarted","Data":"7766ba0f1792fbabdd4dfd1bd9f01fc89c47b35f57865ca551d6b825e4452bd0"}
Feb 02 10:41:17 crc kubenswrapper[4782]: I0202 10:41:17.000604 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-dsb8s"
Feb 02 10:41:17 crc kubenswrapper[4782]: I0202 10:41:17.002287 4782 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-dsb8s container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.34:8080/healthz\": dial tcp 10.217.0.34:8080: connect: connection refused" start-of-body=
Feb 02 10:41:17 crc kubenswrapper[4782]: I0202 10:41:17.002327 4782 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-dsb8s" podUID="83c24a27-fdbe-468f-b4cf-780c87b598ae" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.34:8080/healthz\": dial tcp 10.217.0.34:8080: connect: connection refused"
Feb 02 10:41:17 crc kubenswrapper[4782]: I0202 10:41:17.035453 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6pzt4" event={"ID":"3e5362ac-062f-4bf2-a0dc-e96b2750ab52","Type":"ContainerStarted","Data":"b04ac8253f1360623e90bc90b7f2cc920ce7d2b40dbfa6d465169a37dedfa6dd"}
pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6pzt4" event={"ID":"3e5362ac-062f-4bf2-a0dc-e96b2750ab52","Type":"ContainerStarted","Data":"b04ac8253f1360623e90bc90b7f2cc920ce7d2b40dbfa6d465169a37dedfa6dd"} Feb 02 10:41:17 crc kubenswrapper[4782]: I0202 10:41:17.037728 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-sc7kt" event={"ID":"03d47200-aed2-431d-89fd-c27cdd91564f","Type":"ContainerStarted","Data":"df2490b959607b5dd5fcd068ecdc3e142f390fcbf0477d98f074718eab612f07"} Feb 02 10:41:17 crc kubenswrapper[4782]: I0202 10:41:17.045038 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-sc7kt" Feb 02 10:41:17 crc kubenswrapper[4782]: I0202 10:41:17.048208 4782 patch_prober.go:28] interesting pod/router-default-5444994796-29qjf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 10:41:17 crc kubenswrapper[4782]: [-]has-synced failed: reason withheld Feb 02 10:41:17 crc kubenswrapper[4782]: [+]process-running ok Feb 02 10:41:17 crc kubenswrapper[4782]: healthz check failed Feb 02 10:41:17 crc kubenswrapper[4782]: I0202 10:41:17.048279 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-29qjf" podUID="fc962b97-f5d3-4673-9a39-8fbf6bc2424f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 10:41:17 crc kubenswrapper[4782]: I0202 10:41:17.060429 4782 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-sc7kt container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.27:6443/healthz\": dial tcp 10.217.0.27:6443: connect: connection refused" start-of-body= Feb 02 10:41:17 crc kubenswrapper[4782]: I0202 10:41:17.061064 4782 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-sc7kt" podUID="03d47200-aed2-431d-89fd-c27cdd91564f" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.27:6443/healthz\": dial tcp 10.217.0.27:6443: connect: connection refused" Feb 02 10:41:17 crc kubenswrapper[4782]: I0202 10:41:17.072742 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:41:17 crc kubenswrapper[4782]: E0202 10:41:17.074152 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:41:17.574131909 +0000 UTC m=+157.458324625 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:17 crc kubenswrapper[4782]: I0202 10:41:17.090069 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-lbq6z" podStartSLOduration=129.090030388 podStartE2EDuration="2m9.090030388s" podCreationTimestamp="2026-02-02 10:39:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:41:17.070746971 +0000 UTC m=+156.954939707" watchObservedRunningTime="2026-02-02 10:41:17.090030388 +0000 UTC m=+156.974223104" Feb 02 10:41:17 crc kubenswrapper[4782]: I0202 10:41:17.090497 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pvghb" podStartSLOduration=129.090490021 podStartE2EDuration="2m9.090490021s" podCreationTimestamp="2026-02-02 10:39:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:41:17.027880073 +0000 UTC m=+156.912072799" watchObservedRunningTime="2026-02-02 10:41:17.090490021 +0000 UTC m=+156.974682737" Feb 02 10:41:17 crc kubenswrapper[4782]: I0202 10:41:17.136999 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-sc7kt" podStartSLOduration=130.136977993 podStartE2EDuration="2m10.136977993s" podCreationTimestamp="2026-02-02 10:39:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:41:17.135936573 +0000 UTC m=+157.020129289" watchObservedRunningTime="2026-02-02 10:41:17.136977993 +0000 UTC m=+157.021170709" Feb 02 10:41:17 crc kubenswrapper[4782]: I0202 10:41:17.176686 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27" Feb 02 10:41:17 crc kubenswrapper[4782]: E0202 10:41:17.176981 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:41:17.676969228 +0000 UTC m=+157.561161944 (durationBeforeRetry 500ms). 
Feb 02 10:41:17 crc kubenswrapper[4782]: I0202 10:41:17.185228 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-vtkbj" event={"ID":"1e893e98-9670-49d0-8312-d78c86a14ba4","Type":"ContainerStarted","Data":"19c52582d9d6503c8512b7942d993ecb6474039013418ab9733501cd405c2f51"}
Feb 02 10:41:17 crc kubenswrapper[4782]: I0202 10:41:17.186582 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-dsb8s" podStartSLOduration=129.186560835 podStartE2EDuration="2m9.186560835s" podCreationTimestamp="2026-02-02 10:39:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:41:17.186482302 +0000 UTC m=+157.070675018" watchObservedRunningTime="2026-02-02 10:41:17.186560835 +0000 UTC m=+157.070753551"
Feb 02 10:41:17 crc kubenswrapper[4782]: I0202 10:41:17.258347 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-vtkbj" podStartSLOduration=129.258305946 podStartE2EDuration="2m9.258305946s" podCreationTimestamp="2026-02-02 10:39:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:41:17.234404136 +0000 UTC m=+157.118596862" watchObservedRunningTime="2026-02-02 10:41:17.258305946 +0000 UTC m=+157.142498662"
Feb 02 10:41:17 crc kubenswrapper[4782]: I0202 10:41:17.260665 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-59ctg" event={"ID":"ed24c96e-c389-443d-bdcf-b6fd727d472e","Type":"ContainerStarted","Data":"e037096537fd49b553528b65b8f5d6cec7fd232ba7d814579263eb45d04c42d7"}
Feb 02 10:41:17 crc kubenswrapper[4782]: I0202 10:41:17.286266 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 10:41:17 crc kubenswrapper[4782]: E0202 10:41:17.299687 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:41:17.799622588 +0000 UTC m=+157.683815304 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 10:41:17 crc kubenswrapper[4782]: I0202 10:41:17.322576 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27"
Feb 02 10:41:17 crc kubenswrapper[4782]: E0202 10:41:17.324249 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:41:17.824233929 +0000 UTC m=+157.708426645 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jxz27" (UID: "4877e80d-a6fe-4503-a64c-398815efa1e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 10:41:17 crc kubenswrapper[4782]: I0202 10:41:17.340089 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-59ctg" podStartSLOduration=129.340061036 podStartE2EDuration="2m9.340061036s" podCreationTimestamp="2026-02-02 10:39:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:41:17.339610143 +0000 UTC m=+157.223802859" watchObservedRunningTime="2026-02-02 10:41:17.340061036 +0000 UTC m=+157.224253752"
Feb 02 10:41:17 crc kubenswrapper[4782]: I0202 10:41:17.406868 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500470-wxc6r" event={"ID":"9832aa65-d498-4a21-b53a-ebc591328a00","Type":"ContainerStarted","Data":"b85748eff3923d08bc6d620f725d7b018256a0e4610871950b9aeb66eccc2539"}
Feb 02 10:41:17 crc kubenswrapper[4782]: I0202 10:41:17.409385 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-wqm6f" podStartSLOduration=129.409349516 podStartE2EDuration="2m9.409349516s" podCreationTimestamp="2026-02-02 10:39:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:41:17.397149214 +0000 UTC m=+157.281341930" watchObservedRunningTime="2026-02-02 10:41:17.409349516 +0000 UTC m=+157.293542232"
Feb 02 10:41:17 crc kubenswrapper[4782]: I0202 10:41:17.422404 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6x4zp" event={"ID":"04bfeb66-d53c-4263-a149-e7e1d705f9d1","Type":"ContainerStarted","Data":"d379e1009a1a0d9e7dc27efc54baf6b5fb55c2e2555ae9793f6b41017abec525"}
event={"ID":"04bfeb66-d53c-4263-a149-e7e1d705f9d1","Type":"ContainerStarted","Data":"d379e1009a1a0d9e7dc27efc54baf6b5fb55c2e2555ae9793f6b41017abec525"} Feb 02 10:41:17 crc kubenswrapper[4782]: I0202 10:41:17.424312 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:41:17 crc kubenswrapper[4782]: E0202 10:41:17.425867 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:41:17.925810311 +0000 UTC m=+157.810003027 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:17 crc kubenswrapper[4782]: I0202 10:41:17.430353 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-l2hps" event={"ID":"d6b03c59-eb07-4d99-beb5-04e1eb19c7bc","Type":"ContainerStarted","Data":"6845853f244b2c75c99b177a0d1190d48806df172d59ab6bff1e9c7722883a01"} Feb 02 10:41:17 crc kubenswrapper[4782]: I0202 10:41:17.436300 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-l2hps" Feb 02 10:41:17 crc kubenswrapper[4782]: I0202 10:41:17.450140 4782 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-l2hps container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.14:8443/healthz\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body= Feb 02 10:41:17 crc kubenswrapper[4782]: I0202 10:41:17.450207 4782 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-l2hps" podUID="d6b03c59-eb07-4d99-beb5-04e1eb19c7bc" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.14:8443/healthz\": dial tcp 10.217.0.14:8443: connect: connection refused" Feb 02 10:41:17 crc kubenswrapper[4782]: I0202 10:41:17.489330 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-qsnhv" event={"ID":"4471bb99-24c2-45b0-bb05-3f3d59191e12","Type":"ContainerStarted","Data":"cedf0ef9e9da630d9d7462798a41206f29c965ecd9b398094d354ec99515d10c"} Feb 02 10:41:17 crc kubenswrapper[4782]: I0202 10:41:17.490370 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-qsnhv" Feb 02 10:41:17 crc kubenswrapper[4782]: I0202 10:41:17.491790 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5ttkr" 
event={"ID":"7391fe7f-58d0-4947-b2e3-32b1cd1cb01d","Type":"ContainerStarted","Data":"144bdd87fcd4e43e08dc8e8720f7a4a651248039101cecfd432b655bd220270b"} Feb 02 10:41:17 crc kubenswrapper[4782]: I0202 10:41:17.492563 4782 patch_prober.go:28] interesting pod/console-operator-58897d9998-qsnhv container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.38:8443/readyz\": dial tcp 10.217.0.38:8443: connect: connection refused" start-of-body= Feb 02 10:41:17 crc kubenswrapper[4782]: I0202 10:41:17.492604 4782 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-qsnhv" podUID="4471bb99-24c2-45b0-bb05-3f3d59191e12" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.38:8443/readyz\": dial tcp 10.217.0.38:8443: connect: connection refused" Feb 02 10:41:17 crc kubenswrapper[4782]: I0202 10:41:17.513302 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-r7j2r" event={"ID":"db132fa2-cd84-4b44-b523-48b1af9f6f73","Type":"ContainerStarted","Data":"f645a791a3450700734f03dc17d37ec76d3e1d1ce6dd2fc86ac5a97507c430f7"} Feb 02 10:41:17 crc kubenswrapper[4782]: I0202 10:41:17.529764 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27" Feb 02 10:41:17 crc kubenswrapper[4782]: E0202 10:41:17.531514 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:41:18.031487122 +0000 UTC m=+157.915680018 (durationBeforeRetry 500ms). 
Feb 02 10:41:17 crc kubenswrapper[4782]: I0202 10:41:17.542830 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-bpzbh" event={"ID":"b925d6b9-8b5c-4407-bd7b-9ddcbc62d78d","Type":"ContainerStarted","Data":"1ff197f1bc588e155c33611ca81f8fb60a2d7209994e44077453eba80258b1aa"}
Feb 02 10:41:17 crc kubenswrapper[4782]: I0202 10:41:17.624985 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-l2hps" podStartSLOduration=129.624961941 podStartE2EDuration="2m9.624961941s" podCreationTimestamp="2026-02-02 10:39:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:41:17.623006254 +0000 UTC m=+157.507198970" watchObservedRunningTime="2026-02-02 10:41:17.624961941 +0000 UTC m=+157.509154657"
Feb 02 10:41:17 crc kubenswrapper[4782]: I0202 10:41:17.625109 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29500470-wxc6r" podStartSLOduration=130.625104875 podStartE2EDuration="2m10.625104875s" podCreationTimestamp="2026-02-02 10:39:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:41:17.452447731 +0000 UTC m=+157.336640447" watchObservedRunningTime="2026-02-02 10:41:17.625104875 +0000 UTC m=+157.509297591"
Feb 02 10:41:17 crc kubenswrapper[4782]: I0202 10:41:17.629472 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5n294" event={"ID":"1136cbad-9e47-49e0-a890-83d86d325537","Type":"ContainerStarted","Data":"88dd5902d563a82f23232676ad168c9e7e041f718f29d548107f4e61045ad4cb"}
Feb 02 10:41:17 crc kubenswrapper[4782]: I0202 10:41:17.631566 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 10:41:17 crc kubenswrapper[4782]: E0202 10:41:17.632769 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:41:18.132737975 +0000 UTC m=+158.016930691 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 10:41:17 crc kubenswrapper[4782]: I0202 10:41:17.652928 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4pb5d" event={"ID":"671fcd5f-c44a-46e7-840f-d204d2464822","Type":"ContainerStarted","Data":"eb8054c8ebd3591d23679ac11a66f5399c030827cda5bc451b67c3a140494277"}
Feb 02 10:41:17 crc kubenswrapper[4782]: I0202 10:41:17.654862 4782 patch_prober.go:28] interesting pod/downloads-7954f5f757-4b45h container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" start-of-body=
Feb 02 10:41:17 crc kubenswrapper[4782]: I0202 10:41:17.654906 4782 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-4b45h" podUID="e74c7e17-c70b-4637-ad47-58e1e192c52e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused"
Feb 02 10:41:17 crc kubenswrapper[4782]: I0202 10:41:17.733362 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27"
Feb 02 10:41:17 crc kubenswrapper[4782]: E0202 10:41:17.740208 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:41:18.240174387 +0000 UTC m=+158.124367313 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jxz27" (UID: "4877e80d-a6fe-4503-a64c-398815efa1e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 10:41:17 crc kubenswrapper[4782]: I0202 10:41:17.835891 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 10:41:17 crc kubenswrapper[4782]: E0202 10:41:17.836358 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:41:18.335972513 +0000 UTC m=+158.220165239 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 10:41:17 crc kubenswrapper[4782]: I0202 10:41:17.836399 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27"
Feb 02 10:41:17 crc kubenswrapper[4782]: E0202 10:41:17.836815 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:41:18.336802387 +0000 UTC m=+158.220995103 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jxz27" (UID: "4877e80d-a6fe-4503-a64c-398815efa1e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 10:41:17 crc kubenswrapper[4782]: I0202 10:41:17.839698 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-qsnhv" podStartSLOduration=129.83968368 podStartE2EDuration="2m9.83968368s" podCreationTimestamp="2026-02-02 10:39:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:41:17.747505629 +0000 UTC m=+157.631698345" watchObservedRunningTime="2026-02-02 10:41:17.83968368 +0000 UTC m=+157.723876416"
Feb 02 10:41:17 crc kubenswrapper[4782]: I0202 10:41:17.941561 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 10:41:17 crc kubenswrapper[4782]: E0202 10:41:17.942136 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:41:18.442088626 +0000 UTC m=+158.326281342 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 10:41:18 crc kubenswrapper[4782]: I0202 10:41:18.043696 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27"
Feb 02 10:41:18 crc kubenswrapper[4782]: E0202 10:41:18.044441 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:41:18.544423501 +0000 UTC m=+158.428616207 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jxz27" (UID: "4877e80d-a6fe-4503-a64c-398815efa1e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 10:41:18 crc kubenswrapper[4782]: I0202 10:41:18.046954 4782 patch_prober.go:28] interesting pod/router-default-5444994796-29qjf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 02 10:41:18 crc kubenswrapper[4782]: [-]has-synced failed: reason withheld
Feb 02 10:41:18 crc kubenswrapper[4782]: [+]process-running ok
Feb 02 10:41:18 crc kubenswrapper[4782]: healthz check failed
Feb 02 10:41:18 crc kubenswrapper[4782]: I0202 10:41:18.047025 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-29qjf" podUID="fc962b97-f5d3-4673-9a39-8fbf6bc2424f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 02 10:41:18 crc kubenswrapper[4782]: I0202 10:41:18.150337 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 10:41:18 crc kubenswrapper[4782]: E0202 10:41:18.150869 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:41:18.650840563 +0000 UTC m=+158.535033279 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 10:41:18 crc kubenswrapper[4782]: I0202 10:41:18.254900 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27"
Feb 02 10:41:18 crc kubenswrapper[4782]: E0202 10:41:18.256936 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:41:18.756910155 +0000 UTC m=+158.641102871 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jxz27" (UID: "4877e80d-a6fe-4503-a64c-398815efa1e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 10:41:18 crc kubenswrapper[4782]: I0202 10:41:18.360733 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 10:41:18 crc kubenswrapper[4782]: E0202 10:41:18.361114 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:41:18.861075382 +0000 UTC m=+158.745268098 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 10:41:18 crc kubenswrapper[4782]: I0202 10:41:18.472788 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27"
Feb 02 10:41:18 crc kubenswrapper[4782]: E0202 10:41:18.473190 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:41:18.973175308 +0000 UTC m=+158.857368024 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jxz27" (UID: "4877e80d-a6fe-4503-a64c-398815efa1e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 10:41:18 crc kubenswrapper[4782]: I0202 10:41:18.574970 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 10:41:18 crc kubenswrapper[4782]: E0202 10:41:18.575412 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:41:19.075394079 +0000 UTC m=+158.959586795 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 10:41:18 crc kubenswrapper[4782]: I0202 10:41:18.676680 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27"
Feb 02 10:41:18 crc kubenswrapper[4782]: I0202 10:41:18.676731 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5n294" event={"ID":"1136cbad-9e47-49e0-a890-83d86d325537","Type":"ContainerStarted","Data":"5d93a4ee7ab87f66d43ca414cad70a3abab495e56d5fa4d07a3ec4fd51e19e67"}
Feb 02 10:41:18 crc kubenswrapper[4782]: E0202 10:41:18.677173 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:41:19.177148707 +0000 UTC m=+159.061341433 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jxz27" (UID: "4877e80d-a6fe-4503-a64c-398815efa1e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 10:41:18 crc kubenswrapper[4782]: I0202 10:41:18.689244 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-59ctg" event={"ID":"ed24c96e-c389-443d-bdcf-b6fd727d472e","Type":"ContainerStarted","Data":"16d1533fa9fd564637c00c7ac26601ec856b29a69837c170a48643b421b53856"}
Feb 02 10:41:18 crc kubenswrapper[4782]: I0202 10:41:18.704675 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-7v92z" event={"ID":"acfa5788-ab19-4e50-bc93-31b7a5069b32","Type":"ContainerStarted","Data":"72bcac9cb7b150a69f6ceca776a72ef29a03eba9f5c1f9ad15dc601084d035bd"}
Feb 02 10:41:18 crc kubenswrapper[4782]: I0202 10:41:18.704772 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-7v92z"
Feb 02 10:41:18 crc kubenswrapper[4782]: I0202 10:41:18.711962 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5ttkr" event={"ID":"7391fe7f-58d0-4947-b2e3-32b1cd1cb01d","Type":"ContainerStarted","Data":"faa62695a6a822d2ecef0a8249556c0694c69b5ad78c8384c335b7a0aa887cd6"}
Feb 02 10:41:18 crc kubenswrapper[4782]: I0202 10:41:18.723256 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-x2mbg" event={"ID":"e457712f-8cc5-4167-b074-cd8713eb9989","Type":"ContainerStarted","Data":"93e1b5344dcec97dbfdf479458be0e7b8e37078141a051288144b56f50e25139"}
Feb 02 10:41:18 crc kubenswrapper[4782]: I0202 10:41:18.723299 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-x2mbg" event={"ID":"e457712f-8cc5-4167-b074-cd8713eb9989","Type":"ContainerStarted","Data":"37c6be04ae0eb86f31e008f743a9c19454846b04d7d8c6f787d7766afb8fcaf4"}
Feb 02 10:41:18 crc kubenswrapper[4782]: I0202 10:41:18.724237 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-x2mbg"
Feb 02 10:41:18 crc kubenswrapper[4782]: I0202 10:41:18.724939 4782 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-x2mbg container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.18:8443/healthz\": dial tcp 10.217.0.18:8443: connect: connection refused" start-of-body=
Feb 02 10:41:18 crc kubenswrapper[4782]: I0202 10:41:18.724990 4782 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-x2mbg" podUID="e457712f-8cc5-4167-b074-cd8713eb9989" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.18:8443/healthz\": dial tcp 10.217.0.18:8443: connect: connection refused"
Feb 02 10:41:18 crc kubenswrapper[4782]: I0202 10:41:18.726961 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fwkht" event={"ID":"082079e0-8d5a-4d2e-959e-0366e4787bd5","Type":"ContainerStarted","Data":"fc20bf99c5542b1e870fd90d6d5b8c58c5907e341414a69e3a33457bb31007a9"}
Feb 02 10:41:18 crc kubenswrapper[4782]: I0202 10:41:18.727088 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5n294" podStartSLOduration=130.727071228 podStartE2EDuration="2m10.727071228s" podCreationTimestamp="2026-02-02 10:39:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:41:17.841288116 +0000 UTC m=+157.725480852" watchObservedRunningTime="2026-02-02 10:41:18.727071228 +0000 UTC m=+158.611263944"
Feb 02 10:41:18 crc kubenswrapper[4782]: I0202 10:41:18.727592 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-7v92z" podStartSLOduration=130.727587543 podStartE2EDuration="2m10.727587543s" podCreationTimestamp="2026-02-02 10:39:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:41:18.725447161 +0000 UTC m=+158.609639877" watchObservedRunningTime="2026-02-02 10:41:18.727587543 +0000 UTC m=+158.611780259"
Feb 02 10:41:18 crc kubenswrapper[4782]: I0202 10:41:18.738369 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-rqqwp" event={"ID":"e3487bc1-e5e6-4f19-9d79-86176f5b9689","Type":"ContainerStarted","Data":"dde926fdab276b28b2a5e4d83a8d09ceaf9ff32a232c715b88e7b0b563a33a9c"}
Feb 02 10:41:18 crc kubenswrapper[4782]: I0202 10:41:18.738467 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-rqqwp" event={"ID":"e3487bc1-e5e6-4f19-9d79-86176f5b9689","Type":"ContainerStarted","Data":"47d434ef590522ad750aa45a3290d2712e9b65f0e14b67777d7efb0981de5a59"}
event={"ID":"e3487bc1-e5e6-4f19-9d79-86176f5b9689","Type":"ContainerStarted","Data":"47d434ef590522ad750aa45a3290d2712e9b65f0e14b67777d7efb0981de5a59"} Feb 02 10:41:18 crc kubenswrapper[4782]: I0202 10:41:18.745277 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-84dzp" event={"ID":"40828191-5926-42ba-b84d-5737181b97e5","Type":"ContainerStarted","Data":"0274879557cb2fa846e7d39a3d6e1c0b31c73237c40e22c24552cf5b1376b0fe"} Feb 02 10:41:18 crc kubenswrapper[4782]: I0202 10:41:18.754624 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-jvpsj" event={"ID":"d11d8c73-fe90-48c3-be77-b066aa57cacc","Type":"ContainerStarted","Data":"3aaa428559bc9e8e4b518d2006412cd18d56f880137224c819a072c94bda832c"} Feb 02 10:41:18 crc kubenswrapper[4782]: I0202 10:41:18.761043 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-x2mbg" podStartSLOduration=130.761023238 podStartE2EDuration="2m10.761023238s" podCreationTimestamp="2026-02-02 10:39:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:41:18.755689504 +0000 UTC m=+158.639882220" watchObservedRunningTime="2026-02-02 10:41:18.761023238 +0000 UTC m=+158.645215954" Feb 02 10:41:18 crc kubenswrapper[4782]: I0202 10:41:18.772220 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-bpzbh" event={"ID":"b925d6b9-8b5c-4407-bd7b-9ddcbc62d78d","Type":"ContainerStarted","Data":"30352607cc42d152f7e595932bca688b1d98d76ed3cb4bceea083527bca6d270"} Feb 02 10:41:18 crc kubenswrapper[4782]: I0202 10:41:18.776347 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-l59tj" event={"ID":"9ee676ac-a60f-4855-949f-d3210f9314f5","Type":"ContainerStarted","Data":"a413a2212255ee67ee1968cf2c2f15fda2e4bb4fb57b18592790dbb3eb7eb9b3"} Feb 02 10:41:18 crc kubenswrapper[4782]: I0202 10:41:18.776413 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-l59tj" event={"ID":"9ee676ac-a60f-4855-949f-d3210f9314f5","Type":"ContainerStarted","Data":"197b793a725593dac94b13df26077c78c845077c65c1cf972dd5fa7577219469"} Feb 02 10:41:18 crc kubenswrapper[4782]: I0202 10:41:18.780289 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:41:18 crc kubenswrapper[4782]: E0202 10:41:18.782252 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:41:19.282117747 +0000 UTC m=+159.166310463 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:18 crc kubenswrapper[4782]: I0202 10:41:18.784686 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-wqm6f" event={"ID":"2fdb9068-c8eb-4a1d-b4ab-c3f2ed70e4c1","Type":"ContainerStarted","Data":"d8325544121c869765ab1664f09dc48dc5b720b7e1de88a2f461978b91312e60"} Feb 02 10:41:18 crc kubenswrapper[4782]: I0202 10:41:18.796628 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5ttkr" podStartSLOduration=130.796595485 podStartE2EDuration="2m10.796595485s" podCreationTimestamp="2026-02-02 10:39:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:41:18.793400593 +0000 UTC m=+158.677593319" watchObservedRunningTime="2026-02-02 10:41:18.796595485 +0000 UTC m=+158.680788201" Feb 02 10:41:18 crc kubenswrapper[4782]: I0202 10:41:18.815655 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-qsnhv" event={"ID":"4471bb99-24c2-45b0-bb05-3f3d59191e12","Type":"ContainerStarted","Data":"717ba9dced6c0626b628bed28b544692c3c8ebe7a4660947c967f142d44346f4"} Feb 02 10:41:18 crc kubenswrapper[4782]: I0202 10:41:18.817951 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-f8xts" event={"ID":"1180fe74-10a3-4aa0-b205-7f47597ef9b3","Type":"ContainerStarted","Data":"7a757a5e4e54824731e2cc739189983d506534b385165c14a4258138827b6176"} Feb 02 10:41:18 crc kubenswrapper[4782]: I0202 10:41:18.818086 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-f8xts" event={"ID":"1180fe74-10a3-4aa0-b205-7f47597ef9b3","Type":"ContainerStarted","Data":"999e687bc64e6cceb4540cd1247a0ae430a157441090e8bc6fc5467cf64e6ceb"} Feb 02 10:41:18 crc kubenswrapper[4782]: I0202 10:41:18.819319 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-qjf8d" event={"ID":"8224e4c0-380b-489a-98d8-ee1b15c1637a","Type":"ContainerStarted","Data":"c92e263b64370e3efc5e3743dc938a35c1906b7ddba58a86238a664180d22a79"} Feb 02 10:41:18 crc kubenswrapper[4782]: I0202 10:41:18.821708 4782 patch_prober.go:28] interesting pod/console-operator-58897d9998-qsnhv container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.38:8443/readyz\": dial tcp 10.217.0.38:8443: connect: connection refused" start-of-body= Feb 02 10:41:18 crc kubenswrapper[4782]: I0202 10:41:18.821781 4782 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-qsnhv" podUID="4471bb99-24c2-45b0-bb05-3f3d59191e12" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.38:8443/readyz\": dial tcp 10.217.0.38:8443: connect: 
connection refused" Feb 02 10:41:18 crc kubenswrapper[4782]: I0202 10:41:18.852448 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-rqqwp" podStartSLOduration=8.852421747 podStartE2EDuration="8.852421747s" podCreationTimestamp="2026-02-02 10:41:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:41:18.82723533 +0000 UTC m=+158.711428056" watchObservedRunningTime="2026-02-02 10:41:18.852421747 +0000 UTC m=+158.736614463" Feb 02 10:41:18 crc kubenswrapper[4782]: I0202 10:41:18.887321 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27" Feb 02 10:41:18 crc kubenswrapper[4782]: E0202 10:41:18.900539 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:41:19.400523266 +0000 UTC m=+159.284716172 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jxz27" (UID: "4877e80d-a6fe-4503-a64c-398815efa1e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:18 crc kubenswrapper[4782]: I0202 10:41:18.918458 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6pzt4" event={"ID":"3e5362ac-062f-4bf2-a0dc-e96b2750ab52","Type":"ContainerStarted","Data":"8bbfbbad540ae0b9faad8dbb2a178e82cb455cd8ae165b3b2177d148f02280e9"} Feb 02 10:41:18 crc kubenswrapper[4782]: I0202 10:41:18.936341 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6x4zp" event={"ID":"04bfeb66-d53c-4263-a149-e7e1d705f9d1","Type":"ContainerStarted","Data":"fa08662019279128f468ce78aa4b169bc4ad199a2b7aee35946f57f8e967b2e8"} Feb 02 10:41:18 crc kubenswrapper[4782]: I0202 10:41:18.937864 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6x4zp" Feb 02 10:41:18 crc kubenswrapper[4782]: I0202 10:41:18.939499 4782 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-6x4zp container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.43:5443/healthz\": dial tcp 10.217.0.43:5443: connect: connection refused" start-of-body= Feb 02 10:41:18 crc kubenswrapper[4782]: I0202 10:41:18.939546 4782 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6x4zp" podUID="04bfeb66-d53c-4263-a149-e7e1d705f9d1" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.43:5443/healthz\": dial tcp 10.217.0.43:5443: connect: connection refused" Feb 02 10:41:18 crc 
kubenswrapper[4782]: I0202 10:41:18.942394 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-jvpsj" podStartSLOduration=8.942379934 podStartE2EDuration="8.942379934s" podCreationTimestamp="2026-02-02 10:41:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:41:18.899073554 +0000 UTC m=+158.783266280" watchObservedRunningTime="2026-02-02 10:41:18.942379934 +0000 UTC m=+158.826572650" Feb 02 10:41:18 crc kubenswrapper[4782]: I0202 10:41:18.976726 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-rx8sj" event={"ID":"d9fd7961-c147-4fad-b4b7-75f5567976f2","Type":"ContainerStarted","Data":"c3b7c73b300d3a3c282ba3dc07a834af4aaad64a851ecf6f9f47686d2eadb747"} Feb 02 10:41:18 crc kubenswrapper[4782]: I0202 10:41:18.992702 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:41:18 crc kubenswrapper[4782]: E0202 10:41:18.994245 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:41:19.494219391 +0000 UTC m=+159.378412107 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:19 crc kubenswrapper[4782]: I0202 10:41:19.000030 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-84dzp" podStartSLOduration=131.000013288 podStartE2EDuration="2m11.000013288s" podCreationTimestamp="2026-02-02 10:39:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:41:18.997009721 +0000 UTC m=+158.881202447" watchObservedRunningTime="2026-02-02 10:41:19.000013288 +0000 UTC m=+158.884206014" Feb 02 10:41:19 crc kubenswrapper[4782]: I0202 10:41:19.000532 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fwkht" podStartSLOduration=131.000527103 podStartE2EDuration="2m11.000527103s" podCreationTimestamp="2026-02-02 10:39:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:41:18.944480075 +0000 UTC m=+158.828672791" watchObservedRunningTime="2026-02-02 10:41:19.000527103 +0000 UTC m=+158.884719819" Feb 02 10:41:19 crc kubenswrapper[4782]: I0202 10:41:19.007659 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-bhjwh" 
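The "Observed pod startup duration" entries are straightforward arithmetic: podStartSLOduration is watchObservedRunningTime minus podCreationTimestamp. For etcd-operator-b45778765-84dzp above, 10:41:19.000013288 minus 10:39:08 is 131.000013288s, i.e. 2m11.000013288s, exactly the logged podStartSLOduration and podStartE2EDuration. A small Go program reproducing that calculation with the timestamps taken from the log:

// latency.go - reproduces the pod_startup_latency_tracker arithmetic from the
// "Observed pod startup duration" lines above.
package main

import (
	"fmt"
	"time"
)

func main() {
	created, _ := time.Parse(time.RFC3339, "2026-02-02T10:39:08Z")
	watched, _ := time.Parse(time.RFC3339Nano, "2026-02-02T10:41:19.000013288Z")

	d := watched.Sub(created)
	fmt.Printf("podStartSLOduration=%.9f podStartE2EDuration=%s\n", d.Seconds(), d)
	// prints: podStartSLOduration=131.000013288 podStartE2EDuration=2m11.000013288s
}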
event={"ID":"4e9af173-4335-4ebd-9b11-dfb4180e968b","Type":"ContainerStarted","Data":"5f54dc86b98848ff22639b3fbcc26d55e544fe0cf2fe684543185333f500edfa"} Feb 02 10:41:19 crc kubenswrapper[4782]: I0202 10:41:19.019059 4782 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-dsb8s container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.34:8080/healthz\": dial tcp 10.217.0.34:8080: connect: connection refused" start-of-body= Feb 02 10:41:19 crc kubenswrapper[4782]: I0202 10:41:19.019124 4782 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-dsb8s" podUID="83c24a27-fdbe-468f-b4cf-780c87b598ae" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.34:8080/healthz\": dial tcp 10.217.0.34:8080: connect: connection refused" Feb 02 10:41:19 crc kubenswrapper[4782]: I0202 10:41:19.020102 4782 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-l2hps container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.14:8443/healthz\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body= Feb 02 10:41:19 crc kubenswrapper[4782]: I0202 10:41:19.020179 4782 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-l2hps" podUID="d6b03c59-eb07-4d99-beb5-04e1eb19c7bc" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.14:8443/healthz\": dial tcp 10.217.0.14:8443: connect: connection refused" Feb 02 10:41:19 crc kubenswrapper[4782]: I0202 10:41:19.046913 4782 patch_prober.go:28] interesting pod/router-default-5444994796-29qjf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 10:41:19 crc kubenswrapper[4782]: [-]has-synced failed: reason withheld Feb 02 10:41:19 crc kubenswrapper[4782]: [+]process-running ok Feb 02 10:41:19 crc kubenswrapper[4782]: healthz check failed Feb 02 10:41:19 crc kubenswrapper[4782]: I0202 10:41:19.046970 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-29qjf" podUID="fc962b97-f5d3-4673-9a39-8fbf6bc2424f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 10:41:19 crc kubenswrapper[4782]: I0202 10:41:19.060986 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6pzt4" podStartSLOduration=131.060968088 podStartE2EDuration="2m11.060968088s" podCreationTimestamp="2026-02-02 10:39:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:41:19.058758224 +0000 UTC m=+158.942950940" watchObservedRunningTime="2026-02-02 10:41:19.060968088 +0000 UTC m=+158.945160804" Feb 02 10:41:19 crc kubenswrapper[4782]: I0202 10:41:19.073255 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pvghb" Feb 02 10:41:19 crc kubenswrapper[4782]: I0202 10:41:19.097260 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27" Feb 02 10:41:19 crc kubenswrapper[4782]: I0202 10:41:19.098215 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6x4zp" podStartSLOduration=131.098194902 podStartE2EDuration="2m11.098194902s" podCreationTimestamp="2026-02-02 10:39:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:41:19.098021487 +0000 UTC m=+158.982214203" watchObservedRunningTime="2026-02-02 10:41:19.098194902 +0000 UTC m=+158.982387618" Feb 02 10:41:19 crc kubenswrapper[4782]: E0202 10:41:19.100289 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:41:19.600259352 +0000 UTC m=+159.484452238 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jxz27" (UID: "4877e80d-a6fe-4503-a64c-398815efa1e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:19 crc kubenswrapper[4782]: I0202 10:41:19.211410 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:41:19 crc kubenswrapper[4782]: E0202 10:41:19.211731 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:41:19.711710449 +0000 UTC m=+159.595903165 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:19 crc kubenswrapper[4782]: I0202 10:41:19.212094 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27" Feb 02 10:41:19 crc kubenswrapper[4782]: E0202 10:41:19.212463 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:41:19.712455091 +0000 UTC m=+159.596647807 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jxz27" (UID: "4877e80d-a6fe-4503-a64c-398815efa1e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:19 crc kubenswrapper[4782]: I0202 10:41:19.313542 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:41:19 crc kubenswrapper[4782]: E0202 10:41:19.313776 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:41:19.813740485 +0000 UTC m=+159.697933201 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:19 crc kubenswrapper[4782]: I0202 10:41:19.313911 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27" Feb 02 10:41:19 crc kubenswrapper[4782]: E0202 10:41:19.314331 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:41:19.814314602 +0000 UTC m=+159.698507318 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jxz27" (UID: "4877e80d-a6fe-4503-a64c-398815efa1e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:19 crc kubenswrapper[4782]: I0202 10:41:19.414981 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:41:19 crc kubenswrapper[4782]: E0202 10:41:19.415303 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:41:19.915256305 +0000 UTC m=+159.799449021 (durationBeforeRetry 500ms). 
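The strict alternation of "operationExecutor.UnmountVolume started" (for the terminated pod 8f668bae-...) and "operationExecutor.MountVolume started" (for the new image-registry pod) roughly every 100ms comes from kubelet's volume reconciler, which on each pass diffs the desired state of the world against the actual state and re-issues whatever operations are still outstanding. A grossly simplified sketch of that loop; the names and types here are invented for illustration:

// reconcile.go - a grossly simplified sketch of the reconciler pattern behind
// reconciler_common.go: each pass unmounts volumes no longer desired and
// mounts ones not yet actual; failed operations stay in the diff and are
// retried on a later pass (subject to the backoff sketched earlier).
package main

import "fmt"

func reconcile(desired, actual map[string]bool, mount, unmount func(string) error) {
	// Unmount anything attached that is no longer desired.
	for vol := range actual {
		if !desired[vol] {
			if err := unmount(vol); err == nil {
				delete(actual, vol)
			}
		}
	}
	// Mount anything desired that is not yet attached.
	for vol := range desired {
		if !actual[vol] {
			if err := mount(vol); err == nil {
				actual[vol] = true
			}
		}
	}
}

func main() {
	desired := map[string]bool{"pvc-657094db for image-registry-697d97f7c8-jxz27": true}
	actual := map[string]bool{"pvc-657094db for 8f668bae-612b-4b75-9490-919e737c6a3b": true}
	fail := func(v string) error { return fmt.Errorf("driver not registered: %s", v) }
	reconcile(desired, actual, fail, fail) // both operations fail, diff persists
	fmt.Println(desired, actual)           // unchanged; the next pass retries
}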
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:19 crc kubenswrapper[4782]: I0202 10:41:19.415365 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27" Feb 02 10:41:19 crc kubenswrapper[4782]: E0202 10:41:19.415937 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:41:19.915921204 +0000 UTC m=+159.800114110 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jxz27" (UID: "4877e80d-a6fe-4503-a64c-398815efa1e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:19 crc kubenswrapper[4782]: I0202 10:41:19.516791 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:41:19 crc kubenswrapper[4782]: E0202 10:41:19.517060 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:41:20.017024883 +0000 UTC m=+159.901217609 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:19 crc kubenswrapper[4782]: I0202 10:41:19.517481 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27" Feb 02 10:41:19 crc kubenswrapper[4782]: E0202 10:41:19.517969 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:41:20.0179571 +0000 UTC m=+159.902149996 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jxz27" (UID: "4877e80d-a6fe-4503-a64c-398815efa1e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:19 crc kubenswrapper[4782]: I0202 10:41:19.619761 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:41:19 crc kubenswrapper[4782]: E0202 10:41:19.620015 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:41:20.119970245 +0000 UTC m=+160.004162961 (durationBeforeRetry 500ms). 
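"attacher.MountDevice failed to create newCsiDriverClient" is the device-staging step of CSI mounting. Once the driver registers, kubelet dials the gRPC socket the driver advertised and issues NodeStageVolume (MountDevice) followed by NodePublishVolume (SetUp). A sketch of that call using the CSI spec's generated Go client; the socket path, staging path, and access mode below are illustrative assumptions, not values from this log:

// stage.go - a sketch of the gRPC call that the MountDevice step ultimately
// makes once the driver is registered. Paths and capability are assumed.
package main

import (
	"context"

	"github.com/container-storage-interface/spec/lib/go/csi"
	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
)

func main() {
	// Kubelet learns the socket path from the driver's registration.
	conn, err := grpc.Dial(
		"unix:///var/lib/kubelet/plugins/kubevirt.io.hostpath-provisioner/csi.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()),
	)
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	node := csi.NewNodeClient(conn)
	_, err = node.NodeStageVolume(context.TODO(), &csi.NodeStageVolumeRequest{
		VolumeId:          "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8",
		StagingTargetPath: "/var/lib/kubelet/plugins/kubernetes.io/csi/staging/globalmount", // assumed
		VolumeCapability: &csi.VolumeCapability{
			AccessType: &csi.VolumeCapability_Mount{Mount: &csi.VolumeCapability_MountVolume{}},
			AccessMode: &csi.VolumeCapability_AccessMode{
				Mode: csi.VolumeCapability_AccessMode_SINGLE_NODE_WRITER,
			},
		},
	})
	if err != nil {
		panic(err)
	}
}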
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:19 crc kubenswrapper[4782]: I0202 10:41:19.620075 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27" Feb 02 10:41:19 crc kubenswrapper[4782]: E0202 10:41:19.620486 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:41:20.120478799 +0000 UTC m=+160.004671515 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jxz27" (UID: "4877e80d-a6fe-4503-a64c-398815efa1e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:19 crc kubenswrapper[4782]: I0202 10:41:19.703696 4782 csr.go:261] certificate signing request csr-pg99q is approved, waiting to be issued Feb 02 10:41:19 crc kubenswrapper[4782]: I0202 10:41:19.714742 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-sc7kt" Feb 02 10:41:19 crc kubenswrapper[4782]: I0202 10:41:19.721454 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:41:19 crc kubenswrapper[4782]: E0202 10:41:19.721717 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:41:20.221669761 +0000 UTC m=+160.105862487 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:19 crc kubenswrapper[4782]: I0202 10:41:19.721807 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27" Feb 02 10:41:19 crc kubenswrapper[4782]: E0202 10:41:19.722177 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:41:20.222154985 +0000 UTC m=+160.106347871 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jxz27" (UID: "4877e80d-a6fe-4503-a64c-398815efa1e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:19 crc kubenswrapper[4782]: I0202 10:41:19.725984 4782 csr.go:257] certificate signing request csr-pg99q is issued Feb 02 10:41:19 crc kubenswrapper[4782]: I0202 10:41:19.823882 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:41:19 crc kubenswrapper[4782]: E0202 10:41:19.825939 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:41:20.32590973 +0000 UTC m=+160.210102466 (durationBeforeRetry 500ms). 
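Interleaved with the volume retries, csr.go records the tail of the kubelet serving-certificate flow: csr-pg99q is first "approved, waiting to be issued" (an Approved condition exists but the signer has not yet populated status.certificate) and, a few entries later, "issued". A sketch of how that state can be read back with client-go, assuming cluster access via KUBECONFIG:

// csrwait.go - a sketch of the "approved, waiting to be issued" distinction:
// a CSR is approved once an Approved condition appears, and issued only once
// the signer fills in status.certificate.
package main

import (
	"context"
	"fmt"
	"os"

	certsv1 "k8s.io/api/certificates/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	csr, err := cs.CertificatesV1().CertificateSigningRequests().
		Get(context.TODO(), "csr-pg99q", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	approved := false
	for _, c := range csr.Status.Conditions {
		if c.Type == certsv1.CertificateApproved {
			approved = true
		}
	}
	issued := len(csr.Status.Certificate) > 0
	fmt.Printf("approved=%v issued=%v\n", approved, issued)
}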
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:19 crc kubenswrapper[4782]: I0202 10:41:19.927798 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27" Feb 02 10:41:19 crc kubenswrapper[4782]: E0202 10:41:19.928302 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:41:20.428279925 +0000 UTC m=+160.312472821 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jxz27" (UID: "4877e80d-a6fe-4503-a64c-398815efa1e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:20 crc kubenswrapper[4782]: I0202 10:41:20.017836 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-qjf8d" event={"ID":"8224e4c0-380b-489a-98d8-ee1b15c1637a","Type":"ContainerStarted","Data":"78049b8c2c467db296fbf23b6b6764eb61edf61b04ad5643f450ed222e761cf9"} Feb 02 10:41:20 crc kubenswrapper[4782]: I0202 10:41:20.019963 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-rx8sj" event={"ID":"d9fd7961-c147-4fad-b4b7-75f5567976f2","Type":"ContainerStarted","Data":"7def0eee5994841bc5341fcd71852bcf40d32d7bcc0a8468a7f80ed43167a81f"} Feb 02 10:41:20 crc kubenswrapper[4782]: I0202 10:41:20.020320 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-rx8sj" Feb 02 10:41:20 crc kubenswrapper[4782]: I0202 10:41:20.023147 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-r7j2r" event={"ID":"db132fa2-cd84-4b44-b523-48b1af9f6f73","Type":"ContainerStarted","Data":"f552f5c209be1b2c3ad1e343b59cb7168fc39643cd80e3cfeda1a23d3929d75d"} Feb 02 10:41:20 crc kubenswrapper[4782]: I0202 10:41:20.026359 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-z8gmg" event={"ID":"1f4f42b8-506a-4922-b7c4-7f77afbb238c","Type":"ContainerStarted","Data":"091c99a165d85dd17c961c1351977ee8ed537641cc70661f69345a5c56b09859"} Feb 02 10:41:20 crc kubenswrapper[4782]: I0202 10:41:20.026550 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-z8gmg" event={"ID":"1f4f42b8-506a-4922-b7c4-7f77afbb238c","Type":"ContainerStarted","Data":"e7de695cdb44af327ebdcca6a7412363211e305bdea091b28ca57124a2c2fa76"} Feb 02 10:41:20 crc kubenswrapper[4782]: I0202 
10:41:20.028731 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-bhjwh" event={"ID":"4e9af173-4335-4ebd-9b11-dfb4180e968b","Type":"ContainerStarted","Data":"1bf7f946ec28a00eaf539ed76a6e2fddc7e533db23eccf088377ec0830372b07"} Feb 02 10:41:20 crc kubenswrapper[4782]: I0202 10:41:20.028938 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:41:20 crc kubenswrapper[4782]: I0202 10:41:20.029495 4782 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-x2mbg container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.18:8443/healthz\": dial tcp 10.217.0.18:8443: connect: connection refused" start-of-body= Feb 02 10:41:20 crc kubenswrapper[4782]: I0202 10:41:20.029550 4782 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-x2mbg" podUID="e457712f-8cc5-4167-b074-cd8713eb9989" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.18:8443/healthz\": dial tcp 10.217.0.18:8443: connect: connection refused" Feb 02 10:41:20 crc kubenswrapper[4782]: I0202 10:41:20.029613 4782 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-6x4zp container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.43:5443/healthz\": dial tcp 10.217.0.43:5443: connect: connection refused" start-of-body= Feb 02 10:41:20 crc kubenswrapper[4782]: I0202 10:41:20.029630 4782 patch_prober.go:28] interesting pod/console-operator-58897d9998-qsnhv container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.38:8443/readyz\": dial tcp 10.217.0.38:8443: connect: connection refused" start-of-body= Feb 02 10:41:20 crc kubenswrapper[4782]: I0202 10:41:20.029688 4782 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6x4zp" podUID="04bfeb66-d53c-4263-a149-e7e1d705f9d1" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.43:5443/healthz\": dial tcp 10.217.0.43:5443: connect: connection refused" Feb 02 10:41:20 crc kubenswrapper[4782]: I0202 10:41:20.029694 4782 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-qsnhv" podUID="4471bb99-24c2-45b0-bb05-3f3d59191e12" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.38:8443/readyz\": dial tcp 10.217.0.38:8443: connect: connection refused" Feb 02 10:41:20 crc kubenswrapper[4782]: E0202 10:41:20.031395 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:41:20.531367822 +0000 UTC m=+160.415560538 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:20 crc kubenswrapper[4782]: I0202 10:41:20.031813 4782 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-7v92z container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Feb 02 10:41:20 crc kubenswrapper[4782]: I0202 10:41:20.031863 4782 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-7v92z" podUID="acfa5788-ab19-4e50-bc93-31b7a5069b32" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" Feb 02 10:41:20 crc kubenswrapper[4782]: I0202 10:41:20.033873 4782 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-dsb8s container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.34:8080/healthz\": dial tcp 10.217.0.34:8080: connect: connection refused" start-of-body= Feb 02 10:41:20 crc kubenswrapper[4782]: I0202 10:41:20.033922 4782 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-dsb8s" podUID="83c24a27-fdbe-468f-b4cf-780c87b598ae" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.34:8080/healthz\": dial tcp 10.217.0.34:8080: connect: connection refused" Feb 02 10:41:20 crc kubenswrapper[4782]: I0202 10:41:20.062957 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-qjf8d" podStartSLOduration=132.062925563 podStartE2EDuration="2m12.062925563s" podCreationTimestamp="2026-02-02 10:39:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:41:20.060962976 +0000 UTC m=+159.945155692" watchObservedRunningTime="2026-02-02 10:41:20.062925563 +0000 UTC m=+159.947118279" Feb 02 10:41:20 crc kubenswrapper[4782]: I0202 10:41:20.063794 4782 patch_prober.go:28] interesting pod/router-default-5444994796-29qjf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 10:41:20 crc kubenswrapper[4782]: [-]has-synced failed: reason withheld Feb 02 10:41:20 crc kubenswrapper[4782]: [+]process-running ok Feb 02 10:41:20 crc kubenswrapper[4782]: healthz check failed Feb 02 10:41:20 crc kubenswrapper[4782]: I0202 10:41:20.064271 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-29qjf" podUID="fc962b97-f5d3-4673-9a39-8fbf6bc2424f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 10:41:20 crc kubenswrapper[4782]: I0202 10:41:20.104970 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-controller-manager/controller-manager-879f6c89f-l2hps" Feb 02 10:41:20 crc kubenswrapper[4782]: I0202 10:41:20.131330 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27" Feb 02 10:41:20 crc kubenswrapper[4782]: E0202 10:41:20.141834 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:41:20.64181683 +0000 UTC m=+160.526009546 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jxz27" (UID: "4877e80d-a6fe-4503-a64c-398815efa1e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:20 crc kubenswrapper[4782]: I0202 10:41:20.151789 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-bpzbh" podStartSLOduration=132.151761757 podStartE2EDuration="2m12.151761757s" podCreationTimestamp="2026-02-02 10:39:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:41:20.132045328 +0000 UTC m=+160.016238044" watchObservedRunningTime="2026-02-02 10:41:20.151761757 +0000 UTC m=+160.035954473" Feb 02 10:41:20 crc kubenswrapper[4782]: I0202 10:41:20.242543 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:41:20 crc kubenswrapper[4782]: E0202 10:41:20.242953 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:41:20.742934169 +0000 UTC m=+160.627126885 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 10:41:20 crc kubenswrapper[4782]: I0202 10:41:20.344801 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27"
Feb 02 10:41:20 crc kubenswrapper[4782]: E0202 10:41:20.345827 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:41:20.845787239 +0000 UTC m=+160.729979955 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jxz27" (UID: "4877e80d-a6fe-4503-a64c-398815efa1e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 10:41:20 crc kubenswrapper[4782]: I0202 10:41:20.347972 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-z8gmg" podStartSLOduration=133.347959991 podStartE2EDuration="2m13.347959991s" podCreationTimestamp="2026-02-02 10:39:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:41:20.227064821 +0000 UTC m=+160.111257537" watchObservedRunningTime="2026-02-02 10:41:20.347959991 +0000 UTC m=+160.232152707"
Feb 02 10:41:20 crc kubenswrapper[4782]: I0202 10:41:20.348135 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-rx8sj" podStartSLOduration=10.348131466 podStartE2EDuration="10.348131466s" podCreationTimestamp="2026-02-02 10:41:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:41:20.345603963 +0000 UTC m=+160.229796689" watchObservedRunningTime="2026-02-02 10:41:20.348131466 +0000 UTC m=+160.232324182"
Feb 02 10:41:20 crc kubenswrapper[4782]: E0202 10:41:20.448991 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:41:20.948942897 +0000 UTC m=+160.833135613 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 10:41:20 crc kubenswrapper[4782]: I0202 10:41:20.448831 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 10:41:20 crc kubenswrapper[4782]: I0202 10:41:20.450719 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27"
Feb 02 10:41:20 crc kubenswrapper[4782]: E0202 10:41:20.452757 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:41:20.952724436 +0000 UTC m=+160.836917152 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jxz27" (UID: "4877e80d-a6fe-4503-a64c-398815efa1e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 10:41:20 crc kubenswrapper[4782]: I0202 10:41:20.525924 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-f8xts" podStartSLOduration=132.525891148 podStartE2EDuration="2m12.525891148s" podCreationTimestamp="2026-02-02 10:39:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:41:20.416302024 +0000 UTC m=+160.300494740" watchObservedRunningTime="2026-02-02 10:41:20.525891148 +0000 UTC m=+160.410083884"
Feb 02 10:41:20 crc kubenswrapper[4782]: I0202 10:41:20.528215 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-bhjwh" podStartSLOduration=132.528196495 podStartE2EDuration="2m12.528196495s" podCreationTimestamp="2026-02-02 10:39:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:41:20.520381839 +0000 UTC m=+160.404574565" watchObservedRunningTime="2026-02-02 10:41:20.528196495 +0000 UTC m=+160.412389211"
Feb 02 10:41:20 crc kubenswrapper[4782]: I0202 10:41:20.551924 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 10:41:20 crc kubenswrapper[4782]: E0202 10:41:20.552160 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:41:21.052123285 +0000 UTC m=+160.936316001 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 10:41:20 crc kubenswrapper[4782]: I0202 10:41:20.552845 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27"
Feb 02 10:41:20 crc kubenswrapper[4782]: E0202 10:41:20.553451 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:41:21.053426723 +0000 UTC m=+160.937619439 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jxz27" (UID: "4877e80d-a6fe-4503-a64c-398815efa1e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 10:41:20 crc kubenswrapper[4782]: I0202 10:41:20.654137 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 10:41:20 crc kubenswrapper[4782]: E0202 10:41:20.654412 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:41:21.154367167 +0000 UTC m=+161.038559893 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 10:41:20 crc kubenswrapper[4782]: I0202 10:41:20.654788 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27"
Feb 02 10:41:20 crc kubenswrapper[4782]: E0202 10:41:20.655127 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:41:21.155110879 +0000 UTC m=+161.039303595 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jxz27" (UID: "4877e80d-a6fe-4503-a64c-398815efa1e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 10:41:20 crc kubenswrapper[4782]: I0202 10:41:20.727846 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-02-02 10:36:19 +0000 UTC, rotation deadline is 2026-12-09 07:15:07.159858635 +0000 UTC
Feb 02 10:41:20 crc kubenswrapper[4782]: I0202 10:41:20.727908 4782 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 7436h33m46.431956005s for next certificate rotation
Feb 02 10:41:20 crc kubenswrapper[4782]: I0202 10:41:20.731449 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-l59tj" podStartSLOduration=132.731419442 podStartE2EDuration="2m12.731419442s" podCreationTimestamp="2026-02-02 10:39:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:41:20.60077973 +0000 UTC m=+160.484972456" watchObservedRunningTime="2026-02-02 10:41:20.731419442 +0000 UTC m=+160.615612158"
Feb 02 10:41:20 crc kubenswrapper[4782]: I0202 10:41:20.756392 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 10:41:20 crc kubenswrapper[4782]: E0202 10:41:20.756846 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:41:21.256822415 +0000 UTC m=+161.141015131 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 10:41:20 crc kubenswrapper[4782]: I0202 10:41:20.858336 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27"
Feb 02 10:41:20 crc kubenswrapper[4782]: E0202 10:41:20.858868 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:41:21.35884467 +0000 UTC m=+161.243037386 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jxz27" (UID: "4877e80d-a6fe-4503-a64c-398815efa1e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 10:41:20 crc kubenswrapper[4782]: I0202 10:41:20.885161 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Feb 02 10:41:20 crc kubenswrapper[4782]: I0202 10:41:20.886694 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 02 10:41:20 crc kubenswrapper[4782]: I0202 10:41:20.951428 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n"
Feb 02 10:41:20 crc kubenswrapper[4782]: I0202 10:41:20.951877 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt"
Feb 02 10:41:20 crc kubenswrapper[4782]: I0202 10:41:20.956907 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Feb 02 10:41:20 crc kubenswrapper[4782]: I0202 10:41:20.961089 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 10:41:20 crc kubenswrapper[4782]: E0202 10:41:20.961375 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:41:21.461331629 +0000 UTC m=+161.345524345 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 10:41:20 crc kubenswrapper[4782]: I0202 10:41:20.961501 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27"
Feb 02 10:41:20 crc kubenswrapper[4782]: E0202 10:41:20.961917 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:41:21.461906066 +0000 UTC m=+161.346098962 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jxz27" (UID: "4877e80d-a6fe-4503-a64c-398815efa1e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 10:41:21 crc kubenswrapper[4782]: I0202 10:41:21.057183 4782 patch_prober.go:28] interesting pod/router-default-5444994796-29qjf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 02 10:41:21 crc kubenswrapper[4782]: [-]has-synced failed: reason withheld
Feb 02 10:41:21 crc kubenswrapper[4782]: [+]process-running ok
Feb 02 10:41:21 crc kubenswrapper[4782]: healthz check failed
Feb 02 10:41:21 crc kubenswrapper[4782]: I0202 10:41:21.057259 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-29qjf" podUID="fc962b97-f5d3-4673-9a39-8fbf6bc2424f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 02 10:41:21 crc kubenswrapper[4782]: I0202 10:41:21.062837 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 10:41:21 crc kubenswrapper[4782]: I0202 10:41:21.063325 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/310e3ec7-7f7e-4fc6-b71b-1bb431c81fc6-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"310e3ec7-7f7e-4fc6-b71b-1bb431c81fc6\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 02 10:41:21 crc kubenswrapper[4782]: I0202 10:41:21.063396 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/310e3ec7-7f7e-4fc6-b71b-1bb431c81fc6-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"310e3ec7-7f7e-4fc6-b71b-1bb431c81fc6\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 02 10:41:21 crc kubenswrapper[4782]: E0202 10:41:21.077092 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:41:21.563619532 +0000 UTC m=+161.447812248 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 10:41:21 crc kubenswrapper[4782]: I0202 10:41:21.099149 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-x2mbg"
Feb 02 10:41:21 crc kubenswrapper[4782]: I0202 10:41:21.168377 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/310e3ec7-7f7e-4fc6-b71b-1bb431c81fc6-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"310e3ec7-7f7e-4fc6-b71b-1bb431c81fc6\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 02 10:41:21 crc kubenswrapper[4782]: I0202 10:41:21.168564 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/310e3ec7-7f7e-4fc6-b71b-1bb431c81fc6-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"310e3ec7-7f7e-4fc6-b71b-1bb431c81fc6\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 02 10:41:21 crc kubenswrapper[4782]: I0202 10:41:21.168702 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27"
Feb 02 10:41:21 crc kubenswrapper[4782]: E0202 10:41:21.170518 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:41:21.670497388 +0000 UTC m=+161.554690104 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jxz27" (UID: "4877e80d-a6fe-4503-a64c-398815efa1e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 10:41:21 crc kubenswrapper[4782]: I0202 10:41:21.170601 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/310e3ec7-7f7e-4fc6-b71b-1bb431c81fc6-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"310e3ec7-7f7e-4fc6-b71b-1bb431c81fc6\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 02 10:41:21 crc kubenswrapper[4782]: I0202 10:41:21.271582 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 10:41:21 crc kubenswrapper[4782]: E0202 10:41:21.272101 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:41:21.77207707 +0000 UTC m=+161.656269786 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 10:41:21 crc kubenswrapper[4782]: I0202 10:41:21.272392 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27"
Feb 02 10:41:21 crc kubenswrapper[4782]: E0202 10:41:21.272794 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:41:21.772786161 +0000 UTC m=+161.656978867 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jxz27" (UID: "4877e80d-a6fe-4503-a64c-398815efa1e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 10:41:21 crc kubenswrapper[4782]: I0202 10:41:21.302563 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/310e3ec7-7f7e-4fc6-b71b-1bb431c81fc6-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"310e3ec7-7f7e-4fc6-b71b-1bb431c81fc6\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 02 10:41:21 crc kubenswrapper[4782]: I0202 10:41:21.373657 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 10:41:21 crc kubenswrapper[4782]: E0202 10:41:21.374015 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:41:21.873970372 +0000 UTC m=+161.758163108 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 10:41:21 crc kubenswrapper[4782]: I0202 10:41:21.476332 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27"
Feb 02 10:41:21 crc kubenswrapper[4782]: E0202 10:41:21.476785 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:41:21.976764989 +0000 UTC m=+161.860957705 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jxz27" (UID: "4877e80d-a6fe-4503-a64c-398815efa1e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 10:41:21 crc kubenswrapper[4782]: I0202 10:41:21.509691 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 02 10:41:21 crc kubenswrapper[4782]: I0202 10:41:21.577368 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 10:41:21 crc kubenswrapper[4782]: E0202 10:41:21.578275 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:41:22.077556429 +0000 UTC m=+161.961749155 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 10:41:21 crc kubenswrapper[4782]: I0202 10:41:21.578744 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27"
Feb 02 10:41:21 crc kubenswrapper[4782]: E0202 10:41:21.579144 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:41:22.079127544 +0000 UTC m=+161.963320250 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jxz27" (UID: "4877e80d-a6fe-4503-a64c-398815efa1e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 10:41:21 crc kubenswrapper[4782]: I0202 10:41:21.680210 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 10:41:21 crc kubenswrapper[4782]: E0202 10:41:21.680381 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:41:22.180357677 +0000 UTC m=+162.064550393 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 10:41:21 crc kubenswrapper[4782]: I0202 10:41:21.680472 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27"
Feb 02 10:41:21 crc kubenswrapper[4782]: E0202 10:41:21.680798 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:41:22.18079139 +0000 UTC m=+162.064984106 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jxz27" (UID: "4877e80d-a6fe-4503-a64c-398815efa1e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 10:41:21 crc kubenswrapper[4782]: I0202 10:41:21.781368 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 10:41:21 crc kubenswrapper[4782]: E0202 10:41:21.781849 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:41:22.281823466 +0000 UTC m=+162.166016182 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 10:41:21 crc kubenswrapper[4782]: I0202 10:41:21.830322 4782 patch_prober.go:28] interesting pod/downloads-7954f5f757-4b45h container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" start-of-body=
Feb 02 10:41:21 crc kubenswrapper[4782]: I0202 10:41:21.830749 4782 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-4b45h" podUID="e74c7e17-c70b-4637-ad47-58e1e192c52e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused"
Feb 02 10:41:21 crc kubenswrapper[4782]: I0202 10:41:21.830345 4782 patch_prober.go:28] interesting pod/downloads-7954f5f757-4b45h container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" start-of-body=
Feb 02 10:41:21 crc kubenswrapper[4782]: I0202 10:41:21.830876 4782 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-4b45h" podUID="e74c7e17-c70b-4637-ad47-58e1e192c52e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused"
Feb 02 10:41:21 crc kubenswrapper[4782]: I0202 10:41:21.864608 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fwkht"
Feb 02 10:41:21 crc kubenswrapper[4782]: I0202 10:41:21.864683 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fwkht"
Feb 02 10:41:21 crc kubenswrapper[4782]: I0202 10:41:21.882855 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27"
Feb 02 10:41:21 crc kubenswrapper[4782]: E0202 10:41:21.883316 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:41:22.383299776 +0000 UTC m=+162.267492492 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jxz27" (UID: "4877e80d-a6fe-4503-a64c-398815efa1e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 10:41:21 crc kubenswrapper[4782]: I0202 10:41:21.923813 4782 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-7v92z container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 02 10:41:21 crc kubenswrapper[4782]: I0202 10:41:21.923934 4782 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-7v92z" podUID="acfa5788-ab19-4e50-bc93-31b7a5069b32" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 02 10:41:21 crc kubenswrapper[4782]: I0202 10:41:21.934092 4782 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-7v92z container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 02 10:41:21 crc kubenswrapper[4782]: I0202 10:41:21.934196 4782 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-7v92z" podUID="acfa5788-ab19-4e50-bc93-31b7a5069b32" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 02 10:41:21 crc kubenswrapper[4782]: I0202 10:41:21.935516 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-96t4g"
Feb 02 10:41:21 crc kubenswrapper[4782]: I0202 10:41:21.968944 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-96t4g"
Feb 02 10:41:21 crc kubenswrapper[4782]: I0202 10:41:21.970392 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-sf9m8"
Feb 02 10:41:21 crc kubenswrapper[4782]: I0202 10:41:21.970632 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-sf9m8"
Feb 02 10:41:21 crc kubenswrapper[4782]: I0202 10:41:21.977408 4782 patch_prober.go:28] interesting pod/console-f9d7485db-sf9m8 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.22:8443/health\": dial tcp 10.217.0.22:8443: connect: connection refused" start-of-body=
Feb 02 10:41:21 crc kubenswrapper[4782]: I0202 10:41:21.977476 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-sf9m8" podUID="76afda26-696c-4996-bc58-1c928e4fa92a" containerName="console" probeResult="failure" output="Get \"https://10.217.0.22:8443/health\": dial tcp 10.217.0.22:8443: connect: connection refused"
Feb 02 10:41:21 crc kubenswrapper[4782]: I0202 10:41:21.992440 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 10:41:21 crc kubenswrapper[4782]: E0202 10:41:21.992596 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:41:22.49256529 +0000 UTC m=+162.376757996 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 10:41:21 crc kubenswrapper[4782]: I0202 10:41:21.992939 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27"
Feb 02 10:41:21 crc kubenswrapper[4782]: I0202 10:41:21.993002 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-z8gmg"
Feb 02 10:41:21 crc kubenswrapper[4782]: I0202 10:41:21.993028 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-z8gmg"
Feb 02 10:41:21 crc kubenswrapper[4782]: E0202 10:41:21.993396 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:41:22.493389004 +0000 UTC m=+162.377581720 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jxz27" (UID: "4877e80d-a6fe-4503-a64c-398815efa1e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 10:41:22 crc kubenswrapper[4782]: I0202 10:41:22.000800 4782 patch_prober.go:28] interesting pod/apiserver-76f77b778f-z8gmg container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="Get \"https://10.217.0.28:8443/livez\": dial tcp 10.217.0.28:8443: connect: connection refused" start-of-body=
Feb 02 10:41:22 crc kubenswrapper[4782]: I0202 10:41:22.000879 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-z8gmg" podUID="1f4f42b8-506a-4922-b7c4-7f77afbb238c" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.28:8443/livez\": dial tcp 10.217.0.28:8443: connect: connection refused"
Feb 02 10:41:22 crc kubenswrapper[4782]: I0202 10:41:22.038783 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-29qjf"
Feb 02 10:41:22 crc kubenswrapper[4782]: I0202 10:41:22.069276 4782 patch_prober.go:28] interesting pod/router-default-5444994796-29qjf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 02 10:41:22 crc kubenswrapper[4782]: [-]has-synced failed: reason withheld
Feb 02 10:41:22 crc kubenswrapper[4782]: [+]process-running ok
Feb 02 10:41:22 crc kubenswrapper[4782]: healthz check failed
Feb 02 10:41:22 crc kubenswrapper[4782]: I0202 10:41:22.069360 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-29qjf" podUID="fc962b97-f5d3-4673-9a39-8fbf6bc2424f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 02 10:41:22 crc kubenswrapper[4782]: I0202 10:41:22.075966 4782 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-6x4zp container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.43:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 02 10:41:22 crc kubenswrapper[4782]: I0202 10:41:22.076038 4782 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6x4zp" podUID="04bfeb66-d53c-4263-a149-e7e1d705f9d1" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.43:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 02 10:41:22 crc kubenswrapper[4782]: I0202 10:41:22.105934 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 10:41:22 crc kubenswrapper[4782]: E0202 10:41:22.107610 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:41:22.607579211 +0000 UTC m=+162.491771927 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 10:41:22 crc kubenswrapper[4782]: I0202 10:41:22.208682 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27"
Feb 02 10:41:22 crc kubenswrapper[4782]: E0202 10:41:22.209045 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:41:22.709028709 +0000 UTC m=+162.593221425 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jxz27" (UID: "4877e80d-a6fe-4503-a64c-398815efa1e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 10:41:22 crc kubenswrapper[4782]: I0202 10:41:22.309247 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 10:41:22 crc kubenswrapper[4782]: E0202 10:41:22.309470 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:41:22.809434358 +0000 UTC m=+162.693627074 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 10:41:22 crc kubenswrapper[4782]: I0202 10:41:22.309661 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27"
Feb 02 10:41:22 crc kubenswrapper[4782]: E0202 10:41:22.310184 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:41:22.81017707 +0000 UTC m=+162.694369786 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jxz27" (UID: "4877e80d-a6fe-4503-a64c-398815efa1e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 10:41:22 crc kubenswrapper[4782]: I0202 10:41:22.414316 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 10:41:22 crc kubenswrapper[4782]: E0202 10:41:22.414830 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:41:22.91479215 +0000 UTC m=+162.798984866 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 10:41:22 crc kubenswrapper[4782]: I0202 10:41:22.414889 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27"
Feb 02 10:41:22 crc kubenswrapper[4782]: E0202 10:41:22.415331 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:41:22.915323545 +0000 UTC m=+162.799516261 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jxz27" (UID: "4877e80d-a6fe-4503-a64c-398815efa1e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 10:41:22 crc kubenswrapper[4782]: I0202 10:41:22.516413 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 10:41:22 crc kubenswrapper[4782]: E0202 10:41:22.516549 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:41:23.016523407 +0000 UTC m=+162.900716123 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 10:41:22 crc kubenswrapper[4782]: I0202 10:41:22.516687 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27"
Feb 02 10:41:22 crc kubenswrapper[4782]: E0202 10:41:22.517041 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:41:23.017031451 +0000 UTC m=+162.901224167 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jxz27" (UID: "4877e80d-a6fe-4503-a64c-398815efa1e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 10:41:22 crc kubenswrapper[4782]: I0202 10:41:22.534335 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fwkht"
Feb 02 10:41:22 crc kubenswrapper[4782]: I0202 10:41:22.544702 4782 patch_prober.go:28] interesting pod/apiserver-7bbb656c7d-fwkht container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Feb 02 10:41:22 crc kubenswrapper[4782]: [+]log ok
Feb 02 10:41:22 crc kubenswrapper[4782]: [+]etcd ok
Feb 02 10:41:22 crc kubenswrapper[4782]: [+]etcd-readiness ok
Feb 02 10:41:22 crc kubenswrapper[4782]: [+]poststarthook/start-apiserver-admission-initializer ok
Feb 02 10:41:22 crc kubenswrapper[4782]: [-]informer-sync failed: reason withheld
Feb 02 10:41:22 crc kubenswrapper[4782]: [+]poststarthook/generic-apiserver-start-informers ok
Feb 02 10:41:22 crc kubenswrapper[4782]: [+]poststarthook/max-in-flight-filter ok
Feb 02 10:41:22 crc kubenswrapper[4782]: [+]poststarthook/storage-object-count-tracker-hook ok
Feb 02 10:41:22 crc kubenswrapper[4782]: [+]poststarthook/openshift.io-StartUserInformer ok
Feb 02 10:41:22 crc kubenswrapper[4782]: [+]poststarthook/openshift.io-StartOAuthInformer ok
Feb 02 10:41:22 crc kubenswrapper[4782]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok
Feb 02 10:41:22 crc kubenswrapper[4782]: [+]shutdown ok
Feb 02 10:41:22 crc kubenswrapper[4782]: readyz check failed
Feb 02 10:41:22 crc kubenswrapper[4782]: I0202 10:41:22.544807 4782 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fwkht" podUID="082079e0-8d5a-4d2e-959e-0366e4787bd5" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 02 10:41:22 crc kubenswrapper[4782]: I0202 10:41:22.617478 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 10:41:22 crc kubenswrapper[4782]: E0202 10:41:22.617720 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:41:23.117684557 +0000 UTC m=+163.001877273 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 10:41:22 crc kubenswrapper[4782]: I0202 10:41:22.623474 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Feb 02 10:41:22 crc kubenswrapper[4782]: I0202 10:41:22.725662 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27"
Feb 02 10:41:22 crc kubenswrapper[4782]: E0202 10:41:22.726000 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:41:23.225983564 +0000 UTC m=+163.110176280 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jxz27" (UID: "4877e80d-a6fe-4503-a64c-398815efa1e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 10:41:22 crc kubenswrapper[4782]: I0202 10:41:22.827560 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 10:41:22 crc kubenswrapper[4782]: E0202 10:41:22.857536 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:41:23.357502191 +0000 UTC m=+163.241694907 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 10:41:22 crc kubenswrapper[4782]: I0202 10:41:22.946477 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27"
Feb 02 10:41:22 crc kubenswrapper[4782]: E0202 10:41:22.946958 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:41:23.446941992 +0000 UTC m=+163.331134708 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jxz27" (UID: "4877e80d-a6fe-4503-a64c-398815efa1e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 10:41:22 crc kubenswrapper[4782]: I0202 10:41:22.953258 4782 patch_prober.go:28] interesting pod/machine-config-daemon-bhdgk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 02 10:41:22 crc kubenswrapper[4782]: I0202 10:41:22.953459 4782 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 02 10:41:23 crc kubenswrapper[4782]: I0202 10:41:23.040278 4782 patch_prober.go:28] interesting pod/router-default-5444994796-29qjf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 02 10:41:23 crc kubenswrapper[4782]: [-]has-synced failed: reason withheld
Feb 02 10:41:23 crc kubenswrapper[4782]: [+]process-running ok
Feb 02 10:41:23 crc kubenswrapper[4782]: healthz check failed
Feb 02 10:41:23 crc kubenswrapper[4782]: I0202 10:41:23.040352 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-29qjf" podUID="fc962b97-f5d3-4673-9a39-8fbf6bc2424f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 02 10:41:23 crc kubenswrapper[4782]: I0202 10:41:23.047488 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 10:41:23 crc kubenswrapper[4782]: E0202 10:41:23.048013 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:41:23.547988179 +0000 UTC m=+163.432180895 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 10:41:23 crc kubenswrapper[4782]: I0202 10:41:23.097449 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"310e3ec7-7f7e-4fc6-b71b-1bb431c81fc6","Type":"ContainerStarted","Data":"2034a31847b2f294c83ea2d9717b4e214f2b19b44188b148a7af71f3915c9fe9"}
Feb 02 10:41:23 crc kubenswrapper[4782]: I0202 10:41:23.099937 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-r7j2r" event={"ID":"db132fa2-cd84-4b44-b523-48b1af9f6f73","Type":"ContainerStarted","Data":"6db49e8223f961f04f23da7614a5c5219befb4b782386050030a003079a4d672"}
Feb 02 10:41:23 crc kubenswrapper[4782]: I0202 10:41:23.149099 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27"
Feb 02 10:41:23 crc kubenswrapper[4782]: E0202 10:41:23.149728 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:41:23.649701155 +0000 UTC m=+163.533894051 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jxz27" (UID: "4877e80d-a6fe-4503-a64c-398815efa1e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 10:41:23 crc kubenswrapper[4782]: I0202 10:41:23.250554 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 10:41:23 crc kubenswrapper[4782]: E0202 10:41:23.250719 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:41:23.75067887 +0000 UTC m=+163.634871586 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 10:41:23 crc kubenswrapper[4782]: I0202 10:41:23.251235 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27"
Feb 02 10:41:23 crc kubenswrapper[4782]: E0202 10:41:23.251591 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:41:23.751580786 +0000 UTC m=+163.635773502 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jxz27" (UID: "4877e80d-a6fe-4503-a64c-398815efa1e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 10:41:23 crc kubenswrapper[4782]: I0202 10:41:23.265924 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-8vzzf"]
Feb 02 10:41:23 crc kubenswrapper[4782]: I0202 10:41:23.267098 4782 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/certified-operators-8vzzf" Feb 02 10:41:23 crc kubenswrapper[4782]: I0202 10:41:23.271254 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 02 10:41:23 crc kubenswrapper[4782]: I0202 10:41:23.284984 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 02 10:41:23 crc kubenswrapper[4782]: I0202 10:41:23.285739 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 02 10:41:23 crc kubenswrapper[4782]: I0202 10:41:23.307308 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-lxwg2"] Feb 02 10:41:23 crc kubenswrapper[4782]: I0202 10:41:23.308107 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 02 10:41:23 crc kubenswrapper[4782]: I0202 10:41:23.308387 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 02 10:41:23 crc kubenswrapper[4782]: I0202 10:41:23.308553 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lxwg2" Feb 02 10:41:23 crc kubenswrapper[4782]: I0202 10:41:23.315992 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 02 10:41:23 crc kubenswrapper[4782]: I0202 10:41:23.321459 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 02 10:41:23 crc kubenswrapper[4782]: I0202 10:41:23.323867 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8vzzf"] Feb 02 10:41:23 crc kubenswrapper[4782]: I0202 10:41:23.344433 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lxwg2"] Feb 02 10:41:23 crc kubenswrapper[4782]: I0202 10:41:23.352518 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:41:23 crc kubenswrapper[4782]: E0202 10:41:23.352819 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:41:23.852779598 +0000 UTC m=+163.736972314 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:23 crc kubenswrapper[4782]: I0202 10:41:23.352868 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf4514b1-5bb5-4e13-8c79-9bdbf62d6cde-catalog-content\") pod \"certified-operators-8vzzf\" (UID: \"cf4514b1-5bb5-4e13-8c79-9bdbf62d6cde\") " pod="openshift-marketplace/certified-operators-8vzzf" Feb 02 10:41:23 crc kubenswrapper[4782]: I0202 10:41:23.352912 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/eb0fd85c-ce56-4874-989e-20a0c304efd1-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"eb0fd85c-ce56-4874-989e-20a0c304efd1\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 02 10:41:23 crc kubenswrapper[4782]: I0202 10:41:23.352933 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/eb0fd85c-ce56-4874-989e-20a0c304efd1-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"eb0fd85c-ce56-4874-989e-20a0c304efd1\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 02 10:41:23 crc kubenswrapper[4782]: I0202 10:41:23.352949 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10039944-73fc-417b-925f-48a2985c277d-utilities\") pod \"community-operators-lxwg2\" (UID: \"10039944-73fc-417b-925f-48a2985c277d\") " pod="openshift-marketplace/community-operators-lxwg2" Feb 02 10:41:23 crc kubenswrapper[4782]: I0202 10:41:23.352977 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svwgz\" (UniqueName: \"kubernetes.io/projected/10039944-73fc-417b-925f-48a2985c277d-kube-api-access-svwgz\") pod \"community-operators-lxwg2\" (UID: \"10039944-73fc-417b-925f-48a2985c277d\") " pod="openshift-marketplace/community-operators-lxwg2" Feb 02 10:41:23 crc kubenswrapper[4782]: I0202 10:41:23.353047 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27" Feb 02 10:41:23 crc kubenswrapper[4782]: I0202 10:41:23.353072 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10039944-73fc-417b-925f-48a2985c277d-catalog-content\") pod \"community-operators-lxwg2\" (UID: \"10039944-73fc-417b-925f-48a2985c277d\") " pod="openshift-marketplace/community-operators-lxwg2" Feb 02 10:41:23 crc kubenswrapper[4782]: I0202 10:41:23.353114 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-kf98q\" (UniqueName: \"kubernetes.io/projected/cf4514b1-5bb5-4e13-8c79-9bdbf62d6cde-kube-api-access-kf98q\") pod \"certified-operators-8vzzf\" (UID: \"cf4514b1-5bb5-4e13-8c79-9bdbf62d6cde\") " pod="openshift-marketplace/certified-operators-8vzzf" Feb 02 10:41:23 crc kubenswrapper[4782]: I0202 10:41:23.353135 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf4514b1-5bb5-4e13-8c79-9bdbf62d6cde-utilities\") pod \"certified-operators-8vzzf\" (UID: \"cf4514b1-5bb5-4e13-8c79-9bdbf62d6cde\") " pod="openshift-marketplace/certified-operators-8vzzf" Feb 02 10:41:23 crc kubenswrapper[4782]: E0202 10:41:23.353490 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:41:23.853480958 +0000 UTC m=+163.737673674 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jxz27" (UID: "4877e80d-a6fe-4503-a64c-398815efa1e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:23 crc kubenswrapper[4782]: I0202 10:41:23.455409 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:41:23 crc kubenswrapper[4782]: I0202 10:41:23.455765 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/eb0fd85c-ce56-4874-989e-20a0c304efd1-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"eb0fd85c-ce56-4874-989e-20a0c304efd1\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 02 10:41:23 crc kubenswrapper[4782]: I0202 10:41:23.455808 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/eb0fd85c-ce56-4874-989e-20a0c304efd1-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"eb0fd85c-ce56-4874-989e-20a0c304efd1\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 02 10:41:23 crc kubenswrapper[4782]: I0202 10:41:23.455832 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10039944-73fc-417b-925f-48a2985c277d-utilities\") pod \"community-operators-lxwg2\" (UID: \"10039944-73fc-417b-925f-48a2985c277d\") " pod="openshift-marketplace/community-operators-lxwg2" Feb 02 10:41:23 crc kubenswrapper[4782]: I0202 10:41:23.455863 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-svwgz\" (UniqueName: \"kubernetes.io/projected/10039944-73fc-417b-925f-48a2985c277d-kube-api-access-svwgz\") pod \"community-operators-lxwg2\" (UID: \"10039944-73fc-417b-925f-48a2985c277d\") " pod="openshift-marketplace/community-operators-lxwg2" Feb 02 10:41:23 crc kubenswrapper[4782]: I0202 10:41:23.455901 4782 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10039944-73fc-417b-925f-48a2985c277d-catalog-content\") pod \"community-operators-lxwg2\" (UID: \"10039944-73fc-417b-925f-48a2985c277d\") " pod="openshift-marketplace/community-operators-lxwg2" Feb 02 10:41:23 crc kubenswrapper[4782]: I0202 10:41:23.455940 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kf98q\" (UniqueName: \"kubernetes.io/projected/cf4514b1-5bb5-4e13-8c79-9bdbf62d6cde-kube-api-access-kf98q\") pod \"certified-operators-8vzzf\" (UID: \"cf4514b1-5bb5-4e13-8c79-9bdbf62d6cde\") " pod="openshift-marketplace/certified-operators-8vzzf" Feb 02 10:41:23 crc kubenswrapper[4782]: I0202 10:41:23.455960 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf4514b1-5bb5-4e13-8c79-9bdbf62d6cde-utilities\") pod \"certified-operators-8vzzf\" (UID: \"cf4514b1-5bb5-4e13-8c79-9bdbf62d6cde\") " pod="openshift-marketplace/certified-operators-8vzzf" Feb 02 10:41:23 crc kubenswrapper[4782]: I0202 10:41:23.455995 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf4514b1-5bb5-4e13-8c79-9bdbf62d6cde-catalog-content\") pod \"certified-operators-8vzzf\" (UID: \"cf4514b1-5bb5-4e13-8c79-9bdbf62d6cde\") " pod="openshift-marketplace/certified-operators-8vzzf" Feb 02 10:41:23 crc kubenswrapper[4782]: I0202 10:41:23.456161 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/eb0fd85c-ce56-4874-989e-20a0c304efd1-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"eb0fd85c-ce56-4874-989e-20a0c304efd1\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 02 10:41:23 crc kubenswrapper[4782]: E0202 10:41:23.456302 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:41:23.956278086 +0000 UTC m=+163.840470802 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:23 crc kubenswrapper[4782]: I0202 10:41:23.456588 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf4514b1-5bb5-4e13-8c79-9bdbf62d6cde-catalog-content\") pod \"certified-operators-8vzzf\" (UID: \"cf4514b1-5bb5-4e13-8c79-9bdbf62d6cde\") " pod="openshift-marketplace/certified-operators-8vzzf" Feb 02 10:41:23 crc kubenswrapper[4782]: I0202 10:41:23.457044 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10039944-73fc-417b-925f-48a2985c277d-catalog-content\") pod \"community-operators-lxwg2\" (UID: \"10039944-73fc-417b-925f-48a2985c277d\") " pod="openshift-marketplace/community-operators-lxwg2" Feb 02 10:41:23 crc kubenswrapper[4782]: I0202 10:41:23.457294 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10039944-73fc-417b-925f-48a2985c277d-utilities\") pod \"community-operators-lxwg2\" (UID: \"10039944-73fc-417b-925f-48a2985c277d\") " pod="openshift-marketplace/community-operators-lxwg2" Feb 02 10:41:23 crc kubenswrapper[4782]: I0202 10:41:23.457435 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf4514b1-5bb5-4e13-8c79-9bdbf62d6cde-utilities\") pod \"certified-operators-8vzzf\" (UID: \"cf4514b1-5bb5-4e13-8c79-9bdbf62d6cde\") " pod="openshift-marketplace/certified-operators-8vzzf" Feb 02 10:41:23 crc kubenswrapper[4782]: I0202 10:41:23.506920 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-svwgz\" (UniqueName: \"kubernetes.io/projected/10039944-73fc-417b-925f-48a2985c277d-kube-api-access-svwgz\") pod \"community-operators-lxwg2\" (UID: \"10039944-73fc-417b-925f-48a2985c277d\") " pod="openshift-marketplace/community-operators-lxwg2" Feb 02 10:41:23 crc kubenswrapper[4782]: I0202 10:41:23.509361 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kf98q\" (UniqueName: \"kubernetes.io/projected/cf4514b1-5bb5-4e13-8c79-9bdbf62d6cde-kube-api-access-kf98q\") pod \"certified-operators-8vzzf\" (UID: \"cf4514b1-5bb5-4e13-8c79-9bdbf62d6cde\") " pod="openshift-marketplace/certified-operators-8vzzf" Feb 02 10:41:23 crc kubenswrapper[4782]: I0202 10:41:23.534714 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-8g5bv"] Feb 02 10:41:23 crc kubenswrapper[4782]: I0202 10:41:23.536355 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/eb0fd85c-ce56-4874-989e-20a0c304efd1-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"eb0fd85c-ce56-4874-989e-20a0c304efd1\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 02 10:41:23 crc kubenswrapper[4782]: I0202 10:41:23.536890 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8g5bv" Feb 02 10:41:23 crc kubenswrapper[4782]: I0202 10:41:23.558319 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27" Feb 02 10:41:23 crc kubenswrapper[4782]: E0202 10:41:23.558886 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:41:24.058867018 +0000 UTC m=+163.943059734 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jxz27" (UID: "4877e80d-a6fe-4503-a64c-398815efa1e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:23 crc kubenswrapper[4782]: I0202 10:41:23.584461 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8vzzf" Feb 02 10:41:23 crc kubenswrapper[4782]: I0202 10:41:23.599055 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8g5bv"] Feb 02 10:41:23 crc kubenswrapper[4782]: I0202 10:41:23.602275 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 02 10:41:23 crc kubenswrapper[4782]: I0202 10:41:23.622489 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-lxwg2" Feb 02 10:41:23 crc kubenswrapper[4782]: I0202 10:41:23.633059 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-dsb8s" Feb 02 10:41:23 crc kubenswrapper[4782]: I0202 10:41:23.661489 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:41:23 crc kubenswrapper[4782]: I0202 10:41:23.661802 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a893973e-e0b3-426e-8bf1-7902687b7036-utilities\") pod \"certified-operators-8g5bv\" (UID: \"a893973e-e0b3-426e-8bf1-7902687b7036\") " pod="openshift-marketplace/certified-operators-8g5bv" Feb 02 10:41:23 crc kubenswrapper[4782]: I0202 10:41:23.661893 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a893973e-e0b3-426e-8bf1-7902687b7036-catalog-content\") pod \"certified-operators-8g5bv\" (UID: \"a893973e-e0b3-426e-8bf1-7902687b7036\") " pod="openshift-marketplace/certified-operators-8g5bv" Feb 02 10:41:23 crc kubenswrapper[4782]: I0202 10:41:23.661941 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nb5v4\" (UniqueName: \"kubernetes.io/projected/a893973e-e0b3-426e-8bf1-7902687b7036-kube-api-access-nb5v4\") pod \"certified-operators-8g5bv\" (UID: \"a893973e-e0b3-426e-8bf1-7902687b7036\") " pod="openshift-marketplace/certified-operators-8g5bv" Feb 02 10:41:23 crc kubenswrapper[4782]: E0202 10:41:23.670002 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:41:24.169967945 +0000 UTC m=+164.054160661 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:23 crc kubenswrapper[4782]: I0202 10:41:23.765543 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a893973e-e0b3-426e-8bf1-7902687b7036-catalog-content\") pod \"certified-operators-8g5bv\" (UID: \"a893973e-e0b3-426e-8bf1-7902687b7036\") " pod="openshift-marketplace/certified-operators-8g5bv" Feb 02 10:41:23 crc kubenswrapper[4782]: I0202 10:41:23.765619 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nb5v4\" (UniqueName: \"kubernetes.io/projected/a893973e-e0b3-426e-8bf1-7902687b7036-kube-api-access-nb5v4\") pod \"certified-operators-8g5bv\" (UID: \"a893973e-e0b3-426e-8bf1-7902687b7036\") " pod="openshift-marketplace/certified-operators-8g5bv" Feb 02 10:41:23 crc kubenswrapper[4782]: I0202 10:41:23.765674 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27" Feb 02 10:41:23 crc kubenswrapper[4782]: I0202 10:41:23.765696 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a893973e-e0b3-426e-8bf1-7902687b7036-utilities\") pod \"certified-operators-8g5bv\" (UID: \"a893973e-e0b3-426e-8bf1-7902687b7036\") " pod="openshift-marketplace/certified-operators-8g5bv" Feb 02 10:41:23 crc kubenswrapper[4782]: I0202 10:41:23.766243 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a893973e-e0b3-426e-8bf1-7902687b7036-utilities\") pod \"certified-operators-8g5bv\" (UID: \"a893973e-e0b3-426e-8bf1-7902687b7036\") " pod="openshift-marketplace/certified-operators-8g5bv" Feb 02 10:41:23 crc kubenswrapper[4782]: E0202 10:41:23.766581 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:41:24.266566304 +0000 UTC m=+164.150759020 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jxz27" (UID: "4877e80d-a6fe-4503-a64c-398815efa1e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:23 crc kubenswrapper[4782]: I0202 10:41:23.767020 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a893973e-e0b3-426e-8bf1-7902687b7036-catalog-content\") pod \"certified-operators-8g5bv\" (UID: \"a893973e-e0b3-426e-8bf1-7902687b7036\") " pod="openshift-marketplace/certified-operators-8g5bv" Feb 02 10:41:23 crc kubenswrapper[4782]: I0202 10:41:23.768240 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-5852s"] Feb 02 10:41:23 crc kubenswrapper[4782]: I0202 10:41:23.769392 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5852s" Feb 02 10:41:23 crc kubenswrapper[4782]: I0202 10:41:23.859585 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nb5v4\" (UniqueName: \"kubernetes.io/projected/a893973e-e0b3-426e-8bf1-7902687b7036-kube-api-access-nb5v4\") pod \"certified-operators-8g5bv\" (UID: \"a893973e-e0b3-426e-8bf1-7902687b7036\") " pod="openshift-marketplace/certified-operators-8g5bv" Feb 02 10:41:23 crc kubenswrapper[4782]: I0202 10:41:23.871203 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:41:23 crc kubenswrapper[4782]: I0202 10:41:23.871616 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkzrc\" (UniqueName: \"kubernetes.io/projected/2dbcee3c-23a7-4dfa-aa27-fccf3ff1873d-kube-api-access-vkzrc\") pod \"community-operators-5852s\" (UID: \"2dbcee3c-23a7-4dfa-aa27-fccf3ff1873d\") " pod="openshift-marketplace/community-operators-5852s" Feb 02 10:41:23 crc kubenswrapper[4782]: E0202 10:41:23.871864 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:41:24.371805812 +0000 UTC m=+164.255998528 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:23 crc kubenswrapper[4782]: I0202 10:41:23.872137 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2dbcee3c-23a7-4dfa-aa27-fccf3ff1873d-utilities\") pod \"community-operators-5852s\" (UID: \"2dbcee3c-23a7-4dfa-aa27-fccf3ff1873d\") " pod="openshift-marketplace/community-operators-5852s" Feb 02 10:41:23 crc kubenswrapper[4782]: I0202 10:41:23.872219 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2dbcee3c-23a7-4dfa-aa27-fccf3ff1873d-catalog-content\") pod \"community-operators-5852s\" (UID: \"2dbcee3c-23a7-4dfa-aa27-fccf3ff1873d\") " pod="openshift-marketplace/community-operators-5852s" Feb 02 10:41:23 crc kubenswrapper[4782]: I0202 10:41:23.884439 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8g5bv" Feb 02 10:41:23 crc kubenswrapper[4782]: I0202 10:41:23.887984 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5852s"] Feb 02 10:41:23 crc kubenswrapper[4782]: I0202 10:41:23.955175 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-f8xts" Feb 02 10:41:23 crc kubenswrapper[4782]: I0202 10:41:23.991188 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vkzrc\" (UniqueName: \"kubernetes.io/projected/2dbcee3c-23a7-4dfa-aa27-fccf3ff1873d-kube-api-access-vkzrc\") pod \"community-operators-5852s\" (UID: \"2dbcee3c-23a7-4dfa-aa27-fccf3ff1873d\") " pod="openshift-marketplace/community-operators-5852s" Feb 02 10:41:23 crc kubenswrapper[4782]: I0202 10:41:23.991253 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27" Feb 02 10:41:23 crc kubenswrapper[4782]: I0202 10:41:23.991338 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2dbcee3c-23a7-4dfa-aa27-fccf3ff1873d-utilities\") pod \"community-operators-5852s\" (UID: \"2dbcee3c-23a7-4dfa-aa27-fccf3ff1873d\") " pod="openshift-marketplace/community-operators-5852s" Feb 02 10:41:23 crc kubenswrapper[4782]: I0202 10:41:23.991382 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2dbcee3c-23a7-4dfa-aa27-fccf3ff1873d-catalog-content\") pod \"community-operators-5852s\" (UID: \"2dbcee3c-23a7-4dfa-aa27-fccf3ff1873d\") " pod="openshift-marketplace/community-operators-5852s" Feb 02 10:41:23 crc kubenswrapper[4782]: I0202 
10:41:23.992424 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2dbcee3c-23a7-4dfa-aa27-fccf3ff1873d-catalog-content\") pod \"community-operators-5852s\" (UID: \"2dbcee3c-23a7-4dfa-aa27-fccf3ff1873d\") " pod="openshift-marketplace/community-operators-5852s" Feb 02 10:41:23 crc kubenswrapper[4782]: E0202 10:41:23.992493 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:41:24.492471846 +0000 UTC m=+164.376664562 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jxz27" (UID: "4877e80d-a6fe-4503-a64c-398815efa1e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:23 crc kubenswrapper[4782]: I0202 10:41:23.992693 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2dbcee3c-23a7-4dfa-aa27-fccf3ff1873d-utilities\") pod \"community-operators-5852s\" (UID: \"2dbcee3c-23a7-4dfa-aa27-fccf3ff1873d\") " pod="openshift-marketplace/community-operators-5852s" Feb 02 10:41:24 crc kubenswrapper[4782]: I0202 10:41:24.031659 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vkzrc\" (UniqueName: \"kubernetes.io/projected/2dbcee3c-23a7-4dfa-aa27-fccf3ff1873d-kube-api-access-vkzrc\") pod \"community-operators-5852s\" (UID: \"2dbcee3c-23a7-4dfa-aa27-fccf3ff1873d\") " pod="openshift-marketplace/community-operators-5852s" Feb 02 10:41:24 crc kubenswrapper[4782]: I0202 10:41:24.044568 4782 patch_prober.go:28] interesting pod/router-default-5444994796-29qjf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 10:41:24 crc kubenswrapper[4782]: [-]has-synced failed: reason withheld Feb 02 10:41:24 crc kubenswrapper[4782]: [+]process-running ok Feb 02 10:41:24 crc kubenswrapper[4782]: healthz check failed Feb 02 10:41:24 crc kubenswrapper[4782]: I0202 10:41:24.044664 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-29qjf" podUID="fc962b97-f5d3-4673-9a39-8fbf6bc2424f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 10:41:24 crc kubenswrapper[4782]: I0202 10:41:24.103410 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:41:24 crc kubenswrapper[4782]: E0202 10:41:24.105174 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-02 10:41:24.605140258 +0000 UTC m=+164.489332974 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:24 crc kubenswrapper[4782]: I0202 10:41:24.105452 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-7v92z" Feb 02 10:41:24 crc kubenswrapper[4782]: I0202 10:41:24.157920 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5852s" Feb 02 10:41:24 crc kubenswrapper[4782]: I0202 10:41:24.170153 4782 generic.go:334] "Generic (PLEG): container finished" podID="310e3ec7-7f7e-4fc6-b71b-1bb431c81fc6" containerID="1f19df8bd992a46faa225a2bdde8f980f9614cec37080c584059715e758ebedc" exitCode=0 Feb 02 10:41:24 crc kubenswrapper[4782]: I0202 10:41:24.170611 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"310e3ec7-7f7e-4fc6-b71b-1bb431c81fc6","Type":"ContainerDied","Data":"1f19df8bd992a46faa225a2bdde8f980f9614cec37080c584059715e758ebedc"} Feb 02 10:41:24 crc kubenswrapper[4782]: I0202 10:41:24.190273 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-qsnhv" Feb 02 10:41:24 crc kubenswrapper[4782]: E0202 10:41:24.207269 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:41:24.707252356 +0000 UTC m=+164.591445072 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jxz27" (UID: "4877e80d-a6fe-4503-a64c-398815efa1e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:24 crc kubenswrapper[4782]: I0202 10:41:24.206854 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27" Feb 02 10:41:24 crc kubenswrapper[4782]: I0202 10:41:24.280526 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-r7j2r" event={"ID":"db132fa2-cd84-4b44-b523-48b1af9f6f73","Type":"ContainerStarted","Data":"55d706b8c149a6aaf4f21f54720b6f815043c22d3407d81647de5953a7874b26"} Feb 02 10:41:24 crc kubenswrapper[4782]: I0202 10:41:24.315337 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:41:24 crc kubenswrapper[4782]: E0202 10:41:24.316785 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:41:24.816760598 +0000 UTC m=+164.700953314 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:24 crc kubenswrapper[4782]: I0202 10:41:24.419399 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27" Feb 02 10:41:24 crc kubenswrapper[4782]: E0202 10:41:24.420083 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:41:24.92005516 +0000 UTC m=+164.804247876 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jxz27" (UID: "4877e80d-a6fe-4503-a64c-398815efa1e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:24 crc kubenswrapper[4782]: I0202 10:41:24.476133 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6x4zp" Feb 02 10:41:24 crc kubenswrapper[4782]: I0202 10:41:24.522665 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:41:24 crc kubenswrapper[4782]: E0202 10:41:24.523116 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:41:25.023091214 +0000 UTC m=+164.907283930 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:24 crc kubenswrapper[4782]: I0202 10:41:24.624032 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27" Feb 02 10:41:24 crc kubenswrapper[4782]: E0202 10:41:24.625349 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:41:25.125329416 +0000 UTC m=+165.009522132 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jxz27" (UID: "4877e80d-a6fe-4503-a64c-398815efa1e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:24 crc kubenswrapper[4782]: I0202 10:41:24.725679 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:41:24 crc kubenswrapper[4782]: E0202 10:41:24.726201 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:41:25.226179207 +0000 UTC m=+165.110371923 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:24 crc kubenswrapper[4782]: I0202 10:41:24.828345 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27" Feb 02 10:41:24 crc kubenswrapper[4782]: E0202 10:41:24.828891 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:41:25.328875352 +0000 UTC m=+165.213068068 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jxz27" (UID: "4877e80d-a6fe-4503-a64c-398815efa1e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:24 crc kubenswrapper[4782]: I0202 10:41:24.930707 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:41:24 crc kubenswrapper[4782]: E0202 10:41:24.931230 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:41:25.431202726 +0000 UTC m=+165.315395442 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:25 crc kubenswrapper[4782]: I0202 10:41:25.032819 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27" Feb 02 10:41:25 crc kubenswrapper[4782]: E0202 10:41:25.033347 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:41:25.533324965 +0000 UTC m=+165.417517871 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jxz27" (UID: "4877e80d-a6fe-4503-a64c-398815efa1e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:25 crc kubenswrapper[4782]: I0202 10:41:25.137059 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:41:25 crc kubenswrapper[4782]: E0202 10:41:25.137465 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:41:25.63744187 +0000 UTC m=+165.521634586 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:25 crc kubenswrapper[4782]: I0202 10:41:25.181120 4782 patch_prober.go:28] interesting pod/router-default-5444994796-29qjf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 10:41:25 crc kubenswrapper[4782]: [-]has-synced failed: reason withheld Feb 02 10:41:25 crc kubenswrapper[4782]: [+]process-running ok Feb 02 10:41:25 crc kubenswrapper[4782]: healthz check failed Feb 02 10:41:25 crc kubenswrapper[4782]: I0202 10:41:25.181204 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-29qjf" podUID="fc962b97-f5d3-4673-9a39-8fbf6bc2424f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 10:41:25 crc kubenswrapper[4782]: I0202 10:41:25.240719 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27" Feb 02 10:41:25 crc kubenswrapper[4782]: E0202 10:41:25.241235 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:41:25.741217876 +0000 UTC m=+165.625410592 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jxz27" (UID: "4877e80d-a6fe-4503-a64c-398815efa1e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:25 crc kubenswrapper[4782]: I0202 10:41:25.310447 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-r7j2r" event={"ID":"db132fa2-cd84-4b44-b523-48b1af9f6f73","Type":"ContainerStarted","Data":"444754b36df118935d60858b524788de7ef6fb8c250d8f4cb68416ab108df6c4"} Feb 02 10:41:25 crc kubenswrapper[4782]: I0202 10:41:25.343209 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:41:25 crc kubenswrapper[4782]: E0202 10:41:25.343561 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:41:25.84354178 +0000 UTC m=+165.727734496 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:25 crc kubenswrapper[4782]: I0202 10:41:25.381032 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-8tk99"] Feb 02 10:41:25 crc kubenswrapper[4782]: I0202 10:41:25.382396 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8tk99" Feb 02 10:41:25 crc kubenswrapper[4782]: I0202 10:41:25.439776 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 02 10:41:25 crc kubenswrapper[4782]: I0202 10:41:25.444628 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9beb5599-8c2d-4493-9561-cc2781d32052-utilities\") pod \"redhat-marketplace-8tk99\" (UID: \"9beb5599-8c2d-4493-9561-cc2781d32052\") " pod="openshift-marketplace/redhat-marketplace-8tk99" Feb 02 10:41:25 crc kubenswrapper[4782]: I0202 10:41:25.444688 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9beb5599-8c2d-4493-9561-cc2781d32052-catalog-content\") pod \"redhat-marketplace-8tk99\" (UID: \"9beb5599-8c2d-4493-9561-cc2781d32052\") " pod="openshift-marketplace/redhat-marketplace-8tk99" Feb 02 10:41:25 crc kubenswrapper[4782]: I0202 10:41:25.444758 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27" Feb 02 10:41:25 crc kubenswrapper[4782]: I0202 10:41:25.444810 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zlmlg\" (UniqueName: \"kubernetes.io/projected/9beb5599-8c2d-4493-9561-cc2781d32052-kube-api-access-zlmlg\") pod \"redhat-marketplace-8tk99\" (UID: \"9beb5599-8c2d-4493-9561-cc2781d32052\") " pod="openshift-marketplace/redhat-marketplace-8tk99" Feb 02 10:41:25 crc kubenswrapper[4782]: E0202 10:41:25.445242 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:41:25.945226686 +0000 UTC m=+165.829419572 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jxz27" (UID: "4877e80d-a6fe-4503-a64c-398815efa1e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:25 crc kubenswrapper[4782]: I0202 10:41:25.524501 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8vzzf"] Feb 02 10:41:25 crc kubenswrapper[4782]: I0202 10:41:25.552536 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:41:25 crc kubenswrapper[4782]: I0202 10:41:25.552813 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9beb5599-8c2d-4493-9561-cc2781d32052-utilities\") pod \"redhat-marketplace-8tk99\" (UID: \"9beb5599-8c2d-4493-9561-cc2781d32052\") " pod="openshift-marketplace/redhat-marketplace-8tk99" Feb 02 10:41:25 crc kubenswrapper[4782]: I0202 10:41:25.552842 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9beb5599-8c2d-4493-9561-cc2781d32052-catalog-content\") pod \"redhat-marketplace-8tk99\" (UID: \"9beb5599-8c2d-4493-9561-cc2781d32052\") " pod="openshift-marketplace/redhat-marketplace-8tk99" Feb 02 10:41:25 crc kubenswrapper[4782]: I0202 10:41:25.552895 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zlmlg\" (UniqueName: \"kubernetes.io/projected/9beb5599-8c2d-4493-9561-cc2781d32052-kube-api-access-zlmlg\") pod \"redhat-marketplace-8tk99\" (UID: \"9beb5599-8c2d-4493-9561-cc2781d32052\") " pod="openshift-marketplace/redhat-marketplace-8tk99" Feb 02 10:41:25 crc kubenswrapper[4782]: E0202 10:41:25.552986 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:41:26.052961776 +0000 UTC m=+165.937154482 (durationBeforeRetry 500ms). 
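
Every failed attempt ends with the same "No retries permitted until ... (durationBeforeRetry 500ms)" line: the kubelet serializes operations per volume key and arms a retry deadline on failure, so a new attempt inside that window is rejected outright rather than re-dialed. A sketch of that gate, assuming the fixed 500ms delay seen throughout this log (the real nestedpendingoperations.go may grow the delay for repeatedly failing operations):

    package main

    import (
        "fmt"
        "time"
    )

    // pendingOp models one backed-off operation per volume key; the real
    // nestedpendingoperations.go tracks in-flight operations with more care.
    type pendingOp struct {
        retryAt time.Time
        lastErr error
    }

    type operations map[string]*pendingOp // key ~ {volumeName podName nodeName}

    func (ops operations) Run(key string, op func() error) error {
        if p, ok := ops[key]; ok && time.Now().Before(p.retryAt) {
            return fmt.Errorf("no retries permitted until %s (durationBeforeRetry 500ms): %v",
                p.retryAt.UTC().Format(time.RFC3339Nano), p.lastErr)
        }
        if err := op(); err != nil {
            ops[key] = &pendingOp{retryAt: time.Now().Add(500 * time.Millisecond), lastErr: err}
            return err
        }
        delete(ops, key)
        return nil
    }

    func main() {
        ops := operations{}
        key := "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db podName: nodeName:"
        mount := func() error { return fmt.Errorf("driver not registered yet") }
        fmt.Println(ops.Run(key, mount)) // first attempt fails and arms the 500ms window
        fmt.Println(ops.Run(key, mount)) // inside the window: rejected without dialing
    }

Note that the mount operation's key has an empty podName while the unmount's key carries the old pod UID, which is why the two retry loops in the log back off independently.
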
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:25 crc kubenswrapper[4782]: I0202 10:41:25.553478 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9beb5599-8c2d-4493-9561-cc2781d32052-utilities\") pod \"redhat-marketplace-8tk99\" (UID: \"9beb5599-8c2d-4493-9561-cc2781d32052\") " pod="openshift-marketplace/redhat-marketplace-8tk99" Feb 02 10:41:25 crc kubenswrapper[4782]: I0202 10:41:25.554785 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9beb5599-8c2d-4493-9561-cc2781d32052-catalog-content\") pod \"redhat-marketplace-8tk99\" (UID: \"9beb5599-8c2d-4493-9561-cc2781d32052\") " pod="openshift-marketplace/redhat-marketplace-8tk99" Feb 02 10:41:25 crc kubenswrapper[4782]: I0202 10:41:25.558107 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8tk99"] Feb 02 10:41:25 crc kubenswrapper[4782]: I0202 10:41:25.561200 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lxwg2"] Feb 02 10:41:25 crc kubenswrapper[4782]: W0202 10:41:25.584921 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod10039944_73fc_417b_925f_48a2985c277d.slice/crio-1277c93f9cf96ac2b46fd7682341d99a2d1a3ea302f1d88526054e535369a8b5 WatchSource:0}: Error finding container 1277c93f9cf96ac2b46fd7682341d99a2d1a3ea302f1d88526054e535369a8b5: Status 404 returned error can't find the container with id 1277c93f9cf96ac2b46fd7682341d99a2d1a3ea302f1d88526054e535369a8b5 Feb 02 10:41:25 crc kubenswrapper[4782]: I0202 10:41:25.621709 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-r7j2r" podStartSLOduration=15.62168579 podStartE2EDuration="15.62168579s" podCreationTimestamp="2026-02-02 10:41:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:41:25.552237375 +0000 UTC m=+165.436430091" watchObservedRunningTime="2026-02-02 10:41:25.62168579 +0000 UTC m=+165.505878516" Feb 02 10:41:25 crc kubenswrapper[4782]: I0202 10:41:25.657293 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27" Feb 02 10:41:25 crc kubenswrapper[4782]: E0202 10:41:25.657802 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:41:26.157785042 +0000 UTC m=+166.041977758 (durationBeforeRetry 500ms). 
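
The "Observed pod startup duration" entry above is the startup SLO tracker doing plain timestamp arithmetic: since firstStartedPulling and lastFinishedPulling are the zero time here (no image pull), podStartSLOduration and podStartE2EDuration are both just observedRunningTime minus podCreationTimestamp. A check of that arithmetic in Go (an assumed simplification of pod_startup_latency_tracker.go):

    package main

    import (
        "fmt"
        "time"
    )

    // startupDuration is the arithmetic behind podStartSLOduration when no
    // image pull happened (the pulling timestamps are the zero time).
    func startupDuration(created, observedRunning time.Time) time.Duration {
        return observedRunning.Sub(created)
    }

    func main() {
        created, _ := time.Parse(time.RFC3339, "2026-02-02T10:41:10Z")
        running, _ := time.Parse(time.RFC3339Nano, "2026-02-02T10:41:25.62168579Z")
        fmt.Println(startupDuration(created, running)) // 15.62168579s, matching the log
    }
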
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jxz27" (UID: "4877e80d-a6fe-4503-a64c-398815efa1e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:25 crc kubenswrapper[4782]: I0202 10:41:25.692690 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-khjwl"] Feb 02 10:41:25 crc kubenswrapper[4782]: I0202 10:41:25.694295 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-khjwl" Feb 02 10:41:25 crc kubenswrapper[4782]: I0202 10:41:25.752472 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zlmlg\" (UniqueName: \"kubernetes.io/projected/9beb5599-8c2d-4493-9561-cc2781d32052-kube-api-access-zlmlg\") pod \"redhat-marketplace-8tk99\" (UID: \"9beb5599-8c2d-4493-9561-cc2781d32052\") " pod="openshift-marketplace/redhat-marketplace-8tk99" Feb 02 10:41:25 crc kubenswrapper[4782]: I0202 10:41:25.763137 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:41:25 crc kubenswrapper[4782]: I0202 10:41:25.763563 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2r7ff\" (UniqueName: \"kubernetes.io/projected/99330299-8910-4c41-b704-120a10eb799b-kube-api-access-2r7ff\") pod \"redhat-marketplace-khjwl\" (UID: \"99330299-8910-4c41-b704-120a10eb799b\") " pod="openshift-marketplace/redhat-marketplace-khjwl" Feb 02 10:41:25 crc kubenswrapper[4782]: I0202 10:41:25.763627 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/99330299-8910-4c41-b704-120a10eb799b-catalog-content\") pod \"redhat-marketplace-khjwl\" (UID: \"99330299-8910-4c41-b704-120a10eb799b\") " pod="openshift-marketplace/redhat-marketplace-khjwl" Feb 02 10:41:25 crc kubenswrapper[4782]: I0202 10:41:25.763708 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/99330299-8910-4c41-b704-120a10eb799b-utilities\") pod \"redhat-marketplace-khjwl\" (UID: \"99330299-8910-4c41-b704-120a10eb799b\") " pod="openshift-marketplace/redhat-marketplace-khjwl" Feb 02 10:41:25 crc kubenswrapper[4782]: E0202 10:41:25.763867 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:41:26.263847844 +0000 UTC m=+166.148040560 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:25 crc kubenswrapper[4782]: I0202 10:41:25.866406 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/99330299-8910-4c41-b704-120a10eb799b-catalog-content\") pod \"redhat-marketplace-khjwl\" (UID: \"99330299-8910-4c41-b704-120a10eb799b\") " pod="openshift-marketplace/redhat-marketplace-khjwl" Feb 02 10:41:25 crc kubenswrapper[4782]: I0202 10:41:25.873347 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/99330299-8910-4c41-b704-120a10eb799b-utilities\") pod \"redhat-marketplace-khjwl\" (UID: \"99330299-8910-4c41-b704-120a10eb799b\") " pod="openshift-marketplace/redhat-marketplace-khjwl" Feb 02 10:41:25 crc kubenswrapper[4782]: I0202 10:41:25.874395 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2r7ff\" (UniqueName: \"kubernetes.io/projected/99330299-8910-4c41-b704-120a10eb799b-kube-api-access-2r7ff\") pod \"redhat-marketplace-khjwl\" (UID: \"99330299-8910-4c41-b704-120a10eb799b\") " pod="openshift-marketplace/redhat-marketplace-khjwl" Feb 02 10:41:25 crc kubenswrapper[4782]: I0202 10:41:25.874538 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27" Feb 02 10:41:25 crc kubenswrapper[4782]: E0202 10:41:25.875115 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:41:26.375092936 +0000 UTC m=+166.259285652 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jxz27" (UID: "4877e80d-a6fe-4503-a64c-398815efa1e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:25 crc kubenswrapper[4782]: I0202 10:41:25.874088 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/99330299-8910-4c41-b704-120a10eb799b-utilities\") pod \"redhat-marketplace-khjwl\" (UID: \"99330299-8910-4c41-b704-120a10eb799b\") " pod="openshift-marketplace/redhat-marketplace-khjwl" Feb 02 10:41:25 crc kubenswrapper[4782]: I0202 10:41:25.870492 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/99330299-8910-4c41-b704-120a10eb799b-catalog-content\") pod \"redhat-marketplace-khjwl\" (UID: \"99330299-8910-4c41-b704-120a10eb799b\") " pod="openshift-marketplace/redhat-marketplace-khjwl" Feb 02 10:41:25 crc kubenswrapper[4782]: I0202 10:41:25.902070 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-khjwl"] Feb 02 10:41:25 crc kubenswrapper[4782]: I0202 10:41:25.925266 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2r7ff\" (UniqueName: \"kubernetes.io/projected/99330299-8910-4c41-b704-120a10eb799b-kube-api-access-2r7ff\") pod \"redhat-marketplace-khjwl\" (UID: \"99330299-8910-4c41-b704-120a10eb799b\") " pod="openshift-marketplace/redhat-marketplace-khjwl" Feb 02 10:41:25 crc kubenswrapper[4782]: I0202 10:41:25.981842 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:41:25 crc kubenswrapper[4782]: E0202 10:41:25.982204 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:41:26.482180178 +0000 UTC m=+166.366372894 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:26 crc kubenswrapper[4782]: I0202 10:41:26.031158 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8tk99" Feb 02 10:41:26 crc kubenswrapper[4782]: I0202 10:41:26.084539 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27" Feb 02 10:41:26 crc kubenswrapper[4782]: E0202 10:41:26.084994 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:41:26.584978605 +0000 UTC m=+166.469171321 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jxz27" (UID: "4877e80d-a6fe-4503-a64c-398815efa1e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:26 crc kubenswrapper[4782]: I0202 10:41:26.114910 4782 patch_prober.go:28] interesting pod/router-default-5444994796-29qjf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 10:41:26 crc kubenswrapper[4782]: [-]has-synced failed: reason withheld Feb 02 10:41:26 crc kubenswrapper[4782]: [+]process-running ok Feb 02 10:41:26 crc kubenswrapper[4782]: healthz check failed Feb 02 10:41:26 crc kubenswrapper[4782]: I0202 10:41:26.114977 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-29qjf" podUID="fc962b97-f5d3-4673-9a39-8fbf6bc2424f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 10:41:26 crc kubenswrapper[4782]: I0202 10:41:26.133975 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-khjwl" Feb 02 10:41:26 crc kubenswrapper[4782]: I0202 10:41:26.185436 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:41:26 crc kubenswrapper[4782]: E0202 10:41:26.185952 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:41:26.685920369 +0000 UTC m=+166.570113085 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:26 crc kubenswrapper[4782]: I0202 10:41:26.247987 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 02 10:41:26 crc kubenswrapper[4782]: I0202 10:41:26.248080 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5852s"] Feb 02 10:41:26 crc kubenswrapper[4782]: I0202 10:41:26.286704 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27" Feb 02 10:41:26 crc kubenswrapper[4782]: E0202 10:41:26.287048 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:41:26.787035099 +0000 UTC m=+166.671227815 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jxz27" (UID: "4877e80d-a6fe-4503-a64c-398815efa1e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:26 crc kubenswrapper[4782]: I0202 10:41:26.295706 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8g5bv"] Feb 02 10:41:26 crc kubenswrapper[4782]: I0202 10:41:26.308984 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-g65rt"] Feb 02 10:41:26 crc kubenswrapper[4782]: I0202 10:41:26.310633 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-g65rt" Feb 02 10:41:26 crc kubenswrapper[4782]: I0202 10:41:26.329184 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 02 10:41:26 crc kubenswrapper[4782]: I0202 10:41:26.385585 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5852s" event={"ID":"2dbcee3c-23a7-4dfa-aa27-fccf3ff1873d","Type":"ContainerStarted","Data":"6ef548a38f0be82eadd409d2f97034be6e36d97b107a1699fdec8cc892db1b86"} Feb 02 10:41:26 crc kubenswrapper[4782]: I0202 10:41:26.387342 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:41:26 crc kubenswrapper[4782]: I0202 10:41:26.387595 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d9a718cd-1b6d-483f-b995-938331c7e00e-catalog-content\") pod \"redhat-operators-g65rt\" (UID: \"d9a718cd-1b6d-483f-b995-938331c7e00e\") " pod="openshift-marketplace/redhat-operators-g65rt" Feb 02 10:41:26 crc kubenswrapper[4782]: I0202 10:41:26.387704 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d9a718cd-1b6d-483f-b995-938331c7e00e-utilities\") pod \"redhat-operators-g65rt\" (UID: \"d9a718cd-1b6d-483f-b995-938331c7e00e\") " pod="openshift-marketplace/redhat-operators-g65rt" Feb 02 10:41:26 crc kubenswrapper[4782]: I0202 10:41:26.387723 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mqs8\" (UniqueName: \"kubernetes.io/projected/d9a718cd-1b6d-483f-b995-938331c7e00e-kube-api-access-9mqs8\") pod \"redhat-operators-g65rt\" (UID: \"d9a718cd-1b6d-483f-b995-938331c7e00e\") " pod="openshift-marketplace/redhat-operators-g65rt" Feb 02 10:41:26 crc kubenswrapper[4782]: E0202 10:41:26.387818 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:41:26.887802958 +0000 UTC m=+166.771995674 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:26 crc kubenswrapper[4782]: I0202 10:41:26.400883 4782 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Feb 02 10:41:26 crc kubenswrapper[4782]: I0202 10:41:26.408172 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-g65rt"] Feb 02 10:41:26 crc kubenswrapper[4782]: I0202 10:41:26.485124 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8vzzf" event={"ID":"cf4514b1-5bb5-4e13-8c79-9bdbf62d6cde","Type":"ContainerStarted","Data":"2157f695d84a6bf7a7c1d517b9438fd49370964e625a05cdc501c630222fe141"} Feb 02 10:41:26 crc kubenswrapper[4782]: I0202 10:41:26.489930 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27" Feb 02 10:41:26 crc kubenswrapper[4782]: I0202 10:41:26.490030 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d9a718cd-1b6d-483f-b995-938331c7e00e-utilities\") pod \"redhat-operators-g65rt\" (UID: \"d9a718cd-1b6d-483f-b995-938331c7e00e\") " pod="openshift-marketplace/redhat-operators-g65rt" Feb 02 10:41:26 crc kubenswrapper[4782]: I0202 10:41:26.490062 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9mqs8\" (UniqueName: \"kubernetes.io/projected/d9a718cd-1b6d-483f-b995-938331c7e00e-kube-api-access-9mqs8\") pod \"redhat-operators-g65rt\" (UID: \"d9a718cd-1b6d-483f-b995-938331c7e00e\") " pod="openshift-marketplace/redhat-operators-g65rt" Feb 02 10:41:26 crc kubenswrapper[4782]: I0202 10:41:26.490115 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d9a718cd-1b6d-483f-b995-938331c7e00e-catalog-content\") pod \"redhat-operators-g65rt\" (UID: \"d9a718cd-1b6d-483f-b995-938331c7e00e\") " pod="openshift-marketplace/redhat-operators-g65rt" Feb 02 10:41:26 crc kubenswrapper[4782]: I0202 10:41:26.490610 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d9a718cd-1b6d-483f-b995-938331c7e00e-catalog-content\") pod \"redhat-operators-g65rt\" (UID: \"d9a718cd-1b6d-483f-b995-938331c7e00e\") " pod="openshift-marketplace/redhat-operators-g65rt" Feb 02 10:41:26 crc kubenswrapper[4782]: E0202 10:41:26.491000 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:41:26.990984566 +0000 UTC m=+166.875177282 (durationBeforeRetry 500ms). 
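
The "Adding socket path or updating timestamp to desired state cache" entry above marks the turning point of this whole sequence: the hostpath provisioner has finally dropped kubevirt.io.hostpath-provisioner-reg.sock into /var/lib/kubelet/plugins_registry, where the plugin watcher picks it up and queues it for registration. A polling approximation of that discovery step (the real plugin_watcher.go uses filesystem notifications; the details below are assumptions):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
        "time"
    )

    // watch polls the registry directory for new registration sockets and
    // hands each one to the plugin reconciler, which later dials it to
    // validate and register the driver.
    func watch(dir string, register func(path string, ts time.Time)) {
        seen := map[string]bool{}
        for {
            entries, err := os.ReadDir(dir)
            if err == nil {
                for _, e := range entries {
                    p := filepath.Join(dir, e.Name())
                    if strings.HasSuffix(p, "-reg.sock") && !seen[p] {
                        seen[p] = true
                        register(p, time.Now()) // "Adding socket path ... to desired state cache"
                    }
                }
            }
            time.Sleep(time.Second)
        }
    }

    func main() {
        watch("/var/lib/kubelet/plugins_registry", func(p string, ts time.Time) {
            fmt.Printf("RegisterPlugin started: {SocketPath:%q Timestamp:%s}\n",
                p, ts.Format(time.RFC3339Nano))
        })
    }
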
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jxz27" (UID: "4877e80d-a6fe-4503-a64c-398815efa1e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:26 crc kubenswrapper[4782]: I0202 10:41:26.491451 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d9a718cd-1b6d-483f-b995-938331c7e00e-utilities\") pod \"redhat-operators-g65rt\" (UID: \"d9a718cd-1b6d-483f-b995-938331c7e00e\") " pod="openshift-marketplace/redhat-operators-g65rt" Feb 02 10:41:26 crc kubenswrapper[4782]: I0202 10:41:26.492995 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 02 10:41:26 crc kubenswrapper[4782]: I0202 10:41:26.496049 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"eb0fd85c-ce56-4874-989e-20a0c304efd1","Type":"ContainerStarted","Data":"73b78820ba7c680a8424285974274eccad1462f8f885155bd360dedc3c4644b7"} Feb 02 10:41:26 crc kubenswrapper[4782]: I0202 10:41:26.514171 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lxwg2" event={"ID":"10039944-73fc-417b-925f-48a2985c277d","Type":"ContainerStarted","Data":"9da626531f7c4e48058eb7295c3b8546c7d0b9c6e3e487d7b3b44cfe31605b9e"} Feb 02 10:41:26 crc kubenswrapper[4782]: I0202 10:41:26.514216 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lxwg2" event={"ID":"10039944-73fc-417b-925f-48a2985c277d","Type":"ContainerStarted","Data":"1277c93f9cf96ac2b46fd7682341d99a2d1a3ea302f1d88526054e535369a8b5"} Feb 02 10:41:26 crc kubenswrapper[4782]: I0202 10:41:26.535420 4782 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 02 10:41:26 crc kubenswrapper[4782]: I0202 10:41:26.556804 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9mqs8\" (UniqueName: \"kubernetes.io/projected/d9a718cd-1b6d-483f-b995-938331c7e00e-kube-api-access-9mqs8\") pod \"redhat-operators-g65rt\" (UID: \"d9a718cd-1b6d-483f-b995-938331c7e00e\") " pod="openshift-marketplace/redhat-operators-g65rt" Feb 02 10:41:26 crc kubenswrapper[4782]: I0202 10:41:26.591862 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:41:26 crc kubenswrapper[4782]: E0202 10:41:26.594071 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:41:27.09403439 +0000 UTC m=+166.978227126 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:26 crc kubenswrapper[4782]: I0202 10:41:26.594526 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27" Feb 02 10:41:26 crc kubenswrapper[4782]: E0202 10:41:26.596190 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:41:27.096171002 +0000 UTC m=+166.980363718 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jxz27" (UID: "4877e80d-a6fe-4503-a64c-398815efa1e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:26 crc kubenswrapper[4782]: I0202 10:41:26.690033 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-xmt8t"] Feb 02 10:41:26 crc kubenswrapper[4782]: E0202 10:41:26.690348 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="310e3ec7-7f7e-4fc6-b71b-1bb431c81fc6" containerName="pruner" Feb 02 10:41:26 crc kubenswrapper[4782]: I0202 10:41:26.690363 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="310e3ec7-7f7e-4fc6-b71b-1bb431c81fc6" containerName="pruner" Feb 02 10:41:26 crc kubenswrapper[4782]: I0202 10:41:26.690471 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="310e3ec7-7f7e-4fc6-b71b-1bb431c81fc6" containerName="pruner" Feb 02 10:41:26 crc kubenswrapper[4782]: I0202 10:41:26.691411 4782 util.go:30] "No sandbox for pod can be found. 
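
The cpu_manager, state_mem, and memory_manager lines above are housekeeping for the finished revision-pruner pod: resource-manager state is keyed by pod UID and container name, and entries belonging to pods the API no longer lists are purged before new pods are admitted. A sketch, assuming a map-shaped state store (the actual managers persist checkpoints and differ in detail):

    package main

    import "fmt"

    // key mirrors how the log identifies stale entries: podUID plus containerName.
    type key struct{ podUID, container string }

    // removeStaleState is an assumed simplification of the cpu/memory
    // managers' cleanup: drop assignments for pods that are no longer active.
    func removeStaleState(state map[key]string, activePods map[string]bool) {
        for k := range state {
            if !activePods[k.podUID] {
                fmt.Printf("RemoveStaleState: removing container podUID=%q containerName=%q\n",
                    k.podUID, k.container)
                delete(state, k)
            }
        }
    }

    func main() {
        state := map[key]string{
            {podUID: "310e3ec7-7f7e-4fc6-b71b-1bb431c81fc6", container: "pruner"}: "cpuset 0-3",
        }
        removeStaleState(state, map[string]bool{}) // the pruner pod is gone; its assignment goes too
    }
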
Need to start a new one" pod="openshift-marketplace/redhat-operators-xmt8t" Feb 02 10:41:26 crc kubenswrapper[4782]: I0202 10:41:26.695405 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:41:26 crc kubenswrapper[4782]: I0202 10:41:26.695447 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/310e3ec7-7f7e-4fc6-b71b-1bb431c81fc6-kubelet-dir\") pod \"310e3ec7-7f7e-4fc6-b71b-1bb431c81fc6\" (UID: \"310e3ec7-7f7e-4fc6-b71b-1bb431c81fc6\") " Feb 02 10:41:26 crc kubenswrapper[4782]: I0202 10:41:26.695480 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/310e3ec7-7f7e-4fc6-b71b-1bb431c81fc6-kube-api-access\") pod \"310e3ec7-7f7e-4fc6-b71b-1bb431c81fc6\" (UID: \"310e3ec7-7f7e-4fc6-b71b-1bb431c81fc6\") " Feb 02 10:41:26 crc kubenswrapper[4782]: I0202 10:41:26.695627 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/213698f8-d1b6-489f-8fc4-a69583d4fc2e-catalog-content\") pod \"redhat-operators-xmt8t\" (UID: \"213698f8-d1b6-489f-8fc4-a69583d4fc2e\") " pod="openshift-marketplace/redhat-operators-xmt8t" Feb 02 10:41:26 crc kubenswrapper[4782]: I0202 10:41:26.695672 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/213698f8-d1b6-489f-8fc4-a69583d4fc2e-utilities\") pod \"redhat-operators-xmt8t\" (UID: \"213698f8-d1b6-489f-8fc4-a69583d4fc2e\") " pod="openshift-marketplace/redhat-operators-xmt8t" Feb 02 10:41:26 crc kubenswrapper[4782]: I0202 10:41:26.695763 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h72b9\" (UniqueName: \"kubernetes.io/projected/213698f8-d1b6-489f-8fc4-a69583d4fc2e-kube-api-access-h72b9\") pod \"redhat-operators-xmt8t\" (UID: \"213698f8-d1b6-489f-8fc4-a69583d4fc2e\") " pod="openshift-marketplace/redhat-operators-xmt8t" Feb 02 10:41:26 crc kubenswrapper[4782]: I0202 10:41:26.695793 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/310e3ec7-7f7e-4fc6-b71b-1bb431c81fc6-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "310e3ec7-7f7e-4fc6-b71b-1bb431c81fc6" (UID: "310e3ec7-7f7e-4fc6-b71b-1bb431c81fc6"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 10:41:26 crc kubenswrapper[4782]: E0202 10:41:26.695982 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:41:27.195952723 +0000 UTC m=+167.080145619 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:26 crc kubenswrapper[4782]: I0202 10:41:26.698418 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-g65rt" Feb 02 10:41:26 crc kubenswrapper[4782]: I0202 10:41:26.740861 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xmt8t"] Feb 02 10:41:26 crc kubenswrapper[4782]: I0202 10:41:26.749455 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/310e3ec7-7f7e-4fc6-b71b-1bb431c81fc6-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "310e3ec7-7f7e-4fc6-b71b-1bb431c81fc6" (UID: "310e3ec7-7f7e-4fc6-b71b-1bb431c81fc6"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:41:26 crc kubenswrapper[4782]: I0202 10:41:26.798751 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h72b9\" (UniqueName: \"kubernetes.io/projected/213698f8-d1b6-489f-8fc4-a69583d4fc2e-kube-api-access-h72b9\") pod \"redhat-operators-xmt8t\" (UID: \"213698f8-d1b6-489f-8fc4-a69583d4fc2e\") " pod="openshift-marketplace/redhat-operators-xmt8t" Feb 02 10:41:26 crc kubenswrapper[4782]: I0202 10:41:26.798824 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/213698f8-d1b6-489f-8fc4-a69583d4fc2e-catalog-content\") pod \"redhat-operators-xmt8t\" (UID: \"213698f8-d1b6-489f-8fc4-a69583d4fc2e\") " pod="openshift-marketplace/redhat-operators-xmt8t" Feb 02 10:41:26 crc kubenswrapper[4782]: I0202 10:41:26.798849 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/213698f8-d1b6-489f-8fc4-a69583d4fc2e-utilities\") pod \"redhat-operators-xmt8t\" (UID: \"213698f8-d1b6-489f-8fc4-a69583d4fc2e\") " pod="openshift-marketplace/redhat-operators-xmt8t" Feb 02 10:41:26 crc kubenswrapper[4782]: I0202 10:41:26.798881 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27" Feb 02 10:41:26 crc kubenswrapper[4782]: I0202 10:41:26.798921 4782 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/310e3ec7-7f7e-4fc6-b71b-1bb431c81fc6-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 02 10:41:26 crc kubenswrapper[4782]: I0202 10:41:26.798932 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/310e3ec7-7f7e-4fc6-b71b-1bb431c81fc6-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 02 10:41:26 crc kubenswrapper[4782]: E0202 10:41:26.799282 4782 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:41:27.299263495 +0000 UTC m=+167.183456211 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jxz27" (UID: "4877e80d-a6fe-4503-a64c-398815efa1e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:26 crc kubenswrapper[4782]: I0202 10:41:26.800105 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/213698f8-d1b6-489f-8fc4-a69583d4fc2e-catalog-content\") pod \"redhat-operators-xmt8t\" (UID: \"213698f8-d1b6-489f-8fc4-a69583d4fc2e\") " pod="openshift-marketplace/redhat-operators-xmt8t" Feb 02 10:41:26 crc kubenswrapper[4782]: I0202 10:41:26.800317 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/213698f8-d1b6-489f-8fc4-a69583d4fc2e-utilities\") pod \"redhat-operators-xmt8t\" (UID: \"213698f8-d1b6-489f-8fc4-a69583d4fc2e\") " pod="openshift-marketplace/redhat-operators-xmt8t" Feb 02 10:41:26 crc kubenswrapper[4782]: I0202 10:41:26.884452 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h72b9\" (UniqueName: \"kubernetes.io/projected/213698f8-d1b6-489f-8fc4-a69583d4fc2e-kube-api-access-h72b9\") pod \"redhat-operators-xmt8t\" (UID: \"213698f8-d1b6-489f-8fc4-a69583d4fc2e\") " pod="openshift-marketplace/redhat-operators-xmt8t" Feb 02 10:41:26 crc kubenswrapper[4782]: I0202 10:41:26.885151 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fwkht" Feb 02 10:41:26 crc kubenswrapper[4782]: I0202 10:41:26.900037 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:41:26 crc kubenswrapper[4782]: E0202 10:41:26.904865 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:41:27.404839643 +0000 UTC m=+167.289032359 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:27 crc kubenswrapper[4782]: I0202 10:41:27.028201 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27" Feb 02 10:41:27 crc kubenswrapper[4782]: E0202 10:41:27.030894 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 10:41:27.530873262 +0000 UTC m=+167.415066168 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jxz27" (UID: "4877e80d-a6fe-4503-a64c-398815efa1e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:27 crc kubenswrapper[4782]: I0202 10:41:27.032777 4782 patch_prober.go:28] interesting pod/apiserver-76f77b778f-z8gmg container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Feb 02 10:41:27 crc kubenswrapper[4782]: [+]log ok Feb 02 10:41:27 crc kubenswrapper[4782]: [+]etcd ok Feb 02 10:41:27 crc kubenswrapper[4782]: [+]poststarthook/start-apiserver-admission-initializer ok Feb 02 10:41:27 crc kubenswrapper[4782]: [+]poststarthook/generic-apiserver-start-informers ok Feb 02 10:41:27 crc kubenswrapper[4782]: [+]poststarthook/max-in-flight-filter ok Feb 02 10:41:27 crc kubenswrapper[4782]: [+]poststarthook/storage-object-count-tracker-hook ok Feb 02 10:41:27 crc kubenswrapper[4782]: [+]poststarthook/image.openshift.io-apiserver-caches ok Feb 02 10:41:27 crc kubenswrapper[4782]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Feb 02 10:41:27 crc kubenswrapper[4782]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Feb 02 10:41:27 crc kubenswrapper[4782]: [+]poststarthook/project.openshift.io-projectcache ok Feb 02 10:41:27 crc kubenswrapper[4782]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Feb 02 10:41:27 crc kubenswrapper[4782]: [+]poststarthook/openshift.io-startinformers ok Feb 02 10:41:27 crc kubenswrapper[4782]: [+]poststarthook/openshift.io-restmapperupdater ok Feb 02 10:41:27 crc kubenswrapper[4782]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Feb 02 10:41:27 crc kubenswrapper[4782]: livez check failed Feb 02 10:41:27 crc kubenswrapper[4782]: I0202 10:41:27.032845 4782 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-apiserver/apiserver-76f77b778f-z8gmg" podUID="1f4f42b8-506a-4922-b7c4-7f77afbb238c" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 10:41:27 crc kubenswrapper[4782]: I0202 10:41:27.046947 4782 patch_prober.go:28] interesting pod/router-default-5444994796-29qjf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 10:41:27 crc kubenswrapper[4782]: [-]has-synced failed: reason withheld Feb 02 10:41:27 crc kubenswrapper[4782]: [+]process-running ok Feb 02 10:41:27 crc kubenswrapper[4782]: healthz check failed Feb 02 10:41:27 crc kubenswrapper[4782]: I0202 10:41:27.046994 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-29qjf" podUID="fc962b97-f5d3-4673-9a39-8fbf6bc2424f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 10:41:27 crc kubenswrapper[4782]: I0202 10:41:27.078021 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xmt8t" Feb 02 10:41:27 crc kubenswrapper[4782]: I0202 10:41:27.080998 4782 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-02-02T10:41:26.401272897Z","Handler":null,"Name":""} Feb 02 10:41:27 crc kubenswrapper[4782]: I0202 10:41:27.135726 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:41:27 crc kubenswrapper[4782]: E0202 10:41:27.136207 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 10:41:27.636181392 +0000 UTC m=+167.520374108 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 10:41:27 crc kubenswrapper[4782]: I0202 10:41:27.177915 4782 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Feb 02 10:41:27 crc kubenswrapper[4782]: I0202 10:41:27.177963 4782 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Feb 02 10:41:27 crc kubenswrapper[4782]: I0202 10:41:27.238876 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27" Feb 02 10:41:27 crc kubenswrapper[4782]: I0202 10:41:27.307824 4782 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 02 10:41:27 crc kubenswrapper[4782]: I0202 10:41:27.307946 4782 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-jxz27" Feb 02 10:41:27 crc kubenswrapper[4782]: I0202 10:41:27.397806 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8tk99"] Feb 02 10:41:27 crc kubenswrapper[4782]: I0202 10:41:27.440692 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-khjwl"] Feb 02 10:41:27 crc kubenswrapper[4782]: I0202 10:41:27.592787 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 02 10:41:27 crc kubenswrapper[4782]: I0202 10:41:27.593520 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"310e3ec7-7f7e-4fc6-b71b-1bb431c81fc6","Type":"ContainerDied","Data":"2034a31847b2f294c83ea2d9717b4e214f2b19b44188b148a7af71f3915c9fe9"} Feb 02 10:41:27 crc kubenswrapper[4782]: I0202 10:41:27.593548 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2034a31847b2f294c83ea2d9717b4e214f2b19b44188b148a7af71f3915c9fe9" Feb 02 10:41:27 crc kubenswrapper[4782]: I0202 10:41:27.611673 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-khjwl" event={"ID":"99330299-8910-4c41-b704-120a10eb799b","Type":"ContainerStarted","Data":"a77f63d55d27418e43d1dec8a78bc759af36972ea35d4cdd887b4e0dd5624442"} Feb 02 10:41:27 crc kubenswrapper[4782]: I0202 10:41:27.637673 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jxz27\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-jxz27" Feb 02 10:41:27 crc kubenswrapper[4782]: I0202 10:41:27.641894 4782 generic.go:334] "Generic (PLEG): container finished" podID="2dbcee3c-23a7-4dfa-aa27-fccf3ff1873d" containerID="e1f3c2a5262859f45791070cb15411b8e1b8e41e441cf2fae29b116544fe07c5" exitCode=0 Feb 02 10:41:27 crc kubenswrapper[4782]: I0202 10:41:27.642538 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5852s" event={"ID":"2dbcee3c-23a7-4dfa-aa27-fccf3ff1873d","Type":"ContainerDied","Data":"e1f3c2a5262859f45791070cb15411b8e1b8e41e441cf2fae29b116544fe07c5"} Feb 02 10:41:27 crc kubenswrapper[4782]: I0202 10:41:27.653162 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 10:41:27 crc kubenswrapper[4782]: I0202 10:41:27.680603 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8tk99" event={"ID":"9beb5599-8c2d-4493-9561-cc2781d32052","Type":"ContainerStarted","Data":"568ce8fc0d55d9c475927a100a13079ea3c32843e1f085a43192f2b40f052173"} Feb 02 10:41:27 crc kubenswrapper[4782]: I0202 10:41:27.708401 4782 generic.go:334] "Generic (PLEG): container finished" podID="cf4514b1-5bb5-4e13-8c79-9bdbf62d6cde" containerID="a62af3fc6fe01245144104d3fe6fdbfa8c11138189c86d32c606e302f89c3d87" exitCode=0 Feb 02 10:41:27 crc kubenswrapper[4782]: I0202 10:41:27.708500 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8vzzf" event={"ID":"cf4514b1-5bb5-4e13-8c79-9bdbf62d6cde","Type":"ContainerDied","Data":"a62af3fc6fe01245144104d3fe6fdbfa8c11138189c86d32c606e302f89c3d87"} Feb 02 10:41:27 crc kubenswrapper[4782]: I0202 10:41:27.724549 4782 generic.go:334] "Generic (PLEG): container finished" podID="a893973e-e0b3-426e-8bf1-7902687b7036" containerID="c33cd738835a3312846866cf5dc7b1d9612aa55d5b9565677e13f823bd48c58c" exitCode=0 Feb 02 10:41:27 crc 
kubenswrapper[4782]: I0202 10:41:27.724709 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8g5bv" event={"ID":"a893973e-e0b3-426e-8bf1-7902687b7036","Type":"ContainerDied","Data":"c33cd738835a3312846866cf5dc7b1d9612aa55d5b9565677e13f823bd48c58c"} Feb 02 10:41:27 crc kubenswrapper[4782]: I0202 10:41:27.724753 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8g5bv" event={"ID":"a893973e-e0b3-426e-8bf1-7902687b7036","Type":"ContainerStarted","Data":"f8713e44a60ae45253bec2e5d10994fc19863aeccf7c6e956f5738780c8b26dd"} Feb 02 10:41:27 crc kubenswrapper[4782]: I0202 10:41:27.726706 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"eb0fd85c-ce56-4874-989e-20a0c304efd1","Type":"ContainerStarted","Data":"be2ba6b14538fcbdaa668607f0c40b729fb85d67aa69815ab5d50dcd6b55a725"} Feb 02 10:41:27 crc kubenswrapper[4782]: I0202 10:41:27.729988 4782 generic.go:334] "Generic (PLEG): container finished" podID="10039944-73fc-417b-925f-48a2985c277d" containerID="9da626531f7c4e48058eb7295c3b8546c7d0b9c6e3e487d7b3b44cfe31605b9e" exitCode=0 Feb 02 10:41:27 crc kubenswrapper[4782]: I0202 10:41:27.730025 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lxwg2" event={"ID":"10039944-73fc-417b-925f-48a2985c277d","Type":"ContainerDied","Data":"9da626531f7c4e48058eb7295c3b8546c7d0b9c6e3e487d7b3b44cfe31605b9e"} Feb 02 10:41:27 crc kubenswrapper[4782]: I0202 10:41:27.758279 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 02 10:41:27 crc kubenswrapper[4782]: I0202 10:41:27.782937 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-jxz27" Feb 02 10:41:28 crc kubenswrapper[4782]: I0202 10:41:28.042623 4782 patch_prober.go:28] interesting pod/router-default-5444994796-29qjf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 10:41:28 crc kubenswrapper[4782]: [-]has-synced failed: reason withheld Feb 02 10:41:28 crc kubenswrapper[4782]: [+]process-running ok Feb 02 10:41:28 crc kubenswrapper[4782]: healthz check failed Feb 02 10:41:28 crc kubenswrapper[4782]: I0202 10:41:28.043069 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-29qjf" podUID="fc962b97-f5d3-4673-9a39-8fbf6bc2424f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 10:41:28 crc kubenswrapper[4782]: I0202 10:41:28.051921 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=5.051894998 podStartE2EDuration="5.051894998s" podCreationTimestamp="2026-02-02 10:41:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:41:27.874310401 +0000 UTC m=+167.758503127" watchObservedRunningTime="2026-02-02 10:41:28.051894998 +0000 UTC m=+167.936087714" Feb 02 10:41:28 crc kubenswrapper[4782]: I0202 10:41:28.054674 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xmt8t"] Feb 02 10:41:28 crc kubenswrapper[4782]: I0202 10:41:28.085352 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-g65rt"] Feb 02 10:41:28 crc kubenswrapper[4782]: W0202 10:41:28.155698 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd9a718cd_1b6d_483f_b995_938331c7e00e.slice/crio-4bfe0b83f6a780843d1b621b1e211e51cf466967683c4406c5ee8fe51515e3e6 WatchSource:0}: Error finding container 4bfe0b83f6a780843d1b621b1e211e51cf466967683c4406c5ee8fe51515e3e6: Status 404 returned error can't find the container with id 4bfe0b83f6a780843d1b621b1e211e51cf466967683c4406c5ee8fe51515e3e6 Feb 02 10:41:28 crc kubenswrapper[4782]: I0202 10:41:28.541881 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-jxz27"] Feb 02 10:41:28 crc kubenswrapper[4782]: I0202 10:41:28.740622 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-jxz27" event={"ID":"4877e80d-a6fe-4503-a64c-398815efa1e0","Type":"ContainerStarted","Data":"a55c72e5f15ff42bfcfbbd5f83cbfe22e092ae45221bb6158bb15a9d235221ed"} Feb 02 10:41:28 crc kubenswrapper[4782]: I0202 10:41:28.742853 4782 generic.go:334] "Generic (PLEG): container finished" podID="9832aa65-d498-4a21-b53a-ebc591328a00" containerID="b85748eff3923d08bc6d620f725d7b018256a0e4610871950b9aeb66eccc2539" exitCode=0 Feb 02 10:41:28 crc kubenswrapper[4782]: I0202 10:41:28.742938 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500470-wxc6r" event={"ID":"9832aa65-d498-4a21-b53a-ebc591328a00","Type":"ContainerDied","Data":"b85748eff3923d08bc6d620f725d7b018256a0e4610871950b9aeb66eccc2539"} Feb 02 10:41:28 crc kubenswrapper[4782]: I0202 10:41:28.750097 4782 
generic.go:334] "Generic (PLEG): container finished" podID="d9a718cd-1b6d-483f-b995-938331c7e00e" containerID="79123b63701b446131df97ad86c3dc50da583013affff9da434e4160cc37422a" exitCode=0 Feb 02 10:41:28 crc kubenswrapper[4782]: I0202 10:41:28.750768 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g65rt" event={"ID":"d9a718cd-1b6d-483f-b995-938331c7e00e","Type":"ContainerDied","Data":"79123b63701b446131df97ad86c3dc50da583013affff9da434e4160cc37422a"} Feb 02 10:41:28 crc kubenswrapper[4782]: I0202 10:41:28.750930 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g65rt" event={"ID":"d9a718cd-1b6d-483f-b995-938331c7e00e","Type":"ContainerStarted","Data":"4bfe0b83f6a780843d1b621b1e211e51cf466967683c4406c5ee8fe51515e3e6"} Feb 02 10:41:28 crc kubenswrapper[4782]: I0202 10:41:28.754524 4782 generic.go:334] "Generic (PLEG): container finished" podID="9beb5599-8c2d-4493-9561-cc2781d32052" containerID="26a60990edb2535483d2ce67fefae5ee030fc62b28d11ee8aacbf346a5be05e1" exitCode=0 Feb 02 10:41:28 crc kubenswrapper[4782]: I0202 10:41:28.754698 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8tk99" event={"ID":"9beb5599-8c2d-4493-9561-cc2781d32052","Type":"ContainerDied","Data":"26a60990edb2535483d2ce67fefae5ee030fc62b28d11ee8aacbf346a5be05e1"} Feb 02 10:41:28 crc kubenswrapper[4782]: I0202 10:41:28.778249 4782 generic.go:334] "Generic (PLEG): container finished" podID="213698f8-d1b6-489f-8fc4-a69583d4fc2e" containerID="e85622bd784d09d56836a615239db13244c2d5b26841db53fd14e1ec1665771e" exitCode=0 Feb 02 10:41:28 crc kubenswrapper[4782]: I0202 10:41:28.778748 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xmt8t" event={"ID":"213698f8-d1b6-489f-8fc4-a69583d4fc2e","Type":"ContainerDied","Data":"e85622bd784d09d56836a615239db13244c2d5b26841db53fd14e1ec1665771e"} Feb 02 10:41:28 crc kubenswrapper[4782]: I0202 10:41:28.778817 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xmt8t" event={"ID":"213698f8-d1b6-489f-8fc4-a69583d4fc2e","Type":"ContainerStarted","Data":"c20f4c43562c9a26701d05b9c48459ab9215c0f89e3d7636a6006f20e7c4c9aa"} Feb 02 10:41:28 crc kubenswrapper[4782]: I0202 10:41:28.782147 4782 generic.go:334] "Generic (PLEG): container finished" podID="eb0fd85c-ce56-4874-989e-20a0c304efd1" containerID="be2ba6b14538fcbdaa668607f0c40b729fb85d67aa69815ab5d50dcd6b55a725" exitCode=0 Feb 02 10:41:28 crc kubenswrapper[4782]: I0202 10:41:28.782274 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"eb0fd85c-ce56-4874-989e-20a0c304efd1","Type":"ContainerDied","Data":"be2ba6b14538fcbdaa668607f0c40b729fb85d67aa69815ab5d50dcd6b55a725"} Feb 02 10:41:28 crc kubenswrapper[4782]: I0202 10:41:28.790391 4782 generic.go:334] "Generic (PLEG): container finished" podID="99330299-8910-4c41-b704-120a10eb799b" containerID="4552708a4e701a796f5721b8113e200d9629e895c4011cc06e8b6cc3535870a4" exitCode=0 Feb 02 10:41:28 crc kubenswrapper[4782]: I0202 10:41:28.790423 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-khjwl" event={"ID":"99330299-8910-4c41-b704-120a10eb799b","Type":"ContainerDied","Data":"4552708a4e701a796f5721b8113e200d9629e895c4011cc06e8b6cc3535870a4"} Feb 02 10:41:28 crc kubenswrapper[4782]: I0202 10:41:28.821459 4782 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-dns/dns-default-rx8sj" Feb 02 10:41:28 crc kubenswrapper[4782]: I0202 10:41:28.886400 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Feb 02 10:41:29 crc kubenswrapper[4782]: I0202 10:41:29.047165 4782 patch_prober.go:28] interesting pod/router-default-5444994796-29qjf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 10:41:29 crc kubenswrapper[4782]: [-]has-synced failed: reason withheld Feb 02 10:41:29 crc kubenswrapper[4782]: [+]process-running ok Feb 02 10:41:29 crc kubenswrapper[4782]: healthz check failed Feb 02 10:41:29 crc kubenswrapper[4782]: I0202 10:41:29.047286 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-29qjf" podUID="fc962b97-f5d3-4673-9a39-8fbf6bc2424f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 10:41:29 crc kubenswrapper[4782]: I0202 10:41:29.984327 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-jxz27" event={"ID":"4877e80d-a6fe-4503-a64c-398815efa1e0","Type":"ContainerStarted","Data":"9a008fac97312f4f6086015007e8d88cd2830bbd1da80845daf3cde642284642"} Feb 02 10:41:29 crc kubenswrapper[4782]: I0202 10:41:29.985297 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-jxz27" Feb 02 10:41:30 crc kubenswrapper[4782]: I0202 10:41:30.018916 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-jxz27" podStartSLOduration=142.018882354 podStartE2EDuration="2m22.018882354s" podCreationTimestamp="2026-02-02 10:39:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:41:30.010886433 +0000 UTC m=+169.895079149" watchObservedRunningTime="2026-02-02 10:41:30.018882354 +0000 UTC m=+169.903075070" Feb 02 10:41:30 crc kubenswrapper[4782]: I0202 10:41:30.042747 4782 patch_prober.go:28] interesting pod/router-default-5444994796-29qjf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 10:41:30 crc kubenswrapper[4782]: [-]has-synced failed: reason withheld Feb 02 10:41:30 crc kubenswrapper[4782]: [+]process-running ok Feb 02 10:41:30 crc kubenswrapper[4782]: healthz check failed Feb 02 10:41:30 crc kubenswrapper[4782]: I0202 10:41:30.043590 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-29qjf" podUID="fc962b97-f5d3-4673-9a39-8fbf6bc2424f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 10:41:30 crc kubenswrapper[4782]: I0202 10:41:30.524900 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 02 10:41:30 crc kubenswrapper[4782]: I0202 10:41:30.547988 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500470-wxc6r" Feb 02 10:41:30 crc kubenswrapper[4782]: I0202 10:41:30.578762 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4e23db96-3af7-4c29-b00f-5920a9431f01-metrics-certs\") pod \"network-metrics-daemon-tv4xc\" (UID: \"4e23db96-3af7-4c29-b00f-5920a9431f01\") " pod="openshift-multus/network-metrics-daemon-tv4xc" Feb 02 10:41:30 crc kubenswrapper[4782]: I0202 10:41:30.618338 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4e23db96-3af7-4c29-b00f-5920a9431f01-metrics-certs\") pod \"network-metrics-daemon-tv4xc\" (UID: \"4e23db96-3af7-4c29-b00f-5920a9431f01\") " pod="openshift-multus/network-metrics-daemon-tv4xc" Feb 02 10:41:30 crc kubenswrapper[4782]: I0202 10:41:30.646028 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tv4xc" Feb 02 10:41:30 crc kubenswrapper[4782]: I0202 10:41:30.682303 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9832aa65-d498-4a21-b53a-ebc591328a00-config-volume\") pod \"9832aa65-d498-4a21-b53a-ebc591328a00\" (UID: \"9832aa65-d498-4a21-b53a-ebc591328a00\") " Feb 02 10:41:30 crc kubenswrapper[4782]: I0202 10:41:30.682379 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kzgdp\" (UniqueName: \"kubernetes.io/projected/9832aa65-d498-4a21-b53a-ebc591328a00-kube-api-access-kzgdp\") pod \"9832aa65-d498-4a21-b53a-ebc591328a00\" (UID: \"9832aa65-d498-4a21-b53a-ebc591328a00\") " Feb 02 10:41:30 crc kubenswrapper[4782]: I0202 10:41:30.682439 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/eb0fd85c-ce56-4874-989e-20a0c304efd1-kubelet-dir\") pod \"eb0fd85c-ce56-4874-989e-20a0c304efd1\" (UID: \"eb0fd85c-ce56-4874-989e-20a0c304efd1\") " Feb 02 10:41:30 crc kubenswrapper[4782]: I0202 10:41:30.682486 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/eb0fd85c-ce56-4874-989e-20a0c304efd1-kube-api-access\") pod \"eb0fd85c-ce56-4874-989e-20a0c304efd1\" (UID: \"eb0fd85c-ce56-4874-989e-20a0c304efd1\") " Feb 02 10:41:30 crc kubenswrapper[4782]: I0202 10:41:30.682549 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9832aa65-d498-4a21-b53a-ebc591328a00-secret-volume\") pod \"9832aa65-d498-4a21-b53a-ebc591328a00\" (UID: \"9832aa65-d498-4a21-b53a-ebc591328a00\") " Feb 02 10:41:30 crc kubenswrapper[4782]: I0202 10:41:30.684761 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eb0fd85c-ce56-4874-989e-20a0c304efd1-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "eb0fd85c-ce56-4874-989e-20a0c304efd1" (UID: "eb0fd85c-ce56-4874-989e-20a0c304efd1"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 10:41:30 crc kubenswrapper[4782]: I0202 10:41:30.688934 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9832aa65-d498-4a21-b53a-ebc591328a00-config-volume" (OuterVolumeSpecName: "config-volume") pod "9832aa65-d498-4a21-b53a-ebc591328a00" (UID: "9832aa65-d498-4a21-b53a-ebc591328a00"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:41:30 crc kubenswrapper[4782]: I0202 10:41:30.691999 4782 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9832aa65-d498-4a21-b53a-ebc591328a00-config-volume\") on node \"crc\" DevicePath \"\"" Feb 02 10:41:30 crc kubenswrapper[4782]: I0202 10:41:30.692040 4782 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/eb0fd85c-ce56-4874-989e-20a0c304efd1-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 02 10:41:30 crc kubenswrapper[4782]: I0202 10:41:30.705436 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9832aa65-d498-4a21-b53a-ebc591328a00-kube-api-access-kzgdp" (OuterVolumeSpecName: "kube-api-access-kzgdp") pod "9832aa65-d498-4a21-b53a-ebc591328a00" (UID: "9832aa65-d498-4a21-b53a-ebc591328a00"). InnerVolumeSpecName "kube-api-access-kzgdp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:41:30 crc kubenswrapper[4782]: I0202 10:41:30.706100 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9832aa65-d498-4a21-b53a-ebc591328a00-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "9832aa65-d498-4a21-b53a-ebc591328a00" (UID: "9832aa65-d498-4a21-b53a-ebc591328a00"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:41:30 crc kubenswrapper[4782]: I0202 10:41:30.706249 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb0fd85c-ce56-4874-989e-20a0c304efd1-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "eb0fd85c-ce56-4874-989e-20a0c304efd1" (UID: "eb0fd85c-ce56-4874-989e-20a0c304efd1"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:41:30 crc kubenswrapper[4782]: I0202 10:41:30.795255 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kzgdp\" (UniqueName: \"kubernetes.io/projected/9832aa65-d498-4a21-b53a-ebc591328a00-kube-api-access-kzgdp\") on node \"crc\" DevicePath \"\"" Feb 02 10:41:30 crc kubenswrapper[4782]: I0202 10:41:30.795295 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/eb0fd85c-ce56-4874-989e-20a0c304efd1-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 02 10:41:30 crc kubenswrapper[4782]: I0202 10:41:30.795306 4782 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9832aa65-d498-4a21-b53a-ebc591328a00-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 02 10:41:31 crc kubenswrapper[4782]: I0202 10:41:31.010508 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 02 10:41:31 crc kubenswrapper[4782]: I0202 10:41:31.011028 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"eb0fd85c-ce56-4874-989e-20a0c304efd1","Type":"ContainerDied","Data":"73b78820ba7c680a8424285974274eccad1462f8f885155bd360dedc3c4644b7"} Feb 02 10:41:31 crc kubenswrapper[4782]: I0202 10:41:31.011068 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="73b78820ba7c680a8424285974274eccad1462f8f885155bd360dedc3c4644b7" Feb 02 10:41:31 crc kubenswrapper[4782]: I0202 10:41:31.025185 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500470-wxc6r" Feb 02 10:41:31 crc kubenswrapper[4782]: I0202 10:41:31.025676 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500470-wxc6r" event={"ID":"9832aa65-d498-4a21-b53a-ebc591328a00","Type":"ContainerDied","Data":"366cbdbf60f3a0bf6cbbfb63bc5e1d0a80ef38264552b527daf3339d1fbe1798"} Feb 02 10:41:31 crc kubenswrapper[4782]: I0202 10:41:31.025714 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="366cbdbf60f3a0bf6cbbfb63bc5e1d0a80ef38264552b527daf3339d1fbe1798" Feb 02 10:41:31 crc kubenswrapper[4782]: I0202 10:41:31.040322 4782 patch_prober.go:28] interesting pod/router-default-5444994796-29qjf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 10:41:31 crc kubenswrapper[4782]: [-]has-synced failed: reason withheld Feb 02 10:41:31 crc kubenswrapper[4782]: [+]process-running ok Feb 02 10:41:31 crc kubenswrapper[4782]: healthz check failed Feb 02 10:41:31 crc kubenswrapper[4782]: I0202 10:41:31.040440 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-29qjf" podUID="fc962b97-f5d3-4673-9a39-8fbf6bc2424f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 10:41:31 crc kubenswrapper[4782]: I0202 10:41:31.320816 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-tv4xc"] Feb 02 10:41:31 crc kubenswrapper[4782]: W0202 10:41:31.370122 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4e23db96_3af7_4c29_b00f_5920a9431f01.slice/crio-0f9ec78ec7544a3c8adf48740082794167923bddb4d6e61c462ee066c8eef472 WatchSource:0}: Error finding container 0f9ec78ec7544a3c8adf48740082794167923bddb4d6e61c462ee066c8eef472: Status 404 returned error can't find the container with id 0f9ec78ec7544a3c8adf48740082794167923bddb4d6e61c462ee066c8eef472 Feb 02 10:41:31 crc kubenswrapper[4782]: I0202 10:41:31.830718 4782 patch_prober.go:28] interesting pod/downloads-7954f5f757-4b45h container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" start-of-body= Feb 02 10:41:31 crc kubenswrapper[4782]: I0202 10:41:31.830826 4782 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-4b45h" podUID="e74c7e17-c70b-4637-ad47-58e1e192c52e" containerName="download-server" probeResult="failure" output="Get 
\"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" Feb 02 10:41:31 crc kubenswrapper[4782]: I0202 10:41:31.830948 4782 patch_prober.go:28] interesting pod/downloads-7954f5f757-4b45h container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" start-of-body= Feb 02 10:41:31 crc kubenswrapper[4782]: I0202 10:41:31.831034 4782 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-4b45h" podUID="e74c7e17-c70b-4637-ad47-58e1e192c52e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" Feb 02 10:41:31 crc kubenswrapper[4782]: I0202 10:41:31.970571 4782 patch_prober.go:28] interesting pod/console-f9d7485db-sf9m8 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.22:8443/health\": dial tcp 10.217.0.22:8443: connect: connection refused" start-of-body= Feb 02 10:41:31 crc kubenswrapper[4782]: I0202 10:41:31.970788 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-sf9m8" podUID="76afda26-696c-4996-bc58-1c928e4fa92a" containerName="console" probeResult="failure" output="Get \"https://10.217.0.22:8443/health\": dial tcp 10.217.0.22:8443: connect: connection refused" Feb 02 10:41:32 crc kubenswrapper[4782]: I0202 10:41:32.007954 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-z8gmg" Feb 02 10:41:32 crc kubenswrapper[4782]: I0202 10:41:32.015148 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-z8gmg" Feb 02 10:41:32 crc kubenswrapper[4782]: I0202 10:41:32.042230 4782 patch_prober.go:28] interesting pod/router-default-5444994796-29qjf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 10:41:32 crc kubenswrapper[4782]: [-]has-synced failed: reason withheld Feb 02 10:41:32 crc kubenswrapper[4782]: [+]process-running ok Feb 02 10:41:32 crc kubenswrapper[4782]: healthz check failed Feb 02 10:41:32 crc kubenswrapper[4782]: I0202 10:41:32.042307 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-29qjf" podUID="fc962b97-f5d3-4673-9a39-8fbf6bc2424f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 10:41:32 crc kubenswrapper[4782]: I0202 10:41:32.116678 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-tv4xc" event={"ID":"4e23db96-3af7-4c29-b00f-5920a9431f01","Type":"ContainerStarted","Data":"0f9ec78ec7544a3c8adf48740082794167923bddb4d6e61c462ee066c8eef472"} Feb 02 10:41:33 crc kubenswrapper[4782]: I0202 10:41:33.042680 4782 patch_prober.go:28] interesting pod/router-default-5444994796-29qjf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 10:41:33 crc kubenswrapper[4782]: [-]has-synced failed: reason withheld Feb 02 10:41:33 crc kubenswrapper[4782]: [+]process-running ok Feb 02 10:41:33 crc kubenswrapper[4782]: healthz check failed Feb 02 10:41:33 crc 
kubenswrapper[4782]: I0202 10:41:33.043049 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-29qjf" podUID="fc962b97-f5d3-4673-9a39-8fbf6bc2424f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 10:41:33 crc kubenswrapper[4782]: I0202 10:41:33.191071 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-tv4xc" event={"ID":"4e23db96-3af7-4c29-b00f-5920a9431f01","Type":"ContainerStarted","Data":"500187f503dbccc37b0645a02fa6097bb379cc819c65ca3847c0b5dd1c498f5d"} Feb 02 10:41:34 crc kubenswrapper[4782]: I0202 10:41:34.039516 4782 patch_prober.go:28] interesting pod/router-default-5444994796-29qjf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 10:41:34 crc kubenswrapper[4782]: [-]has-synced failed: reason withheld Feb 02 10:41:34 crc kubenswrapper[4782]: [+]process-running ok Feb 02 10:41:34 crc kubenswrapper[4782]: healthz check failed Feb 02 10:41:34 crc kubenswrapper[4782]: I0202 10:41:34.039698 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-29qjf" podUID="fc962b97-f5d3-4673-9a39-8fbf6bc2424f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 10:41:34 crc kubenswrapper[4782]: I0202 10:41:34.256089 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-tv4xc" event={"ID":"4e23db96-3af7-4c29-b00f-5920a9431f01","Type":"ContainerStarted","Data":"95af337b0c1919520f82974ea1565d0b6269697b6423ebe2aabf4b5dbba97ff3"} Feb 02 10:41:34 crc kubenswrapper[4782]: I0202 10:41:34.283924 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-tv4xc" podStartSLOduration=146.283885501 podStartE2EDuration="2m26.283885501s" podCreationTimestamp="2026-02-02 10:39:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:41:34.281229595 +0000 UTC m=+174.165422321" watchObservedRunningTime="2026-02-02 10:41:34.283885501 +0000 UTC m=+174.168078217" Feb 02 10:41:35 crc kubenswrapper[4782]: I0202 10:41:35.042293 4782 patch_prober.go:28] interesting pod/router-default-5444994796-29qjf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 10:41:35 crc kubenswrapper[4782]: [-]has-synced failed: reason withheld Feb 02 10:41:35 crc kubenswrapper[4782]: [+]process-running ok Feb 02 10:41:35 crc kubenswrapper[4782]: healthz check failed Feb 02 10:41:35 crc kubenswrapper[4782]: I0202 10:41:35.042627 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-29qjf" podUID="fc962b97-f5d3-4673-9a39-8fbf6bc2424f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 10:41:36 crc kubenswrapper[4782]: I0202 10:41:36.041833 4782 patch_prober.go:28] interesting pod/router-default-5444994796-29qjf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 10:41:36 crc kubenswrapper[4782]: [-]has-synced failed: 
reason withheld Feb 02 10:41:36 crc kubenswrapper[4782]: [+]process-running ok Feb 02 10:41:36 crc kubenswrapper[4782]: healthz check failed Feb 02 10:41:36 crc kubenswrapper[4782]: I0202 10:41:36.041897 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-29qjf" podUID="fc962b97-f5d3-4673-9a39-8fbf6bc2424f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 10:41:37 crc kubenswrapper[4782]: I0202 10:41:37.040022 4782 patch_prober.go:28] interesting pod/router-default-5444994796-29qjf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 10:41:37 crc kubenswrapper[4782]: [-]has-synced failed: reason withheld Feb 02 10:41:37 crc kubenswrapper[4782]: [+]process-running ok Feb 02 10:41:37 crc kubenswrapper[4782]: healthz check failed Feb 02 10:41:37 crc kubenswrapper[4782]: I0202 10:41:37.040081 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-29qjf" podUID="fc962b97-f5d3-4673-9a39-8fbf6bc2424f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 10:41:38 crc kubenswrapper[4782]: I0202 10:41:38.039851 4782 patch_prober.go:28] interesting pod/router-default-5444994796-29qjf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 10:41:38 crc kubenswrapper[4782]: [-]has-synced failed: reason withheld Feb 02 10:41:38 crc kubenswrapper[4782]: [+]process-running ok Feb 02 10:41:38 crc kubenswrapper[4782]: healthz check failed Feb 02 10:41:38 crc kubenswrapper[4782]: I0202 10:41:38.040247 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-29qjf" podUID="fc962b97-f5d3-4673-9a39-8fbf6bc2424f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 10:41:38 crc kubenswrapper[4782]: I0202 10:41:38.600812 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-l2hps"] Feb 02 10:41:38 crc kubenswrapper[4782]: I0202 10:41:38.601008 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-l2hps" podUID="d6b03c59-eb07-4d99-beb5-04e1eb19c7bc" containerName="controller-manager" containerID="cri-o://6845853f244b2c75c99b177a0d1190d48806df172d59ab6bff1e9c7722883a01" gracePeriod=30 Feb 02 10:41:38 crc kubenswrapper[4782]: I0202 10:41:38.646028 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-96t4g"] Feb 02 10:41:38 crc kubenswrapper[4782]: I0202 10:41:38.646240 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-96t4g" podUID="59a1b37a-9035-459b-a485-280325d33264" containerName="route-controller-manager" containerID="cri-o://43da730602ca37219a75d1347b35a8488feb8647bfa755b3e8e2deac39ad1b1b" gracePeriod=30 Feb 02 10:41:39 crc kubenswrapper[4782]: I0202 10:41:39.043153 4782 patch_prober.go:28] interesting pod/router-default-5444994796-29qjf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with 
statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 10:41:39 crc kubenswrapper[4782]: [-]has-synced failed: reason withheld Feb 02 10:41:39 crc kubenswrapper[4782]: [+]process-running ok Feb 02 10:41:39 crc kubenswrapper[4782]: healthz check failed Feb 02 10:41:39 crc kubenswrapper[4782]: I0202 10:41:39.043235 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-29qjf" podUID="fc962b97-f5d3-4673-9a39-8fbf6bc2424f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 10:41:39 crc kubenswrapper[4782]: I0202 10:41:39.378258 4782 generic.go:334] "Generic (PLEG): container finished" podID="d6b03c59-eb07-4d99-beb5-04e1eb19c7bc" containerID="6845853f244b2c75c99b177a0d1190d48806df172d59ab6bff1e9c7722883a01" exitCode=0 Feb 02 10:41:39 crc kubenswrapper[4782]: I0202 10:41:39.378340 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-l2hps" event={"ID":"d6b03c59-eb07-4d99-beb5-04e1eb19c7bc","Type":"ContainerDied","Data":"6845853f244b2c75c99b177a0d1190d48806df172d59ab6bff1e9c7722883a01"} Feb 02 10:41:39 crc kubenswrapper[4782]: I0202 10:41:39.380549 4782 generic.go:334] "Generic (PLEG): container finished" podID="59a1b37a-9035-459b-a485-280325d33264" containerID="43da730602ca37219a75d1347b35a8488feb8647bfa755b3e8e2deac39ad1b1b" exitCode=0 Feb 02 10:41:39 crc kubenswrapper[4782]: I0202 10:41:39.380583 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-96t4g" event={"ID":"59a1b37a-9035-459b-a485-280325d33264","Type":"ContainerDied","Data":"43da730602ca37219a75d1347b35a8488feb8647bfa755b3e8e2deac39ad1b1b"} Feb 02 10:41:40 crc kubenswrapper[4782]: I0202 10:41:40.039701 4782 patch_prober.go:28] interesting pod/router-default-5444994796-29qjf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 10:41:40 crc kubenswrapper[4782]: [-]has-synced failed: reason withheld Feb 02 10:41:40 crc kubenswrapper[4782]: [+]process-running ok Feb 02 10:41:40 crc kubenswrapper[4782]: healthz check failed Feb 02 10:41:40 crc kubenswrapper[4782]: I0202 10:41:40.040116 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-29qjf" podUID="fc962b97-f5d3-4673-9a39-8fbf6bc2424f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 10:41:41 crc kubenswrapper[4782]: I0202 10:41:41.051467 4782 patch_prober.go:28] interesting pod/router-default-5444994796-29qjf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 10:41:41 crc kubenswrapper[4782]: [-]has-synced failed: reason withheld Feb 02 10:41:41 crc kubenswrapper[4782]: [+]process-running ok Feb 02 10:41:41 crc kubenswrapper[4782]: healthz check failed Feb 02 10:41:41 crc kubenswrapper[4782]: I0202 10:41:41.051946 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-29qjf" podUID="fc962b97-f5d3-4673-9a39-8fbf6bc2424f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 10:41:41 crc kubenswrapper[4782]: I0202 10:41:41.830453 4782 patch_prober.go:28] interesting 
pod/downloads-7954f5f757-4b45h container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" start-of-body= Feb 02 10:41:41 crc kubenswrapper[4782]: I0202 10:41:41.830504 4782 patch_prober.go:28] interesting pod/downloads-7954f5f757-4b45h container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" start-of-body= Feb 02 10:41:41 crc kubenswrapper[4782]: I0202 10:41:41.830509 4782 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-4b45h" podUID="e74c7e17-c70b-4637-ad47-58e1e192c52e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" Feb 02 10:41:41 crc kubenswrapper[4782]: I0202 10:41:41.830567 4782 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-7954f5f757-4b45h" Feb 02 10:41:41 crc kubenswrapper[4782]: I0202 10:41:41.830556 4782 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-4b45h" podUID="e74c7e17-c70b-4637-ad47-58e1e192c52e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" Feb 02 10:41:41 crc kubenswrapper[4782]: I0202 10:41:41.831015 4782 patch_prober.go:28] interesting pod/downloads-7954f5f757-4b45h container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" start-of-body= Feb 02 10:41:41 crc kubenswrapper[4782]: I0202 10:41:41.831046 4782 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-4b45h" podUID="e74c7e17-c70b-4637-ad47-58e1e192c52e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" Feb 02 10:41:41 crc kubenswrapper[4782]: I0202 10:41:41.831339 4782 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"10f247f0ec7d89e86c0b592dc814ce67e3e07cafde8483a429f5e7c5f241e65d"} pod="openshift-console/downloads-7954f5f757-4b45h" containerMessage="Container download-server failed liveness probe, will be restarted" Feb 02 10:41:41 crc kubenswrapper[4782]: I0202 10:41:41.831509 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/downloads-7954f5f757-4b45h" podUID="e74c7e17-c70b-4637-ad47-58e1e192c52e" containerName="download-server" containerID="cri-o://10f247f0ec7d89e86c0b592dc814ce67e3e07cafde8483a429f5e7c5f241e65d" gracePeriod=2 Feb 02 10:41:41 crc kubenswrapper[4782]: I0202 10:41:41.923421 4782 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-96t4g container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Feb 02 10:41:41 crc kubenswrapper[4782]: I0202 10:41:41.923489 4782 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-96t4g" 
podUID="59a1b37a-9035-459b-a485-280325d33264" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" Feb 02 10:41:41 crc kubenswrapper[4782]: I0202 10:41:41.971826 4782 patch_prober.go:28] interesting pod/console-f9d7485db-sf9m8 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.22:8443/health\": dial tcp 10.217.0.22:8443: connect: connection refused" start-of-body= Feb 02 10:41:41 crc kubenswrapper[4782]: I0202 10:41:41.971888 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-sf9m8" podUID="76afda26-696c-4996-bc58-1c928e4fa92a" containerName="console" probeResult="failure" output="Get \"https://10.217.0.22:8443/health\": dial tcp 10.217.0.22:8443: connect: connection refused" Feb 02 10:41:42 crc kubenswrapper[4782]: I0202 10:41:42.039138 4782 patch_prober.go:28] interesting pod/router-default-5444994796-29qjf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 10:41:42 crc kubenswrapper[4782]: [-]has-synced failed: reason withheld Feb 02 10:41:42 crc kubenswrapper[4782]: [+]process-running ok Feb 02 10:41:42 crc kubenswrapper[4782]: healthz check failed Feb 02 10:41:42 crc kubenswrapper[4782]: I0202 10:41:42.039197 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-29qjf" podUID="fc962b97-f5d3-4673-9a39-8fbf6bc2424f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 10:41:42 crc kubenswrapper[4782]: I0202 10:41:42.345103 4782 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-l2hps container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.14:8443/healthz\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body= Feb 02 10:41:42 crc kubenswrapper[4782]: I0202 10:41:42.345174 4782 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-l2hps" podUID="d6b03c59-eb07-4d99-beb5-04e1eb19c7bc" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.14:8443/healthz\": dial tcp 10.217.0.14:8443: connect: connection refused" Feb 02 10:41:43 crc kubenswrapper[4782]: I0202 10:41:43.040215 4782 patch_prober.go:28] interesting pod/router-default-5444994796-29qjf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 10:41:43 crc kubenswrapper[4782]: [-]has-synced failed: reason withheld Feb 02 10:41:43 crc kubenswrapper[4782]: [+]process-running ok Feb 02 10:41:43 crc kubenswrapper[4782]: healthz check failed Feb 02 10:41:43 crc kubenswrapper[4782]: I0202 10:41:43.040311 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-29qjf" podUID="fc962b97-f5d3-4673-9a39-8fbf6bc2424f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 10:41:43 crc kubenswrapper[4782]: I0202 10:41:43.427689 4782 generic.go:334] "Generic (PLEG): container finished" podID="e74c7e17-c70b-4637-ad47-58e1e192c52e" 
containerID="10f247f0ec7d89e86c0b592dc814ce67e3e07cafde8483a429f5e7c5f241e65d" exitCode=0 Feb 02 10:41:43 crc kubenswrapper[4782]: I0202 10:41:43.427736 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-4b45h" event={"ID":"e74c7e17-c70b-4637-ad47-58e1e192c52e","Type":"ContainerDied","Data":"10f247f0ec7d89e86c0b592dc814ce67e3e07cafde8483a429f5e7c5f241e65d"} Feb 02 10:41:44 crc kubenswrapper[4782]: I0202 10:41:44.040796 4782 patch_prober.go:28] interesting pod/router-default-5444994796-29qjf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 10:41:44 crc kubenswrapper[4782]: [-]has-synced failed: reason withheld Feb 02 10:41:44 crc kubenswrapper[4782]: [+]process-running ok Feb 02 10:41:44 crc kubenswrapper[4782]: healthz check failed Feb 02 10:41:44 crc kubenswrapper[4782]: I0202 10:41:44.041555 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-29qjf" podUID="fc962b97-f5d3-4673-9a39-8fbf6bc2424f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 10:41:45 crc kubenswrapper[4782]: I0202 10:41:45.039169 4782 patch_prober.go:28] interesting pod/router-default-5444994796-29qjf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 10:41:45 crc kubenswrapper[4782]: [-]has-synced failed: reason withheld Feb 02 10:41:45 crc kubenswrapper[4782]: [+]process-running ok Feb 02 10:41:45 crc kubenswrapper[4782]: healthz check failed Feb 02 10:41:45 crc kubenswrapper[4782]: I0202 10:41:45.039265 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-29qjf" podUID="fc962b97-f5d3-4673-9a39-8fbf6bc2424f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 10:41:46 crc kubenswrapper[4782]: I0202 10:41:46.040145 4782 patch_prober.go:28] interesting pod/router-default-5444994796-29qjf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 10:41:46 crc kubenswrapper[4782]: [-]has-synced failed: reason withheld Feb 02 10:41:46 crc kubenswrapper[4782]: [+]process-running ok Feb 02 10:41:46 crc kubenswrapper[4782]: healthz check failed Feb 02 10:41:46 crc kubenswrapper[4782]: I0202 10:41:46.040213 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-29qjf" podUID="fc962b97-f5d3-4673-9a39-8fbf6bc2424f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 10:41:47 crc kubenswrapper[4782]: I0202 10:41:47.039223 4782 patch_prober.go:28] interesting pod/router-default-5444994796-29qjf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 10:41:47 crc kubenswrapper[4782]: [-]has-synced failed: reason withheld Feb 02 10:41:47 crc kubenswrapper[4782]: [+]process-running ok Feb 02 10:41:47 crc kubenswrapper[4782]: healthz check failed Feb 02 10:41:47 crc kubenswrapper[4782]: I0202 10:41:47.039285 4782 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-5444994796-29qjf" podUID="fc962b97-f5d3-4673-9a39-8fbf6bc2424f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 10:41:47 crc kubenswrapper[4782]: I0202 10:41:47.794889 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-jxz27" Feb 02 10:41:48 crc kubenswrapper[4782]: I0202 10:41:48.039225 4782 patch_prober.go:28] interesting pod/router-default-5444994796-29qjf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 10:41:48 crc kubenswrapper[4782]: [-]has-synced failed: reason withheld Feb 02 10:41:48 crc kubenswrapper[4782]: [+]process-running ok Feb 02 10:41:48 crc kubenswrapper[4782]: healthz check failed Feb 02 10:41:48 crc kubenswrapper[4782]: I0202 10:41:48.039299 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-29qjf" podUID="fc962b97-f5d3-4673-9a39-8fbf6bc2424f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 10:41:49 crc kubenswrapper[4782]: I0202 10:41:49.039078 4782 patch_prober.go:28] interesting pod/router-default-5444994796-29qjf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 10:41:49 crc kubenswrapper[4782]: [-]has-synced failed: reason withheld Feb 02 10:41:49 crc kubenswrapper[4782]: [+]process-running ok Feb 02 10:41:49 crc kubenswrapper[4782]: healthz check failed Feb 02 10:41:49 crc kubenswrapper[4782]: I0202 10:41:49.039135 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-29qjf" podUID="fc962b97-f5d3-4673-9a39-8fbf6bc2424f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 10:41:49 crc kubenswrapper[4782]: I0202 10:41:49.125894 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 10:41:50 crc kubenswrapper[4782]: I0202 10:41:50.040184 4782 patch_prober.go:28] interesting pod/router-default-5444994796-29qjf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 10:41:50 crc kubenswrapper[4782]: [-]has-synced failed: reason withheld Feb 02 10:41:50 crc kubenswrapper[4782]: [+]process-running ok Feb 02 10:41:50 crc kubenswrapper[4782]: healthz check failed Feb 02 10:41:50 crc kubenswrapper[4782]: I0202 10:41:50.040272 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-29qjf" podUID="fc962b97-f5d3-4673-9a39-8fbf6bc2424f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 10:41:50 crc kubenswrapper[4782]: I0202 10:41:50.180564 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-l2hps" Feb 02 10:41:50 crc kubenswrapper[4782]: I0202 10:41:50.218271 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-cbd88d5cd-55tml"] Feb 02 10:41:50 crc kubenswrapper[4782]: E0202 10:41:50.218756 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb0fd85c-ce56-4874-989e-20a0c304efd1" containerName="pruner" Feb 02 10:41:50 crc kubenswrapper[4782]: I0202 10:41:50.218784 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb0fd85c-ce56-4874-989e-20a0c304efd1" containerName="pruner" Feb 02 10:41:50 crc kubenswrapper[4782]: E0202 10:41:50.218798 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6b03c59-eb07-4d99-beb5-04e1eb19c7bc" containerName="controller-manager" Feb 02 10:41:50 crc kubenswrapper[4782]: I0202 10:41:50.218806 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6b03c59-eb07-4d99-beb5-04e1eb19c7bc" containerName="controller-manager" Feb 02 10:41:50 crc kubenswrapper[4782]: E0202 10:41:50.218820 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9832aa65-d498-4a21-b53a-ebc591328a00" containerName="collect-profiles" Feb 02 10:41:50 crc kubenswrapper[4782]: I0202 10:41:50.218830 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="9832aa65-d498-4a21-b53a-ebc591328a00" containerName="collect-profiles" Feb 02 10:41:50 crc kubenswrapper[4782]: I0202 10:41:50.218941 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb0fd85c-ce56-4874-989e-20a0c304efd1" containerName="pruner" Feb 02 10:41:50 crc kubenswrapper[4782]: I0202 10:41:50.218956 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="9832aa65-d498-4a21-b53a-ebc591328a00" containerName="collect-profiles" Feb 02 10:41:50 crc kubenswrapper[4782]: I0202 10:41:50.218963 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6b03c59-eb07-4d99-beb5-04e1eb19c7bc" containerName="controller-manager" Feb 02 10:41:50 crc kubenswrapper[4782]: I0202 10:41:50.220012 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-cbd88d5cd-55tml" Feb 02 10:41:50 crc kubenswrapper[4782]: I0202 10:41:50.231422 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-cbd88d5cd-55tml"] Feb 02 10:41:50 crc kubenswrapper[4782]: I0202 10:41:50.320396 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6b03c59-eb07-4d99-beb5-04e1eb19c7bc-config\") pod \"d6b03c59-eb07-4d99-beb5-04e1eb19c7bc\" (UID: \"d6b03c59-eb07-4d99-beb5-04e1eb19c7bc\") " Feb 02 10:41:50 crc kubenswrapper[4782]: I0202 10:41:50.320457 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7dqmf\" (UniqueName: \"kubernetes.io/projected/d6b03c59-eb07-4d99-beb5-04e1eb19c7bc-kube-api-access-7dqmf\") pod \"d6b03c59-eb07-4d99-beb5-04e1eb19c7bc\" (UID: \"d6b03c59-eb07-4d99-beb5-04e1eb19c7bc\") " Feb 02 10:41:50 crc kubenswrapper[4782]: I0202 10:41:50.320584 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d6b03c59-eb07-4d99-beb5-04e1eb19c7bc-serving-cert\") pod \"d6b03c59-eb07-4d99-beb5-04e1eb19c7bc\" (UID: \"d6b03c59-eb07-4d99-beb5-04e1eb19c7bc\") " Feb 02 10:41:50 crc kubenswrapper[4782]: I0202 10:41:50.320616 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d6b03c59-eb07-4d99-beb5-04e1eb19c7bc-proxy-ca-bundles\") pod \"d6b03c59-eb07-4d99-beb5-04e1eb19c7bc\" (UID: \"d6b03c59-eb07-4d99-beb5-04e1eb19c7bc\") " Feb 02 10:41:50 crc kubenswrapper[4782]: I0202 10:41:50.320698 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d6b03c59-eb07-4d99-beb5-04e1eb19c7bc-client-ca\") pod \"d6b03c59-eb07-4d99-beb5-04e1eb19c7bc\" (UID: \"d6b03c59-eb07-4d99-beb5-04e1eb19c7bc\") " Feb 02 10:41:50 crc kubenswrapper[4782]: I0202 10:41:50.320907 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5d56934a-19d0-4c31-a6df-afcabaa1ed24-client-ca\") pod \"controller-manager-cbd88d5cd-55tml\" (UID: \"5d56934a-19d0-4c31-a6df-afcabaa1ed24\") " pod="openshift-controller-manager/controller-manager-cbd88d5cd-55tml" Feb 02 10:41:50 crc kubenswrapper[4782]: I0202 10:41:50.321001 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5d56934a-19d0-4c31-a6df-afcabaa1ed24-proxy-ca-bundles\") pod \"controller-manager-cbd88d5cd-55tml\" (UID: \"5d56934a-19d0-4c31-a6df-afcabaa1ed24\") " pod="openshift-controller-manager/controller-manager-cbd88d5cd-55tml" Feb 02 10:41:50 crc kubenswrapper[4782]: I0202 10:41:50.321039 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5d56934a-19d0-4c31-a6df-afcabaa1ed24-serving-cert\") pod \"controller-manager-cbd88d5cd-55tml\" (UID: \"5d56934a-19d0-4c31-a6df-afcabaa1ed24\") " pod="openshift-controller-manager/controller-manager-cbd88d5cd-55tml" Feb 02 10:41:50 crc kubenswrapper[4782]: I0202 10:41:50.321074 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g8gfr\" (UniqueName: 
\"kubernetes.io/projected/5d56934a-19d0-4c31-a6df-afcabaa1ed24-kube-api-access-g8gfr\") pod \"controller-manager-cbd88d5cd-55tml\" (UID: \"5d56934a-19d0-4c31-a6df-afcabaa1ed24\") " pod="openshift-controller-manager/controller-manager-cbd88d5cd-55tml" Feb 02 10:41:50 crc kubenswrapper[4782]: I0202 10:41:50.321125 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d56934a-19d0-4c31-a6df-afcabaa1ed24-config\") pod \"controller-manager-cbd88d5cd-55tml\" (UID: \"5d56934a-19d0-4c31-a6df-afcabaa1ed24\") " pod="openshift-controller-manager/controller-manager-cbd88d5cd-55tml" Feb 02 10:41:50 crc kubenswrapper[4782]: I0202 10:41:50.321858 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d6b03c59-eb07-4d99-beb5-04e1eb19c7bc-config" (OuterVolumeSpecName: "config") pod "d6b03c59-eb07-4d99-beb5-04e1eb19c7bc" (UID: "d6b03c59-eb07-4d99-beb5-04e1eb19c7bc"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:41:50 crc kubenswrapper[4782]: I0202 10:41:50.321872 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d6b03c59-eb07-4d99-beb5-04e1eb19c7bc-client-ca" (OuterVolumeSpecName: "client-ca") pod "d6b03c59-eb07-4d99-beb5-04e1eb19c7bc" (UID: "d6b03c59-eb07-4d99-beb5-04e1eb19c7bc"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:41:50 crc kubenswrapper[4782]: I0202 10:41:50.322122 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d6b03c59-eb07-4d99-beb5-04e1eb19c7bc-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "d6b03c59-eb07-4d99-beb5-04e1eb19c7bc" (UID: "d6b03c59-eb07-4d99-beb5-04e1eb19c7bc"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:41:50 crc kubenswrapper[4782]: I0202 10:41:50.333964 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d6b03c59-eb07-4d99-beb5-04e1eb19c7bc-kube-api-access-7dqmf" (OuterVolumeSpecName: "kube-api-access-7dqmf") pod "d6b03c59-eb07-4d99-beb5-04e1eb19c7bc" (UID: "d6b03c59-eb07-4d99-beb5-04e1eb19c7bc"). InnerVolumeSpecName "kube-api-access-7dqmf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:41:50 crc kubenswrapper[4782]: I0202 10:41:50.339720 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d6b03c59-eb07-4d99-beb5-04e1eb19c7bc-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d6b03c59-eb07-4d99-beb5-04e1eb19c7bc" (UID: "d6b03c59-eb07-4d99-beb5-04e1eb19c7bc"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:41:50 crc kubenswrapper[4782]: I0202 10:41:50.421896 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5d56934a-19d0-4c31-a6df-afcabaa1ed24-serving-cert\") pod \"controller-manager-cbd88d5cd-55tml\" (UID: \"5d56934a-19d0-4c31-a6df-afcabaa1ed24\") " pod="openshift-controller-manager/controller-manager-cbd88d5cd-55tml" Feb 02 10:41:50 crc kubenswrapper[4782]: I0202 10:41:50.421954 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g8gfr\" (UniqueName: \"kubernetes.io/projected/5d56934a-19d0-4c31-a6df-afcabaa1ed24-kube-api-access-g8gfr\") pod \"controller-manager-cbd88d5cd-55tml\" (UID: \"5d56934a-19d0-4c31-a6df-afcabaa1ed24\") " pod="openshift-controller-manager/controller-manager-cbd88d5cd-55tml" Feb 02 10:41:50 crc kubenswrapper[4782]: I0202 10:41:50.421990 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d56934a-19d0-4c31-a6df-afcabaa1ed24-config\") pod \"controller-manager-cbd88d5cd-55tml\" (UID: \"5d56934a-19d0-4c31-a6df-afcabaa1ed24\") " pod="openshift-controller-manager/controller-manager-cbd88d5cd-55tml" Feb 02 10:41:50 crc kubenswrapper[4782]: I0202 10:41:50.422040 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5d56934a-19d0-4c31-a6df-afcabaa1ed24-client-ca\") pod \"controller-manager-cbd88d5cd-55tml\" (UID: \"5d56934a-19d0-4c31-a6df-afcabaa1ed24\") " pod="openshift-controller-manager/controller-manager-cbd88d5cd-55tml" Feb 02 10:41:50 crc kubenswrapper[4782]: I0202 10:41:50.422068 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5d56934a-19d0-4c31-a6df-afcabaa1ed24-proxy-ca-bundles\") pod \"controller-manager-cbd88d5cd-55tml\" (UID: \"5d56934a-19d0-4c31-a6df-afcabaa1ed24\") " pod="openshift-controller-manager/controller-manager-cbd88d5cd-55tml" Feb 02 10:41:50 crc kubenswrapper[4782]: I0202 10:41:50.422108 4782 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d6b03c59-eb07-4d99-beb5-04e1eb19c7bc-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 10:41:50 crc kubenswrapper[4782]: I0202 10:41:50.422118 4782 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d6b03c59-eb07-4d99-beb5-04e1eb19c7bc-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 02 10:41:50 crc kubenswrapper[4782]: I0202 10:41:50.422131 4782 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d6b03c59-eb07-4d99-beb5-04e1eb19c7bc-client-ca\") on node \"crc\" DevicePath \"\"" Feb 02 10:41:50 crc kubenswrapper[4782]: I0202 10:41:50.422141 4782 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6b03c59-eb07-4d99-beb5-04e1eb19c7bc-config\") on node \"crc\" DevicePath \"\"" Feb 02 10:41:50 crc kubenswrapper[4782]: I0202 10:41:50.422150 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7dqmf\" (UniqueName: \"kubernetes.io/projected/d6b03c59-eb07-4d99-beb5-04e1eb19c7bc-kube-api-access-7dqmf\") on node \"crc\" DevicePath \"\"" Feb 02 10:41:50 crc kubenswrapper[4782]: I0202 10:41:50.423314 4782 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5d56934a-19d0-4c31-a6df-afcabaa1ed24-client-ca\") pod \"controller-manager-cbd88d5cd-55tml\" (UID: \"5d56934a-19d0-4c31-a6df-afcabaa1ed24\") " pod="openshift-controller-manager/controller-manager-cbd88d5cd-55tml" Feb 02 10:41:50 crc kubenswrapper[4782]: I0202 10:41:50.423317 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5d56934a-19d0-4c31-a6df-afcabaa1ed24-proxy-ca-bundles\") pod \"controller-manager-cbd88d5cd-55tml\" (UID: \"5d56934a-19d0-4c31-a6df-afcabaa1ed24\") " pod="openshift-controller-manager/controller-manager-cbd88d5cd-55tml" Feb 02 10:41:50 crc kubenswrapper[4782]: I0202 10:41:50.425600 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d56934a-19d0-4c31-a6df-afcabaa1ed24-config\") pod \"controller-manager-cbd88d5cd-55tml\" (UID: \"5d56934a-19d0-4c31-a6df-afcabaa1ed24\") " pod="openshift-controller-manager/controller-manager-cbd88d5cd-55tml" Feb 02 10:41:50 crc kubenswrapper[4782]: I0202 10:41:50.426892 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5d56934a-19d0-4c31-a6df-afcabaa1ed24-serving-cert\") pod \"controller-manager-cbd88d5cd-55tml\" (UID: \"5d56934a-19d0-4c31-a6df-afcabaa1ed24\") " pod="openshift-controller-manager/controller-manager-cbd88d5cd-55tml" Feb 02 10:41:50 crc kubenswrapper[4782]: I0202 10:41:50.441606 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g8gfr\" (UniqueName: \"kubernetes.io/projected/5d56934a-19d0-4c31-a6df-afcabaa1ed24-kube-api-access-g8gfr\") pod \"controller-manager-cbd88d5cd-55tml\" (UID: \"5d56934a-19d0-4c31-a6df-afcabaa1ed24\") " pod="openshift-controller-manager/controller-manager-cbd88d5cd-55tml" Feb 02 10:41:50 crc kubenswrapper[4782]: I0202 10:41:50.491093 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-l2hps" event={"ID":"d6b03c59-eb07-4d99-beb5-04e1eb19c7bc","Type":"ContainerDied","Data":"e1d3c6b879e919d9a8eeb6fc928bb73b1f6f10789e409d2d56a047e2a54eac9e"} Feb 02 10:41:50 crc kubenswrapper[4782]: I0202 10:41:50.491180 4782 scope.go:117] "RemoveContainer" containerID="6845853f244b2c75c99b177a0d1190d48806df172d59ab6bff1e9c7722883a01" Feb 02 10:41:50 crc kubenswrapper[4782]: I0202 10:41:50.491334 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-l2hps" Feb 02 10:41:50 crc kubenswrapper[4782]: I0202 10:41:50.540160 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-cbd88d5cd-55tml" Feb 02 10:41:50 crc kubenswrapper[4782]: I0202 10:41:50.544225 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-l2hps"] Feb 02 10:41:50 crc kubenswrapper[4782]: I0202 10:41:50.550350 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-l2hps"] Feb 02 10:41:50 crc kubenswrapper[4782]: I0202 10:41:50.828272 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d6b03c59-eb07-4d99-beb5-04e1eb19c7bc" path="/var/lib/kubelet/pods/d6b03c59-eb07-4d99-beb5-04e1eb19c7bc/volumes" Feb 02 10:41:51 crc kubenswrapper[4782]: I0202 10:41:51.039876 4782 patch_prober.go:28] interesting pod/router-default-5444994796-29qjf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 10:41:51 crc kubenswrapper[4782]: [+]has-synced ok Feb 02 10:41:51 crc kubenswrapper[4782]: [+]process-running ok Feb 02 10:41:51 crc kubenswrapper[4782]: healthz check failed Feb 02 10:41:51 crc kubenswrapper[4782]: I0202 10:41:51.040018 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-29qjf" podUID="fc962b97-f5d3-4673-9a39-8fbf6bc2424f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 10:41:51 crc kubenswrapper[4782]: I0202 10:41:51.832417 4782 patch_prober.go:28] interesting pod/downloads-7954f5f757-4b45h container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" start-of-body= Feb 02 10:41:51 crc kubenswrapper[4782]: I0202 10:41:51.833102 4782 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-4b45h" podUID="e74c7e17-c70b-4637-ad47-58e1e192c52e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" Feb 02 10:41:51 crc kubenswrapper[4782]: I0202 10:41:51.971454 4782 patch_prober.go:28] interesting pod/console-f9d7485db-sf9m8 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.22:8443/health\": dial tcp 10.217.0.22:8443: connect: connection refused" start-of-body= Feb 02 10:41:51 crc kubenswrapper[4782]: I0202 10:41:51.971504 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-sf9m8" podUID="76afda26-696c-4996-bc58-1c928e4fa92a" containerName="console" probeResult="failure" output="Get \"https://10.217.0.22:8443/health\": dial tcp 10.217.0.22:8443: connect: connection refused" Feb 02 10:41:52 crc kubenswrapper[4782]: I0202 10:41:52.040211 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-29qjf" Feb 02 10:41:52 crc kubenswrapper[4782]: I0202 10:41:52.043009 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-29qjf" Feb 02 10:41:52 crc kubenswrapper[4782]: I0202 10:41:52.923281 4782 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-96t4g container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe 
status=failure output="Get \"https://10.217.0.6:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 02 10:41:52 crc kubenswrapper[4782]: I0202 10:41:52.923382 4782 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-96t4g" podUID="59a1b37a-9035-459b-a485-280325d33264" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 02 10:41:52 crc kubenswrapper[4782]: I0202 10:41:52.951711 4782 patch_prober.go:28] interesting pod/machine-config-daemon-bhdgk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 10:41:52 crc kubenswrapper[4782]: I0202 10:41:52.951827 4782 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 10:41:53 crc kubenswrapper[4782]: I0202 10:41:53.962727 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-f8xts" Feb 02 10:41:58 crc kubenswrapper[4782]: I0202 10:41:58.610400 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-cbd88d5cd-55tml"] Feb 02 10:42:00 crc kubenswrapper[4782]: I0202 10:42:00.968734 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 02 10:42:00 crc kubenswrapper[4782]: I0202 10:42:00.970395 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 02 10:42:00 crc kubenswrapper[4782]: I0202 10:42:00.975124 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6f4da25e-551a-4f31-9ee0-fb20b4589dfd-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"6f4da25e-551a-4f31-9ee0-fb20b4589dfd\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 02 10:42:00 crc kubenswrapper[4782]: I0202 10:42:00.975187 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6f4da25e-551a-4f31-9ee0-fb20b4589dfd-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"6f4da25e-551a-4f31-9ee0-fb20b4589dfd\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 02 10:42:00 crc kubenswrapper[4782]: I0202 10:42:00.975343 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 02 10:42:00 crc kubenswrapper[4782]: I0202 10:42:00.976784 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 02 10:42:00 crc kubenswrapper[4782]: I0202 10:42:00.979033 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 02 10:42:01 crc kubenswrapper[4782]: I0202 10:42:01.075817 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6f4da25e-551a-4f31-9ee0-fb20b4589dfd-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"6f4da25e-551a-4f31-9ee0-fb20b4589dfd\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 02 10:42:01 crc kubenswrapper[4782]: I0202 10:42:01.075921 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6f4da25e-551a-4f31-9ee0-fb20b4589dfd-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"6f4da25e-551a-4f31-9ee0-fb20b4589dfd\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 02 10:42:01 crc kubenswrapper[4782]: I0202 10:42:01.075938 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6f4da25e-551a-4f31-9ee0-fb20b4589dfd-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"6f4da25e-551a-4f31-9ee0-fb20b4589dfd\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 02 10:42:01 crc kubenswrapper[4782]: I0202 10:42:01.100464 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6f4da25e-551a-4f31-9ee0-fb20b4589dfd-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"6f4da25e-551a-4f31-9ee0-fb20b4589dfd\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 02 10:42:01 crc kubenswrapper[4782]: I0202 10:42:01.315433 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 02 10:42:01 crc kubenswrapper[4782]: I0202 10:42:01.530375 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-96t4g" Feb 02 10:42:01 crc kubenswrapper[4782]: I0202 10:42:01.570742 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-96t4g" event={"ID":"59a1b37a-9035-459b-a485-280325d33264","Type":"ContainerDied","Data":"1bed6a14af1c27e28bfaf20957b4ab6debdecb60fbd87716abbb4a3205ddb87a"} Feb 02 10:42:01 crc kubenswrapper[4782]: I0202 10:42:01.570819 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-96t4g" Feb 02 10:42:01 crc kubenswrapper[4782]: I0202 10:42:01.580649 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5d55bfd8b6-6x9kp"] Feb 02 10:42:01 crc kubenswrapper[4782]: E0202 10:42:01.580935 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59a1b37a-9035-459b-a485-280325d33264" containerName="route-controller-manager" Feb 02 10:42:01 crc kubenswrapper[4782]: I0202 10:42:01.580952 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="59a1b37a-9035-459b-a485-280325d33264" containerName="route-controller-manager" Feb 02 10:42:01 crc kubenswrapper[4782]: I0202 10:42:01.581145 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="59a1b37a-9035-459b-a485-280325d33264" containerName="route-controller-manager" Feb 02 10:42:01 crc kubenswrapper[4782]: I0202 10:42:01.581619 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/59a1b37a-9035-459b-a485-280325d33264-serving-cert\") pod \"59a1b37a-9035-459b-a485-280325d33264\" (UID: \"59a1b37a-9035-459b-a485-280325d33264\") " Feb 02 10:42:01 crc kubenswrapper[4782]: I0202 10:42:01.581691 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p9tz5\" (UniqueName: \"kubernetes.io/projected/59a1b37a-9035-459b-a485-280325d33264-kube-api-access-p9tz5\") pod \"59a1b37a-9035-459b-a485-280325d33264\" (UID: \"59a1b37a-9035-459b-a485-280325d33264\") " Feb 02 10:42:01 crc kubenswrapper[4782]: I0202 10:42:01.581721 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5d55bfd8b6-6x9kp"] Feb 02 10:42:01 crc kubenswrapper[4782]: I0202 10:42:01.581810 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59a1b37a-9035-459b-a485-280325d33264-config\") pod \"59a1b37a-9035-459b-a485-280325d33264\" (UID: \"59a1b37a-9035-459b-a485-280325d33264\") " Feb 02 10:42:01 crc kubenswrapper[4782]: I0202 10:42:01.581835 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5d55bfd8b6-6x9kp" Feb 02 10:42:01 crc kubenswrapper[4782]: I0202 10:42:01.581835 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/59a1b37a-9035-459b-a485-280325d33264-client-ca\") pod \"59a1b37a-9035-459b-a485-280325d33264\" (UID: \"59a1b37a-9035-459b-a485-280325d33264\") " Feb 02 10:42:01 crc kubenswrapper[4782]: I0202 10:42:01.582420 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59a1b37a-9035-459b-a485-280325d33264-client-ca" (OuterVolumeSpecName: "client-ca") pod "59a1b37a-9035-459b-a485-280325d33264" (UID: "59a1b37a-9035-459b-a485-280325d33264"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:42:01 crc kubenswrapper[4782]: I0202 10:42:01.582838 4782 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/59a1b37a-9035-459b-a485-280325d33264-client-ca\") on node \"crc\" DevicePath \"\"" Feb 02 10:42:01 crc kubenswrapper[4782]: I0202 10:42:01.583672 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59a1b37a-9035-459b-a485-280325d33264-config" (OuterVolumeSpecName: "config") pod "59a1b37a-9035-459b-a485-280325d33264" (UID: "59a1b37a-9035-459b-a485-280325d33264"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:42:01 crc kubenswrapper[4782]: I0202 10:42:01.587843 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59a1b37a-9035-459b-a485-280325d33264-kube-api-access-p9tz5" (OuterVolumeSpecName: "kube-api-access-p9tz5") pod "59a1b37a-9035-459b-a485-280325d33264" (UID: "59a1b37a-9035-459b-a485-280325d33264"). InnerVolumeSpecName "kube-api-access-p9tz5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:42:01 crc kubenswrapper[4782]: I0202 10:42:01.594960 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59a1b37a-9035-459b-a485-280325d33264-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "59a1b37a-9035-459b-a485-280325d33264" (UID: "59a1b37a-9035-459b-a485-280325d33264"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:42:01 crc kubenswrapper[4782]: I0202 10:42:01.683932 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2227870a-e9fb-429e-a495-cfa17761d275-config\") pod \"route-controller-manager-5d55bfd8b6-6x9kp\" (UID: \"2227870a-e9fb-429e-a495-cfa17761d275\") " pod="openshift-route-controller-manager/route-controller-manager-5d55bfd8b6-6x9kp" Feb 02 10:42:01 crc kubenswrapper[4782]: I0202 10:42:01.683994 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2227870a-e9fb-429e-a495-cfa17761d275-client-ca\") pod \"route-controller-manager-5d55bfd8b6-6x9kp\" (UID: \"2227870a-e9fb-429e-a495-cfa17761d275\") " pod="openshift-route-controller-manager/route-controller-manager-5d55bfd8b6-6x9kp" Feb 02 10:42:01 crc kubenswrapper[4782]: I0202 10:42:01.684066 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szf25\" (UniqueName: \"kubernetes.io/projected/2227870a-e9fb-429e-a495-cfa17761d275-kube-api-access-szf25\") pod \"route-controller-manager-5d55bfd8b6-6x9kp\" (UID: \"2227870a-e9fb-429e-a495-cfa17761d275\") " pod="openshift-route-controller-manager/route-controller-manager-5d55bfd8b6-6x9kp" Feb 02 10:42:01 crc kubenswrapper[4782]: I0202 10:42:01.684288 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2227870a-e9fb-429e-a495-cfa17761d275-serving-cert\") pod \"route-controller-manager-5d55bfd8b6-6x9kp\" (UID: \"2227870a-e9fb-429e-a495-cfa17761d275\") " pod="openshift-route-controller-manager/route-controller-manager-5d55bfd8b6-6x9kp" Feb 02 10:42:01 crc kubenswrapper[4782]: I0202 10:42:01.684458 4782 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/59a1b37a-9035-459b-a485-280325d33264-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 10:42:01 crc kubenswrapper[4782]: I0202 10:42:01.684478 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p9tz5\" (UniqueName: \"kubernetes.io/projected/59a1b37a-9035-459b-a485-280325d33264-kube-api-access-p9tz5\") on node \"crc\" DevicePath \"\"" Feb 02 10:42:01 crc kubenswrapper[4782]: I0202 10:42:01.684494 4782 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59a1b37a-9035-459b-a485-280325d33264-config\") on node \"crc\" DevicePath \"\"" Feb 02 10:42:01 crc kubenswrapper[4782]: I0202 10:42:01.785911 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2227870a-e9fb-429e-a495-cfa17761d275-serving-cert\") pod \"route-controller-manager-5d55bfd8b6-6x9kp\" (UID: \"2227870a-e9fb-429e-a495-cfa17761d275\") " pod="openshift-route-controller-manager/route-controller-manager-5d55bfd8b6-6x9kp" Feb 02 10:42:01 crc kubenswrapper[4782]: I0202 10:42:01.786001 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2227870a-e9fb-429e-a495-cfa17761d275-config\") pod \"route-controller-manager-5d55bfd8b6-6x9kp\" (UID: \"2227870a-e9fb-429e-a495-cfa17761d275\") " pod="openshift-route-controller-manager/route-controller-manager-5d55bfd8b6-6x9kp" Feb 02 10:42:01 crc 
kubenswrapper[4782]: I0202 10:42:01.786043 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2227870a-e9fb-429e-a495-cfa17761d275-client-ca\") pod \"route-controller-manager-5d55bfd8b6-6x9kp\" (UID: \"2227870a-e9fb-429e-a495-cfa17761d275\") " pod="openshift-route-controller-manager/route-controller-manager-5d55bfd8b6-6x9kp" Feb 02 10:42:01 crc kubenswrapper[4782]: I0202 10:42:01.786098 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-szf25\" (UniqueName: \"kubernetes.io/projected/2227870a-e9fb-429e-a495-cfa17761d275-kube-api-access-szf25\") pod \"route-controller-manager-5d55bfd8b6-6x9kp\" (UID: \"2227870a-e9fb-429e-a495-cfa17761d275\") " pod="openshift-route-controller-manager/route-controller-manager-5d55bfd8b6-6x9kp" Feb 02 10:42:01 crc kubenswrapper[4782]: I0202 10:42:01.787819 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2227870a-e9fb-429e-a495-cfa17761d275-client-ca\") pod \"route-controller-manager-5d55bfd8b6-6x9kp\" (UID: \"2227870a-e9fb-429e-a495-cfa17761d275\") " pod="openshift-route-controller-manager/route-controller-manager-5d55bfd8b6-6x9kp" Feb 02 10:42:01 crc kubenswrapper[4782]: I0202 10:42:01.789399 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2227870a-e9fb-429e-a495-cfa17761d275-config\") pod \"route-controller-manager-5d55bfd8b6-6x9kp\" (UID: \"2227870a-e9fb-429e-a495-cfa17761d275\") " pod="openshift-route-controller-manager/route-controller-manager-5d55bfd8b6-6x9kp" Feb 02 10:42:01 crc kubenswrapper[4782]: I0202 10:42:01.792376 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2227870a-e9fb-429e-a495-cfa17761d275-serving-cert\") pod \"route-controller-manager-5d55bfd8b6-6x9kp\" (UID: \"2227870a-e9fb-429e-a495-cfa17761d275\") " pod="openshift-route-controller-manager/route-controller-manager-5d55bfd8b6-6x9kp" Feb 02 10:42:01 crc kubenswrapper[4782]: I0202 10:42:01.802565 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-szf25\" (UniqueName: \"kubernetes.io/projected/2227870a-e9fb-429e-a495-cfa17761d275-kube-api-access-szf25\") pod \"route-controller-manager-5d55bfd8b6-6x9kp\" (UID: \"2227870a-e9fb-429e-a495-cfa17761d275\") " pod="openshift-route-controller-manager/route-controller-manager-5d55bfd8b6-6x9kp" Feb 02 10:42:01 crc kubenswrapper[4782]: I0202 10:42:01.831470 4782 patch_prober.go:28] interesting pod/downloads-7954f5f757-4b45h container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" start-of-body= Feb 02 10:42:01 crc kubenswrapper[4782]: I0202 10:42:01.831582 4782 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-4b45h" podUID="e74c7e17-c70b-4637-ad47-58e1e192c52e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" Feb 02 10:42:01 crc kubenswrapper[4782]: I0202 10:42:01.903542 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-96t4g"] Feb 02 10:42:01 crc kubenswrapper[4782]: I0202 10:42:01.909693 4782 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-96t4g"] Feb 02 10:42:01 crc kubenswrapper[4782]: I0202 10:42:01.937912 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5d55bfd8b6-6x9kp" Feb 02 10:42:01 crc kubenswrapper[4782]: I0202 10:42:01.975067 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-sf9m8" Feb 02 10:42:01 crc kubenswrapper[4782]: I0202 10:42:01.979570 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-sf9m8" Feb 02 10:42:02 crc kubenswrapper[4782]: I0202 10:42:02.828976 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59a1b37a-9035-459b-a485-280325d33264" path="/var/lib/kubelet/pods/59a1b37a-9035-459b-a485-280325d33264/volumes" Feb 02 10:42:04 crc kubenswrapper[4782]: I0202 10:42:04.974500 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 02 10:42:04 crc kubenswrapper[4782]: I0202 10:42:04.975803 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 02 10:42:04 crc kubenswrapper[4782]: I0202 10:42:04.979039 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 02 10:42:05 crc kubenswrapper[4782]: I0202 10:42:05.034552 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/bf2939f4-fa35-4f01-a896-2ddc746ac111-var-lock\") pod \"installer-9-crc\" (UID: \"bf2939f4-fa35-4f01-a896-2ddc746ac111\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 02 10:42:05 crc kubenswrapper[4782]: I0202 10:42:05.034828 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bf2939f4-fa35-4f01-a896-2ddc746ac111-kube-api-access\") pod \"installer-9-crc\" (UID: \"bf2939f4-fa35-4f01-a896-2ddc746ac111\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 02 10:42:05 crc kubenswrapper[4782]: I0202 10:42:05.034906 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bf2939f4-fa35-4f01-a896-2ddc746ac111-kubelet-dir\") pod \"installer-9-crc\" (UID: \"bf2939f4-fa35-4f01-a896-2ddc746ac111\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 02 10:42:05 crc kubenswrapper[4782]: I0202 10:42:05.135896 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/bf2939f4-fa35-4f01-a896-2ddc746ac111-var-lock\") pod \"installer-9-crc\" (UID: \"bf2939f4-fa35-4f01-a896-2ddc746ac111\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 02 10:42:05 crc kubenswrapper[4782]: I0202 10:42:05.136049 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/bf2939f4-fa35-4f01-a896-2ddc746ac111-var-lock\") pod \"installer-9-crc\" (UID: \"bf2939f4-fa35-4f01-a896-2ddc746ac111\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 02 10:42:05 crc kubenswrapper[4782]: I0202 10:42:05.136114 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" 
(UniqueName: \"kubernetes.io/projected/bf2939f4-fa35-4f01-a896-2ddc746ac111-kube-api-access\") pod \"installer-9-crc\" (UID: \"bf2939f4-fa35-4f01-a896-2ddc746ac111\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 02 10:42:05 crc kubenswrapper[4782]: I0202 10:42:05.136275 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bf2939f4-fa35-4f01-a896-2ddc746ac111-kubelet-dir\") pod \"installer-9-crc\" (UID: \"bf2939f4-fa35-4f01-a896-2ddc746ac111\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 02 10:42:05 crc kubenswrapper[4782]: I0202 10:42:05.136383 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bf2939f4-fa35-4f01-a896-2ddc746ac111-kubelet-dir\") pod \"installer-9-crc\" (UID: \"bf2939f4-fa35-4f01-a896-2ddc746ac111\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 02 10:42:05 crc kubenswrapper[4782]: I0202 10:42:05.157854 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bf2939f4-fa35-4f01-a896-2ddc746ac111-kube-api-access\") pod \"installer-9-crc\" (UID: \"bf2939f4-fa35-4f01-a896-2ddc746ac111\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 02 10:42:05 crc kubenswrapper[4782]: I0202 10:42:05.316817 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 02 10:42:09 crc kubenswrapper[4782]: E0202 10:42:09.034435 4782 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 02 10:42:09 crc kubenswrapper[4782]: E0202 10:42:09.034993 4782 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h72b9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
redhat-operators-xmt8t_openshift-marketplace(213698f8-d1b6-489f-8fc4-a69583d4fc2e): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 02 10:42:09 crc kubenswrapper[4782]: E0202 10:42:09.037931 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-xmt8t" podUID="213698f8-d1b6-489f-8fc4-a69583d4fc2e" Feb 02 10:42:09 crc kubenswrapper[4782]: E0202 10:42:09.237006 4782 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 02 10:42:09 crc kubenswrapper[4782]: E0202 10:42:09.237203 4782 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9mqs8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-g65rt_openshift-marketplace(d9a718cd-1b6d-483f-b995-938331c7e00e): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 02 10:42:09 crc kubenswrapper[4782]: E0202 10:42:09.238566 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-g65rt" podUID="d9a718cd-1b6d-483f-b995-938331c7e00e" Feb 02 10:42:10 crc kubenswrapper[4782]: E0202 10:42:10.590690 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" 
pod="openshift-marketplace/redhat-operators-xmt8t" podUID="213698f8-d1b6-489f-8fc4-a69583d4fc2e" Feb 02 10:42:10 crc kubenswrapper[4782]: E0202 10:42:10.591714 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-g65rt" podUID="d9a718cd-1b6d-483f-b995-938331c7e00e" Feb 02 10:42:11 crc kubenswrapper[4782]: I0202 10:42:11.093984 4782 scope.go:117] "RemoveContainer" containerID="43da730602ca37219a75d1347b35a8488feb8647bfa755b3e8e2deac39ad1b1b" Feb 02 10:42:11 crc kubenswrapper[4782]: E0202 10:42:11.202277 4782 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 02 10:42:11 crc kubenswrapper[4782]: E0202 10:42:11.202419 4782 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zlmlg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-8tk99_openshift-marketplace(9beb5599-8c2d-4493-9561-cc2781d32052): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 02 10:42:11 crc kubenswrapper[4782]: E0202 10:42:11.203827 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-8tk99" podUID="9beb5599-8c2d-4493-9561-cc2781d32052" Feb 02 10:42:11 crc kubenswrapper[4782]: E0202 10:42:11.505740 4782 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" 
image="registry.redhat.io/redhat/certified-operator-index:v4.18" Feb 02 10:42:11 crc kubenswrapper[4782]: E0202 10:42:11.506330 4782 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nb5v4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-8g5bv_openshift-marketplace(a893973e-e0b3-426e-8bf1-7902687b7036): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 02 10:42:11 crc kubenswrapper[4782]: E0202 10:42:11.508657 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-8g5bv" podUID="a893973e-e0b3-426e-8bf1-7902687b7036" Feb 02 10:42:11 crc kubenswrapper[4782]: I0202 10:42:11.628959 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-4b45h" event={"ID":"e74c7e17-c70b-4637-ad47-58e1e192c52e","Type":"ContainerStarted","Data":"3d79dc7da7d2f8fe083b258d3fc741f3697071dd145f2ddcb0763fccf6144932"} Feb 02 10:42:11 crc kubenswrapper[4782]: I0202 10:42:11.631790 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8vzzf" event={"ID":"cf4514b1-5bb5-4e13-8c79-9bdbf62d6cde","Type":"ContainerStarted","Data":"00e2092af389b03680966cc8e710d0d6f79d522f8f8be602fad0b6a82b7428dd"} Feb 02 10:42:11 crc kubenswrapper[4782]: E0202 10:42:11.639099 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-8tk99" podUID="9beb5599-8c2d-4493-9561-cc2781d32052" Feb 02 10:42:11 crc kubenswrapper[4782]: E0202 10:42:11.641486 4782 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-8g5bv" podUID="a893973e-e0b3-426e-8bf1-7902687b7036" Feb 02 10:42:11 crc kubenswrapper[4782]: E0202 10:42:11.669905 4782 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 02 10:42:11 crc kubenswrapper[4782]: E0202 10:42:11.670110 4782 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2r7ff,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-khjwl_openshift-marketplace(99330299-8910-4c41-b704-120a10eb799b): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 02 10:42:11 crc kubenswrapper[4782]: E0202 10:42:11.673355 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-khjwl" podUID="99330299-8910-4c41-b704-120a10eb799b" Feb 02 10:42:11 crc kubenswrapper[4782]: I0202 10:42:11.746229 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 02 10:42:11 crc kubenswrapper[4782]: I0202 10:42:11.818273 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 02 10:42:11 crc kubenswrapper[4782]: I0202 10:42:11.830748 4782 patch_prober.go:28] interesting pod/downloads-7954f5f757-4b45h container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" 
start-of-body= Feb 02 10:42:11 crc kubenswrapper[4782]: I0202 10:42:11.830827 4782 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-4b45h" podUID="e74c7e17-c70b-4637-ad47-58e1e192c52e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" Feb 02 10:42:11 crc kubenswrapper[4782]: I0202 10:42:11.863540 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-cbd88d5cd-55tml"] Feb 02 10:42:11 crc kubenswrapper[4782]: I0202 10:42:11.877367 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5d55bfd8b6-6x9kp"] Feb 02 10:42:12 crc kubenswrapper[4782]: I0202 10:42:12.649992 4782 generic.go:334] "Generic (PLEG): container finished" podID="10039944-73fc-417b-925f-48a2985c277d" containerID="c44bb7cb77d92459b486b13776f87d325c996f8a9b36d06145f90ec0d4cb47f7" exitCode=0 Feb 02 10:42:12 crc kubenswrapper[4782]: I0202 10:42:12.651808 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lxwg2" event={"ID":"10039944-73fc-417b-925f-48a2985c277d","Type":"ContainerDied","Data":"c44bb7cb77d92459b486b13776f87d325c996f8a9b36d06145f90ec0d4cb47f7"} Feb 02 10:42:12 crc kubenswrapper[4782]: I0202 10:42:12.655361 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"6f4da25e-551a-4f31-9ee0-fb20b4589dfd","Type":"ContainerStarted","Data":"48f5b10da2bba632a14a6ad3feb2a4777b1c0f0ff7a5ac4b7850c09cc0320f84"} Feb 02 10:42:12 crc kubenswrapper[4782]: I0202 10:42:12.671658 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5852s" event={"ID":"2dbcee3c-23a7-4dfa-aa27-fccf3ff1873d","Type":"ContainerStarted","Data":"2276d1082587cac8d61118d06023cf6c740850dc2c5e7490e914a2f87e0a7eb9"} Feb 02 10:42:12 crc kubenswrapper[4782]: I0202 10:42:12.686628 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"bf2939f4-fa35-4f01-a896-2ddc746ac111","Type":"ContainerStarted","Data":"acb92178b080f16f9482f40e0b16c2c17b6094a867d22c2de5c7014e8aa3b4cd"} Feb 02 10:42:12 crc kubenswrapper[4782]: I0202 10:42:12.691243 4782 generic.go:334] "Generic (PLEG): container finished" podID="cf4514b1-5bb5-4e13-8c79-9bdbf62d6cde" containerID="00e2092af389b03680966cc8e710d0d6f79d522f8f8be602fad0b6a82b7428dd" exitCode=0 Feb 02 10:42:12 crc kubenswrapper[4782]: I0202 10:42:12.691429 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8vzzf" event={"ID":"cf4514b1-5bb5-4e13-8c79-9bdbf62d6cde","Type":"ContainerDied","Data":"00e2092af389b03680966cc8e710d0d6f79d522f8f8be602fad0b6a82b7428dd"} Feb 02 10:42:12 crc kubenswrapper[4782]: I0202 10:42:12.712543 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-cbd88d5cd-55tml" event={"ID":"5d56934a-19d0-4c31-a6df-afcabaa1ed24","Type":"ContainerStarted","Data":"57759453f91f6f1ae38bb4987e54806618ccbdad87e7c2e009c76f00cce3bbb3"} Feb 02 10:42:12 crc kubenswrapper[4782]: I0202 10:42:12.716480 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5d55bfd8b6-6x9kp" 
event={"ID":"2227870a-e9fb-429e-a495-cfa17761d275","Type":"ContainerStarted","Data":"eb8b4451b7251617cfeca26bf86a321d4715359dfd467ddc355a4e63b2aa0184"} Feb 02 10:42:12 crc kubenswrapper[4782]: I0202 10:42:12.717215 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-4b45h" Feb 02 10:42:12 crc kubenswrapper[4782]: I0202 10:42:12.717343 4782 patch_prober.go:28] interesting pod/downloads-7954f5f757-4b45h container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" start-of-body= Feb 02 10:42:12 crc kubenswrapper[4782]: I0202 10:42:12.717425 4782 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-4b45h" podUID="e74c7e17-c70b-4637-ad47-58e1e192c52e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" Feb 02 10:42:12 crc kubenswrapper[4782]: E0202 10:42:12.721584 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-khjwl" podUID="99330299-8910-4c41-b704-120a10eb799b" Feb 02 10:42:13 crc kubenswrapper[4782]: I0202 10:42:13.833693 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5d55bfd8b6-6x9kp" event={"ID":"2227870a-e9fb-429e-a495-cfa17761d275","Type":"ContainerStarted","Data":"daea6a83dc3b43407a13be8c2a6f6cbd39fcf1feca04fd1859d88ddf144f5b8e"} Feb 02 10:42:13 crc kubenswrapper[4782]: I0202 10:42:13.835812 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"6f4da25e-551a-4f31-9ee0-fb20b4589dfd","Type":"ContainerStarted","Data":"b1b049c4e6c69853b509050eecbd80ccb18b26e4d2dbfe0e74f5c388c9a1cb17"} Feb 02 10:42:13 crc kubenswrapper[4782]: I0202 10:42:13.837234 4782 generic.go:334] "Generic (PLEG): container finished" podID="2dbcee3c-23a7-4dfa-aa27-fccf3ff1873d" containerID="2276d1082587cac8d61118d06023cf6c740850dc2c5e7490e914a2f87e0a7eb9" exitCode=0 Feb 02 10:42:13 crc kubenswrapper[4782]: I0202 10:42:13.837276 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5852s" event={"ID":"2dbcee3c-23a7-4dfa-aa27-fccf3ff1873d","Type":"ContainerDied","Data":"2276d1082587cac8d61118d06023cf6c740850dc2c5e7490e914a2f87e0a7eb9"} Feb 02 10:42:13 crc kubenswrapper[4782]: I0202 10:42:13.843503 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"bf2939f4-fa35-4f01-a896-2ddc746ac111","Type":"ContainerStarted","Data":"33a3c62f71f1956073f3a08b721e64058cac19abb9f4f54ee5048a1701d7cade"} Feb 02 10:42:13 crc kubenswrapper[4782]: I0202 10:42:13.844674 4782 patch_prober.go:28] interesting pod/downloads-7954f5f757-4b45h container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" start-of-body= Feb 02 10:42:13 crc kubenswrapper[4782]: I0202 10:42:13.844764 4782 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-4b45h" podUID="e74c7e17-c70b-4637-ad47-58e1e192c52e" 
containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" Feb 02 10:42:14 crc kubenswrapper[4782]: I0202 10:42:14.446665 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-sc7kt"] Feb 02 10:42:14 crc kubenswrapper[4782]: I0202 10:42:14.851522 4782 generic.go:334] "Generic (PLEG): container finished" podID="6f4da25e-551a-4f31-9ee0-fb20b4589dfd" containerID="b1b049c4e6c69853b509050eecbd80ccb18b26e4d2dbfe0e74f5c388c9a1cb17" exitCode=0 Feb 02 10:42:14 crc kubenswrapper[4782]: I0202 10:42:14.851595 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"6f4da25e-551a-4f31-9ee0-fb20b4589dfd","Type":"ContainerDied","Data":"b1b049c4e6c69853b509050eecbd80ccb18b26e4d2dbfe0e74f5c388c9a1cb17"} Feb 02 10:42:14 crc kubenswrapper[4782]: I0202 10:42:14.855404 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-cbd88d5cd-55tml" podUID="5d56934a-19d0-4c31-a6df-afcabaa1ed24" containerName="controller-manager" containerID="cri-o://1df2e027b51a908c06570aa226b9bc2e1d2b54ac7fe19b8a5f926341bb11cc47" gracePeriod=30 Feb 02 10:42:14 crc kubenswrapper[4782]: I0202 10:42:14.855887 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-cbd88d5cd-55tml" event={"ID":"5d56934a-19d0-4c31-a6df-afcabaa1ed24","Type":"ContainerStarted","Data":"1df2e027b51a908c06570aa226b9bc2e1d2b54ac7fe19b8a5f926341bb11cc47"} Feb 02 10:42:14 crc kubenswrapper[4782]: I0202 10:42:14.855939 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5d55bfd8b6-6x9kp" Feb 02 10:42:14 crc kubenswrapper[4782]: I0202 10:42:14.856578 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-cbd88d5cd-55tml" Feb 02 10:42:14 crc kubenswrapper[4782]: I0202 10:42:14.861914 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-cbd88d5cd-55tml" Feb 02 10:42:14 crc kubenswrapper[4782]: I0202 10:42:14.862216 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5d55bfd8b6-6x9kp" Feb 02 10:42:14 crc kubenswrapper[4782]: I0202 10:42:14.927113 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5d55bfd8b6-6x9kp" podStartSLOduration=16.927088592 podStartE2EDuration="16.927088592s" podCreationTimestamp="2026-02-02 10:41:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:42:14.900911475 +0000 UTC m=+214.785104211" watchObservedRunningTime="2026-02-02 10:42:14.927088592 +0000 UTC m=+214.811281308" Feb 02 10:42:14 crc kubenswrapper[4782]: I0202 10:42:14.927250 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=10.927243806 podStartE2EDuration="10.927243806s" podCreationTimestamp="2026-02-02 10:42:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:42:14.923603091 
+0000 UTC m=+214.807795807" watchObservedRunningTime="2026-02-02 10:42:14.927243806 +0000 UTC m=+214.811436542" Feb 02 10:42:15 crc kubenswrapper[4782]: I0202 10:42:15.862320 4782 generic.go:334] "Generic (PLEG): container finished" podID="5d56934a-19d0-4c31-a6df-afcabaa1ed24" containerID="1df2e027b51a908c06570aa226b9bc2e1d2b54ac7fe19b8a5f926341bb11cc47" exitCode=0 Feb 02 10:42:15 crc kubenswrapper[4782]: I0202 10:42:15.862412 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-cbd88d5cd-55tml" event={"ID":"5d56934a-19d0-4c31-a6df-afcabaa1ed24","Type":"ContainerDied","Data":"1df2e027b51a908c06570aa226b9bc2e1d2b54ac7fe19b8a5f926341bb11cc47"} Feb 02 10:42:16 crc kubenswrapper[4782]: I0202 10:42:16.097439 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 02 10:42:16 crc kubenswrapper[4782]: I0202 10:42:16.124094 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-cbd88d5cd-55tml" podStartSLOduration=38.124068647 podStartE2EDuration="38.124068647s" podCreationTimestamp="2026-02-02 10:41:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:42:14.972986767 +0000 UTC m=+214.857179483" watchObservedRunningTime="2026-02-02 10:42:16.124068647 +0000 UTC m=+216.008261363" Feb 02 10:42:16 crc kubenswrapper[4782]: I0202 10:42:16.216874 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6f4da25e-551a-4f31-9ee0-fb20b4589dfd-kube-api-access\") pod \"6f4da25e-551a-4f31-9ee0-fb20b4589dfd\" (UID: \"6f4da25e-551a-4f31-9ee0-fb20b4589dfd\") " Feb 02 10:42:16 crc kubenswrapper[4782]: I0202 10:42:16.216996 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6f4da25e-551a-4f31-9ee0-fb20b4589dfd-kubelet-dir\") pod \"6f4da25e-551a-4f31-9ee0-fb20b4589dfd\" (UID: \"6f4da25e-551a-4f31-9ee0-fb20b4589dfd\") " Feb 02 10:42:16 crc kubenswrapper[4782]: I0202 10:42:16.217300 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6f4da25e-551a-4f31-9ee0-fb20b4589dfd-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "6f4da25e-551a-4f31-9ee0-fb20b4589dfd" (UID: "6f4da25e-551a-4f31-9ee0-fb20b4589dfd"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 10:42:16 crc kubenswrapper[4782]: I0202 10:42:16.222560 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f4da25e-551a-4f31-9ee0-fb20b4589dfd-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "6f4da25e-551a-4f31-9ee0-fb20b4589dfd" (UID: "6f4da25e-551a-4f31-9ee0-fb20b4589dfd"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:42:16 crc kubenswrapper[4782]: I0202 10:42:16.318364 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6f4da25e-551a-4f31-9ee0-fb20b4589dfd-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 02 10:42:16 crc kubenswrapper[4782]: I0202 10:42:16.318682 4782 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6f4da25e-551a-4f31-9ee0-fb20b4589dfd-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 02 10:42:16 crc kubenswrapper[4782]: I0202 10:42:16.869249 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"6f4da25e-551a-4f31-9ee0-fb20b4589dfd","Type":"ContainerDied","Data":"48f5b10da2bba632a14a6ad3feb2a4777b1c0f0ff7a5ac4b7850c09cc0320f84"} Feb 02 10:42:16 crc kubenswrapper[4782]: I0202 10:42:16.869289 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="48f5b10da2bba632a14a6ad3feb2a4777b1c0f0ff7a5ac4b7850c09cc0320f84" Feb 02 10:42:16 crc kubenswrapper[4782]: I0202 10:42:16.869449 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 02 10:42:17 crc kubenswrapper[4782]: I0202 10:42:17.328628 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-cbd88d5cd-55tml" Feb 02 10:42:17 crc kubenswrapper[4782]: I0202 10:42:17.429793 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5d56934a-19d0-4c31-a6df-afcabaa1ed24-serving-cert\") pod \"5d56934a-19d0-4c31-a6df-afcabaa1ed24\" (UID: \"5d56934a-19d0-4c31-a6df-afcabaa1ed24\") " Feb 02 10:42:17 crc kubenswrapper[4782]: I0202 10:42:17.429892 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g8gfr\" (UniqueName: \"kubernetes.io/projected/5d56934a-19d0-4c31-a6df-afcabaa1ed24-kube-api-access-g8gfr\") pod \"5d56934a-19d0-4c31-a6df-afcabaa1ed24\" (UID: \"5d56934a-19d0-4c31-a6df-afcabaa1ed24\") " Feb 02 10:42:17 crc kubenswrapper[4782]: I0202 10:42:17.430008 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5d56934a-19d0-4c31-a6df-afcabaa1ed24-proxy-ca-bundles\") pod \"5d56934a-19d0-4c31-a6df-afcabaa1ed24\" (UID: \"5d56934a-19d0-4c31-a6df-afcabaa1ed24\") " Feb 02 10:42:17 crc kubenswrapper[4782]: I0202 10:42:17.430051 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5d56934a-19d0-4c31-a6df-afcabaa1ed24-client-ca\") pod \"5d56934a-19d0-4c31-a6df-afcabaa1ed24\" (UID: \"5d56934a-19d0-4c31-a6df-afcabaa1ed24\") " Feb 02 10:42:17 crc kubenswrapper[4782]: I0202 10:42:17.430083 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d56934a-19d0-4c31-a6df-afcabaa1ed24-config\") pod \"5d56934a-19d0-4c31-a6df-afcabaa1ed24\" (UID: \"5d56934a-19d0-4c31-a6df-afcabaa1ed24\") " Feb 02 10:42:17 crc kubenswrapper[4782]: I0202 10:42:17.431018 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5d56934a-19d0-4c31-a6df-afcabaa1ed24-client-ca" (OuterVolumeSpecName: "client-ca") pod 
"5d56934a-19d0-4c31-a6df-afcabaa1ed24" (UID: "5d56934a-19d0-4c31-a6df-afcabaa1ed24"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:42:17 crc kubenswrapper[4782]: I0202 10:42:17.431042 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5d56934a-19d0-4c31-a6df-afcabaa1ed24-config" (OuterVolumeSpecName: "config") pod "5d56934a-19d0-4c31-a6df-afcabaa1ed24" (UID: "5d56934a-19d0-4c31-a6df-afcabaa1ed24"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:42:17 crc kubenswrapper[4782]: I0202 10:42:17.431125 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5d56934a-19d0-4c31-a6df-afcabaa1ed24-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "5d56934a-19d0-4c31-a6df-afcabaa1ed24" (UID: "5d56934a-19d0-4c31-a6df-afcabaa1ed24"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:42:17 crc kubenswrapper[4782]: I0202 10:42:17.436352 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d56934a-19d0-4c31-a6df-afcabaa1ed24-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5d56934a-19d0-4c31-a6df-afcabaa1ed24" (UID: "5d56934a-19d0-4c31-a6df-afcabaa1ed24"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:42:17 crc kubenswrapper[4782]: I0202 10:42:17.442988 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d56934a-19d0-4c31-a6df-afcabaa1ed24-kube-api-access-g8gfr" (OuterVolumeSpecName: "kube-api-access-g8gfr") pod "5d56934a-19d0-4c31-a6df-afcabaa1ed24" (UID: "5d56934a-19d0-4c31-a6df-afcabaa1ed24"). InnerVolumeSpecName "kube-api-access-g8gfr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:42:17 crc kubenswrapper[4782]: I0202 10:42:17.531454 4782 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5d56934a-19d0-4c31-a6df-afcabaa1ed24-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 02 10:42:17 crc kubenswrapper[4782]: I0202 10:42:17.531495 4782 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5d56934a-19d0-4c31-a6df-afcabaa1ed24-client-ca\") on node \"crc\" DevicePath \"\"" Feb 02 10:42:17 crc kubenswrapper[4782]: I0202 10:42:17.531506 4782 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d56934a-19d0-4c31-a6df-afcabaa1ed24-config\") on node \"crc\" DevicePath \"\"" Feb 02 10:42:17 crc kubenswrapper[4782]: I0202 10:42:17.531515 4782 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5d56934a-19d0-4c31-a6df-afcabaa1ed24-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 10:42:17 crc kubenswrapper[4782]: I0202 10:42:17.531525 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g8gfr\" (UniqueName: \"kubernetes.io/projected/5d56934a-19d0-4c31-a6df-afcabaa1ed24-kube-api-access-g8gfr\") on node \"crc\" DevicePath \"\"" Feb 02 10:42:17 crc kubenswrapper[4782]: I0202 10:42:17.875566 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-cbd88d5cd-55tml" event={"ID":"5d56934a-19d0-4c31-a6df-afcabaa1ed24","Type":"ContainerDied","Data":"57759453f91f6f1ae38bb4987e54806618ccbdad87e7c2e009c76f00cce3bbb3"} Feb 02 10:42:17 crc kubenswrapper[4782]: I0202 10:42:17.875924 4782 scope.go:117] "RemoveContainer" containerID="1df2e027b51a908c06570aa226b9bc2e1d2b54ac7fe19b8a5f926341bb11cc47" Feb 02 10:42:17 crc kubenswrapper[4782]: I0202 10:42:17.876045 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-cbd88d5cd-55tml" Feb 02 10:42:17 crc kubenswrapper[4782]: I0202 10:42:17.913602 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-cbd88d5cd-55tml"] Feb 02 10:42:17 crc kubenswrapper[4782]: I0202 10:42:17.916974 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-cbd88d5cd-55tml"] Feb 02 10:42:18 crc kubenswrapper[4782]: I0202 10:42:18.827721 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d56934a-19d0-4c31-a6df-afcabaa1ed24" path="/var/lib/kubelet/pods/5d56934a-19d0-4c31-a6df-afcabaa1ed24/volumes" Feb 02 10:42:19 crc kubenswrapper[4782]: I0202 10:42:19.892386 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lxwg2" event={"ID":"10039944-73fc-417b-925f-48a2985c277d","Type":"ContainerStarted","Data":"d9d48a2893d15bc0ee3b3feea15dabdcb7b5a71f1bd9719587995b71a75c1fb4"} Feb 02 10:42:19 crc kubenswrapper[4782]: I0202 10:42:19.916817 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-lxwg2" podStartSLOduration=5.5414169829999995 podStartE2EDuration="56.916799593s" podCreationTimestamp="2026-02-02 10:41:23 +0000 UTC" firstStartedPulling="2026-02-02 10:41:26.534920654 +0000 UTC m=+166.419113370" lastFinishedPulling="2026-02-02 10:42:17.910303274 +0000 UTC m=+217.794495980" observedRunningTime="2026-02-02 10:42:19.91530865 +0000 UTC m=+219.799501366" watchObservedRunningTime="2026-02-02 10:42:19.916799593 +0000 UTC m=+219.800992309" Feb 02 10:42:20 crc kubenswrapper[4782]: I0202 10:42:20.621317 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6ddd6bc986-5blnv"] Feb 02 10:42:20 crc kubenswrapper[4782]: E0202 10:42:20.624876 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f4da25e-551a-4f31-9ee0-fb20b4589dfd" containerName="pruner" Feb 02 10:42:20 crc kubenswrapper[4782]: I0202 10:42:20.625032 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f4da25e-551a-4f31-9ee0-fb20b4589dfd" containerName="pruner" Feb 02 10:42:20 crc kubenswrapper[4782]: E0202 10:42:20.625144 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d56934a-19d0-4c31-a6df-afcabaa1ed24" containerName="controller-manager" Feb 02 10:42:20 crc kubenswrapper[4782]: I0202 10:42:20.625229 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d56934a-19d0-4c31-a6df-afcabaa1ed24" containerName="controller-manager" Feb 02 10:42:20 crc kubenswrapper[4782]: I0202 10:42:20.625476 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f4da25e-551a-4f31-9ee0-fb20b4589dfd" containerName="pruner" Feb 02 10:42:20 crc kubenswrapper[4782]: I0202 10:42:20.625566 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d56934a-19d0-4c31-a6df-afcabaa1ed24" containerName="controller-manager" Feb 02 10:42:20 crc kubenswrapper[4782]: I0202 10:42:20.626197 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6ddd6bc986-5blnv" Feb 02 10:42:20 crc kubenswrapper[4782]: I0202 10:42:20.628232 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6ddd6bc986-5blnv"] Feb 02 10:42:20 crc kubenswrapper[4782]: I0202 10:42:20.686120 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 02 10:42:20 crc kubenswrapper[4782]: I0202 10:42:20.686248 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 02 10:42:20 crc kubenswrapper[4782]: I0202 10:42:20.686670 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 02 10:42:20 crc kubenswrapper[4782]: I0202 10:42:20.686895 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 02 10:42:20 crc kubenswrapper[4782]: I0202 10:42:20.691005 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 02 10:42:20 crc kubenswrapper[4782]: I0202 10:42:20.691243 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 02 10:42:20 crc kubenswrapper[4782]: I0202 10:42:20.691593 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 02 10:42:20 crc kubenswrapper[4782]: I0202 10:42:20.788263 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9e24dc2e-1431-4589-b097-598780357e04-serving-cert\") pod \"controller-manager-6ddd6bc986-5blnv\" (UID: \"9e24dc2e-1431-4589-b097-598780357e04\") " pod="openshift-controller-manager/controller-manager-6ddd6bc986-5blnv" Feb 02 10:42:20 crc kubenswrapper[4782]: I0202 10:42:20.788895 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9e24dc2e-1431-4589-b097-598780357e04-config\") pod \"controller-manager-6ddd6bc986-5blnv\" (UID: \"9e24dc2e-1431-4589-b097-598780357e04\") " pod="openshift-controller-manager/controller-manager-6ddd6bc986-5blnv" Feb 02 10:42:20 crc kubenswrapper[4782]: I0202 10:42:20.789075 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwnrl\" (UniqueName: \"kubernetes.io/projected/9e24dc2e-1431-4589-b097-598780357e04-kube-api-access-hwnrl\") pod \"controller-manager-6ddd6bc986-5blnv\" (UID: \"9e24dc2e-1431-4589-b097-598780357e04\") " pod="openshift-controller-manager/controller-manager-6ddd6bc986-5blnv" Feb 02 10:42:20 crc kubenswrapper[4782]: I0202 10:42:20.789210 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9e24dc2e-1431-4589-b097-598780357e04-proxy-ca-bundles\") pod \"controller-manager-6ddd6bc986-5blnv\" (UID: \"9e24dc2e-1431-4589-b097-598780357e04\") " pod="openshift-controller-manager/controller-manager-6ddd6bc986-5blnv" Feb 02 10:42:20 crc kubenswrapper[4782]: I0202 10:42:20.789353 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/9e24dc2e-1431-4589-b097-598780357e04-client-ca\") pod \"controller-manager-6ddd6bc986-5blnv\" (UID: \"9e24dc2e-1431-4589-b097-598780357e04\") " pod="openshift-controller-manager/controller-manager-6ddd6bc986-5blnv" Feb 02 10:42:20 crc kubenswrapper[4782]: I0202 10:42:20.890759 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9e24dc2e-1431-4589-b097-598780357e04-proxy-ca-bundles\") pod \"controller-manager-6ddd6bc986-5blnv\" (UID: \"9e24dc2e-1431-4589-b097-598780357e04\") " pod="openshift-controller-manager/controller-manager-6ddd6bc986-5blnv" Feb 02 10:42:20 crc kubenswrapper[4782]: I0202 10:42:20.890847 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9e24dc2e-1431-4589-b097-598780357e04-client-ca\") pod \"controller-manager-6ddd6bc986-5blnv\" (UID: \"9e24dc2e-1431-4589-b097-598780357e04\") " pod="openshift-controller-manager/controller-manager-6ddd6bc986-5blnv" Feb 02 10:42:20 crc kubenswrapper[4782]: I0202 10:42:20.890899 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9e24dc2e-1431-4589-b097-598780357e04-serving-cert\") pod \"controller-manager-6ddd6bc986-5blnv\" (UID: \"9e24dc2e-1431-4589-b097-598780357e04\") " pod="openshift-controller-manager/controller-manager-6ddd6bc986-5blnv" Feb 02 10:42:20 crc kubenswrapper[4782]: I0202 10:42:20.890927 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9e24dc2e-1431-4589-b097-598780357e04-config\") pod \"controller-manager-6ddd6bc986-5blnv\" (UID: \"9e24dc2e-1431-4589-b097-598780357e04\") " pod="openshift-controller-manager/controller-manager-6ddd6bc986-5blnv" Feb 02 10:42:20 crc kubenswrapper[4782]: I0202 10:42:20.890964 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hwnrl\" (UniqueName: \"kubernetes.io/projected/9e24dc2e-1431-4589-b097-598780357e04-kube-api-access-hwnrl\") pod \"controller-manager-6ddd6bc986-5blnv\" (UID: \"9e24dc2e-1431-4589-b097-598780357e04\") " pod="openshift-controller-manager/controller-manager-6ddd6bc986-5blnv" Feb 02 10:42:20 crc kubenswrapper[4782]: I0202 10:42:20.892348 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9e24dc2e-1431-4589-b097-598780357e04-client-ca\") pod \"controller-manager-6ddd6bc986-5blnv\" (UID: \"9e24dc2e-1431-4589-b097-598780357e04\") " pod="openshift-controller-manager/controller-manager-6ddd6bc986-5blnv" Feb 02 10:42:20 crc kubenswrapper[4782]: I0202 10:42:20.892402 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9e24dc2e-1431-4589-b097-598780357e04-proxy-ca-bundles\") pod \"controller-manager-6ddd6bc986-5blnv\" (UID: \"9e24dc2e-1431-4589-b097-598780357e04\") " pod="openshift-controller-manager/controller-manager-6ddd6bc986-5blnv" Feb 02 10:42:20 crc kubenswrapper[4782]: I0202 10:42:20.892481 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9e24dc2e-1431-4589-b097-598780357e04-config\") pod \"controller-manager-6ddd6bc986-5blnv\" (UID: \"9e24dc2e-1431-4589-b097-598780357e04\") " 
pod="openshift-controller-manager/controller-manager-6ddd6bc986-5blnv" Feb 02 10:42:20 crc kubenswrapper[4782]: I0202 10:42:20.900710 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9e24dc2e-1431-4589-b097-598780357e04-serving-cert\") pod \"controller-manager-6ddd6bc986-5blnv\" (UID: \"9e24dc2e-1431-4589-b097-598780357e04\") " pod="openshift-controller-manager/controller-manager-6ddd6bc986-5blnv" Feb 02 10:42:20 crc kubenswrapper[4782]: I0202 10:42:20.915920 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hwnrl\" (UniqueName: \"kubernetes.io/projected/9e24dc2e-1431-4589-b097-598780357e04-kube-api-access-hwnrl\") pod \"controller-manager-6ddd6bc986-5blnv\" (UID: \"9e24dc2e-1431-4589-b097-598780357e04\") " pod="openshift-controller-manager/controller-manager-6ddd6bc986-5blnv" Feb 02 10:42:21 crc kubenswrapper[4782]: I0202 10:42:21.015142 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ddd6bc986-5blnv" Feb 02 10:42:21 crc kubenswrapper[4782]: I0202 10:42:21.618477 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6ddd6bc986-5blnv"] Feb 02 10:42:21 crc kubenswrapper[4782]: W0202 10:42:21.621394 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9e24dc2e_1431_4589_b097_598780357e04.slice/crio-8f0a3c08c47d8edca9e135a47d16514e42acb5810c1e43010242d471b69287a2 WatchSource:0}: Error finding container 8f0a3c08c47d8edca9e135a47d16514e42acb5810c1e43010242d471b69287a2: Status 404 returned error can't find the container with id 8f0a3c08c47d8edca9e135a47d16514e42acb5810c1e43010242d471b69287a2 Feb 02 10:42:21 crc kubenswrapper[4782]: I0202 10:42:21.830800 4782 patch_prober.go:28] interesting pod/downloads-7954f5f757-4b45h container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" start-of-body= Feb 02 10:42:21 crc kubenswrapper[4782]: I0202 10:42:21.831229 4782 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-4b45h" podUID="e74c7e17-c70b-4637-ad47-58e1e192c52e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" Feb 02 10:42:21 crc kubenswrapper[4782]: I0202 10:42:21.830832 4782 patch_prober.go:28] interesting pod/downloads-7954f5f757-4b45h container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" start-of-body= Feb 02 10:42:21 crc kubenswrapper[4782]: I0202 10:42:21.831325 4782 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-4b45h" podUID="e74c7e17-c70b-4637-ad47-58e1e192c52e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" Feb 02 10:42:21 crc kubenswrapper[4782]: I0202 10:42:21.907340 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6ddd6bc986-5blnv" 
event={"ID":"9e24dc2e-1431-4589-b097-598780357e04","Type":"ContainerStarted","Data":"8f0a3c08c47d8edca9e135a47d16514e42acb5810c1e43010242d471b69287a2"} Feb 02 10:42:22 crc kubenswrapper[4782]: I0202 10:42:22.914573 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6ddd6bc986-5blnv" event={"ID":"9e24dc2e-1431-4589-b097-598780357e04","Type":"ContainerStarted","Data":"66572cccd6a885c897ab4785cdc26843f714a7271af3b389ab94ee6a776c2b8f"} Feb 02 10:42:22 crc kubenswrapper[4782]: I0202 10:42:22.916058 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6ddd6bc986-5blnv" Feb 02 10:42:22 crc kubenswrapper[4782]: I0202 10:42:22.920954 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6ddd6bc986-5blnv" Feb 02 10:42:22 crc kubenswrapper[4782]: I0202 10:42:22.921299 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8vzzf" event={"ID":"cf4514b1-5bb5-4e13-8c79-9bdbf62d6cde","Type":"ContainerStarted","Data":"05b04de7aee036aad1bf2a35f7544132e21559dc426cdb8b9123b5342d1855f5"} Feb 02 10:42:22 crc kubenswrapper[4782]: I0202 10:42:22.938989 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6ddd6bc986-5blnv" podStartSLOduration=24.93897031 podStartE2EDuration="24.93897031s" podCreationTimestamp="2026-02-02 10:41:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:42:22.938382553 +0000 UTC m=+222.822575269" watchObservedRunningTime="2026-02-02 10:42:22.93897031 +0000 UTC m=+222.823163026" Feb 02 10:42:22 crc kubenswrapper[4782]: I0202 10:42:22.950911 4782 patch_prober.go:28] interesting pod/machine-config-daemon-bhdgk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 10:42:22 crc kubenswrapper[4782]: I0202 10:42:22.950982 4782 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 10:42:22 crc kubenswrapper[4782]: I0202 10:42:22.951048 4782 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" Feb 02 10:42:22 crc kubenswrapper[4782]: I0202 10:42:22.951597 4782 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"362bf5c8cbdc4831ca6ceecf9323bad1217467a40b0e8dd491098c1ae4f42810"} pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 10:42:22 crc kubenswrapper[4782]: I0202 10:42:22.951669 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" containerName="machine-config-daemon" 
containerID="cri-o://362bf5c8cbdc4831ca6ceecf9323bad1217467a40b0e8dd491098c1ae4f42810" gracePeriod=600 Feb 02 10:42:22 crc kubenswrapper[4782]: I0202 10:42:22.968883 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-8vzzf" podStartSLOduration=6.298205455 podStartE2EDuration="59.968867244s" podCreationTimestamp="2026-02-02 10:41:23 +0000 UTC" firstStartedPulling="2026-02-02 10:41:27.715794345 +0000 UTC m=+167.599987061" lastFinishedPulling="2026-02-02 10:42:21.386456134 +0000 UTC m=+221.270648850" observedRunningTime="2026-02-02 10:42:22.9593884 +0000 UTC m=+222.843581116" watchObservedRunningTime="2026-02-02 10:42:22.968867244 +0000 UTC m=+222.853059950" Feb 02 10:42:23 crc kubenswrapper[4782]: I0202 10:42:23.585477 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-8vzzf" Feb 02 10:42:23 crc kubenswrapper[4782]: I0202 10:42:23.585982 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-8vzzf" Feb 02 10:42:23 crc kubenswrapper[4782]: I0202 10:42:23.623330 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-lxwg2" Feb 02 10:42:23 crc kubenswrapper[4782]: I0202 10:42:23.623374 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-lxwg2" Feb 02 10:42:23 crc kubenswrapper[4782]: I0202 10:42:23.969775 4782 generic.go:334] "Generic (PLEG): container finished" podID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" containerID="362bf5c8cbdc4831ca6ceecf9323bad1217467a40b0e8dd491098c1ae4f42810" exitCode=0 Feb 02 10:42:23 crc kubenswrapper[4782]: I0202 10:42:23.969879 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" event={"ID":"7919e98f-cc47-4f3c-9c53-6313850ea7b8","Type":"ContainerDied","Data":"362bf5c8cbdc4831ca6ceecf9323bad1217467a40b0e8dd491098c1ae4f42810"} Feb 02 10:42:24 crc kubenswrapper[4782]: I0202 10:42:24.978714 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5852s" event={"ID":"2dbcee3c-23a7-4dfa-aa27-fccf3ff1873d","Type":"ContainerStarted","Data":"85ac5bab2ba43b09defe92b07b8d6a701badfe4aee8aeb3a053b9fba1e253788"} Feb 02 10:42:24 crc kubenswrapper[4782]: I0202 10:42:24.981230 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" event={"ID":"7919e98f-cc47-4f3c-9c53-6313850ea7b8","Type":"ContainerStarted","Data":"68181eab99dccd23b4af9f91ccc576ac3321f9b931dcb6edbebeb0694cfecf25"} Feb 02 10:42:24 crc kubenswrapper[4782]: I0202 10:42:24.997345 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-5852s" podStartSLOduration=6.190952776 podStartE2EDuration="1m1.997326267s" podCreationTimestamp="2026-02-02 10:41:23 +0000 UTC" firstStartedPulling="2026-02-02 10:41:27.652168948 +0000 UTC m=+167.536361664" lastFinishedPulling="2026-02-02 10:42:23.458542439 +0000 UTC m=+223.342735155" observedRunningTime="2026-02-02 10:42:24.995162924 +0000 UTC m=+224.879355660" watchObservedRunningTime="2026-02-02 10:42:24.997326267 +0000 UTC m=+224.881518983" Feb 02 10:42:25 crc kubenswrapper[4782]: I0202 10:42:25.107473 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-lxwg2" 
podUID="10039944-73fc-417b-925f-48a2985c277d" containerName="registry-server" probeResult="failure" output=< Feb 02 10:42:25 crc kubenswrapper[4782]: timeout: failed to connect service ":50051" within 1s Feb 02 10:42:25 crc kubenswrapper[4782]: > Feb 02 10:42:25 crc kubenswrapper[4782]: I0202 10:42:25.108277 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-8vzzf" podUID="cf4514b1-5bb5-4e13-8c79-9bdbf62d6cde" containerName="registry-server" probeResult="failure" output=< Feb 02 10:42:25 crc kubenswrapper[4782]: timeout: failed to connect service ":50051" within 1s Feb 02 10:42:25 crc kubenswrapper[4782]: > Feb 02 10:42:31 crc kubenswrapper[4782]: I0202 10:42:31.019571 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-khjwl" event={"ID":"99330299-8910-4c41-b704-120a10eb799b","Type":"ContainerStarted","Data":"8a2fd5a1d26e874a6800d89c96cc56d4beb8b17c41b6bbadb8c4b1054b7e8ba1"} Feb 02 10:42:31 crc kubenswrapper[4782]: I0202 10:42:31.022062 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g65rt" event={"ID":"d9a718cd-1b6d-483f-b995-938331c7e00e","Type":"ContainerStarted","Data":"46f7bc4b2322a3c4c9b51dde44681dba7d41425a72707d63ce7bf6b09fa67469"} Feb 02 10:42:31 crc kubenswrapper[4782]: I0202 10:42:31.024936 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8tk99" event={"ID":"9beb5599-8c2d-4493-9561-cc2781d32052","Type":"ContainerStarted","Data":"47a422f0cc0a1728dbd32be9c84459b33e85f5951ae7977459fff5ec301546e5"} Feb 02 10:42:31 crc kubenswrapper[4782]: I0202 10:42:31.033395 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8g5bv" event={"ID":"a893973e-e0b3-426e-8bf1-7902687b7036","Type":"ContainerStarted","Data":"802f28ae51e65c38767b2547d3fc9fbdf161d3e61c6a3e744e602a68e142edf9"} Feb 02 10:42:31 crc kubenswrapper[4782]: I0202 10:42:31.037166 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xmt8t" event={"ID":"213698f8-d1b6-489f-8fc4-a69583d4fc2e","Type":"ContainerStarted","Data":"66cc69f12fda395ec4c6082c4f43f38994cb00dd4a625be34b594e4f4b899617"} Feb 02 10:42:31 crc kubenswrapper[4782]: I0202 10:42:31.837138 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-4b45h" Feb 02 10:42:32 crc kubenswrapper[4782]: I0202 10:42:32.053312 4782 generic.go:334] "Generic (PLEG): container finished" podID="99330299-8910-4c41-b704-120a10eb799b" containerID="8a2fd5a1d26e874a6800d89c96cc56d4beb8b17c41b6bbadb8c4b1054b7e8ba1" exitCode=0 Feb 02 10:42:32 crc kubenswrapper[4782]: I0202 10:42:32.053396 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-khjwl" event={"ID":"99330299-8910-4c41-b704-120a10eb799b","Type":"ContainerDied","Data":"8a2fd5a1d26e874a6800d89c96cc56d4beb8b17c41b6bbadb8c4b1054b7e8ba1"} Feb 02 10:42:32 crc kubenswrapper[4782]: I0202 10:42:32.060477 4782 generic.go:334] "Generic (PLEG): container finished" podID="9beb5599-8c2d-4493-9561-cc2781d32052" containerID="47a422f0cc0a1728dbd32be9c84459b33e85f5951ae7977459fff5ec301546e5" exitCode=0 Feb 02 10:42:32 crc kubenswrapper[4782]: I0202 10:42:32.060526 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8tk99" 
event={"ID":"9beb5599-8c2d-4493-9561-cc2781d32052","Type":"ContainerDied","Data":"47a422f0cc0a1728dbd32be9c84459b33e85f5951ae7977459fff5ec301546e5"} Feb 02 10:42:32 crc kubenswrapper[4782]: I0202 10:42:32.064686 4782 generic.go:334] "Generic (PLEG): container finished" podID="a893973e-e0b3-426e-8bf1-7902687b7036" containerID="802f28ae51e65c38767b2547d3fc9fbdf161d3e61c6a3e744e602a68e142edf9" exitCode=0 Feb 02 10:42:32 crc kubenswrapper[4782]: I0202 10:42:32.064808 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8g5bv" event={"ID":"a893973e-e0b3-426e-8bf1-7902687b7036","Type":"ContainerDied","Data":"802f28ae51e65c38767b2547d3fc9fbdf161d3e61c6a3e744e602a68e142edf9"} Feb 02 10:42:32 crc kubenswrapper[4782]: I0202 10:42:32.066399 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xmt8t" event={"ID":"213698f8-d1b6-489f-8fc4-a69583d4fc2e","Type":"ContainerDied","Data":"66cc69f12fda395ec4c6082c4f43f38994cb00dd4a625be34b594e4f4b899617"} Feb 02 10:42:32 crc kubenswrapper[4782]: I0202 10:42:32.066417 4782 generic.go:334] "Generic (PLEG): container finished" podID="213698f8-d1b6-489f-8fc4-a69583d4fc2e" containerID="66cc69f12fda395ec4c6082c4f43f38994cb00dd4a625be34b594e4f4b899617" exitCode=0 Feb 02 10:42:33 crc kubenswrapper[4782]: I0202 10:42:33.074420 4782 generic.go:334] "Generic (PLEG): container finished" podID="d9a718cd-1b6d-483f-b995-938331c7e00e" containerID="46f7bc4b2322a3c4c9b51dde44681dba7d41425a72707d63ce7bf6b09fa67469" exitCode=0 Feb 02 10:42:33 crc kubenswrapper[4782]: I0202 10:42:33.074814 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g65rt" event={"ID":"d9a718cd-1b6d-483f-b995-938331c7e00e","Type":"ContainerDied","Data":"46f7bc4b2322a3c4c9b51dde44681dba7d41425a72707d63ce7bf6b09fa67469"} Feb 02 10:42:33 crc kubenswrapper[4782]: I0202 10:42:33.648238 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-8vzzf" Feb 02 10:42:33 crc kubenswrapper[4782]: I0202 10:42:33.667513 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-lxwg2" Feb 02 10:42:33 crc kubenswrapper[4782]: I0202 10:42:33.704847 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-8vzzf" Feb 02 10:42:33 crc kubenswrapper[4782]: I0202 10:42:33.722036 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-lxwg2" Feb 02 10:42:34 crc kubenswrapper[4782]: I0202 10:42:34.158793 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-5852s" Feb 02 10:42:34 crc kubenswrapper[4782]: I0202 10:42:34.159236 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-5852s" Feb 02 10:42:34 crc kubenswrapper[4782]: I0202 10:42:34.291184 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-5852s" Feb 02 10:42:35 crc kubenswrapper[4782]: I0202 10:42:35.120667 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-5852s" Feb 02 10:42:37 crc kubenswrapper[4782]: I0202 10:42:37.098013 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-8g5bv" event={"ID":"a893973e-e0b3-426e-8bf1-7902687b7036","Type":"ContainerStarted","Data":"01286a2afedb32bfae7a292e969599806be21719c09d61c6f69879d22709b8d1"} Feb 02 10:42:37 crc kubenswrapper[4782]: I0202 10:42:37.122747 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-8g5bv" podStartSLOduration=5.848415542 podStartE2EDuration="1m14.122733508s" podCreationTimestamp="2026-02-02 10:41:23 +0000 UTC" firstStartedPulling="2026-02-02 10:41:27.73293112 +0000 UTC m=+167.617123836" lastFinishedPulling="2026-02-02 10:42:36.007249086 +0000 UTC m=+235.891441802" observedRunningTime="2026-02-02 10:42:37.119364581 +0000 UTC m=+237.003557297" watchObservedRunningTime="2026-02-02 10:42:37.122733508 +0000 UTC m=+237.006926224" Feb 02 10:42:37 crc kubenswrapper[4782]: I0202 10:42:37.495599 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5852s"] Feb 02 10:42:37 crc kubenswrapper[4782]: I0202 10:42:37.495917 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-5852s" podUID="2dbcee3c-23a7-4dfa-aa27-fccf3ff1873d" containerName="registry-server" containerID="cri-o://85ac5bab2ba43b09defe92b07b8d6a701badfe4aee8aeb3a053b9fba1e253788" gracePeriod=2 Feb 02 10:42:38 crc kubenswrapper[4782]: I0202 10:42:38.583915 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6ddd6bc986-5blnv"] Feb 02 10:42:38 crc kubenswrapper[4782]: I0202 10:42:38.584489 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-6ddd6bc986-5blnv" podUID="9e24dc2e-1431-4589-b097-598780357e04" containerName="controller-manager" containerID="cri-o://66572cccd6a885c897ab4785cdc26843f714a7271af3b389ab94ee6a776c2b8f" gracePeriod=30 Feb 02 10:42:38 crc kubenswrapper[4782]: I0202 10:42:38.676867 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5d55bfd8b6-6x9kp"] Feb 02 10:42:38 crc kubenswrapper[4782]: I0202 10:42:38.677094 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-5d55bfd8b6-6x9kp" podUID="2227870a-e9fb-429e-a495-cfa17761d275" containerName="route-controller-manager" containerID="cri-o://daea6a83dc3b43407a13be8c2a6f6cbd39fcf1feca04fd1859d88ddf144f5b8e" gracePeriod=30 Feb 02 10:42:39 crc kubenswrapper[4782]: I0202 10:42:39.112060 4782 generic.go:334] "Generic (PLEG): container finished" podID="2dbcee3c-23a7-4dfa-aa27-fccf3ff1873d" containerID="85ac5bab2ba43b09defe92b07b8d6a701badfe4aee8aeb3a053b9fba1e253788" exitCode=0 Feb 02 10:42:39 crc kubenswrapper[4782]: I0202 10:42:39.112139 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5852s" event={"ID":"2dbcee3c-23a7-4dfa-aa27-fccf3ff1873d","Type":"ContainerDied","Data":"85ac5bab2ba43b09defe92b07b8d6a701badfe4aee8aeb3a053b9fba1e253788"} Feb 02 10:42:39 crc kubenswrapper[4782]: I0202 10:42:39.114502 4782 generic.go:334] "Generic (PLEG): container finished" podID="9e24dc2e-1431-4589-b097-598780357e04" containerID="66572cccd6a885c897ab4785cdc26843f714a7271af3b389ab94ee6a776c2b8f" exitCode=0 Feb 02 10:42:39 crc kubenswrapper[4782]: I0202 10:42:39.114589 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-controller-manager/controller-manager-6ddd6bc986-5blnv" event={"ID":"9e24dc2e-1431-4589-b097-598780357e04","Type":"ContainerDied","Data":"66572cccd6a885c897ab4785cdc26843f714a7271af3b389ab94ee6a776c2b8f"} Feb 02 10:42:39 crc kubenswrapper[4782]: I0202 10:42:39.117317 4782 generic.go:334] "Generic (PLEG): container finished" podID="2227870a-e9fb-429e-a495-cfa17761d275" containerID="daea6a83dc3b43407a13be8c2a6f6cbd39fcf1feca04fd1859d88ddf144f5b8e" exitCode=0 Feb 02 10:42:39 crc kubenswrapper[4782]: I0202 10:42:39.117368 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5d55bfd8b6-6x9kp" event={"ID":"2227870a-e9fb-429e-a495-cfa17761d275","Type":"ContainerDied","Data":"daea6a83dc3b43407a13be8c2a6f6cbd39fcf1feca04fd1859d88ddf144f5b8e"} Feb 02 10:42:39 crc kubenswrapper[4782]: I0202 10:42:39.510203 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-sc7kt" podUID="03d47200-aed2-431d-89fd-c27cdd91564f" containerName="oauth-openshift" containerID="cri-o://df2490b959607b5dd5fcd068ecdc3e142f390fcbf0477d98f074718eab612f07" gracePeriod=15 Feb 02 10:42:40 crc kubenswrapper[4782]: I0202 10:42:40.125528 4782 generic.go:334] "Generic (PLEG): container finished" podID="03d47200-aed2-431d-89fd-c27cdd91564f" containerID="df2490b959607b5dd5fcd068ecdc3e142f390fcbf0477d98f074718eab612f07" exitCode=0 Feb 02 10:42:40 crc kubenswrapper[4782]: I0202 10:42:40.125595 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-sc7kt" event={"ID":"03d47200-aed2-431d-89fd-c27cdd91564f","Type":"ContainerDied","Data":"df2490b959607b5dd5fcd068ecdc3e142f390fcbf0477d98f074718eab612f07"} Feb 02 10:42:40 crc kubenswrapper[4782]: I0202 10:42:40.288206 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5852s" Feb 02 10:42:40 crc kubenswrapper[4782]: I0202 10:42:40.292619 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6ddd6bc986-5blnv" Feb 02 10:42:40 crc kubenswrapper[4782]: I0202 10:42:40.357844 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9e24dc2e-1431-4589-b097-598780357e04-config\") pod \"9e24dc2e-1431-4589-b097-598780357e04\" (UID: \"9e24dc2e-1431-4589-b097-598780357e04\") " Feb 02 10:42:40 crc kubenswrapper[4782]: I0202 10:42:40.357993 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hwnrl\" (UniqueName: \"kubernetes.io/projected/9e24dc2e-1431-4589-b097-598780357e04-kube-api-access-hwnrl\") pod \"9e24dc2e-1431-4589-b097-598780357e04\" (UID: \"9e24dc2e-1431-4589-b097-598780357e04\") " Feb 02 10:42:40 crc kubenswrapper[4782]: I0202 10:42:40.358033 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9e24dc2e-1431-4589-b097-598780357e04-client-ca\") pod \"9e24dc2e-1431-4589-b097-598780357e04\" (UID: \"9e24dc2e-1431-4589-b097-598780357e04\") " Feb 02 10:42:40 crc kubenswrapper[4782]: I0202 10:42:40.358073 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9e24dc2e-1431-4589-b097-598780357e04-serving-cert\") pod \"9e24dc2e-1431-4589-b097-598780357e04\" (UID: \"9e24dc2e-1431-4589-b097-598780357e04\") " Feb 02 10:42:40 crc kubenswrapper[4782]: I0202 10:42:40.358101 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vkzrc\" (UniqueName: \"kubernetes.io/projected/2dbcee3c-23a7-4dfa-aa27-fccf3ff1873d-kube-api-access-vkzrc\") pod \"2dbcee3c-23a7-4dfa-aa27-fccf3ff1873d\" (UID: \"2dbcee3c-23a7-4dfa-aa27-fccf3ff1873d\") " Feb 02 10:42:40 crc kubenswrapper[4782]: I0202 10:42:40.358141 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2dbcee3c-23a7-4dfa-aa27-fccf3ff1873d-utilities\") pod \"2dbcee3c-23a7-4dfa-aa27-fccf3ff1873d\" (UID: \"2dbcee3c-23a7-4dfa-aa27-fccf3ff1873d\") " Feb 02 10:42:40 crc kubenswrapper[4782]: I0202 10:42:40.358164 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2dbcee3c-23a7-4dfa-aa27-fccf3ff1873d-catalog-content\") pod \"2dbcee3c-23a7-4dfa-aa27-fccf3ff1873d\" (UID: \"2dbcee3c-23a7-4dfa-aa27-fccf3ff1873d\") " Feb 02 10:42:40 crc kubenswrapper[4782]: I0202 10:42:40.358217 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9e24dc2e-1431-4589-b097-598780357e04-proxy-ca-bundles\") pod \"9e24dc2e-1431-4589-b097-598780357e04\" (UID: \"9e24dc2e-1431-4589-b097-598780357e04\") " Feb 02 10:42:40 crc kubenswrapper[4782]: I0202 10:42:40.359185 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e24dc2e-1431-4589-b097-598780357e04-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "9e24dc2e-1431-4589-b097-598780357e04" (UID: "9e24dc2e-1431-4589-b097-598780357e04"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:42:40 crc kubenswrapper[4782]: I0202 10:42:40.364893 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e24dc2e-1431-4589-b097-598780357e04-config" (OuterVolumeSpecName: "config") pod "9e24dc2e-1431-4589-b097-598780357e04" (UID: "9e24dc2e-1431-4589-b097-598780357e04"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:42:40 crc kubenswrapper[4782]: I0202 10:42:40.365176 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e24dc2e-1431-4589-b097-598780357e04-client-ca" (OuterVolumeSpecName: "client-ca") pod "9e24dc2e-1431-4589-b097-598780357e04" (UID: "9e24dc2e-1431-4589-b097-598780357e04"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:42:40 crc kubenswrapper[4782]: I0202 10:42:40.365535 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2dbcee3c-23a7-4dfa-aa27-fccf3ff1873d-utilities" (OuterVolumeSpecName: "utilities") pod "2dbcee3c-23a7-4dfa-aa27-fccf3ff1873d" (UID: "2dbcee3c-23a7-4dfa-aa27-fccf3ff1873d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:42:40 crc kubenswrapper[4782]: I0202 10:42:40.366507 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e24dc2e-1431-4589-b097-598780357e04-kube-api-access-hwnrl" (OuterVolumeSpecName: "kube-api-access-hwnrl") pod "9e24dc2e-1431-4589-b097-598780357e04" (UID: "9e24dc2e-1431-4589-b097-598780357e04"). InnerVolumeSpecName "kube-api-access-hwnrl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:42:40 crc kubenswrapper[4782]: I0202 10:42:40.369042 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2dbcee3c-23a7-4dfa-aa27-fccf3ff1873d-kube-api-access-vkzrc" (OuterVolumeSpecName: "kube-api-access-vkzrc") pod "2dbcee3c-23a7-4dfa-aa27-fccf3ff1873d" (UID: "2dbcee3c-23a7-4dfa-aa27-fccf3ff1873d"). InnerVolumeSpecName "kube-api-access-vkzrc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:42:40 crc kubenswrapper[4782]: I0202 10:42:40.369570 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e24dc2e-1431-4589-b097-598780357e04-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9e24dc2e-1431-4589-b097-598780357e04" (UID: "9e24dc2e-1431-4589-b097-598780357e04"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:42:40 crc kubenswrapper[4782]: I0202 10:42:40.435784 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2dbcee3c-23a7-4dfa-aa27-fccf3ff1873d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2dbcee3c-23a7-4dfa-aa27-fccf3ff1873d" (UID: "2dbcee3c-23a7-4dfa-aa27-fccf3ff1873d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:42:40 crc kubenswrapper[4782]: I0202 10:42:40.459902 4782 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9e24dc2e-1431-4589-b097-598780357e04-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 02 10:42:40 crc kubenswrapper[4782]: I0202 10:42:40.459941 4782 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9e24dc2e-1431-4589-b097-598780357e04-config\") on node \"crc\" DevicePath \"\"" Feb 02 10:42:40 crc kubenswrapper[4782]: I0202 10:42:40.459951 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hwnrl\" (UniqueName: \"kubernetes.io/projected/9e24dc2e-1431-4589-b097-598780357e04-kube-api-access-hwnrl\") on node \"crc\" DevicePath \"\"" Feb 02 10:42:40 crc kubenswrapper[4782]: I0202 10:42:40.459963 4782 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9e24dc2e-1431-4589-b097-598780357e04-client-ca\") on node \"crc\" DevicePath \"\"" Feb 02 10:42:40 crc kubenswrapper[4782]: I0202 10:42:40.459971 4782 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9e24dc2e-1431-4589-b097-598780357e04-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 10:42:40 crc kubenswrapper[4782]: I0202 10:42:40.459982 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vkzrc\" (UniqueName: \"kubernetes.io/projected/2dbcee3c-23a7-4dfa-aa27-fccf3ff1873d-kube-api-access-vkzrc\") on node \"crc\" DevicePath \"\"" Feb 02 10:42:40 crc kubenswrapper[4782]: I0202 10:42:40.459990 4782 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2dbcee3c-23a7-4dfa-aa27-fccf3ff1873d-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 10:42:40 crc kubenswrapper[4782]: I0202 10:42:40.460000 4782 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2dbcee3c-23a7-4dfa-aa27-fccf3ff1873d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 10:42:41 crc kubenswrapper[4782]: I0202 10:42:41.047810 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5d55bfd8b6-6x9kp" Feb 02 10:42:41 crc kubenswrapper[4782]: I0202 10:42:41.133140 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6ddd6bc986-5blnv" event={"ID":"9e24dc2e-1431-4589-b097-598780357e04","Type":"ContainerDied","Data":"8f0a3c08c47d8edca9e135a47d16514e42acb5810c1e43010242d471b69287a2"} Feb 02 10:42:41 crc kubenswrapper[4782]: I0202 10:42:41.133207 4782 scope.go:117] "RemoveContainer" containerID="66572cccd6a885c897ab4785cdc26843f714a7271af3b389ab94ee6a776c2b8f" Feb 02 10:42:41 crc kubenswrapper[4782]: I0202 10:42:41.133218 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6ddd6bc986-5blnv" Feb 02 10:42:41 crc kubenswrapper[4782]: I0202 10:42:41.138915 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5d55bfd8b6-6x9kp" event={"ID":"2227870a-e9fb-429e-a495-cfa17761d275","Type":"ContainerDied","Data":"eb8b4451b7251617cfeca26bf86a321d4715359dfd467ddc355a4e63b2aa0184"} Feb 02 10:42:41 crc kubenswrapper[4782]: I0202 10:42:41.138921 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5d55bfd8b6-6x9kp" Feb 02 10:42:41 crc kubenswrapper[4782]: I0202 10:42:41.143675 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5852s" event={"ID":"2dbcee3c-23a7-4dfa-aa27-fccf3ff1873d","Type":"ContainerDied","Data":"6ef548a38f0be82eadd409d2f97034be6e36d97b107a1699fdec8cc892db1b86"} Feb 02 10:42:41 crc kubenswrapper[4782]: I0202 10:42:41.143777 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5852s" Feb 02 10:42:41 crc kubenswrapper[4782]: I0202 10:42:41.158123 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6ddd6bc986-5blnv"] Feb 02 10:42:41 crc kubenswrapper[4782]: I0202 10:42:41.161699 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-6ddd6bc986-5blnv"] Feb 02 10:42:41 crc kubenswrapper[4782]: I0202 10:42:41.172142 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2227870a-e9fb-429e-a495-cfa17761d275-config\") pod \"2227870a-e9fb-429e-a495-cfa17761d275\" (UID: \"2227870a-e9fb-429e-a495-cfa17761d275\") " Feb 02 10:42:41 crc kubenswrapper[4782]: I0202 10:42:41.172263 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-szf25\" (UniqueName: \"kubernetes.io/projected/2227870a-e9fb-429e-a495-cfa17761d275-kube-api-access-szf25\") pod \"2227870a-e9fb-429e-a495-cfa17761d275\" (UID: \"2227870a-e9fb-429e-a495-cfa17761d275\") " Feb 02 10:42:41 crc kubenswrapper[4782]: I0202 10:42:41.172327 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2227870a-e9fb-429e-a495-cfa17761d275-serving-cert\") pod \"2227870a-e9fb-429e-a495-cfa17761d275\" (UID: \"2227870a-e9fb-429e-a495-cfa17761d275\") " Feb 02 10:42:41 crc kubenswrapper[4782]: I0202 10:42:41.172374 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2227870a-e9fb-429e-a495-cfa17761d275-client-ca\") pod \"2227870a-e9fb-429e-a495-cfa17761d275\" (UID: \"2227870a-e9fb-429e-a495-cfa17761d275\") " Feb 02 10:42:41 crc kubenswrapper[4782]: I0202 10:42:41.173382 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2227870a-e9fb-429e-a495-cfa17761d275-client-ca" (OuterVolumeSpecName: "client-ca") pod "2227870a-e9fb-429e-a495-cfa17761d275" (UID: "2227870a-e9fb-429e-a495-cfa17761d275"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:42:41 crc kubenswrapper[4782]: I0202 10:42:41.173407 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2227870a-e9fb-429e-a495-cfa17761d275-config" (OuterVolumeSpecName: "config") pod "2227870a-e9fb-429e-a495-cfa17761d275" (UID: "2227870a-e9fb-429e-a495-cfa17761d275"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:42:41 crc kubenswrapper[4782]: I0202 10:42:41.175898 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5852s"] Feb 02 10:42:41 crc kubenswrapper[4782]: I0202 10:42:41.177651 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2227870a-e9fb-429e-a495-cfa17761d275-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "2227870a-e9fb-429e-a495-cfa17761d275" (UID: "2227870a-e9fb-429e-a495-cfa17761d275"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:42:41 crc kubenswrapper[4782]: I0202 10:42:41.177885 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2227870a-e9fb-429e-a495-cfa17761d275-kube-api-access-szf25" (OuterVolumeSpecName: "kube-api-access-szf25") pod "2227870a-e9fb-429e-a495-cfa17761d275" (UID: "2227870a-e9fb-429e-a495-cfa17761d275"). InnerVolumeSpecName "kube-api-access-szf25". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:42:41 crc kubenswrapper[4782]: I0202 10:42:41.178570 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-5852s"] Feb 02 10:42:41 crc kubenswrapper[4782]: I0202 10:42:41.273780 4782 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2227870a-e9fb-429e-a495-cfa17761d275-config\") on node \"crc\" DevicePath \"\"" Feb 02 10:42:41 crc kubenswrapper[4782]: I0202 10:42:41.273830 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-szf25\" (UniqueName: \"kubernetes.io/projected/2227870a-e9fb-429e-a495-cfa17761d275-kube-api-access-szf25\") on node \"crc\" DevicePath \"\"" Feb 02 10:42:41 crc kubenswrapper[4782]: I0202 10:42:41.273841 4782 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2227870a-e9fb-429e-a495-cfa17761d275-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 10:42:41 crc kubenswrapper[4782]: I0202 10:42:41.273853 4782 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2227870a-e9fb-429e-a495-cfa17761d275-client-ca\") on node \"crc\" DevicePath \"\"" Feb 02 10:42:41 crc kubenswrapper[4782]: I0202 10:42:41.468148 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5d55bfd8b6-6x9kp"] Feb 02 10:42:41 crc kubenswrapper[4782]: I0202 10:42:41.471825 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5d55bfd8b6-6x9kp"] Feb 02 10:42:41 crc kubenswrapper[4782]: I0202 10:42:41.488263 4782 scope.go:117] "RemoveContainer" containerID="daea6a83dc3b43407a13be8c2a6f6cbd39fcf1feca04fd1859d88ddf144f5b8e" Feb 02 10:42:41 crc kubenswrapper[4782]: I0202 10:42:41.525045 4782 scope.go:117] "RemoveContainer" containerID="85ac5bab2ba43b09defe92b07b8d6a701badfe4aee8aeb3a053b9fba1e253788" Feb 02 10:42:41 crc 
kubenswrapper[4782]: I0202 10:42:41.592820 4782 scope.go:117] "RemoveContainer" containerID="2276d1082587cac8d61118d06023cf6c740850dc2c5e7490e914a2f87e0a7eb9" Feb 02 10:42:41 crc kubenswrapper[4782]: I0202 10:42:41.625104 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-sc7kt" Feb 02 10:42:41 crc kubenswrapper[4782]: I0202 10:42:41.671414 4782 scope.go:117] "RemoveContainer" containerID="e1f3c2a5262859f45791070cb15411b8e1b8e41e441cf2fae29b116544fe07c5" Feb 02 10:42:41 crc kubenswrapper[4782]: I0202 10:42:41.681923 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/03d47200-aed2-431d-89fd-c27cdd91564f-audit-policies\") pod \"03d47200-aed2-431d-89fd-c27cdd91564f\" (UID: \"03d47200-aed2-431d-89fd-c27cdd91564f\") " Feb 02 10:42:41 crc kubenswrapper[4782]: I0202 10:42:41.681966 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vh2jt\" (UniqueName: \"kubernetes.io/projected/03d47200-aed2-431d-89fd-c27cdd91564f-kube-api-access-vh2jt\") pod \"03d47200-aed2-431d-89fd-c27cdd91564f\" (UID: \"03d47200-aed2-431d-89fd-c27cdd91564f\") " Feb 02 10:42:41 crc kubenswrapper[4782]: I0202 10:42:41.682003 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/03d47200-aed2-431d-89fd-c27cdd91564f-v4-0-config-system-router-certs\") pod \"03d47200-aed2-431d-89fd-c27cdd91564f\" (UID: \"03d47200-aed2-431d-89fd-c27cdd91564f\") " Feb 02 10:42:41 crc kubenswrapper[4782]: I0202 10:42:41.682058 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/03d47200-aed2-431d-89fd-c27cdd91564f-v4-0-config-system-cliconfig\") pod \"03d47200-aed2-431d-89fd-c27cdd91564f\" (UID: \"03d47200-aed2-431d-89fd-c27cdd91564f\") " Feb 02 10:42:41 crc kubenswrapper[4782]: I0202 10:42:41.682074 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/03d47200-aed2-431d-89fd-c27cdd91564f-v4-0-config-user-idp-0-file-data\") pod \"03d47200-aed2-431d-89fd-c27cdd91564f\" (UID: \"03d47200-aed2-431d-89fd-c27cdd91564f\") " Feb 02 10:42:41 crc kubenswrapper[4782]: I0202 10:42:41.682104 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/03d47200-aed2-431d-89fd-c27cdd91564f-v4-0-config-system-ocp-branding-template\") pod \"03d47200-aed2-431d-89fd-c27cdd91564f\" (UID: \"03d47200-aed2-431d-89fd-c27cdd91564f\") " Feb 02 10:42:41 crc kubenswrapper[4782]: I0202 10:42:41.682123 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/03d47200-aed2-431d-89fd-c27cdd91564f-v4-0-config-user-template-login\") pod \"03d47200-aed2-431d-89fd-c27cdd91564f\" (UID: \"03d47200-aed2-431d-89fd-c27cdd91564f\") " Feb 02 10:42:41 crc kubenswrapper[4782]: I0202 10:42:41.682157 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/03d47200-aed2-431d-89fd-c27cdd91564f-v4-0-config-system-trusted-ca-bundle\") pod 
\"03d47200-aed2-431d-89fd-c27cdd91564f\" (UID: \"03d47200-aed2-431d-89fd-c27cdd91564f\") " Feb 02 10:42:41 crc kubenswrapper[4782]: I0202 10:42:41.682179 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/03d47200-aed2-431d-89fd-c27cdd91564f-v4-0-config-user-template-provider-selection\") pod \"03d47200-aed2-431d-89fd-c27cdd91564f\" (UID: \"03d47200-aed2-431d-89fd-c27cdd91564f\") " Feb 02 10:42:41 crc kubenswrapper[4782]: I0202 10:42:41.682222 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/03d47200-aed2-431d-89fd-c27cdd91564f-v4-0-config-system-service-ca\") pod \"03d47200-aed2-431d-89fd-c27cdd91564f\" (UID: \"03d47200-aed2-431d-89fd-c27cdd91564f\") " Feb 02 10:42:41 crc kubenswrapper[4782]: I0202 10:42:41.682246 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/03d47200-aed2-431d-89fd-c27cdd91564f-v4-0-config-user-template-error\") pod \"03d47200-aed2-431d-89fd-c27cdd91564f\" (UID: \"03d47200-aed2-431d-89fd-c27cdd91564f\") " Feb 02 10:42:41 crc kubenswrapper[4782]: I0202 10:42:41.682270 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/03d47200-aed2-431d-89fd-c27cdd91564f-v4-0-config-system-serving-cert\") pod \"03d47200-aed2-431d-89fd-c27cdd91564f\" (UID: \"03d47200-aed2-431d-89fd-c27cdd91564f\") " Feb 02 10:42:41 crc kubenswrapper[4782]: I0202 10:42:41.682299 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/03d47200-aed2-431d-89fd-c27cdd91564f-v4-0-config-system-session\") pod \"03d47200-aed2-431d-89fd-c27cdd91564f\" (UID: \"03d47200-aed2-431d-89fd-c27cdd91564f\") " Feb 02 10:42:41 crc kubenswrapper[4782]: I0202 10:42:41.682332 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/03d47200-aed2-431d-89fd-c27cdd91564f-audit-dir\") pod \"03d47200-aed2-431d-89fd-c27cdd91564f\" (UID: \"03d47200-aed2-431d-89fd-c27cdd91564f\") " Feb 02 10:42:41 crc kubenswrapper[4782]: I0202 10:42:41.682746 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/03d47200-aed2-431d-89fd-c27cdd91564f-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "03d47200-aed2-431d-89fd-c27cdd91564f" (UID: "03d47200-aed2-431d-89fd-c27cdd91564f"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 10:42:41 crc kubenswrapper[4782]: I0202 10:42:41.684585 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/03d47200-aed2-431d-89fd-c27cdd91564f-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "03d47200-aed2-431d-89fd-c27cdd91564f" (UID: "03d47200-aed2-431d-89fd-c27cdd91564f"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:42:41 crc kubenswrapper[4782]: I0202 10:42:41.685518 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/03d47200-aed2-431d-89fd-c27cdd91564f-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "03d47200-aed2-431d-89fd-c27cdd91564f" (UID: "03d47200-aed2-431d-89fd-c27cdd91564f"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:42:41 crc kubenswrapper[4782]: I0202 10:42:41.685528 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/03d47200-aed2-431d-89fd-c27cdd91564f-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "03d47200-aed2-431d-89fd-c27cdd91564f" (UID: "03d47200-aed2-431d-89fd-c27cdd91564f"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:42:41 crc kubenswrapper[4782]: I0202 10:42:41.693052 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03d47200-aed2-431d-89fd-c27cdd91564f-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "03d47200-aed2-431d-89fd-c27cdd91564f" (UID: "03d47200-aed2-431d-89fd-c27cdd91564f"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:42:41 crc kubenswrapper[4782]: I0202 10:42:41.695148 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03d47200-aed2-431d-89fd-c27cdd91564f-kube-api-access-vh2jt" (OuterVolumeSpecName: "kube-api-access-vh2jt") pod "03d47200-aed2-431d-89fd-c27cdd91564f" (UID: "03d47200-aed2-431d-89fd-c27cdd91564f"). InnerVolumeSpecName "kube-api-access-vh2jt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:42:41 crc kubenswrapper[4782]: I0202 10:42:41.695720 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03d47200-aed2-431d-89fd-c27cdd91564f-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "03d47200-aed2-431d-89fd-c27cdd91564f" (UID: "03d47200-aed2-431d-89fd-c27cdd91564f"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:42:41 crc kubenswrapper[4782]: I0202 10:42:41.703578 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03d47200-aed2-431d-89fd-c27cdd91564f-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "03d47200-aed2-431d-89fd-c27cdd91564f" (UID: "03d47200-aed2-431d-89fd-c27cdd91564f"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:42:41 crc kubenswrapper[4782]: I0202 10:42:41.707596 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03d47200-aed2-431d-89fd-c27cdd91564f-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "03d47200-aed2-431d-89fd-c27cdd91564f" (UID: "03d47200-aed2-431d-89fd-c27cdd91564f"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:42:41 crc kubenswrapper[4782]: I0202 10:42:41.708707 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03d47200-aed2-431d-89fd-c27cdd91564f-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "03d47200-aed2-431d-89fd-c27cdd91564f" (UID: "03d47200-aed2-431d-89fd-c27cdd91564f"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:42:41 crc kubenswrapper[4782]: I0202 10:42:41.713516 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/03d47200-aed2-431d-89fd-c27cdd91564f-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "03d47200-aed2-431d-89fd-c27cdd91564f" (UID: "03d47200-aed2-431d-89fd-c27cdd91564f"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:42:41 crc kubenswrapper[4782]: I0202 10:42:41.717967 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03d47200-aed2-431d-89fd-c27cdd91564f-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "03d47200-aed2-431d-89fd-c27cdd91564f" (UID: "03d47200-aed2-431d-89fd-c27cdd91564f"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:42:41 crc kubenswrapper[4782]: I0202 10:42:41.718719 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03d47200-aed2-431d-89fd-c27cdd91564f-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "03d47200-aed2-431d-89fd-c27cdd91564f" (UID: "03d47200-aed2-431d-89fd-c27cdd91564f"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:42:41 crc kubenswrapper[4782]: I0202 10:42:41.726649 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03d47200-aed2-431d-89fd-c27cdd91564f-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "03d47200-aed2-431d-89fd-c27cdd91564f" (UID: "03d47200-aed2-431d-89fd-c27cdd91564f"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:42:41 crc kubenswrapper[4782]: I0202 10:42:41.783685 4782 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/03d47200-aed2-431d-89fd-c27cdd91564f-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 02 10:42:41 crc kubenswrapper[4782]: I0202 10:42:41.783728 4782 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/03d47200-aed2-431d-89fd-c27cdd91564f-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 02 10:42:41 crc kubenswrapper[4782]: I0202 10:42:41.783742 4782 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/03d47200-aed2-431d-89fd-c27cdd91564f-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 02 10:42:41 crc kubenswrapper[4782]: I0202 10:42:41.783755 4782 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/03d47200-aed2-431d-89fd-c27cdd91564f-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 02 10:42:41 crc kubenswrapper[4782]: I0202 10:42:41.783769 4782 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/03d47200-aed2-431d-89fd-c27cdd91564f-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 10:42:41 crc kubenswrapper[4782]: I0202 10:42:41.783781 4782 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/03d47200-aed2-431d-89fd-c27cdd91564f-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 02 10:42:41 crc kubenswrapper[4782]: I0202 10:42:41.783793 4782 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/03d47200-aed2-431d-89fd-c27cdd91564f-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 02 10:42:41 crc kubenswrapper[4782]: I0202 10:42:41.783805 4782 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/03d47200-aed2-431d-89fd-c27cdd91564f-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 02 10:42:41 crc kubenswrapper[4782]: I0202 10:42:41.783817 4782 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/03d47200-aed2-431d-89fd-c27cdd91564f-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 10:42:41 crc kubenswrapper[4782]: I0202 10:42:41.783829 4782 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/03d47200-aed2-431d-89fd-c27cdd91564f-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 02 10:42:41 crc kubenswrapper[4782]: I0202 10:42:41.783840 4782 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/03d47200-aed2-431d-89fd-c27cdd91564f-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 02 10:42:41 crc kubenswrapper[4782]: I0202 10:42:41.783850 4782 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/03d47200-aed2-431d-89fd-c27cdd91564f-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 02 10:42:41 crc kubenswrapper[4782]: I0202 10:42:41.783862 4782 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/03d47200-aed2-431d-89fd-c27cdd91564f-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 02 10:42:41 crc kubenswrapper[4782]: I0202 10:42:41.783876 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vh2jt\" (UniqueName: \"kubernetes.io/projected/03d47200-aed2-431d-89fd-c27cdd91564f-kube-api-access-vh2jt\") on node \"crc\" DevicePath \"\"" Feb 02 10:42:42 crc kubenswrapper[4782]: I0202 10:42:42.152907 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-sc7kt" Feb 02 10:42:42 crc kubenswrapper[4782]: I0202 10:42:42.153689 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-sc7kt" event={"ID":"03d47200-aed2-431d-89fd-c27cdd91564f","Type":"ContainerDied","Data":"3ff1f99d47a76aef7148a44cb594fe9fccb90137af935d28016f31e0538f0f1c"} Feb 02 10:42:42 crc kubenswrapper[4782]: I0202 10:42:42.153829 4782 scope.go:117] "RemoveContainer" containerID="df2490b959607b5dd5fcd068ecdc3e142f390fcbf0477d98f074718eab612f07" Feb 02 10:42:42 crc kubenswrapper[4782]: I0202 10:42:42.184051 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-sc7kt"] Feb 02 10:42:42 crc kubenswrapper[4782]: I0202 10:42:42.189499 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-sc7kt"] Feb 02 10:42:42 crc kubenswrapper[4782]: I0202 10:42:42.688618 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-58b7b45f6f-2t475"] Feb 02 10:42:42 crc kubenswrapper[4782]: E0202 10:42:42.689353 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2dbcee3c-23a7-4dfa-aa27-fccf3ff1873d" containerName="registry-server" Feb 02 10:42:42 crc kubenswrapper[4782]: I0202 10:42:42.689518 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="2dbcee3c-23a7-4dfa-aa27-fccf3ff1873d" containerName="registry-server" Feb 02 10:42:42 crc kubenswrapper[4782]: E0202 10:42:42.689660 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2dbcee3c-23a7-4dfa-aa27-fccf3ff1873d" containerName="extract-utilities" Feb 02 10:42:42 crc kubenswrapper[4782]: I0202 10:42:42.689763 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="2dbcee3c-23a7-4dfa-aa27-fccf3ff1873d" containerName="extract-utilities" Feb 02 10:42:42 crc kubenswrapper[4782]: E0202 10:42:42.689886 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03d47200-aed2-431d-89fd-c27cdd91564f" containerName="oauth-openshift" Feb 02 10:42:42 crc kubenswrapper[4782]: I0202 10:42:42.689985 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="03d47200-aed2-431d-89fd-c27cdd91564f" containerName="oauth-openshift" Feb 02 10:42:42 crc kubenswrapper[4782]: E0202 10:42:42.690070 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2dbcee3c-23a7-4dfa-aa27-fccf3ff1873d" containerName="extract-content" Feb 02 10:42:42 crc kubenswrapper[4782]: I0202 10:42:42.690161 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="2dbcee3c-23a7-4dfa-aa27-fccf3ff1873d" containerName="extract-content" 
Feb 02 10:42:42 crc kubenswrapper[4782]: E0202 10:42:42.690301 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e24dc2e-1431-4589-b097-598780357e04" containerName="controller-manager" Feb 02 10:42:42 crc kubenswrapper[4782]: I0202 10:42:42.690392 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e24dc2e-1431-4589-b097-598780357e04" containerName="controller-manager" Feb 02 10:42:42 crc kubenswrapper[4782]: E0202 10:42:42.690487 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2227870a-e9fb-429e-a495-cfa17761d275" containerName="route-controller-manager" Feb 02 10:42:42 crc kubenswrapper[4782]: I0202 10:42:42.690603 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="2227870a-e9fb-429e-a495-cfa17761d275" containerName="route-controller-manager" Feb 02 10:42:42 crc kubenswrapper[4782]: I0202 10:42:42.690870 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="2dbcee3c-23a7-4dfa-aa27-fccf3ff1873d" containerName="registry-server" Feb 02 10:42:42 crc kubenswrapper[4782]: I0202 10:42:42.690982 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="2227870a-e9fb-429e-a495-cfa17761d275" containerName="route-controller-manager" Feb 02 10:42:42 crc kubenswrapper[4782]: I0202 10:42:42.691102 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="03d47200-aed2-431d-89fd-c27cdd91564f" containerName="oauth-openshift" Feb 02 10:42:42 crc kubenswrapper[4782]: I0202 10:42:42.691200 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e24dc2e-1431-4589-b097-598780357e04" containerName="controller-manager" Feb 02 10:42:42 crc kubenswrapper[4782]: I0202 10:42:42.691750 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5fc58ff67-mghl6"] Feb 02 10:42:42 crc kubenswrapper[4782]: I0202 10:42:42.691929 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-58b7b45f6f-2t475" Feb 02 10:42:42 crc kubenswrapper[4782]: I0202 10:42:42.693198 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5fc58ff67-mghl6" Feb 02 10:42:42 crc kubenswrapper[4782]: I0202 10:42:42.696912 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 02 10:42:42 crc kubenswrapper[4782]: I0202 10:42:42.697112 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 02 10:42:42 crc kubenswrapper[4782]: I0202 10:42:42.697267 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 02 10:42:42 crc kubenswrapper[4782]: I0202 10:42:42.697459 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 02 10:42:42 crc kubenswrapper[4782]: I0202 10:42:42.701950 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 02 10:42:42 crc kubenswrapper[4782]: I0202 10:42:42.702077 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 02 10:42:42 crc kubenswrapper[4782]: I0202 10:42:42.702703 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 02 10:42:42 crc kubenswrapper[4782]: I0202 10:42:42.707258 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 02 10:42:42 crc kubenswrapper[4782]: I0202 10:42:42.729117 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 02 10:42:42 crc kubenswrapper[4782]: I0202 10:42:42.741961 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 02 10:42:42 crc kubenswrapper[4782]: I0202 10:42:42.743408 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 02 10:42:42 crc kubenswrapper[4782]: I0202 10:42:42.744602 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 02 10:42:42 crc kubenswrapper[4782]: I0202 10:42:42.745092 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 02 10:42:42 crc kubenswrapper[4782]: I0202 10:42:42.755433 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-58b7b45f6f-2t475"] Feb 02 10:42:42 crc kubenswrapper[4782]: I0202 10:42:42.773065 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5fc58ff67-mghl6"] Feb 02 10:42:42 crc kubenswrapper[4782]: I0202 10:42:42.800685 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46d69997-45d2-4fc5-97fe-324abd43be7c-config\") pod \"controller-manager-58b7b45f6f-2t475\" (UID: \"46d69997-45d2-4fc5-97fe-324abd43be7c\") " pod="openshift-controller-manager/controller-manager-58b7b45f6f-2t475" Feb 02 10:42:42 crc kubenswrapper[4782]: I0202 10:42:42.800764 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-886qd\" 
(UniqueName: \"kubernetes.io/projected/46d69997-45d2-4fc5-97fe-324abd43be7c-kube-api-access-886qd\") pod \"controller-manager-58b7b45f6f-2t475\" (UID: \"46d69997-45d2-4fc5-97fe-324abd43be7c\") " pod="openshift-controller-manager/controller-manager-58b7b45f6f-2t475" Feb 02 10:42:42 crc kubenswrapper[4782]: I0202 10:42:42.800823 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1e30f31e-9e81-4b3f-a680-a84918f9e7ec-client-ca\") pod \"route-controller-manager-5fc58ff67-mghl6\" (UID: \"1e30f31e-9e81-4b3f-a680-a84918f9e7ec\") " pod="openshift-route-controller-manager/route-controller-manager-5fc58ff67-mghl6" Feb 02 10:42:42 crc kubenswrapper[4782]: I0202 10:42:42.800856 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1e30f31e-9e81-4b3f-a680-a84918f9e7ec-serving-cert\") pod \"route-controller-manager-5fc58ff67-mghl6\" (UID: \"1e30f31e-9e81-4b3f-a680-a84918f9e7ec\") " pod="openshift-route-controller-manager/route-controller-manager-5fc58ff67-mghl6" Feb 02 10:42:42 crc kubenswrapper[4782]: I0202 10:42:42.800887 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hd8hw\" (UniqueName: \"kubernetes.io/projected/1e30f31e-9e81-4b3f-a680-a84918f9e7ec-kube-api-access-hd8hw\") pod \"route-controller-manager-5fc58ff67-mghl6\" (UID: \"1e30f31e-9e81-4b3f-a680-a84918f9e7ec\") " pod="openshift-route-controller-manager/route-controller-manager-5fc58ff67-mghl6" Feb 02 10:42:42 crc kubenswrapper[4782]: I0202 10:42:42.801054 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/46d69997-45d2-4fc5-97fe-324abd43be7c-client-ca\") pod \"controller-manager-58b7b45f6f-2t475\" (UID: \"46d69997-45d2-4fc5-97fe-324abd43be7c\") " pod="openshift-controller-manager/controller-manager-58b7b45f6f-2t475" Feb 02 10:42:42 crc kubenswrapper[4782]: I0202 10:42:42.801153 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/46d69997-45d2-4fc5-97fe-324abd43be7c-serving-cert\") pod \"controller-manager-58b7b45f6f-2t475\" (UID: \"46d69997-45d2-4fc5-97fe-324abd43be7c\") " pod="openshift-controller-manager/controller-manager-58b7b45f6f-2t475" Feb 02 10:42:42 crc kubenswrapper[4782]: I0202 10:42:42.801177 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/46d69997-45d2-4fc5-97fe-324abd43be7c-proxy-ca-bundles\") pod \"controller-manager-58b7b45f6f-2t475\" (UID: \"46d69997-45d2-4fc5-97fe-324abd43be7c\") " pod="openshift-controller-manager/controller-manager-58b7b45f6f-2t475" Feb 02 10:42:42 crc kubenswrapper[4782]: I0202 10:42:42.801274 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e30f31e-9e81-4b3f-a680-a84918f9e7ec-config\") pod \"route-controller-manager-5fc58ff67-mghl6\" (UID: \"1e30f31e-9e81-4b3f-a680-a84918f9e7ec\") " pod="openshift-route-controller-manager/route-controller-manager-5fc58ff67-mghl6" Feb 02 10:42:42 crc kubenswrapper[4782]: I0202 10:42:42.832424 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="03d47200-aed2-431d-89fd-c27cdd91564f" path="/var/lib/kubelet/pods/03d47200-aed2-431d-89fd-c27cdd91564f/volumes" Feb 02 10:42:42 crc kubenswrapper[4782]: I0202 10:42:42.833133 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2227870a-e9fb-429e-a495-cfa17761d275" path="/var/lib/kubelet/pods/2227870a-e9fb-429e-a495-cfa17761d275/volumes" Feb 02 10:42:42 crc kubenswrapper[4782]: I0202 10:42:42.834024 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2dbcee3c-23a7-4dfa-aa27-fccf3ff1873d" path="/var/lib/kubelet/pods/2dbcee3c-23a7-4dfa-aa27-fccf3ff1873d/volumes" Feb 02 10:42:42 crc kubenswrapper[4782]: I0202 10:42:42.835415 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e24dc2e-1431-4589-b097-598780357e04" path="/var/lib/kubelet/pods/9e24dc2e-1431-4589-b097-598780357e04/volumes" Feb 02 10:42:42 crc kubenswrapper[4782]: I0202 10:42:42.902668 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hd8hw\" (UniqueName: \"kubernetes.io/projected/1e30f31e-9e81-4b3f-a680-a84918f9e7ec-kube-api-access-hd8hw\") pod \"route-controller-manager-5fc58ff67-mghl6\" (UID: \"1e30f31e-9e81-4b3f-a680-a84918f9e7ec\") " pod="openshift-route-controller-manager/route-controller-manager-5fc58ff67-mghl6" Feb 02 10:42:42 crc kubenswrapper[4782]: I0202 10:42:42.902728 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/46d69997-45d2-4fc5-97fe-324abd43be7c-client-ca\") pod \"controller-manager-58b7b45f6f-2t475\" (UID: \"46d69997-45d2-4fc5-97fe-324abd43be7c\") " pod="openshift-controller-manager/controller-manager-58b7b45f6f-2t475" Feb 02 10:42:42 crc kubenswrapper[4782]: I0202 10:42:42.902755 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/46d69997-45d2-4fc5-97fe-324abd43be7c-serving-cert\") pod \"controller-manager-58b7b45f6f-2t475\" (UID: \"46d69997-45d2-4fc5-97fe-324abd43be7c\") " pod="openshift-controller-manager/controller-manager-58b7b45f6f-2t475" Feb 02 10:42:42 crc kubenswrapper[4782]: I0202 10:42:42.902770 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/46d69997-45d2-4fc5-97fe-324abd43be7c-proxy-ca-bundles\") pod \"controller-manager-58b7b45f6f-2t475\" (UID: \"46d69997-45d2-4fc5-97fe-324abd43be7c\") " pod="openshift-controller-manager/controller-manager-58b7b45f6f-2t475" Feb 02 10:42:42 crc kubenswrapper[4782]: I0202 10:42:42.902799 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e30f31e-9e81-4b3f-a680-a84918f9e7ec-config\") pod \"route-controller-manager-5fc58ff67-mghl6\" (UID: \"1e30f31e-9e81-4b3f-a680-a84918f9e7ec\") " pod="openshift-route-controller-manager/route-controller-manager-5fc58ff67-mghl6" Feb 02 10:42:42 crc kubenswrapper[4782]: I0202 10:42:42.902821 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46d69997-45d2-4fc5-97fe-324abd43be7c-config\") pod \"controller-manager-58b7b45f6f-2t475\" (UID: \"46d69997-45d2-4fc5-97fe-324abd43be7c\") " pod="openshift-controller-manager/controller-manager-58b7b45f6f-2t475" Feb 02 10:42:42 crc kubenswrapper[4782]: I0202 10:42:42.902840 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-886qd\" (UniqueName: \"kubernetes.io/projected/46d69997-45d2-4fc5-97fe-324abd43be7c-kube-api-access-886qd\") pod \"controller-manager-58b7b45f6f-2t475\" (UID: \"46d69997-45d2-4fc5-97fe-324abd43be7c\") " pod="openshift-controller-manager/controller-manager-58b7b45f6f-2t475" Feb 02 10:42:42 crc kubenswrapper[4782]: I0202 10:42:42.902871 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1e30f31e-9e81-4b3f-a680-a84918f9e7ec-client-ca\") pod \"route-controller-manager-5fc58ff67-mghl6\" (UID: \"1e30f31e-9e81-4b3f-a680-a84918f9e7ec\") " pod="openshift-route-controller-manager/route-controller-manager-5fc58ff67-mghl6" Feb 02 10:42:42 crc kubenswrapper[4782]: I0202 10:42:42.902892 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1e30f31e-9e81-4b3f-a680-a84918f9e7ec-serving-cert\") pod \"route-controller-manager-5fc58ff67-mghl6\" (UID: \"1e30f31e-9e81-4b3f-a680-a84918f9e7ec\") " pod="openshift-route-controller-manager/route-controller-manager-5fc58ff67-mghl6" Feb 02 10:42:42 crc kubenswrapper[4782]: I0202 10:42:42.904707 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e30f31e-9e81-4b3f-a680-a84918f9e7ec-config\") pod \"route-controller-manager-5fc58ff67-mghl6\" (UID: \"1e30f31e-9e81-4b3f-a680-a84918f9e7ec\") " pod="openshift-route-controller-manager/route-controller-manager-5fc58ff67-mghl6" Feb 02 10:42:42 crc kubenswrapper[4782]: I0202 10:42:42.905372 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/46d69997-45d2-4fc5-97fe-324abd43be7c-client-ca\") pod \"controller-manager-58b7b45f6f-2t475\" (UID: \"46d69997-45d2-4fc5-97fe-324abd43be7c\") " pod="openshift-controller-manager/controller-manager-58b7b45f6f-2t475" Feb 02 10:42:42 crc kubenswrapper[4782]: I0202 10:42:42.905540 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46d69997-45d2-4fc5-97fe-324abd43be7c-config\") pod \"controller-manager-58b7b45f6f-2t475\" (UID: \"46d69997-45d2-4fc5-97fe-324abd43be7c\") " pod="openshift-controller-manager/controller-manager-58b7b45f6f-2t475" Feb 02 10:42:42 crc kubenswrapper[4782]: I0202 10:42:42.906406 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1e30f31e-9e81-4b3f-a680-a84918f9e7ec-client-ca\") pod \"route-controller-manager-5fc58ff67-mghl6\" (UID: \"1e30f31e-9e81-4b3f-a680-a84918f9e7ec\") " pod="openshift-route-controller-manager/route-controller-manager-5fc58ff67-mghl6" Feb 02 10:42:42 crc kubenswrapper[4782]: I0202 10:42:42.906812 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/46d69997-45d2-4fc5-97fe-324abd43be7c-proxy-ca-bundles\") pod \"controller-manager-58b7b45f6f-2t475\" (UID: \"46d69997-45d2-4fc5-97fe-324abd43be7c\") " pod="openshift-controller-manager/controller-manager-58b7b45f6f-2t475" Feb 02 10:42:42 crc kubenswrapper[4782]: I0202 10:42:42.912557 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/46d69997-45d2-4fc5-97fe-324abd43be7c-serving-cert\") pod \"controller-manager-58b7b45f6f-2t475\" (UID: \"46d69997-45d2-4fc5-97fe-324abd43be7c\") " 
pod="openshift-controller-manager/controller-manager-58b7b45f6f-2t475" Feb 02 10:42:42 crc kubenswrapper[4782]: I0202 10:42:42.922986 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-886qd\" (UniqueName: \"kubernetes.io/projected/46d69997-45d2-4fc5-97fe-324abd43be7c-kube-api-access-886qd\") pod \"controller-manager-58b7b45f6f-2t475\" (UID: \"46d69997-45d2-4fc5-97fe-324abd43be7c\") " pod="openshift-controller-manager/controller-manager-58b7b45f6f-2t475" Feb 02 10:42:42 crc kubenswrapper[4782]: I0202 10:42:42.924700 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1e30f31e-9e81-4b3f-a680-a84918f9e7ec-serving-cert\") pod \"route-controller-manager-5fc58ff67-mghl6\" (UID: \"1e30f31e-9e81-4b3f-a680-a84918f9e7ec\") " pod="openshift-route-controller-manager/route-controller-manager-5fc58ff67-mghl6" Feb 02 10:42:42 crc kubenswrapper[4782]: I0202 10:42:42.929010 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hd8hw\" (UniqueName: \"kubernetes.io/projected/1e30f31e-9e81-4b3f-a680-a84918f9e7ec-kube-api-access-hd8hw\") pod \"route-controller-manager-5fc58ff67-mghl6\" (UID: \"1e30f31e-9e81-4b3f-a680-a84918f9e7ec\") " pod="openshift-route-controller-manager/route-controller-manager-5fc58ff67-mghl6" Feb 02 10:42:43 crc kubenswrapper[4782]: I0202 10:42:43.040078 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-58b7b45f6f-2t475" Feb 02 10:42:43 crc kubenswrapper[4782]: I0202 10:42:43.057264 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5fc58ff67-mghl6" Feb 02 10:42:43 crc kubenswrapper[4782]: I0202 10:42:43.199168 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g65rt" event={"ID":"d9a718cd-1b6d-483f-b995-938331c7e00e","Type":"ContainerStarted","Data":"e1cc76cbefa2853cb7c51972a0b447075f16dfeb15a018a5e6a336960194aac7"} Feb 02 10:42:43 crc kubenswrapper[4782]: I0202 10:42:43.207030 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8tk99" event={"ID":"9beb5599-8c2d-4493-9561-cc2781d32052","Type":"ContainerStarted","Data":"27683de59fe1d475d11ccad4a4bc71fc78cce9124a8ba6ddadca22bd5b3b2c59"} Feb 02 10:42:43 crc kubenswrapper[4782]: I0202 10:42:43.231429 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-g65rt" podStartSLOduration=4.5029323770000005 podStartE2EDuration="1m17.231407231s" podCreationTimestamp="2026-02-02 10:41:26 +0000 UTC" firstStartedPulling="2026-02-02 10:41:28.752479014 +0000 UTC m=+168.636671740" lastFinishedPulling="2026-02-02 10:42:41.480953878 +0000 UTC m=+241.365146594" observedRunningTime="2026-02-02 10:42:43.229108495 +0000 UTC m=+243.113301211" watchObservedRunningTime="2026-02-02 10:42:43.231407231 +0000 UTC m=+243.115599947" Feb 02 10:42:43 crc kubenswrapper[4782]: I0202 10:42:43.256526 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-8tk99" podStartSLOduration=8.356861045 podStartE2EDuration="1m18.256504066s" podCreationTimestamp="2026-02-02 10:41:25 +0000 UTC" firstStartedPulling="2026-02-02 10:41:28.761231886 +0000 UTC m=+168.645424602" lastFinishedPulling="2026-02-02 10:42:38.660874907 +0000 UTC m=+238.545067623" 
observedRunningTime="2026-02-02 10:42:43.255038344 +0000 UTC m=+243.139231050" watchObservedRunningTime="2026-02-02 10:42:43.256504066 +0000 UTC m=+243.140696792" Feb 02 10:42:43 crc kubenswrapper[4782]: I0202 10:42:43.263162 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xmt8t" event={"ID":"213698f8-d1b6-489f-8fc4-a69583d4fc2e","Type":"ContainerStarted","Data":"6939dd2b86b314873208fc7be8f608a39a08ac73dd21d303f68aa3eaddde0aa0"} Feb 02 10:42:43 crc kubenswrapper[4782]: I0202 10:42:43.275198 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-khjwl" event={"ID":"99330299-8910-4c41-b704-120a10eb799b","Type":"ContainerStarted","Data":"e2e747d70541f05dcfaa790b9a1963fa9fa3583b791efc5145a3eed19e5ea7b8"} Feb 02 10:42:43 crc kubenswrapper[4782]: I0202 10:42:43.283730 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-xmt8t" podStartSLOduration=4.579748335 podStartE2EDuration="1m17.283715332s" podCreationTimestamp="2026-02-02 10:41:26 +0000 UTC" firstStartedPulling="2026-02-02 10:41:28.780864463 +0000 UTC m=+168.665057179" lastFinishedPulling="2026-02-02 10:42:41.48483146 +0000 UTC m=+241.369024176" observedRunningTime="2026-02-02 10:42:43.280211961 +0000 UTC m=+243.164404677" watchObservedRunningTime="2026-02-02 10:42:43.283715332 +0000 UTC m=+243.167908048" Feb 02 10:42:43 crc kubenswrapper[4782]: I0202 10:42:43.319767 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-khjwl" podStartSLOduration=6.08709926 podStartE2EDuration="1m18.319749543s" podCreationTimestamp="2026-02-02 10:41:25 +0000 UTC" firstStartedPulling="2026-02-02 10:41:28.799099979 +0000 UTC m=+168.683292695" lastFinishedPulling="2026-02-02 10:42:41.031750262 +0000 UTC m=+240.915942978" observedRunningTime="2026-02-02 10:42:43.311043261 +0000 UTC m=+243.195235977" watchObservedRunningTime="2026-02-02 10:42:43.319749543 +0000 UTC m=+243.203942249" Feb 02 10:42:43 crc kubenswrapper[4782]: I0202 10:42:43.450154 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-58b7b45f6f-2t475"] Feb 02 10:42:43 crc kubenswrapper[4782]: I0202 10:42:43.519892 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5fc58ff67-mghl6"] Feb 02 10:42:43 crc kubenswrapper[4782]: W0202 10:42:43.537437 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1e30f31e_9e81_4b3f_a680_a84918f9e7ec.slice/crio-944e504f7664a3445683cb50cf91015cca415c853c6ae2c6623b8ce4bf506ee0 WatchSource:0}: Error finding container 944e504f7664a3445683cb50cf91015cca415c853c6ae2c6623b8ce4bf506ee0: Status 404 returned error can't find the container with id 944e504f7664a3445683cb50cf91015cca415c853c6ae2c6623b8ce4bf506ee0 Feb 02 10:42:43 crc kubenswrapper[4782]: I0202 10:42:43.885970 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-8g5bv" Feb 02 10:42:43 crc kubenswrapper[4782]: I0202 10:42:43.886018 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-8g5bv" Feb 02 10:42:43 crc kubenswrapper[4782]: I0202 10:42:43.934418 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/certified-operators-8g5bv" Feb 02 10:42:44 crc kubenswrapper[4782]: I0202 10:42:44.292992 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-58b7b45f6f-2t475" event={"ID":"46d69997-45d2-4fc5-97fe-324abd43be7c","Type":"ContainerStarted","Data":"31803234b3e6e5e02aacf6f32b0728560be1f2f91def8457a24f11c30c6f30d9"} Feb 02 10:42:44 crc kubenswrapper[4782]: I0202 10:42:44.293048 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-58b7b45f6f-2t475" event={"ID":"46d69997-45d2-4fc5-97fe-324abd43be7c","Type":"ContainerStarted","Data":"6d2418109eeeba4b0106f128a727272504228d5ce1b9780ebff9ed573127420d"} Feb 02 10:42:44 crc kubenswrapper[4782]: I0202 10:42:44.293353 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-58b7b45f6f-2t475" Feb 02 10:42:44 crc kubenswrapper[4782]: I0202 10:42:44.297996 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5fc58ff67-mghl6" event={"ID":"1e30f31e-9e81-4b3f-a680-a84918f9e7ec","Type":"ContainerStarted","Data":"9b3a89ab30ae80001691fec459871214ee3a74122b7f05bc4d16d051b4bde636"} Feb 02 10:42:44 crc kubenswrapper[4782]: I0202 10:42:44.298071 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5fc58ff67-mghl6" event={"ID":"1e30f31e-9e81-4b3f-a680-a84918f9e7ec","Type":"ContainerStarted","Data":"944e504f7664a3445683cb50cf91015cca415c853c6ae2c6623b8ce4bf506ee0"} Feb 02 10:42:44 crc kubenswrapper[4782]: I0202 10:42:44.298392 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5fc58ff67-mghl6" Feb 02 10:42:44 crc kubenswrapper[4782]: I0202 10:42:44.320293 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-58b7b45f6f-2t475" Feb 02 10:42:44 crc kubenswrapper[4782]: I0202 10:42:44.324554 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5fc58ff67-mghl6" Feb 02 10:42:44 crc kubenswrapper[4782]: I0202 10:42:44.330282 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-58b7b45f6f-2t475" podStartSLOduration=6.330265523 podStartE2EDuration="6.330265523s" podCreationTimestamp="2026-02-02 10:42:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:42:44.328180322 +0000 UTC m=+244.212373038" watchObservedRunningTime="2026-02-02 10:42:44.330265523 +0000 UTC m=+244.214458239" Feb 02 10:42:44 crc kubenswrapper[4782]: I0202 10:42:44.396825 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5fc58ff67-mghl6" podStartSLOduration=6.396805575 podStartE2EDuration="6.396805575s" podCreationTimestamp="2026-02-02 10:42:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:42:44.396730692 +0000 UTC m=+244.280923408" watchObservedRunningTime="2026-02-02 10:42:44.396805575 +0000 UTC m=+244.280998291" Feb 02 10:42:44 crc kubenswrapper[4782]: I0202 10:42:44.406408 4782 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-8g5bv" Feb 02 10:42:46 crc kubenswrapper[4782]: I0202 10:42:46.031546 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-8tk99" Feb 02 10:42:46 crc kubenswrapper[4782]: I0202 10:42:46.031713 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-8tk99" Feb 02 10:42:46 crc kubenswrapper[4782]: I0202 10:42:46.087531 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-8tk99" Feb 02 10:42:46 crc kubenswrapper[4782]: I0202 10:42:46.134597 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-khjwl" Feb 02 10:42:46 crc kubenswrapper[4782]: I0202 10:42:46.134960 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-khjwl" Feb 02 10:42:46 crc kubenswrapper[4782]: I0202 10:42:46.180046 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-khjwl" Feb 02 10:42:46 crc kubenswrapper[4782]: I0202 10:42:46.699997 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-g65rt" Feb 02 10:42:46 crc kubenswrapper[4782]: I0202 10:42:46.700061 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-g65rt" Feb 02 10:42:47 crc kubenswrapper[4782]: I0202 10:42:47.078679 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-xmt8t" Feb 02 10:42:47 crc kubenswrapper[4782]: I0202 10:42:47.079349 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-xmt8t" Feb 02 10:42:47 crc kubenswrapper[4782]: I0202 10:42:47.398929 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-khjwl" Feb 02 10:42:47 crc kubenswrapper[4782]: I0202 10:42:47.404991 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-8tk99" Feb 02 10:42:47 crc kubenswrapper[4782]: I0202 10:42:47.458725 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8g5bv"] Feb 02 10:42:47 crc kubenswrapper[4782]: I0202 10:42:47.458957 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-8g5bv" podUID="a893973e-e0b3-426e-8bf1-7902687b7036" containerName="registry-server" containerID="cri-o://01286a2afedb32bfae7a292e969599806be21719c09d61c6f69879d22709b8d1" gracePeriod=2 Feb 02 10:42:47 crc kubenswrapper[4782]: I0202 10:42:47.736409 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-g65rt" podUID="d9a718cd-1b6d-483f-b995-938331c7e00e" containerName="registry-server" probeResult="failure" output=< Feb 02 10:42:47 crc kubenswrapper[4782]: timeout: failed to connect service ":50051" within 1s Feb 02 10:42:47 crc kubenswrapper[4782]: > Feb 02 10:42:47 crc kubenswrapper[4782]: I0202 10:42:47.913818 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8g5bv" Feb 02 10:42:47 crc kubenswrapper[4782]: I0202 10:42:47.982171 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a893973e-e0b3-426e-8bf1-7902687b7036-utilities\") pod \"a893973e-e0b3-426e-8bf1-7902687b7036\" (UID: \"a893973e-e0b3-426e-8bf1-7902687b7036\") " Feb 02 10:42:47 crc kubenswrapper[4782]: I0202 10:42:47.982301 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nb5v4\" (UniqueName: \"kubernetes.io/projected/a893973e-e0b3-426e-8bf1-7902687b7036-kube-api-access-nb5v4\") pod \"a893973e-e0b3-426e-8bf1-7902687b7036\" (UID: \"a893973e-e0b3-426e-8bf1-7902687b7036\") " Feb 02 10:42:47 crc kubenswrapper[4782]: I0202 10:42:47.982324 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a893973e-e0b3-426e-8bf1-7902687b7036-catalog-content\") pod \"a893973e-e0b3-426e-8bf1-7902687b7036\" (UID: \"a893973e-e0b3-426e-8bf1-7902687b7036\") " Feb 02 10:42:47 crc kubenswrapper[4782]: I0202 10:42:47.987836 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a893973e-e0b3-426e-8bf1-7902687b7036-utilities" (OuterVolumeSpecName: "utilities") pod "a893973e-e0b3-426e-8bf1-7902687b7036" (UID: "a893973e-e0b3-426e-8bf1-7902687b7036"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:42:48 crc kubenswrapper[4782]: I0202 10:42:48.000923 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a893973e-e0b3-426e-8bf1-7902687b7036-kube-api-access-nb5v4" (OuterVolumeSpecName: "kube-api-access-nb5v4") pod "a893973e-e0b3-426e-8bf1-7902687b7036" (UID: "a893973e-e0b3-426e-8bf1-7902687b7036"). InnerVolumeSpecName "kube-api-access-nb5v4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:42:48 crc kubenswrapper[4782]: I0202 10:42:48.038097 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a893973e-e0b3-426e-8bf1-7902687b7036-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a893973e-e0b3-426e-8bf1-7902687b7036" (UID: "a893973e-e0b3-426e-8bf1-7902687b7036"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:42:48 crc kubenswrapper[4782]: I0202 10:42:48.083955 4782 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a893973e-e0b3-426e-8bf1-7902687b7036-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 10:42:48 crc kubenswrapper[4782]: I0202 10:42:48.084006 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nb5v4\" (UniqueName: \"kubernetes.io/projected/a893973e-e0b3-426e-8bf1-7902687b7036-kube-api-access-nb5v4\") on node \"crc\" DevicePath \"\"" Feb 02 10:42:48 crc kubenswrapper[4782]: I0202 10:42:48.084021 4782 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a893973e-e0b3-426e-8bf1-7902687b7036-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 10:42:48 crc kubenswrapper[4782]: I0202 10:42:48.116950 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-xmt8t" podUID="213698f8-d1b6-489f-8fc4-a69583d4fc2e" containerName="registry-server" probeResult="failure" output=< Feb 02 10:42:48 crc kubenswrapper[4782]: timeout: failed to connect service ":50051" within 1s Feb 02 10:42:48 crc kubenswrapper[4782]: > Feb 02 10:42:48 crc kubenswrapper[4782]: I0202 10:42:48.370068 4782 generic.go:334] "Generic (PLEG): container finished" podID="a893973e-e0b3-426e-8bf1-7902687b7036" containerID="01286a2afedb32bfae7a292e969599806be21719c09d61c6f69879d22709b8d1" exitCode=0 Feb 02 10:42:48 crc kubenswrapper[4782]: I0202 10:42:48.370127 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8g5bv" event={"ID":"a893973e-e0b3-426e-8bf1-7902687b7036","Type":"ContainerDied","Data":"01286a2afedb32bfae7a292e969599806be21719c09d61c6f69879d22709b8d1"} Feb 02 10:42:48 crc kubenswrapper[4782]: I0202 10:42:48.370186 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8g5bv" event={"ID":"a893973e-e0b3-426e-8bf1-7902687b7036","Type":"ContainerDied","Data":"f8713e44a60ae45253bec2e5d10994fc19863aeccf7c6e956f5738780c8b26dd"} Feb 02 10:42:48 crc kubenswrapper[4782]: I0202 10:42:48.370187 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8g5bv" Feb 02 10:42:48 crc kubenswrapper[4782]: I0202 10:42:48.370207 4782 scope.go:117] "RemoveContainer" containerID="01286a2afedb32bfae7a292e969599806be21719c09d61c6f69879d22709b8d1" Feb 02 10:42:48 crc kubenswrapper[4782]: I0202 10:42:48.387999 4782 scope.go:117] "RemoveContainer" containerID="802f28ae51e65c38767b2547d3fc9fbdf161d3e61c6a3e744e602a68e142edf9" Feb 02 10:42:48 crc kubenswrapper[4782]: I0202 10:42:48.404372 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8g5bv"] Feb 02 10:42:48 crc kubenswrapper[4782]: I0202 10:42:48.404416 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-8g5bv"] Feb 02 10:42:48 crc kubenswrapper[4782]: I0202 10:42:48.410727 4782 scope.go:117] "RemoveContainer" containerID="c33cd738835a3312846866cf5dc7b1d9612aa55d5b9565677e13f823bd48c58c" Feb 02 10:42:48 crc kubenswrapper[4782]: I0202 10:42:48.423194 4782 scope.go:117] "RemoveContainer" containerID="01286a2afedb32bfae7a292e969599806be21719c09d61c6f69879d22709b8d1" Feb 02 10:42:48 crc kubenswrapper[4782]: E0202 10:42:48.424521 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"01286a2afedb32bfae7a292e969599806be21719c09d61c6f69879d22709b8d1\": container with ID starting with 01286a2afedb32bfae7a292e969599806be21719c09d61c6f69879d22709b8d1 not found: ID does not exist" containerID="01286a2afedb32bfae7a292e969599806be21719c09d61c6f69879d22709b8d1" Feb 02 10:42:48 crc kubenswrapper[4782]: I0202 10:42:48.424753 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01286a2afedb32bfae7a292e969599806be21719c09d61c6f69879d22709b8d1"} err="failed to get container status \"01286a2afedb32bfae7a292e969599806be21719c09d61c6f69879d22709b8d1\": rpc error: code = NotFound desc = could not find container \"01286a2afedb32bfae7a292e969599806be21719c09d61c6f69879d22709b8d1\": container with ID starting with 01286a2afedb32bfae7a292e969599806be21719c09d61c6f69879d22709b8d1 not found: ID does not exist" Feb 02 10:42:48 crc kubenswrapper[4782]: I0202 10:42:48.424882 4782 scope.go:117] "RemoveContainer" containerID="802f28ae51e65c38767b2547d3fc9fbdf161d3e61c6a3e744e602a68e142edf9" Feb 02 10:42:48 crc kubenswrapper[4782]: E0202 10:42:48.425374 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"802f28ae51e65c38767b2547d3fc9fbdf161d3e61c6a3e744e602a68e142edf9\": container with ID starting with 802f28ae51e65c38767b2547d3fc9fbdf161d3e61c6a3e744e602a68e142edf9 not found: ID does not exist" containerID="802f28ae51e65c38767b2547d3fc9fbdf161d3e61c6a3e744e602a68e142edf9" Feb 02 10:42:48 crc kubenswrapper[4782]: I0202 10:42:48.425433 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"802f28ae51e65c38767b2547d3fc9fbdf161d3e61c6a3e744e602a68e142edf9"} err="failed to get container status \"802f28ae51e65c38767b2547d3fc9fbdf161d3e61c6a3e744e602a68e142edf9\": rpc error: code = NotFound desc = could not find container \"802f28ae51e65c38767b2547d3fc9fbdf161d3e61c6a3e744e602a68e142edf9\": container with ID starting with 802f28ae51e65c38767b2547d3fc9fbdf161d3e61c6a3e744e602a68e142edf9 not found: ID does not exist" Feb 02 10:42:48 crc kubenswrapper[4782]: I0202 10:42:48.425471 4782 scope.go:117] "RemoveContainer" 
containerID="c33cd738835a3312846866cf5dc7b1d9612aa55d5b9565677e13f823bd48c58c" Feb 02 10:42:48 crc kubenswrapper[4782]: E0202 10:42:48.426305 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c33cd738835a3312846866cf5dc7b1d9612aa55d5b9565677e13f823bd48c58c\": container with ID starting with c33cd738835a3312846866cf5dc7b1d9612aa55d5b9565677e13f823bd48c58c not found: ID does not exist" containerID="c33cd738835a3312846866cf5dc7b1d9612aa55d5b9565677e13f823bd48c58c" Feb 02 10:42:48 crc kubenswrapper[4782]: I0202 10:42:48.426456 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c33cd738835a3312846866cf5dc7b1d9612aa55d5b9565677e13f823bd48c58c"} err="failed to get container status \"c33cd738835a3312846866cf5dc7b1d9612aa55d5b9565677e13f823bd48c58c\": rpc error: code = NotFound desc = could not find container \"c33cd738835a3312846866cf5dc7b1d9612aa55d5b9565677e13f823bd48c58c\": container with ID starting with c33cd738835a3312846866cf5dc7b1d9612aa55d5b9565677e13f823bd48c58c not found: ID does not exist" Feb 02 10:42:48 crc kubenswrapper[4782]: I0202 10:42:48.695698 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-58d4f98775-wfb2c"] Feb 02 10:42:48 crc kubenswrapper[4782]: E0202 10:42:48.695931 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a893973e-e0b3-426e-8bf1-7902687b7036" containerName="registry-server" Feb 02 10:42:48 crc kubenswrapper[4782]: I0202 10:42:48.695943 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="a893973e-e0b3-426e-8bf1-7902687b7036" containerName="registry-server" Feb 02 10:42:48 crc kubenswrapper[4782]: E0202 10:42:48.695951 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a893973e-e0b3-426e-8bf1-7902687b7036" containerName="extract-utilities" Feb 02 10:42:48 crc kubenswrapper[4782]: I0202 10:42:48.695957 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="a893973e-e0b3-426e-8bf1-7902687b7036" containerName="extract-utilities" Feb 02 10:42:48 crc kubenswrapper[4782]: E0202 10:42:48.695978 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a893973e-e0b3-426e-8bf1-7902687b7036" containerName="extract-content" Feb 02 10:42:48 crc kubenswrapper[4782]: I0202 10:42:48.695985 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="a893973e-e0b3-426e-8bf1-7902687b7036" containerName="extract-content" Feb 02 10:42:48 crc kubenswrapper[4782]: I0202 10:42:48.696082 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="a893973e-e0b3-426e-8bf1-7902687b7036" containerName="registry-server" Feb 02 10:42:48 crc kubenswrapper[4782]: I0202 10:42:48.696549 4782 util.go:30] "No sandbox for pod can be found. 
Feb 02 10:42:48 crc kubenswrapper[4782]: I0202 10:42:48.702767 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Feb 02 10:42:48 crc kubenswrapper[4782]: I0202 10:42:48.702843 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Feb 02 10:42:48 crc kubenswrapper[4782]: I0202 10:42:48.705534 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Feb 02 10:42:48 crc kubenswrapper[4782]: I0202 10:42:48.705983 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Feb 02 10:42:48 crc kubenswrapper[4782]: I0202 10:42:48.706052 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Feb 02 10:42:48 crc kubenswrapper[4782]: I0202 10:42:48.706116 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Feb 02 10:42:48 crc kubenswrapper[4782]: I0202 10:42:48.706232 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Feb 02 10:42:48 crc kubenswrapper[4782]: I0202 10:42:48.706333 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Feb 02 10:42:48 crc kubenswrapper[4782]: I0202 10:42:48.706415 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Feb 02 10:42:48 crc kubenswrapper[4782]: I0202 10:42:48.707571 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Feb 02 10:42:48 crc kubenswrapper[4782]: I0202 10:42:48.707828 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Feb 02 10:42:48 crc kubenswrapper[4782]: I0202 10:42:48.720906 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Feb 02 10:42:48 crc kubenswrapper[4782]: I0202 10:42:48.723223 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Feb 02 10:42:48 crc kubenswrapper[4782]: I0202 10:42:48.727459 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-58d4f98775-wfb2c"]
Feb 02 10:42:48 crc kubenswrapper[4782]: I0202 10:42:48.731065 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Feb 02 10:42:48 crc kubenswrapper[4782]: I0202 10:42:48.737437 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Feb 02 10:42:48 crc kubenswrapper[4782]: I0202 10:42:48.795005 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/c29682a5-f95a-4209-a484-db8524d68df6-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-58d4f98775-wfb2c\" (UID: \"c29682a5-f95a-4209-a484-db8524d68df6\") " pod="openshift-authentication/oauth-openshift-58d4f98775-wfb2c"
pod="openshift-authentication/oauth-openshift-58d4f98775-wfb2c" Feb 02 10:42:48 crc kubenswrapper[4782]: I0202 10:42:48.795057 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rgm6\" (UniqueName: \"kubernetes.io/projected/c29682a5-f95a-4209-a484-db8524d68df6-kube-api-access-9rgm6\") pod \"oauth-openshift-58d4f98775-wfb2c\" (UID: \"c29682a5-f95a-4209-a484-db8524d68df6\") " pod="openshift-authentication/oauth-openshift-58d4f98775-wfb2c" Feb 02 10:42:48 crc kubenswrapper[4782]: I0202 10:42:48.795087 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/c29682a5-f95a-4209-a484-db8524d68df6-v4-0-config-system-service-ca\") pod \"oauth-openshift-58d4f98775-wfb2c\" (UID: \"c29682a5-f95a-4209-a484-db8524d68df6\") " pod="openshift-authentication/oauth-openshift-58d4f98775-wfb2c" Feb 02 10:42:48 crc kubenswrapper[4782]: I0202 10:42:48.795107 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c29682a5-f95a-4209-a484-db8524d68df6-audit-policies\") pod \"oauth-openshift-58d4f98775-wfb2c\" (UID: \"c29682a5-f95a-4209-a484-db8524d68df6\") " pod="openshift-authentication/oauth-openshift-58d4f98775-wfb2c" Feb 02 10:42:48 crc kubenswrapper[4782]: I0202 10:42:48.795131 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c29682a5-f95a-4209-a484-db8524d68df6-audit-dir\") pod \"oauth-openshift-58d4f98775-wfb2c\" (UID: \"c29682a5-f95a-4209-a484-db8524d68df6\") " pod="openshift-authentication/oauth-openshift-58d4f98775-wfb2c" Feb 02 10:42:48 crc kubenswrapper[4782]: I0202 10:42:48.795166 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/c29682a5-f95a-4209-a484-db8524d68df6-v4-0-config-system-session\") pod \"oauth-openshift-58d4f98775-wfb2c\" (UID: \"c29682a5-f95a-4209-a484-db8524d68df6\") " pod="openshift-authentication/oauth-openshift-58d4f98775-wfb2c" Feb 02 10:42:48 crc kubenswrapper[4782]: I0202 10:42:48.795182 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/c29682a5-f95a-4209-a484-db8524d68df6-v4-0-config-system-cliconfig\") pod \"oauth-openshift-58d4f98775-wfb2c\" (UID: \"c29682a5-f95a-4209-a484-db8524d68df6\") " pod="openshift-authentication/oauth-openshift-58d4f98775-wfb2c" Feb 02 10:42:48 crc kubenswrapper[4782]: I0202 10:42:48.795203 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/c29682a5-f95a-4209-a484-db8524d68df6-v4-0-config-system-router-certs\") pod \"oauth-openshift-58d4f98775-wfb2c\" (UID: \"c29682a5-f95a-4209-a484-db8524d68df6\") " pod="openshift-authentication/oauth-openshift-58d4f98775-wfb2c" Feb 02 10:42:48 crc kubenswrapper[4782]: I0202 10:42:48.795223 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/c29682a5-f95a-4209-a484-db8524d68df6-v4-0-config-user-template-error\") pod \"oauth-openshift-58d4f98775-wfb2c\" (UID: 
\"c29682a5-f95a-4209-a484-db8524d68df6\") " pod="openshift-authentication/oauth-openshift-58d4f98775-wfb2c" Feb 02 10:42:48 crc kubenswrapper[4782]: I0202 10:42:48.795242 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/c29682a5-f95a-4209-a484-db8524d68df6-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-58d4f98775-wfb2c\" (UID: \"c29682a5-f95a-4209-a484-db8524d68df6\") " pod="openshift-authentication/oauth-openshift-58d4f98775-wfb2c" Feb 02 10:42:48 crc kubenswrapper[4782]: I0202 10:42:48.795262 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/c29682a5-f95a-4209-a484-db8524d68df6-v4-0-config-system-serving-cert\") pod \"oauth-openshift-58d4f98775-wfb2c\" (UID: \"c29682a5-f95a-4209-a484-db8524d68df6\") " pod="openshift-authentication/oauth-openshift-58d4f98775-wfb2c" Feb 02 10:42:48 crc kubenswrapper[4782]: I0202 10:42:48.795281 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/c29682a5-f95a-4209-a484-db8524d68df6-v4-0-config-user-template-login\") pod \"oauth-openshift-58d4f98775-wfb2c\" (UID: \"c29682a5-f95a-4209-a484-db8524d68df6\") " pod="openshift-authentication/oauth-openshift-58d4f98775-wfb2c" Feb 02 10:42:48 crc kubenswrapper[4782]: I0202 10:42:48.795300 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/c29682a5-f95a-4209-a484-db8524d68df6-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-58d4f98775-wfb2c\" (UID: \"c29682a5-f95a-4209-a484-db8524d68df6\") " pod="openshift-authentication/oauth-openshift-58d4f98775-wfb2c" Feb 02 10:42:48 crc kubenswrapper[4782]: I0202 10:42:48.795318 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c29682a5-f95a-4209-a484-db8524d68df6-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-58d4f98775-wfb2c\" (UID: \"c29682a5-f95a-4209-a484-db8524d68df6\") " pod="openshift-authentication/oauth-openshift-58d4f98775-wfb2c" Feb 02 10:42:48 crc kubenswrapper[4782]: I0202 10:42:48.827932 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a893973e-e0b3-426e-8bf1-7902687b7036" path="/var/lib/kubelet/pods/a893973e-e0b3-426e-8bf1-7902687b7036/volumes" Feb 02 10:42:48 crc kubenswrapper[4782]: I0202 10:42:48.896787 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/c29682a5-f95a-4209-a484-db8524d68df6-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-58d4f98775-wfb2c\" (UID: \"c29682a5-f95a-4209-a484-db8524d68df6\") " pod="openshift-authentication/oauth-openshift-58d4f98775-wfb2c" Feb 02 10:42:48 crc kubenswrapper[4782]: I0202 10:42:48.896833 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9rgm6\" (UniqueName: \"kubernetes.io/projected/c29682a5-f95a-4209-a484-db8524d68df6-kube-api-access-9rgm6\") pod \"oauth-openshift-58d4f98775-wfb2c\" (UID: \"c29682a5-f95a-4209-a484-db8524d68df6\") " 
pod="openshift-authentication/oauth-openshift-58d4f98775-wfb2c" Feb 02 10:42:48 crc kubenswrapper[4782]: I0202 10:42:48.896865 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/c29682a5-f95a-4209-a484-db8524d68df6-v4-0-config-system-service-ca\") pod \"oauth-openshift-58d4f98775-wfb2c\" (UID: \"c29682a5-f95a-4209-a484-db8524d68df6\") " pod="openshift-authentication/oauth-openshift-58d4f98775-wfb2c" Feb 02 10:42:48 crc kubenswrapper[4782]: I0202 10:42:48.896884 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c29682a5-f95a-4209-a484-db8524d68df6-audit-policies\") pod \"oauth-openshift-58d4f98775-wfb2c\" (UID: \"c29682a5-f95a-4209-a484-db8524d68df6\") " pod="openshift-authentication/oauth-openshift-58d4f98775-wfb2c" Feb 02 10:42:48 crc kubenswrapper[4782]: I0202 10:42:48.896911 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c29682a5-f95a-4209-a484-db8524d68df6-audit-dir\") pod \"oauth-openshift-58d4f98775-wfb2c\" (UID: \"c29682a5-f95a-4209-a484-db8524d68df6\") " pod="openshift-authentication/oauth-openshift-58d4f98775-wfb2c" Feb 02 10:42:48 crc kubenswrapper[4782]: I0202 10:42:48.896949 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/c29682a5-f95a-4209-a484-db8524d68df6-v4-0-config-system-session\") pod \"oauth-openshift-58d4f98775-wfb2c\" (UID: \"c29682a5-f95a-4209-a484-db8524d68df6\") " pod="openshift-authentication/oauth-openshift-58d4f98775-wfb2c" Feb 02 10:42:48 crc kubenswrapper[4782]: I0202 10:42:48.896966 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/c29682a5-f95a-4209-a484-db8524d68df6-v4-0-config-system-cliconfig\") pod \"oauth-openshift-58d4f98775-wfb2c\" (UID: \"c29682a5-f95a-4209-a484-db8524d68df6\") " pod="openshift-authentication/oauth-openshift-58d4f98775-wfb2c" Feb 02 10:42:48 crc kubenswrapper[4782]: I0202 10:42:48.896988 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/c29682a5-f95a-4209-a484-db8524d68df6-v4-0-config-system-router-certs\") pod \"oauth-openshift-58d4f98775-wfb2c\" (UID: \"c29682a5-f95a-4209-a484-db8524d68df6\") " pod="openshift-authentication/oauth-openshift-58d4f98775-wfb2c" Feb 02 10:42:48 crc kubenswrapper[4782]: I0202 10:42:48.897010 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/c29682a5-f95a-4209-a484-db8524d68df6-v4-0-config-user-template-error\") pod \"oauth-openshift-58d4f98775-wfb2c\" (UID: \"c29682a5-f95a-4209-a484-db8524d68df6\") " pod="openshift-authentication/oauth-openshift-58d4f98775-wfb2c" Feb 02 10:42:48 crc kubenswrapper[4782]: I0202 10:42:48.897030 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/c29682a5-f95a-4209-a484-db8524d68df6-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-58d4f98775-wfb2c\" (UID: \"c29682a5-f95a-4209-a484-db8524d68df6\") " pod="openshift-authentication/oauth-openshift-58d4f98775-wfb2c" Feb 02 
Feb 02 10:42:48 crc kubenswrapper[4782]: I0202 10:42:48.897048 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/c29682a5-f95a-4209-a484-db8524d68df6-v4-0-config-system-serving-cert\") pod \"oauth-openshift-58d4f98775-wfb2c\" (UID: \"c29682a5-f95a-4209-a484-db8524d68df6\") " pod="openshift-authentication/oauth-openshift-58d4f98775-wfb2c"
Feb 02 10:42:48 crc kubenswrapper[4782]: I0202 10:42:48.897068 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/c29682a5-f95a-4209-a484-db8524d68df6-v4-0-config-user-template-login\") pod \"oauth-openshift-58d4f98775-wfb2c\" (UID: \"c29682a5-f95a-4209-a484-db8524d68df6\") " pod="openshift-authentication/oauth-openshift-58d4f98775-wfb2c"
Feb 02 10:42:48 crc kubenswrapper[4782]: I0202 10:42:48.897087 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/c29682a5-f95a-4209-a484-db8524d68df6-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-58d4f98775-wfb2c\" (UID: \"c29682a5-f95a-4209-a484-db8524d68df6\") " pod="openshift-authentication/oauth-openshift-58d4f98775-wfb2c"
Feb 02 10:42:48 crc kubenswrapper[4782]: I0202 10:42:48.897104 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c29682a5-f95a-4209-a484-db8524d68df6-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-58d4f98775-wfb2c\" (UID: \"c29682a5-f95a-4209-a484-db8524d68df6\") " pod="openshift-authentication/oauth-openshift-58d4f98775-wfb2c"
Feb 02 10:42:48 crc kubenswrapper[4782]: I0202 10:42:48.898907 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c29682a5-f95a-4209-a484-db8524d68df6-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-58d4f98775-wfb2c\" (UID: \"c29682a5-f95a-4209-a484-db8524d68df6\") " pod="openshift-authentication/oauth-openshift-58d4f98775-wfb2c"
Feb 02 10:42:48 crc kubenswrapper[4782]: I0202 10:42:48.901101 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/c29682a5-f95a-4209-a484-db8524d68df6-v4-0-config-system-service-ca\") pod \"oauth-openshift-58d4f98775-wfb2c\" (UID: \"c29682a5-f95a-4209-a484-db8524d68df6\") " pod="openshift-authentication/oauth-openshift-58d4f98775-wfb2c"
Feb 02 10:42:48 crc kubenswrapper[4782]: I0202 10:42:48.901697 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c29682a5-f95a-4209-a484-db8524d68df6-audit-dir\") pod \"oauth-openshift-58d4f98775-wfb2c\" (UID: \"c29682a5-f95a-4209-a484-db8524d68df6\") " pod="openshift-authentication/oauth-openshift-58d4f98775-wfb2c"
Feb 02 10:42:48 crc kubenswrapper[4782]: I0202 10:42:48.902084 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c29682a5-f95a-4209-a484-db8524d68df6-audit-policies\") pod \"oauth-openshift-58d4f98775-wfb2c\" (UID: \"c29682a5-f95a-4209-a484-db8524d68df6\") " pod="openshift-authentication/oauth-openshift-58d4f98775-wfb2c"
Feb 02 10:42:48 crc kubenswrapper[4782]: I0202 10:42:48.904590 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/c29682a5-f95a-4209-a484-db8524d68df6-v4-0-config-user-template-login\") pod \"oauth-openshift-58d4f98775-wfb2c\" (UID: \"c29682a5-f95a-4209-a484-db8524d68df6\") " pod="openshift-authentication/oauth-openshift-58d4f98775-wfb2c"
Feb 02 10:42:48 crc kubenswrapper[4782]: I0202 10:42:48.906203 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/c29682a5-f95a-4209-a484-db8524d68df6-v4-0-config-system-serving-cert\") pod \"oauth-openshift-58d4f98775-wfb2c\" (UID: \"c29682a5-f95a-4209-a484-db8524d68df6\") " pod="openshift-authentication/oauth-openshift-58d4f98775-wfb2c"
Feb 02 10:42:48 crc kubenswrapper[4782]: I0202 10:42:48.906734 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/c29682a5-f95a-4209-a484-db8524d68df6-v4-0-config-system-cliconfig\") pod \"oauth-openshift-58d4f98775-wfb2c\" (UID: \"c29682a5-f95a-4209-a484-db8524d68df6\") " pod="openshift-authentication/oauth-openshift-58d4f98775-wfb2c"
Feb 02 10:42:48 crc kubenswrapper[4782]: I0202 10:42:48.909016 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/c29682a5-f95a-4209-a484-db8524d68df6-v4-0-config-user-template-error\") pod \"oauth-openshift-58d4f98775-wfb2c\" (UID: \"c29682a5-f95a-4209-a484-db8524d68df6\") " pod="openshift-authentication/oauth-openshift-58d4f98775-wfb2c"
Feb 02 10:42:48 crc kubenswrapper[4782]: I0202 10:42:48.910059 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/c29682a5-f95a-4209-a484-db8524d68df6-v4-0-config-system-session\") pod \"oauth-openshift-58d4f98775-wfb2c\" (UID: \"c29682a5-f95a-4209-a484-db8524d68df6\") " pod="openshift-authentication/oauth-openshift-58d4f98775-wfb2c"
Feb 02 10:42:48 crc kubenswrapper[4782]: I0202 10:42:48.910394 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/c29682a5-f95a-4209-a484-db8524d68df6-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-58d4f98775-wfb2c\" (UID: \"c29682a5-f95a-4209-a484-db8524d68df6\") " pod="openshift-authentication/oauth-openshift-58d4f98775-wfb2c"
Feb 02 10:42:48 crc kubenswrapper[4782]: I0202 10:42:48.910767 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/c29682a5-f95a-4209-a484-db8524d68df6-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-58d4f98775-wfb2c\" (UID: \"c29682a5-f95a-4209-a484-db8524d68df6\") " pod="openshift-authentication/oauth-openshift-58d4f98775-wfb2c"
Feb 02 10:42:48 crc kubenswrapper[4782]: I0202 10:42:48.911611 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/c29682a5-f95a-4209-a484-db8524d68df6-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-58d4f98775-wfb2c\" (UID: \"c29682a5-f95a-4209-a484-db8524d68df6\") " pod="openshift-authentication/oauth-openshift-58d4f98775-wfb2c"
Feb 02 10:42:48 crc kubenswrapper[4782]: I0202 10:42:48.912039 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/c29682a5-f95a-4209-a484-db8524d68df6-v4-0-config-system-router-certs\") pod \"oauth-openshift-58d4f98775-wfb2c\" (UID: \"c29682a5-f95a-4209-a484-db8524d68df6\") " pod="openshift-authentication/oauth-openshift-58d4f98775-wfb2c"
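The volume lines above trace the kubelet's desired-state reconcile for the new oauth-openshift pod: each volume is first verified as attached (reconciler_common.go:245), then a mount is started (reconciler_common.go:218), then SetUp success is logged (operation_generator.go:637). A simplified sketch of that verify-then-mount progression (the types and the setUp callback are illustrative stand-ins, not the kubelet's actual code):

```go
package main

import "log"

// volume models one entry in a pod's desired state of the world.
type volume struct {
	name   string
	plugin string
}

// reconcile walks the desired state, verifying attachment before mounting,
// and logs each phase the way the entries above do.
func reconcile(desired []volume, setUp func(volume) error) {
	for _, v := range desired {
		log.Printf("operationExecutor.VerifyControllerAttachedVolume started for volume %q", v.name)
		// ... attach verification elided in this sketch ...
		log.Printf("operationExecutor.MountVolume started for volume %q", v.name)
		if err := setUp(v); err != nil {
			log.Printf("MountVolume.SetUp failed for volume %q: %v", v.name, err)
			continue
		}
		log.Printf("MountVolume.SetUp succeeded for volume %q", v.name)
	}
}

func main() {
	vols := []volume{
		{name: "v4-0-config-system-serving-cert", plugin: "kubernetes.io/secret"},
		{name: "audit-dir", plugin: "kubernetes.io/host-path"},
	}
	reconcile(vols, func(v volume) error { return nil })
}
```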
"MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/c29682a5-f95a-4209-a484-db8524d68df6-v4-0-config-system-router-certs\") pod \"oauth-openshift-58d4f98775-wfb2c\" (UID: \"c29682a5-f95a-4209-a484-db8524d68df6\") " pod="openshift-authentication/oauth-openshift-58d4f98775-wfb2c" Feb 02 10:42:48 crc kubenswrapper[4782]: I0202 10:42:48.923634 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9rgm6\" (UniqueName: \"kubernetes.io/projected/c29682a5-f95a-4209-a484-db8524d68df6-kube-api-access-9rgm6\") pod \"oauth-openshift-58d4f98775-wfb2c\" (UID: \"c29682a5-f95a-4209-a484-db8524d68df6\") " pod="openshift-authentication/oauth-openshift-58d4f98775-wfb2c" Feb 02 10:42:49 crc kubenswrapper[4782]: I0202 10:42:49.020701 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-58d4f98775-wfb2c" Feb 02 10:42:49 crc kubenswrapper[4782]: I0202 10:42:49.447115 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-58d4f98775-wfb2c"] Feb 02 10:42:49 crc kubenswrapper[4782]: W0202 10:42:49.452659 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc29682a5_f95a_4209_a484_db8524d68df6.slice/crio-e5a103f75ba691620f68c01e77a73d6f80b417ab770263fca777203baf3d6328 WatchSource:0}: Error finding container e5a103f75ba691620f68c01e77a73d6f80b417ab770263fca777203baf3d6328: Status 404 returned error can't find the container with id e5a103f75ba691620f68c01e77a73d6f80b417ab770263fca777203baf3d6328 Feb 02 10:42:49 crc kubenswrapper[4782]: I0202 10:42:49.855936 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-khjwl"] Feb 02 10:42:49 crc kubenswrapper[4782]: I0202 10:42:49.856405 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-khjwl" podUID="99330299-8910-4c41-b704-120a10eb799b" containerName="registry-server" containerID="cri-o://e2e747d70541f05dcfaa790b9a1963fa9fa3583b791efc5145a3eed19e5ea7b8" gracePeriod=2 Feb 02 10:42:50 crc kubenswrapper[4782]: I0202 10:42:50.338069 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-khjwl" Feb 02 10:42:50 crc kubenswrapper[4782]: I0202 10:42:50.384490 4782 generic.go:334] "Generic (PLEG): container finished" podID="99330299-8910-4c41-b704-120a10eb799b" containerID="e2e747d70541f05dcfaa790b9a1963fa9fa3583b791efc5145a3eed19e5ea7b8" exitCode=0 Feb 02 10:42:50 crc kubenswrapper[4782]: I0202 10:42:50.384554 4782 util.go:48] "No ready sandbox for pod can be found. 
Feb 02 10:42:50 crc kubenswrapper[4782]: I0202 10:42:50.384582 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-khjwl" event={"ID":"99330299-8910-4c41-b704-120a10eb799b","Type":"ContainerDied","Data":"e2e747d70541f05dcfaa790b9a1963fa9fa3583b791efc5145a3eed19e5ea7b8"}
Feb 02 10:42:50 crc kubenswrapper[4782]: I0202 10:42:50.384680 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-khjwl" event={"ID":"99330299-8910-4c41-b704-120a10eb799b","Type":"ContainerDied","Data":"a77f63d55d27418e43d1dec8a78bc759af36972ea35d4cdd887b4e0dd5624442"}
Feb 02 10:42:50 crc kubenswrapper[4782]: I0202 10:42:50.384706 4782 scope.go:117] "RemoveContainer" containerID="e2e747d70541f05dcfaa790b9a1963fa9fa3583b791efc5145a3eed19e5ea7b8"
Feb 02 10:42:50 crc kubenswrapper[4782]: I0202 10:42:50.389607 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-58d4f98775-wfb2c" event={"ID":"c29682a5-f95a-4209-a484-db8524d68df6","Type":"ContainerStarted","Data":"d68bb677536422b81d61d2032951e7962b3a067a30743b7a9786e94e9d33bbf4"}
Feb 02 10:42:50 crc kubenswrapper[4782]: I0202 10:42:50.389727 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-58d4f98775-wfb2c" event={"ID":"c29682a5-f95a-4209-a484-db8524d68df6","Type":"ContainerStarted","Data":"e5a103f75ba691620f68c01e77a73d6f80b417ab770263fca777203baf3d6328"}
Feb 02 10:42:50 crc kubenswrapper[4782]: I0202 10:42:50.390129 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-58d4f98775-wfb2c"
Feb 02 10:42:50 crc kubenswrapper[4782]: I0202 10:42:50.395607 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-58d4f98775-wfb2c"
Feb 02 10:42:50 crc kubenswrapper[4782]: I0202 10:42:50.402868 4782 scope.go:117] "RemoveContainer" containerID="8a2fd5a1d26e874a6800d89c96cc56d4beb8b17c41b6bbadb8c4b1054b7e8ba1"
Feb 02 10:42:50 crc kubenswrapper[4782]: I0202 10:42:50.421679 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/99330299-8910-4c41-b704-120a10eb799b-utilities\") pod \"99330299-8910-4c41-b704-120a10eb799b\" (UID: \"99330299-8910-4c41-b704-120a10eb799b\") "
Feb 02 10:42:50 crc kubenswrapper[4782]: I0202 10:42:50.421759 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/99330299-8910-4c41-b704-120a10eb799b-catalog-content\") pod \"99330299-8910-4c41-b704-120a10eb799b\" (UID: \"99330299-8910-4c41-b704-120a10eb799b\") "
Feb 02 10:42:50 crc kubenswrapper[4782]: I0202 10:42:50.421814 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2r7ff\" (UniqueName: \"kubernetes.io/projected/99330299-8910-4c41-b704-120a10eb799b-kube-api-access-2r7ff\") pod \"99330299-8910-4c41-b704-120a10eb799b\" (UID: \"99330299-8910-4c41-b704-120a10eb799b\") "
Feb 02 10:42:50 crc kubenswrapper[4782]: I0202 10:42:50.424867 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/99330299-8910-4c41-b704-120a10eb799b-utilities" (OuterVolumeSpecName: "utilities") pod "99330299-8910-4c41-b704-120a10eb799b" (UID: "99330299-8910-4c41-b704-120a10eb799b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 10:42:50 crc kubenswrapper[4782]: I0202 10:42:50.430881 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/99330299-8910-4c41-b704-120a10eb799b-kube-api-access-2r7ff" (OuterVolumeSpecName: "kube-api-access-2r7ff") pod "99330299-8910-4c41-b704-120a10eb799b" (UID: "99330299-8910-4c41-b704-120a10eb799b"). InnerVolumeSpecName "kube-api-access-2r7ff". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 10:42:50 crc kubenswrapper[4782]: I0202 10:42:50.433461 4782 scope.go:117] "RemoveContainer" containerID="4552708a4e701a796f5721b8113e200d9629e895c4011cc06e8b6cc3535870a4"
Feb 02 10:42:50 crc kubenswrapper[4782]: I0202 10:42:50.459555 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/99330299-8910-4c41-b704-120a10eb799b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "99330299-8910-4c41-b704-120a10eb799b" (UID: "99330299-8910-4c41-b704-120a10eb799b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 10:42:50 crc kubenswrapper[4782]: I0202 10:42:50.462458 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-58d4f98775-wfb2c" podStartSLOduration=36.462431034 podStartE2EDuration="36.462431034s" podCreationTimestamp="2026-02-02 10:42:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:42:50.423526151 +0000 UTC m=+250.307718867" watchObservedRunningTime="2026-02-02 10:42:50.462431034 +0000 UTC m=+250.346623750"
Feb 02 10:42:50 crc kubenswrapper[4782]: I0202 10:42:50.465588 4782 scope.go:117] "RemoveContainer" containerID="e2e747d70541f05dcfaa790b9a1963fa9fa3583b791efc5145a3eed19e5ea7b8"
Feb 02 10:42:50 crc kubenswrapper[4782]: E0202 10:42:50.467814 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e2e747d70541f05dcfaa790b9a1963fa9fa3583b791efc5145a3eed19e5ea7b8\": container with ID starting with e2e747d70541f05dcfaa790b9a1963fa9fa3583b791efc5145a3eed19e5ea7b8 not found: ID does not exist" containerID="e2e747d70541f05dcfaa790b9a1963fa9fa3583b791efc5145a3eed19e5ea7b8"
Feb 02 10:42:50 crc kubenswrapper[4782]: I0202 10:42:50.467848 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2e747d70541f05dcfaa790b9a1963fa9fa3583b791efc5145a3eed19e5ea7b8"} err="failed to get container status \"e2e747d70541f05dcfaa790b9a1963fa9fa3583b791efc5145a3eed19e5ea7b8\": rpc error: code = NotFound desc = could not find container \"e2e747d70541f05dcfaa790b9a1963fa9fa3583b791efc5145a3eed19e5ea7b8\": container with ID starting with e2e747d70541f05dcfaa790b9a1963fa9fa3583b791efc5145a3eed19e5ea7b8 not found: ID does not exist"
Feb 02 10:42:50 crc kubenswrapper[4782]: I0202 10:42:50.467870 4782 scope.go:117] "RemoveContainer" containerID="8a2fd5a1d26e874a6800d89c96cc56d4beb8b17c41b6bbadb8c4b1054b7e8ba1"
Feb 02 10:42:50 crc kubenswrapper[4782]: E0202 10:42:50.468085 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8a2fd5a1d26e874a6800d89c96cc56d4beb8b17c41b6bbadb8c4b1054b7e8ba1\": container with ID starting with 8a2fd5a1d26e874a6800d89c96cc56d4beb8b17c41b6bbadb8c4b1054b7e8ba1 not found: ID does not exist" containerID="8a2fd5a1d26e874a6800d89c96cc56d4beb8b17c41b6bbadb8c4b1054b7e8ba1"
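The pod_startup_latency_tracker entry above reports podStartSLOduration=36.462431034 for the oauth-openshift pod; the pull timestamps are zero (the image was already present), and the value equals watchObservedRunningTime minus podCreationTimestamp. A quick check of that arithmetic, with the timestamps copied from the entry (the equality is my reading of the numbers, hedged accordingly):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamp layout matching the log entry's "+0000 UTC" format.
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	created, err1 := time.Parse(layout, "2026-02-02 10:42:14 +0000 UTC")
	running, err2 := time.Parse(layout, "2026-02-02 10:42:50.462431034 +0000 UTC")
	if err1 != nil || err2 != nil {
		panic("bad timestamp")
	}
	fmt.Println(running.Sub(created)) // 36.462431034s
}
```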
does not exist" containerID="8a2fd5a1d26e874a6800d89c96cc56d4beb8b17c41b6bbadb8c4b1054b7e8ba1" Feb 02 10:42:50 crc kubenswrapper[4782]: I0202 10:42:50.468110 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a2fd5a1d26e874a6800d89c96cc56d4beb8b17c41b6bbadb8c4b1054b7e8ba1"} err="failed to get container status \"8a2fd5a1d26e874a6800d89c96cc56d4beb8b17c41b6bbadb8c4b1054b7e8ba1\": rpc error: code = NotFound desc = could not find container \"8a2fd5a1d26e874a6800d89c96cc56d4beb8b17c41b6bbadb8c4b1054b7e8ba1\": container with ID starting with 8a2fd5a1d26e874a6800d89c96cc56d4beb8b17c41b6bbadb8c4b1054b7e8ba1 not found: ID does not exist" Feb 02 10:42:50 crc kubenswrapper[4782]: I0202 10:42:50.468124 4782 scope.go:117] "RemoveContainer" containerID="4552708a4e701a796f5721b8113e200d9629e895c4011cc06e8b6cc3535870a4" Feb 02 10:42:50 crc kubenswrapper[4782]: E0202 10:42:50.468288 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4552708a4e701a796f5721b8113e200d9629e895c4011cc06e8b6cc3535870a4\": container with ID starting with 4552708a4e701a796f5721b8113e200d9629e895c4011cc06e8b6cc3535870a4 not found: ID does not exist" containerID="4552708a4e701a796f5721b8113e200d9629e895c4011cc06e8b6cc3535870a4" Feb 02 10:42:50 crc kubenswrapper[4782]: I0202 10:42:50.468309 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4552708a4e701a796f5721b8113e200d9629e895c4011cc06e8b6cc3535870a4"} err="failed to get container status \"4552708a4e701a796f5721b8113e200d9629e895c4011cc06e8b6cc3535870a4\": rpc error: code = NotFound desc = could not find container \"4552708a4e701a796f5721b8113e200d9629e895c4011cc06e8b6cc3535870a4\": container with ID starting with 4552708a4e701a796f5721b8113e200d9629e895c4011cc06e8b6cc3535870a4 not found: ID does not exist" Feb 02 10:42:50 crc kubenswrapper[4782]: I0202 10:42:50.523825 4782 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/99330299-8910-4c41-b704-120a10eb799b-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 10:42:50 crc kubenswrapper[4782]: I0202 10:42:50.523869 4782 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/99330299-8910-4c41-b704-120a10eb799b-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 10:42:50 crc kubenswrapper[4782]: I0202 10:42:50.523882 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2r7ff\" (UniqueName: \"kubernetes.io/projected/99330299-8910-4c41-b704-120a10eb799b-kube-api-access-2r7ff\") on node \"crc\" DevicePath \"\"" Feb 02 10:42:50 crc kubenswrapper[4782]: I0202 10:42:50.719332 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-khjwl"] Feb 02 10:42:50 crc kubenswrapper[4782]: I0202 10:42:50.727817 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-khjwl"] Feb 02 10:42:50 crc kubenswrapper[4782]: I0202 10:42:50.827830 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="99330299-8910-4c41-b704-120a10eb799b" path="/var/lib/kubelet/pods/99330299-8910-4c41-b704-120a10eb799b/volumes" Feb 02 10:42:50 crc kubenswrapper[4782]: I0202 10:42:50.904077 4782 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 02 10:42:50 crc kubenswrapper[4782]: 
Feb 02 10:42:50 crc kubenswrapper[4782]: E0202 10:42:50.904337 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99330299-8910-4c41-b704-120a10eb799b" containerName="registry-server"
Feb 02 10:42:50 crc kubenswrapper[4782]: I0202 10:42:50.904351 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="99330299-8910-4c41-b704-120a10eb799b" containerName="registry-server"
Feb 02 10:42:50 crc kubenswrapper[4782]: E0202 10:42:50.904372 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99330299-8910-4c41-b704-120a10eb799b" containerName="extract-content"
Feb 02 10:42:50 crc kubenswrapper[4782]: I0202 10:42:50.904378 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="99330299-8910-4c41-b704-120a10eb799b" containerName="extract-content"
Feb 02 10:42:50 crc kubenswrapper[4782]: E0202 10:42:50.904391 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99330299-8910-4c41-b704-120a10eb799b" containerName="extract-utilities"
Feb 02 10:42:50 crc kubenswrapper[4782]: I0202 10:42:50.904398 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="99330299-8910-4c41-b704-120a10eb799b" containerName="extract-utilities"
Feb 02 10:42:50 crc kubenswrapper[4782]: I0202 10:42:50.904498 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="99330299-8910-4c41-b704-120a10eb799b" containerName="registry-server"
Feb 02 10:42:50 crc kubenswrapper[4782]: I0202 10:42:50.904842 4782 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Feb 02 10:42:50 crc kubenswrapper[4782]: I0202 10:42:50.905121 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://66dc97daee138ae6c8403d57638f6bd3cc1c5a08ce7e1631129017dc7046ebc0" gracePeriod=15
Feb 02 10:42:50 crc kubenswrapper[4782]: I0202 10:42:50.905282 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 02 10:42:50 crc kubenswrapper[4782]: I0202 10:42:50.905855 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://cd62da31b65707d98011292c190f6f44ab2e60bd1339f47cc289d0b445425b60" gracePeriod=15
Feb 02 10:42:50 crc kubenswrapper[4782]: I0202 10:42:50.906027 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://f26b0186a9067ba0a405e849b58fc2f39e93d443ddf69ebd0d4fd4877f45e805" gracePeriod=15
Feb 02 10:42:50 crc kubenswrapper[4782]: I0202 10:42:50.906081 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://1a81a5434992af16888e8ed5ec9555e57dd4485dd6c396816421ad07c181dae8" gracePeriod=15
Feb 02 10:42:50 crc kubenswrapper[4782]: I0202 10:42:50.906213 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://9de866cf92cbd982802a4bfa9c7f6b9839c69efb72d01f3f52e65e2355d47ded" gracePeriod=15
Feb 02 10:42:50 crc kubenswrapper[4782]: I0202 10:42:50.907953 4782 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Feb 02 10:42:50 crc kubenswrapper[4782]: E0202 10:42:50.908187 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Feb 02 10:42:50 crc kubenswrapper[4782]: I0202 10:42:50.908198 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Feb 02 10:42:50 crc kubenswrapper[4782]: E0202 10:42:50.908214 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz"
Feb 02 10:42:50 crc kubenswrapper[4782]: I0202 10:42:50.908220 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz"
Feb 02 10:42:50 crc kubenswrapper[4782]: E0202 10:42:50.908232 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup"
Feb 02 10:42:50 crc kubenswrapper[4782]: I0202 10:42:50.908373 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup"
Feb 02 10:42:50 crc kubenswrapper[4782]: E0202 10:42:50.908381 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer"
Feb 02 10:42:50 crc kubenswrapper[4782]: I0202 10:42:50.908393 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer"
Feb 02 10:42:50 crc kubenswrapper[4782]: E0202 10:42:50.908399 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 02 10:42:50 crc kubenswrapper[4782]: I0202 10:42:50.908405 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 02 10:42:50 crc kubenswrapper[4782]: E0202 10:42:50.908413 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 02 10:42:50 crc kubenswrapper[4782]: I0202 10:42:50.908420 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 02 10:42:50 crc kubenswrapper[4782]: E0202 10:42:50.908433 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 02 10:42:50 crc kubenswrapper[4782]: I0202 10:42:50.908439 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 02 10:42:50 crc kubenswrapper[4782]: I0202 10:42:50.908533 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 02 10:42:50 crc kubenswrapper[4782]: I0202 10:42:50.908542 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 02 10:42:50 crc kubenswrapper[4782]: I0202 10:42:50.908550 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 02 10:42:50 crc kubenswrapper[4782]: I0202 10:42:50.908558 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 02 10:42:50 crc kubenswrapper[4782]: I0202 10:42:50.908565 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 02 10:42:50 crc kubenswrapper[4782]: I0202 10:42:50.908571 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 02 10:42:50 crc kubenswrapper[4782]: I0202 10:42:50.908578 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 02 10:42:50 crc kubenswrapper[4782]: E0202 10:42:50.908721 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 02 10:42:50 crc kubenswrapper[4782]: I0202 10:42:50.908729 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 02 10:42:51 crc kubenswrapper[4782]: I0202 10:42:51.001221 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 02 10:42:51 crc kubenswrapper[4782]: I0202 10:42:51.018101 4782 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:6443/readyz\": dial tcp 192.168.126.11:6443: connect: connection refused" 
Feb 02 10:42:51 crc kubenswrapper[4782]: I0202 10:42:51.018195 4782 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/readyz\": dial tcp 192.168.126.11:6443: connect: connection refused"
Feb 02 10:42:51 crc kubenswrapper[4782]: I0202 10:42:51.030998 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 02 10:42:51 crc kubenswrapper[4782]: I0202 10:42:51.031171 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 02 10:42:51 crc kubenswrapper[4782]: I0202 10:42:51.031307 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 02 10:42:51 crc kubenswrapper[4782]: I0202 10:42:51.031335 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 02 10:42:51 crc kubenswrapper[4782]: I0202 10:42:51.031392 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 02 10:42:51 crc kubenswrapper[4782]: I0202 10:42:51.031413 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 02 10:42:51 crc kubenswrapper[4782]: I0202 10:42:51.031494 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 02 10:42:51 crc kubenswrapper[4782]: I0202 10:42:51.031542 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
(UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 10:42:51 crc kubenswrapper[4782]: I0202 10:42:51.133289 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 10:42:51 crc kubenswrapper[4782]: I0202 10:42:51.133354 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 10:42:51 crc kubenswrapper[4782]: I0202 10:42:51.133389 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 10:42:51 crc kubenswrapper[4782]: I0202 10:42:51.133430 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 10:42:51 crc kubenswrapper[4782]: I0202 10:42:51.133455 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 10:42:51 crc kubenswrapper[4782]: I0202 10:42:51.133445 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 10:42:51 crc kubenswrapper[4782]: I0202 10:42:51.133500 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 10:42:51 crc kubenswrapper[4782]: I0202 10:42:51.133496 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 10:42:51 crc kubenswrapper[4782]: I0202 10:42:51.133522 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 10:42:51 
Feb 02 10:42:51 crc kubenswrapper[4782]: I0202 10:42:51.133540 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 02 10:42:51 crc kubenswrapper[4782]: I0202 10:42:51.133549 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 02 10:42:51 crc kubenswrapper[4782]: I0202 10:42:51.133587 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 02 10:42:51 crc kubenswrapper[4782]: I0202 10:42:51.133605 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 02 10:42:51 crc kubenswrapper[4782]: I0202 10:42:51.133653 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 02 10:42:51 crc kubenswrapper[4782]: I0202 10:42:51.133681 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 02 10:42:51 crc kubenswrapper[4782]: I0202 10:42:51.133689 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 02 10:42:51 crc kubenswrapper[4782]: I0202 10:42:51.295983 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 10:42:51 crc kubenswrapper[4782]: W0202 10:42:51.318170 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-1bda597b2ac46775872565ce942832ca2ca030579b777223cb961f0828f30e65 WatchSource:0}: Error finding container 1bda597b2ac46775872565ce942832ca2ca030579b777223cb961f0828f30e65: Status 404 returned error can't find the container with id 1bda597b2ac46775872565ce942832ca2ca030579b777223cb961f0828f30e65 Feb 02 10:42:51 crc kubenswrapper[4782]: E0202 10:42:51.321251 4782 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.147:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.189067f8adb7269c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 10:42:51.320616604 +0000 UTC m=+251.204809320,LastTimestamp:2026-02-02 10:42:51.320616604 +0000 UTC m=+251.204809320,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 10:42:51 crc kubenswrapper[4782]: I0202 10:42:51.399193 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 02 10:42:51 crc kubenswrapper[4782]: I0202 10:42:51.400588 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 02 10:42:51 crc kubenswrapper[4782]: I0202 10:42:51.401442 4782 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="cd62da31b65707d98011292c190f6f44ab2e60bd1339f47cc289d0b445425b60" exitCode=0 Feb 02 10:42:51 crc kubenswrapper[4782]: I0202 10:42:51.401552 4782 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="f26b0186a9067ba0a405e849b58fc2f39e93d443ddf69ebd0d4fd4877f45e805" exitCode=0 Feb 02 10:42:51 crc kubenswrapper[4782]: I0202 10:42:51.401641 4782 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="9de866cf92cbd982802a4bfa9c7f6b9839c69efb72d01f3f52e65e2355d47ded" exitCode=0 Feb 02 10:42:51 crc kubenswrapper[4782]: I0202 10:42:51.401538 4782 scope.go:117] "RemoveContainer" containerID="b089d9f482be084f3c2be3bc369e38cef6607d91af0c82b338da5c29883f6057" Feb 02 10:42:51 crc kubenswrapper[4782]: I0202 10:42:51.401738 4782 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="1a81a5434992af16888e8ed5ec9555e57dd4485dd6c396816421ad07c181dae8" exitCode=2 Feb 02 10:42:51 crc kubenswrapper[4782]: I0202 
10:42:51.404180 4782 generic.go:334] "Generic (PLEG): container finished" podID="bf2939f4-fa35-4f01-a896-2ddc746ac111" containerID="33a3c62f71f1956073f3a08b721e64058cac19abb9f4f54ee5048a1701d7cade" exitCode=0 Feb 02 10:42:51 crc kubenswrapper[4782]: I0202 10:42:51.404234 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"bf2939f4-fa35-4f01-a896-2ddc746ac111","Type":"ContainerDied","Data":"33a3c62f71f1956073f3a08b721e64058cac19abb9f4f54ee5048a1701d7cade"} Feb 02 10:42:51 crc kubenswrapper[4782]: I0202 10:42:51.404903 4782 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 02 10:42:51 crc kubenswrapper[4782]: I0202 10:42:51.405174 4782 status_manager.go:851] "Failed to get status for pod" podUID="bf2939f4-fa35-4f01-a896-2ddc746ac111" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 02 10:42:51 crc kubenswrapper[4782]: I0202 10:42:51.405491 4782 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 02 10:42:51 crc kubenswrapper[4782]: I0202 10:42:51.406090 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"1bda597b2ac46775872565ce942832ca2ca030579b777223cb961f0828f30e65"} Feb 02 10:42:51 crc kubenswrapper[4782]: E0202 10:42:51.697677 4782 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.147:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.189067f8adb7269c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 10:42:51.320616604 +0000 UTC m=+251.204809320,LastTimestamp:2026-02-02 10:42:51.320616604 +0000 UTC m=+251.204809320,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 10:42:52 crc kubenswrapper[4782]: I0202 10:42:52.001344 4782 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 
192.168.126.11:17697: connect: connection refused" start-of-body= Feb 02 10:42:52 crc kubenswrapper[4782]: I0202 10:42:52.001422 4782 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Feb 02 10:42:52 crc kubenswrapper[4782]: I0202 10:42:52.419596 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 02 10:42:52 crc kubenswrapper[4782]: I0202 10:42:52.431356 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"058a9546cd9144aab6d700a39408bb0f48964160331f67c95dda6204a16a5fa1"} Feb 02 10:42:52 crc kubenswrapper[4782]: I0202 10:42:52.431463 4782 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 02 10:42:52 crc kubenswrapper[4782]: I0202 10:42:52.431859 4782 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 02 10:42:52 crc kubenswrapper[4782]: I0202 10:42:52.432363 4782 status_manager.go:851] "Failed to get status for pod" podUID="bf2939f4-fa35-4f01-a896-2ddc746ac111" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 02 10:42:52 crc kubenswrapper[4782]: E0202 10:42:52.527022 4782 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:42:52Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:42:52Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:42:52Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:42:52Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:16bd4f1f638e2804c94376e7aeb23a5f7c4d4454daea701355b4ddc9cf56c32b\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:60ba1ea91ee5da37bae2691ab5afcbce9a0a9a358b560cf9dfa3d0ed31d0f68d\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1677305094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:2a90cf243fbfd094eb63d7a2a33273de7c2f1514b6cd1c79c41877afea08b6fb\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:ed24521cd932d4c0868705817ce9245137b311c52f233ce6070bc1d8c801494b\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1201985265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:2670356312bbb840d7febc1ea21dc5e4918a25688063c071665b3750c5c57fc4\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:afd352e0300c7d4fcdd48a3b0ee053b3a6f1f3be3e8dd47ee68a17be62d779d9\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1191129845},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:420326d8488ceff2cde22ad8b85d739b0c254d47e703f7ddb1f08f77a48816a6\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:54817da328fa589491a3acbe80acdd88c0830dcc63aaafc08c3539925a1a3b03\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1180692192},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\
\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"}]}}\" for 
node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 02 10:42:52 crc kubenswrapper[4782]: E0202 10:42:52.527587 4782 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 02 10:42:52 crc kubenswrapper[4782]: E0202 10:42:52.527924 4782 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 02 10:42:52 crc kubenswrapper[4782]: E0202 10:42:52.528134 4782 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 02 10:42:52 crc kubenswrapper[4782]: E0202 10:42:52.528306 4782 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 02 10:42:52 crc kubenswrapper[4782]: E0202 10:42:52.528319 4782 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 02 10:42:52 crc kubenswrapper[4782]: I0202 10:42:52.793121 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 02 10:42:52 crc kubenswrapper[4782]: I0202 10:42:52.794187 4782 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 02 10:42:52 crc kubenswrapper[4782]: I0202 10:42:52.794556 4782 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 02 10:42:52 crc kubenswrapper[4782]: I0202 10:42:52.795045 4782 status_manager.go:851] "Failed to get status for pod" podUID="bf2939f4-fa35-4f01-a896-2ddc746ac111" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 02 10:42:52 crc kubenswrapper[4782]: I0202 10:42:52.858901 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bf2939f4-fa35-4f01-a896-2ddc746ac111-kubelet-dir\") pod \"bf2939f4-fa35-4f01-a896-2ddc746ac111\" (UID: \"bf2939f4-fa35-4f01-a896-2ddc746ac111\") " Feb 02 10:42:52 crc kubenswrapper[4782]: I0202 10:42:52.858953 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bf2939f4-fa35-4f01-a896-2ddc746ac111-kube-api-access\") pod \"bf2939f4-fa35-4f01-a896-2ddc746ac111\" 
(UID: \"bf2939f4-fa35-4f01-a896-2ddc746ac111\") " Feb 02 10:42:52 crc kubenswrapper[4782]: I0202 10:42:52.859078 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/bf2939f4-fa35-4f01-a896-2ddc746ac111-var-lock\") pod \"bf2939f4-fa35-4f01-a896-2ddc746ac111\" (UID: \"bf2939f4-fa35-4f01-a896-2ddc746ac111\") " Feb 02 10:42:52 crc kubenswrapper[4782]: I0202 10:42:52.859174 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bf2939f4-fa35-4f01-a896-2ddc746ac111-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "bf2939f4-fa35-4f01-a896-2ddc746ac111" (UID: "bf2939f4-fa35-4f01-a896-2ddc746ac111"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 10:42:52 crc kubenswrapper[4782]: I0202 10:42:52.859281 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bf2939f4-fa35-4f01-a896-2ddc746ac111-var-lock" (OuterVolumeSpecName: "var-lock") pod "bf2939f4-fa35-4f01-a896-2ddc746ac111" (UID: "bf2939f4-fa35-4f01-a896-2ddc746ac111"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 10:42:52 crc kubenswrapper[4782]: I0202 10:42:52.859580 4782 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/bf2939f4-fa35-4f01-a896-2ddc746ac111-var-lock\") on node \"crc\" DevicePath \"\"" Feb 02 10:42:52 crc kubenswrapper[4782]: I0202 10:42:52.859606 4782 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bf2939f4-fa35-4f01-a896-2ddc746ac111-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 02 10:42:52 crc kubenswrapper[4782]: I0202 10:42:52.864872 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf2939f4-fa35-4f01-a896-2ddc746ac111-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "bf2939f4-fa35-4f01-a896-2ddc746ac111" (UID: "bf2939f4-fa35-4f01-a896-2ddc746ac111"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:42:52 crc kubenswrapper[4782]: I0202 10:42:52.960933 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bf2939f4-fa35-4f01-a896-2ddc746ac111-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 02 10:42:53 crc kubenswrapper[4782]: I0202 10:42:53.378394 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 02 10:42:53 crc kubenswrapper[4782]: I0202 10:42:53.380046 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 10:42:53 crc kubenswrapper[4782]: I0202 10:42:53.380680 4782 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 02 10:42:53 crc kubenswrapper[4782]: I0202 10:42:53.382402 4782 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 02 10:42:53 crc kubenswrapper[4782]: I0202 10:42:53.382600 4782 status_manager.go:851] "Failed to get status for pod" podUID="bf2939f4-fa35-4f01-a896-2ddc746ac111" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 02 10:42:53 crc kubenswrapper[4782]: I0202 10:42:53.439290 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 02 10:42:53 crc kubenswrapper[4782]: I0202 10:42:53.440844 4782 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="66dc97daee138ae6c8403d57638f6bd3cc1c5a08ce7e1631129017dc7046ebc0" exitCode=0 Feb 02 10:42:53 crc kubenswrapper[4782]: I0202 10:42:53.440943 4782 scope.go:117] "RemoveContainer" containerID="cd62da31b65707d98011292c190f6f44ab2e60bd1339f47cc289d0b445425b60" Feb 02 10:42:53 crc kubenswrapper[4782]: I0202 10:42:53.441045 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 10:42:53 crc kubenswrapper[4782]: I0202 10:42:53.444230 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"bf2939f4-fa35-4f01-a896-2ddc746ac111","Type":"ContainerDied","Data":"acb92178b080f16f9482f40e0b16c2c17b6094a867d22c2de5c7014e8aa3b4cd"} Feb 02 10:42:53 crc kubenswrapper[4782]: I0202 10:42:53.444294 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="acb92178b080f16f9482f40e0b16c2c17b6094a867d22c2de5c7014e8aa3b4cd" Feb 02 10:42:53 crc kubenswrapper[4782]: I0202 10:42:53.444330 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 02 10:42:53 crc kubenswrapper[4782]: I0202 10:42:53.459144 4782 scope.go:117] "RemoveContainer" containerID="f26b0186a9067ba0a405e849b58fc2f39e93d443ddf69ebd0d4fd4877f45e805" Feb 02 10:42:53 crc kubenswrapper[4782]: I0202 10:42:53.465198 4782 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 02 10:42:53 crc kubenswrapper[4782]: I0202 10:42:53.465472 4782 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 02 10:42:53 crc kubenswrapper[4782]: I0202 10:42:53.465768 4782 status_manager.go:851] "Failed to get status for pod" podUID="bf2939f4-fa35-4f01-a896-2ddc746ac111" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 02 10:42:53 crc kubenswrapper[4782]: I0202 10:42:53.474381 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 02 10:42:53 crc kubenswrapper[4782]: I0202 10:42:53.474477 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 02 10:42:53 crc kubenswrapper[4782]: I0202 10:42:53.474502 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 02 10:42:53 crc kubenswrapper[4782]: I0202 10:42:53.474880 4782 scope.go:117] "RemoveContainer" containerID="9de866cf92cbd982802a4bfa9c7f6b9839c69efb72d01f3f52e65e2355d47ded" Feb 02 10:42:53 crc kubenswrapper[4782]: I0202 10:42:53.474997 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 10:42:53 crc kubenswrapper[4782]: I0202 10:42:53.475037 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 10:42:53 crc kubenswrapper[4782]: I0202 10:42:53.475104 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 10:42:53 crc kubenswrapper[4782]: I0202 10:42:53.486728 4782 scope.go:117] "RemoveContainer" containerID="1a81a5434992af16888e8ed5ec9555e57dd4485dd6c396816421ad07c181dae8" Feb 02 10:42:53 crc kubenswrapper[4782]: I0202 10:42:53.497063 4782 scope.go:117] "RemoveContainer" containerID="66dc97daee138ae6c8403d57638f6bd3cc1c5a08ce7e1631129017dc7046ebc0" Feb 02 10:42:53 crc kubenswrapper[4782]: I0202 10:42:53.509720 4782 scope.go:117] "RemoveContainer" containerID="2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e" Feb 02 10:42:53 crc kubenswrapper[4782]: I0202 10:42:53.524987 4782 scope.go:117] "RemoveContainer" containerID="cd62da31b65707d98011292c190f6f44ab2e60bd1339f47cc289d0b445425b60" Feb 02 10:42:53 crc kubenswrapper[4782]: E0202 10:42:53.525492 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cd62da31b65707d98011292c190f6f44ab2e60bd1339f47cc289d0b445425b60\": container with ID starting with cd62da31b65707d98011292c190f6f44ab2e60bd1339f47cc289d0b445425b60 not found: ID does not exist" containerID="cd62da31b65707d98011292c190f6f44ab2e60bd1339f47cc289d0b445425b60" Feb 02 10:42:53 crc kubenswrapper[4782]: I0202 10:42:53.525529 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd62da31b65707d98011292c190f6f44ab2e60bd1339f47cc289d0b445425b60"} err="failed to get container status \"cd62da31b65707d98011292c190f6f44ab2e60bd1339f47cc289d0b445425b60\": rpc error: code = NotFound desc = could not find container \"cd62da31b65707d98011292c190f6f44ab2e60bd1339f47cc289d0b445425b60\": container with ID starting with cd62da31b65707d98011292c190f6f44ab2e60bd1339f47cc289d0b445425b60 not found: ID does not exist" Feb 02 10:42:53 crc kubenswrapper[4782]: I0202 10:42:53.525550 4782 scope.go:117] "RemoveContainer" containerID="f26b0186a9067ba0a405e849b58fc2f39e93d443ddf69ebd0d4fd4877f45e805" Feb 02 10:42:53 crc kubenswrapper[4782]: E0202 10:42:53.525841 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f26b0186a9067ba0a405e849b58fc2f39e93d443ddf69ebd0d4fd4877f45e805\": container with ID starting with f26b0186a9067ba0a405e849b58fc2f39e93d443ddf69ebd0d4fd4877f45e805 not found: ID does not exist" containerID="f26b0186a9067ba0a405e849b58fc2f39e93d443ddf69ebd0d4fd4877f45e805" Feb 02 10:42:53 crc kubenswrapper[4782]: I0202 10:42:53.525861 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f26b0186a9067ba0a405e849b58fc2f39e93d443ddf69ebd0d4fd4877f45e805"} err="failed to get container status \"f26b0186a9067ba0a405e849b58fc2f39e93d443ddf69ebd0d4fd4877f45e805\": rpc error: code = NotFound desc = could not find container \"f26b0186a9067ba0a405e849b58fc2f39e93d443ddf69ebd0d4fd4877f45e805\": container with ID starting with f26b0186a9067ba0a405e849b58fc2f39e93d443ddf69ebd0d4fd4877f45e805 not found: ID does not exist" Feb 02 10:42:53 crc kubenswrapper[4782]: I0202 10:42:53.525877 4782 
scope.go:117] "RemoveContainer" containerID="9de866cf92cbd982802a4bfa9c7f6b9839c69efb72d01f3f52e65e2355d47ded" Feb 02 10:42:53 crc kubenswrapper[4782]: E0202 10:42:53.526227 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9de866cf92cbd982802a4bfa9c7f6b9839c69efb72d01f3f52e65e2355d47ded\": container with ID starting with 9de866cf92cbd982802a4bfa9c7f6b9839c69efb72d01f3f52e65e2355d47ded not found: ID does not exist" containerID="9de866cf92cbd982802a4bfa9c7f6b9839c69efb72d01f3f52e65e2355d47ded" Feb 02 10:42:53 crc kubenswrapper[4782]: I0202 10:42:53.526250 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9de866cf92cbd982802a4bfa9c7f6b9839c69efb72d01f3f52e65e2355d47ded"} err="failed to get container status \"9de866cf92cbd982802a4bfa9c7f6b9839c69efb72d01f3f52e65e2355d47ded\": rpc error: code = NotFound desc = could not find container \"9de866cf92cbd982802a4bfa9c7f6b9839c69efb72d01f3f52e65e2355d47ded\": container with ID starting with 9de866cf92cbd982802a4bfa9c7f6b9839c69efb72d01f3f52e65e2355d47ded not found: ID does not exist" Feb 02 10:42:53 crc kubenswrapper[4782]: I0202 10:42:53.526267 4782 scope.go:117] "RemoveContainer" containerID="1a81a5434992af16888e8ed5ec9555e57dd4485dd6c396816421ad07c181dae8" Feb 02 10:42:53 crc kubenswrapper[4782]: E0202 10:42:53.526511 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1a81a5434992af16888e8ed5ec9555e57dd4485dd6c396816421ad07c181dae8\": container with ID starting with 1a81a5434992af16888e8ed5ec9555e57dd4485dd6c396816421ad07c181dae8 not found: ID does not exist" containerID="1a81a5434992af16888e8ed5ec9555e57dd4485dd6c396816421ad07c181dae8" Feb 02 10:42:53 crc kubenswrapper[4782]: I0202 10:42:53.526536 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a81a5434992af16888e8ed5ec9555e57dd4485dd6c396816421ad07c181dae8"} err="failed to get container status \"1a81a5434992af16888e8ed5ec9555e57dd4485dd6c396816421ad07c181dae8\": rpc error: code = NotFound desc = could not find container \"1a81a5434992af16888e8ed5ec9555e57dd4485dd6c396816421ad07c181dae8\": container with ID starting with 1a81a5434992af16888e8ed5ec9555e57dd4485dd6c396816421ad07c181dae8 not found: ID does not exist" Feb 02 10:42:53 crc kubenswrapper[4782]: I0202 10:42:53.526554 4782 scope.go:117] "RemoveContainer" containerID="66dc97daee138ae6c8403d57638f6bd3cc1c5a08ce7e1631129017dc7046ebc0" Feb 02 10:42:53 crc kubenswrapper[4782]: E0202 10:42:53.526809 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"66dc97daee138ae6c8403d57638f6bd3cc1c5a08ce7e1631129017dc7046ebc0\": container with ID starting with 66dc97daee138ae6c8403d57638f6bd3cc1c5a08ce7e1631129017dc7046ebc0 not found: ID does not exist" containerID="66dc97daee138ae6c8403d57638f6bd3cc1c5a08ce7e1631129017dc7046ebc0" Feb 02 10:42:53 crc kubenswrapper[4782]: I0202 10:42:53.526841 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"66dc97daee138ae6c8403d57638f6bd3cc1c5a08ce7e1631129017dc7046ebc0"} err="failed to get container status \"66dc97daee138ae6c8403d57638f6bd3cc1c5a08ce7e1631129017dc7046ebc0\": rpc error: code = NotFound desc = could not find container \"66dc97daee138ae6c8403d57638f6bd3cc1c5a08ce7e1631129017dc7046ebc0\": container with ID starting with 
66dc97daee138ae6c8403d57638f6bd3cc1c5a08ce7e1631129017dc7046ebc0 not found: ID does not exist" Feb 02 10:42:53 crc kubenswrapper[4782]: I0202 10:42:53.526859 4782 scope.go:117] "RemoveContainer" containerID="2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e" Feb 02 10:42:53 crc kubenswrapper[4782]: E0202 10:42:53.527081 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\": container with ID starting with 2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e not found: ID does not exist" containerID="2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e" Feb 02 10:42:53 crc kubenswrapper[4782]: I0202 10:42:53.527106 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e"} err="failed to get container status \"2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\": rpc error: code = NotFound desc = could not find container \"2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e\": container with ID starting with 2007971a446f38230bf5d4edb87654965a64588ffdf936d843aba6997fa6740e not found: ID does not exist" Feb 02 10:42:53 crc kubenswrapper[4782]: I0202 10:42:53.576299 4782 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 02 10:42:53 crc kubenswrapper[4782]: I0202 10:42:53.576345 4782 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Feb 02 10:42:53 crc kubenswrapper[4782]: I0202 10:42:53.576361 4782 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 02 10:42:53 crc kubenswrapper[4782]: I0202 10:42:53.755503 4782 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 02 10:42:53 crc kubenswrapper[4782]: I0202 10:42:53.756088 4782 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 02 10:42:53 crc kubenswrapper[4782]: I0202 10:42:53.756386 4782 status_manager.go:851] "Failed to get status for pod" podUID="bf2939f4-fa35-4f01-a896-2ddc746ac111" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 02 10:42:54 crc kubenswrapper[4782]: I0202 10:42:54.831709 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Feb 02 10:42:56 crc 
kubenswrapper[4782]: I0202 10:42:56.746194 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-g65rt" Feb 02 10:42:56 crc kubenswrapper[4782]: I0202 10:42:56.746964 4782 status_manager.go:851] "Failed to get status for pod" podUID="d9a718cd-1b6d-483f-b995-938331c7e00e" pod="openshift-marketplace/redhat-operators-g65rt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-g65rt\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 02 10:42:56 crc kubenswrapper[4782]: I0202 10:42:56.747331 4782 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 02 10:42:56 crc kubenswrapper[4782]: I0202 10:42:56.747832 4782 status_manager.go:851] "Failed to get status for pod" podUID="bf2939f4-fa35-4f01-a896-2ddc746ac111" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 02 10:42:56 crc kubenswrapper[4782]: I0202 10:42:56.783469 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-g65rt" Feb 02 10:42:56 crc kubenswrapper[4782]: I0202 10:42:56.783947 4782 status_manager.go:851] "Failed to get status for pod" podUID="d9a718cd-1b6d-483f-b995-938331c7e00e" pod="openshift-marketplace/redhat-operators-g65rt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-g65rt\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 02 10:42:56 crc kubenswrapper[4782]: I0202 10:42:56.784208 4782 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 02 10:42:56 crc kubenswrapper[4782]: I0202 10:42:56.784380 4782 status_manager.go:851] "Failed to get status for pod" podUID="bf2939f4-fa35-4f01-a896-2ddc746ac111" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 02 10:42:57 crc kubenswrapper[4782]: I0202 10:42:57.123394 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-xmt8t" Feb 02 10:42:57 crc kubenswrapper[4782]: I0202 10:42:57.123805 4782 status_manager.go:851] "Failed to get status for pod" podUID="d9a718cd-1b6d-483f-b995-938331c7e00e" pod="openshift-marketplace/redhat-operators-g65rt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-g65rt\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 02 10:42:57 crc kubenswrapper[4782]: I0202 10:42:57.124180 4782 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" 
err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 02 10:42:57 crc kubenswrapper[4782]: I0202 10:42:57.124483 4782 status_manager.go:851] "Failed to get status for pod" podUID="bf2939f4-fa35-4f01-a896-2ddc746ac111" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 02 10:42:57 crc kubenswrapper[4782]: I0202 10:42:57.124777 4782 status_manager.go:851] "Failed to get status for pod" podUID="213698f8-d1b6-489f-8fc4-a69583d4fc2e" pod="openshift-marketplace/redhat-operators-xmt8t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-xmt8t\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 02 10:42:57 crc kubenswrapper[4782]: I0202 10:42:57.171998 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-xmt8t" Feb 02 10:42:57 crc kubenswrapper[4782]: I0202 10:42:57.172547 4782 status_manager.go:851] "Failed to get status for pod" podUID="213698f8-d1b6-489f-8fc4-a69583d4fc2e" pod="openshift-marketplace/redhat-operators-xmt8t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-xmt8t\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 02 10:42:57 crc kubenswrapper[4782]: I0202 10:42:57.172909 4782 status_manager.go:851] "Failed to get status for pod" podUID="d9a718cd-1b6d-483f-b995-938331c7e00e" pod="openshift-marketplace/redhat-operators-g65rt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-g65rt\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 02 10:42:57 crc kubenswrapper[4782]: I0202 10:42:57.173211 4782 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 02 10:42:57 crc kubenswrapper[4782]: I0202 10:42:57.173897 4782 status_manager.go:851] "Failed to get status for pod" podUID="bf2939f4-fa35-4f01-a896-2ddc746ac111" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 02 10:43:00 crc kubenswrapper[4782]: I0202 10:43:00.824620 4782 status_manager.go:851] "Failed to get status for pod" podUID="213698f8-d1b6-489f-8fc4-a69583d4fc2e" pod="openshift-marketplace/redhat-operators-xmt8t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-xmt8t\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 02 10:43:00 crc kubenswrapper[4782]: I0202 10:43:00.825750 4782 status_manager.go:851] "Failed to get status for pod" podUID="d9a718cd-1b6d-483f-b995-938331c7e00e" pod="openshift-marketplace/redhat-operators-g65rt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-g65rt\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 02 10:43:00 crc 
kubenswrapper[4782]: I0202 10:43:00.826127 4782 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 02 10:43:00 crc kubenswrapper[4782]: I0202 10:43:00.826435 4782 status_manager.go:851] "Failed to get status for pod" podUID="bf2939f4-fa35-4f01-a896-2ddc746ac111" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 02 10:43:00 crc kubenswrapper[4782]: E0202 10:43:00.901034 4782 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 02 10:43:00 crc kubenswrapper[4782]: E0202 10:43:00.901388 4782 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 02 10:43:00 crc kubenswrapper[4782]: E0202 10:43:00.901771 4782 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 02 10:43:00 crc kubenswrapper[4782]: E0202 10:43:00.902090 4782 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 02 10:43:00 crc kubenswrapper[4782]: E0202 10:43:00.902416 4782 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 02 10:43:00 crc kubenswrapper[4782]: I0202 10:43:00.902451 4782 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Feb 02 10:43:00 crc kubenswrapper[4782]: E0202 10:43:00.902736 4782 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.147:6443: connect: connection refused" interval="200ms" Feb 02 10:43:01 crc kubenswrapper[4782]: E0202 10:43:01.103198 4782 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.147:6443: connect: connection refused" interval="400ms" Feb 02 10:43:01 crc kubenswrapper[4782]: E0202 10:43:01.503778 4782 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.147:6443: connect: connection refused" interval="800ms" Feb 02 10:43:01 crc kubenswrapper[4782]: E0202 10:43:01.698898 4782 event.go:368] "Unable to write event 
(may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.147:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.189067f8adb7269c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 10:42:51.320616604 +0000 UTC m=+251.204809320,LastTimestamp:2026-02-02 10:42:51.320616604 +0000 UTC m=+251.204809320,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 10:43:02 crc kubenswrapper[4782]: E0202 10:43:02.304934 4782 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.147:6443: connect: connection refused" interval="1.6s" Feb 02 10:43:02 crc kubenswrapper[4782]: E0202 10:43:02.888375 4782 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:43:02Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:43:02Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:43:02Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T10:43:02Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:16bd4f1f638e2804c94376e7aeb23a5f7c4d4454daea701355b4ddc9cf56c32b\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:60ba1ea91ee5da37bae2691ab5afcbce9a0a9a358b560cf9dfa3d0ed31d0f68d\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1677305094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:2a90cf243fbfd094eb63d7a2a33273de7c2f1514b6cd1c79c41877afea08b6fb\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:ed24521cd932d4c0868705817ce9245137b311c52f233ce6070bc1d8c801494b\\\",\\\"registry.redhat.io/r
edhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1201985265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:2670356312bbb840d7febc1ea21dc5e4918a25688063c071665b3750c5c57fc4\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:afd352e0300c7d4fcdd48a3b0ee053b3a6f1f3be3e8dd47ee68a17be62d779d9\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1191129845},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:420326d8488ceff2cde22ad8b85d739b0c254d47e703f7ddb1f08f77a48816a6\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:54817da328fa589491a3acbe80acdd88c0830dcc63aaafc08c3539925a1a3b03\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1180692192},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b329548
4d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"siz
eBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 02 10:43:02 crc kubenswrapper[4782]: E0202 10:43:02.888927 4782 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 02 10:43:02 crc kubenswrapper[4782]: E0202 10:43:02.889164 4782 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 02 10:43:02 crc kubenswrapper[4782]: E0202 10:43:02.889438 4782 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 02 10:43:02 crc kubenswrapper[4782]: E0202 10:43:02.889788 4782 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 02 10:43:02 crc kubenswrapper[4782]: E0202 10:43:02.889812 4782 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 02 10:43:03 crc kubenswrapper[4782]: E0202 10:43:03.906343 4782 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.147:6443: connect: connection refused" interval="3.2s" Feb 02 10:43:04 crc kubenswrapper[4782]: I0202 10:43:04.505895 4782 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 02 10:43:04 crc kubenswrapper[4782]: I0202 10:43:04.506038 4782 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="18f8ad22dcd04bba0dd895d584affcd359ac6221153a18ee542dee979b9efe25" exitCode=1 Feb 02 10:43:04 crc kubenswrapper[4782]: I0202 10:43:04.506072 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"18f8ad22dcd04bba0dd895d584affcd359ac6221153a18ee542dee979b9efe25"} Feb 02 10:43:04 crc kubenswrapper[4782]: I0202 10:43:04.506548 4782 scope.go:117] "RemoveContainer" containerID="18f8ad22dcd04bba0dd895d584affcd359ac6221153a18ee542dee979b9efe25" Feb 02 10:43:04 crc kubenswrapper[4782]: I0202 10:43:04.507822 4782 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 02 10:43:04 crc kubenswrapper[4782]: I0202 10:43:04.508306 4782 status_manager.go:851] "Failed to get status for pod" podUID="bf2939f4-fa35-4f01-a896-2ddc746ac111" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 02 10:43:04 crc kubenswrapper[4782]: I0202 10:43:04.508567 4782 status_manager.go:851] "Failed to get status for pod" podUID="213698f8-d1b6-489f-8fc4-a69583d4fc2e" pod="openshift-marketplace/redhat-operators-xmt8t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-xmt8t\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 02 10:43:04 crc kubenswrapper[4782]: I0202 10:43:04.509032 4782 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 02 10:43:04 crc kubenswrapper[4782]: I0202 10:43:04.509570 4782 status_manager.go:851] "Failed to get status for pod" podUID="d9a718cd-1b6d-483f-b995-938331c7e00e" pod="openshift-marketplace/redhat-operators-g65rt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-g65rt\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 02 10:43:04 crc kubenswrapper[4782]: I0202 10:43:04.820492 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 10:43:04 crc kubenswrapper[4782]: I0202 10:43:04.821764 4782 status_manager.go:851] "Failed to get status for pod" podUID="d9a718cd-1b6d-483f-b995-938331c7e00e" pod="openshift-marketplace/redhat-operators-g65rt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-g65rt\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 02 10:43:04 crc kubenswrapper[4782]: I0202 10:43:04.822197 4782 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 02 10:43:04 crc kubenswrapper[4782]: I0202 10:43:04.822742 4782 status_manager.go:851] "Failed to get status for pod" podUID="bf2939f4-fa35-4f01-a896-2ddc746ac111" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 02 10:43:04 crc kubenswrapper[4782]: I0202 10:43:04.823213 4782 status_manager.go:851] "Failed to get status for pod" podUID="213698f8-d1b6-489f-8fc4-a69583d4fc2e" pod="openshift-marketplace/redhat-operators-xmt8t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-xmt8t\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 02 10:43:04 crc kubenswrapper[4782]: I0202 10:43:04.823604 4782 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 02 10:43:04 crc kubenswrapper[4782]: I0202 10:43:04.836632 4782 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="cfc52d94-656d-4294-b105-0f83d22c9664" Feb 02 10:43:04 crc kubenswrapper[4782]: I0202 10:43:04.836801 4782 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="cfc52d94-656d-4294-b105-0f83d22c9664" Feb 02 10:43:04 crc kubenswrapper[4782]: E0202 10:43:04.837340 4782 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 10:43:04 crc kubenswrapper[4782]: I0202 10:43:04.837833 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 10:43:05 crc kubenswrapper[4782]: I0202 10:43:05.515939 4782 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="6c958a5f5b0c5b0a64b5c2e8839ba9e407ef1a2c983bea4f44d941bfd7ed3dd9" exitCode=0 Feb 02 10:43:05 crc kubenswrapper[4782]: I0202 10:43:05.516026 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"6c958a5f5b0c5b0a64b5c2e8839ba9e407ef1a2c983bea4f44d941bfd7ed3dd9"} Feb 02 10:43:05 crc kubenswrapper[4782]: I0202 10:43:05.516060 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"5bd6663a15dd44020f8f80ee1c6a99e758e8d0c2617c942ebd4288fdbd3d6c77"} Feb 02 10:43:05 crc kubenswrapper[4782]: I0202 10:43:05.516563 4782 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="cfc52d94-656d-4294-b105-0f83d22c9664" Feb 02 10:43:05 crc kubenswrapper[4782]: I0202 10:43:05.516610 4782 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="cfc52d94-656d-4294-b105-0f83d22c9664" Feb 02 10:43:05 crc kubenswrapper[4782]: I0202 10:43:05.517807 4782 status_manager.go:851] "Failed to get status for pod" podUID="d9a718cd-1b6d-483f-b995-938331c7e00e" pod="openshift-marketplace/redhat-operators-g65rt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-g65rt\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 02 10:43:05 crc kubenswrapper[4782]: E0202 10:43:05.517822 4782 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 10:43:05 crc kubenswrapper[4782]: I0202 10:43:05.518560 4782 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 02 10:43:05 crc kubenswrapper[4782]: I0202 10:43:05.519113 4782 status_manager.go:851] "Failed to get status for pod" podUID="bf2939f4-fa35-4f01-a896-2ddc746ac111" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 02 10:43:05 crc kubenswrapper[4782]: I0202 10:43:05.519624 4782 status_manager.go:851] "Failed to get status for pod" podUID="213698f8-d1b6-489f-8fc4-a69583d4fc2e" pod="openshift-marketplace/redhat-operators-xmt8t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-xmt8t\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 02 10:43:05 crc kubenswrapper[4782]: I0202 10:43:05.520065 4782 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 02 10:43:05 crc kubenswrapper[4782]: I0202 10:43:05.521381 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 02 10:43:05 crc kubenswrapper[4782]: I0202 10:43:05.521439 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"28ff8f809aa892efb59230ce281eba5df7dc1a64de78fc0b780249b88f330ba3"} Feb 02 10:43:05 crc kubenswrapper[4782]: I0202 10:43:05.522248 4782 status_manager.go:851] "Failed to get status for pod" podUID="d9a718cd-1b6d-483f-b995-938331c7e00e" pod="openshift-marketplace/redhat-operators-g65rt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-g65rt\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 02 10:43:05 crc kubenswrapper[4782]: I0202 10:43:05.522622 4782 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 02 10:43:05 crc kubenswrapper[4782]: I0202 10:43:05.523105 4782 status_manager.go:851] "Failed to get status for pod" podUID="bf2939f4-fa35-4f01-a896-2ddc746ac111" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 02 10:43:05 crc kubenswrapper[4782]: I0202 10:43:05.523557 4782 status_manager.go:851] "Failed to get status for pod" podUID="213698f8-d1b6-489f-8fc4-a69583d4fc2e" pod="openshift-marketplace/redhat-operators-xmt8t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-xmt8t\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 02 10:43:05 crc kubenswrapper[4782]: I0202 10:43:05.523857 4782 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Feb 02 10:43:06 crc kubenswrapper[4782]: I0202 10:43:06.532143 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"40ad35bd4d242e0e22102e73d8a02ee88a90963b98e5cd12d0824efcc577f8ef"} Feb 02 10:43:06 crc kubenswrapper[4782]: I0202 10:43:06.532667 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"2be182a941ac386c312d4a418c453f60fa6a1c72b325472f3f007a986836972c"} Feb 02 10:43:06 crc kubenswrapper[4782]: I0202 10:43:06.532684 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"a493f4d3dc5e3fdd9c12b7667c0f2ac5904235b90709c24285fac24b62c98fa4"} Feb 02 10:43:06 crc kubenswrapper[4782]: I0202 10:43:06.532699 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"f59e237bd46bca2e6b43377878fa4e82f3ddb0350ef33698316ec62af34a75ec"} Feb 02 10:43:07 crc kubenswrapper[4782]: I0202 10:43:07.539740 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"f49c8d1f9ce3432a691c7f218589635842fc347053e803f8273378ac7857984c"} Feb 02 10:43:07 crc kubenswrapper[4782]: I0202 10:43:07.540094 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 10:43:07 crc kubenswrapper[4782]: I0202 10:43:07.540059 4782 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="cfc52d94-656d-4294-b105-0f83d22c9664" Feb 02 10:43:07 crc kubenswrapper[4782]: I0202 10:43:07.540117 4782 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="cfc52d94-656d-4294-b105-0f83d22c9664" Feb 02 10:43:09 crc kubenswrapper[4782]: I0202 10:43:09.838400 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 10:43:09 crc kubenswrapper[4782]: I0202 10:43:09.838900 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 10:43:09 crc kubenswrapper[4782]: I0202 10:43:09.843521 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 10:43:12 crc kubenswrapper[4782]: I0202 10:43:12.551023 4782 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 10:43:12 crc kubenswrapper[4782]: I0202 10:43:12.576444 4782 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="cfc52d94-656d-4294-b105-0f83d22c9664" Feb 02 10:43:12 crc kubenswrapper[4782]: I0202 10:43:12.576477 4782 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="cfc52d94-656d-4294-b105-0f83d22c9664" Feb 02 10:43:12 crc kubenswrapper[4782]: I0202 10:43:12.585943 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 10:43:12 crc kubenswrapper[4782]: I0202 10:43:12.659050 4782 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="481e4a35-7272-4866-bd4e-0c00e1a57e4d" Feb 02 10:43:13 crc kubenswrapper[4782]: I0202 10:43:13.375338 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 10:43:13 crc kubenswrapper[4782]: I0202 10:43:13.376582 4782 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get 
\"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Feb 02 10:43:13 crc kubenswrapper[4782]: I0202 10:43:13.376738 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Feb 02 10:43:13 crc kubenswrapper[4782]: I0202 10:43:13.401039 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 10:43:13 crc kubenswrapper[4782]: I0202 10:43:13.585753 4782 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="cfc52d94-656d-4294-b105-0f83d22c9664" Feb 02 10:43:13 crc kubenswrapper[4782]: I0202 10:43:13.587770 4782 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="cfc52d94-656d-4294-b105-0f83d22c9664" Feb 02 10:43:13 crc kubenswrapper[4782]: I0202 10:43:13.589747 4782 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="481e4a35-7272-4866-bd4e-0c00e1a57e4d" Feb 02 10:43:22 crc kubenswrapper[4782]: I0202 10:43:22.315855 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 02 10:43:22 crc kubenswrapper[4782]: I0202 10:43:22.419932 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 02 10:43:23 crc kubenswrapper[4782]: I0202 10:43:23.353348 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 02 10:43:23 crc kubenswrapper[4782]: I0202 10:43:23.376568 4782 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Feb 02 10:43:23 crc kubenswrapper[4782]: I0202 10:43:23.376653 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Feb 02 10:43:23 crc kubenswrapper[4782]: I0202 10:43:23.392920 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 02 10:43:23 crc kubenswrapper[4782]: I0202 10:43:23.470064 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 02 10:43:23 crc kubenswrapper[4782]: I0202 10:43:23.653733 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 02 10:43:23 crc kubenswrapper[4782]: I0202 10:43:23.754818 4782 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 02 10:43:23 crc kubenswrapper[4782]: I0202 10:43:23.838134 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 02 10:43:23 crc kubenswrapper[4782]: I0202 10:43:23.842731 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 02 10:43:24 crc kubenswrapper[4782]: I0202 10:43:24.089194 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 02 10:43:24 crc kubenswrapper[4782]: I0202 10:43:24.203517 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 02 10:43:24 crc kubenswrapper[4782]: I0202 10:43:24.205255 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 02 10:43:24 crc kubenswrapper[4782]: I0202 10:43:24.223936 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 02 10:43:24 crc kubenswrapper[4782]: I0202 10:43:24.514460 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 02 10:43:24 crc kubenswrapper[4782]: I0202 10:43:24.636689 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 02 10:43:24 crc kubenswrapper[4782]: I0202 10:43:24.727455 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 02 10:43:24 crc kubenswrapper[4782]: I0202 10:43:24.729162 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 02 10:43:24 crc kubenswrapper[4782]: I0202 10:43:24.789281 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 02 10:43:24 crc kubenswrapper[4782]: I0202 10:43:24.898402 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 02 10:43:24 crc kubenswrapper[4782]: I0202 10:43:24.949999 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 02 10:43:24 crc kubenswrapper[4782]: I0202 10:43:24.988732 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 02 10:43:25 crc kubenswrapper[4782]: I0202 10:43:25.033515 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 02 10:43:25 crc kubenswrapper[4782]: I0202 10:43:25.076354 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 02 10:43:25 crc kubenswrapper[4782]: I0202 10:43:25.192167 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 02 10:43:25 crc kubenswrapper[4782]: I0202 10:43:25.200022 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 02 10:43:25 crc kubenswrapper[4782]: I0202 
10:43:25.223383 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 02 10:43:25 crc kubenswrapper[4782]: I0202 10:43:25.371465 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 02 10:43:25 crc kubenswrapper[4782]: I0202 10:43:25.427032 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 02 10:43:25 crc kubenswrapper[4782]: I0202 10:43:25.484579 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 02 10:43:25 crc kubenswrapper[4782]: I0202 10:43:25.525067 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 02 10:43:25 crc kubenswrapper[4782]: I0202 10:43:25.571004 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 02 10:43:25 crc kubenswrapper[4782]: I0202 10:43:25.721241 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 02 10:43:25 crc kubenswrapper[4782]: I0202 10:43:25.876728 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 02 10:43:25 crc kubenswrapper[4782]: I0202 10:43:25.936635 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 02 10:43:25 crc kubenswrapper[4782]: I0202 10:43:25.937341 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 02 10:43:26 crc kubenswrapper[4782]: I0202 10:43:26.189274 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 02 10:43:26 crc kubenswrapper[4782]: I0202 10:43:26.266024 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Feb 02 10:43:26 crc kubenswrapper[4782]: I0202 10:43:26.371922 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 02 10:43:26 crc kubenswrapper[4782]: I0202 10:43:26.452508 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 02 10:43:26 crc kubenswrapper[4782]: I0202 10:43:26.460807 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 02 10:43:26 crc kubenswrapper[4782]: I0202 10:43:26.489184 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 02 10:43:26 crc kubenswrapper[4782]: I0202 10:43:26.499173 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 02 10:43:26 crc kubenswrapper[4782]: I0202 10:43:26.544811 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 02 10:43:26 crc kubenswrapper[4782]: I0202 10:43:26.634954 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 02 10:43:26 crc kubenswrapper[4782]: I0202 10:43:26.649269 
4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 02 10:43:26 crc kubenswrapper[4782]: I0202 10:43:26.728670 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 02 10:43:26 crc kubenswrapper[4782]: I0202 10:43:26.855768 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 02 10:43:26 crc kubenswrapper[4782]: I0202 10:43:26.858572 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 02 10:43:26 crc kubenswrapper[4782]: I0202 10:43:26.993068 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 02 10:43:27 crc kubenswrapper[4782]: I0202 10:43:27.032260 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 02 10:43:27 crc kubenswrapper[4782]: I0202 10:43:27.044218 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 02 10:43:27 crc kubenswrapper[4782]: I0202 10:43:27.048106 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 02 10:43:27 crc kubenswrapper[4782]: I0202 10:43:27.128938 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 02 10:43:27 crc kubenswrapper[4782]: I0202 10:43:27.255858 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 02 10:43:27 crc kubenswrapper[4782]: I0202 10:43:27.287494 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 02 10:43:27 crc kubenswrapper[4782]: I0202 10:43:27.309739 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 02 10:43:27 crc kubenswrapper[4782]: I0202 10:43:27.328274 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 02 10:43:27 crc kubenswrapper[4782]: I0202 10:43:27.407630 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 02 10:43:27 crc kubenswrapper[4782]: I0202 10:43:27.433393 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 02 10:43:27 crc kubenswrapper[4782]: I0202 10:43:27.479937 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 02 10:43:27 crc kubenswrapper[4782]: I0202 10:43:27.556962 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 02 10:43:27 crc kubenswrapper[4782]: I0202 10:43:27.570807 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 02 10:43:27 crc kubenswrapper[4782]: I0202 10:43:27.580265 4782 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 02 10:43:27 crc kubenswrapper[4782]: I0202 
10:43:27.582414 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=37.582395747 podStartE2EDuration="37.582395747s" podCreationTimestamp="2026-02-02 10:42:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:43:12.569608527 +0000 UTC m=+272.453801263" watchObservedRunningTime="2026-02-02 10:43:27.582395747 +0000 UTC m=+287.466588463" Feb 02 10:43:27 crc kubenswrapper[4782]: I0202 10:43:27.587384 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 02 10:43:27 crc kubenswrapper[4782]: I0202 10:43:27.587434 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 02 10:43:27 crc kubenswrapper[4782]: I0202 10:43:27.591676 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 10:43:27 crc kubenswrapper[4782]: I0202 10:43:27.621325 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=15.621310544 podStartE2EDuration="15.621310544s" podCreationTimestamp="2026-02-02 10:43:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:43:27.604150377 +0000 UTC m=+287.488343093" watchObservedRunningTime="2026-02-02 10:43:27.621310544 +0000 UTC m=+287.505503260" Feb 02 10:43:27 crc kubenswrapper[4782]: I0202 10:43:27.663959 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 02 10:43:27 crc kubenswrapper[4782]: I0202 10:43:27.689823 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 02 10:43:27 crc kubenswrapper[4782]: I0202 10:43:27.690352 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 02 10:43:27 crc kubenswrapper[4782]: I0202 10:43:27.781519 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 02 10:43:27 crc kubenswrapper[4782]: I0202 10:43:27.850034 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 02 10:43:27 crc kubenswrapper[4782]: I0202 10:43:27.955734 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 02 10:43:27 crc kubenswrapper[4782]: I0202 10:43:27.956342 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 02 10:43:27 crc kubenswrapper[4782]: I0202 10:43:27.986509 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 02 10:43:28 crc kubenswrapper[4782]: I0202 10:43:28.023746 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 02 10:43:28 crc kubenswrapper[4782]: I0202 10:43:28.101211 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 02 10:43:28 crc kubenswrapper[4782]: I0202 10:43:28.144280 4782 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 02 10:43:28 crc kubenswrapper[4782]: I0202 10:43:28.206387 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 02 10:43:28 crc kubenswrapper[4782]: I0202 10:43:28.246013 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 02 10:43:28 crc kubenswrapper[4782]: I0202 10:43:28.247861 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 02 10:43:28 crc kubenswrapper[4782]: I0202 10:43:28.389655 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 02 10:43:28 crc kubenswrapper[4782]: I0202 10:43:28.394327 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 02 10:43:28 crc kubenswrapper[4782]: I0202 10:43:28.549862 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 02 10:43:28 crc kubenswrapper[4782]: I0202 10:43:28.642278 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 02 10:43:28 crc kubenswrapper[4782]: I0202 10:43:28.709948 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 02 10:43:28 crc kubenswrapper[4782]: I0202 10:43:28.715067 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 02 10:43:28 crc kubenswrapper[4782]: I0202 10:43:28.735007 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 02 10:43:28 crc kubenswrapper[4782]: I0202 10:43:28.762276 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 02 10:43:28 crc kubenswrapper[4782]: I0202 10:43:28.793795 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 02 10:43:28 crc kubenswrapper[4782]: I0202 10:43:28.810713 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 02 10:43:28 crc kubenswrapper[4782]: I0202 10:43:28.871132 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 02 10:43:28 crc kubenswrapper[4782]: I0202 10:43:28.885942 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 02 10:43:28 crc kubenswrapper[4782]: I0202 10:43:28.888206 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 02 10:43:28 crc kubenswrapper[4782]: I0202 10:43:28.967759 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 02 10:43:29 crc kubenswrapper[4782]: I0202 10:43:29.027504 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 02 10:43:29 
crc kubenswrapper[4782]: I0202 10:43:29.039778 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Feb 02 10:43:29 crc kubenswrapper[4782]: I0202 10:43:29.062890 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 02 10:43:29 crc kubenswrapper[4782]: I0202 10:43:29.099845 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 02 10:43:29 crc kubenswrapper[4782]: I0202 10:43:29.183758 4782 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 02 10:43:29 crc kubenswrapper[4782]: I0202 10:43:29.211911 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 02 10:43:29 crc kubenswrapper[4782]: I0202 10:43:29.242506 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 02 10:43:29 crc kubenswrapper[4782]: I0202 10:43:29.293142 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 02 10:43:29 crc kubenswrapper[4782]: I0202 10:43:29.327759 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 02 10:43:29 crc kubenswrapper[4782]: I0202 10:43:29.328224 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 02 10:43:29 crc kubenswrapper[4782]: I0202 10:43:29.330604 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 02 10:43:29 crc kubenswrapper[4782]: I0202 10:43:29.398362 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 02 10:43:29 crc kubenswrapper[4782]: I0202 10:43:29.432162 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 02 10:43:29 crc kubenswrapper[4782]: I0202 10:43:29.442596 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 02 10:43:29 crc kubenswrapper[4782]: I0202 10:43:29.496716 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 02 10:43:29 crc kubenswrapper[4782]: I0202 10:43:29.588169 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 02 10:43:29 crc kubenswrapper[4782]: I0202 10:43:29.596564 4782 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 02 10:43:29 crc kubenswrapper[4782]: I0202 10:43:29.644172 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 02 10:43:29 crc kubenswrapper[4782]: I0202 10:43:29.682931 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 02 10:43:29 crc kubenswrapper[4782]: I0202 10:43:29.722306 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 02 10:43:29 crc kubenswrapper[4782]: I0202 10:43:29.781372 4782 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 02 10:43:29 crc kubenswrapper[4782]: I0202 10:43:29.781372 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 02 10:43:29 crc kubenswrapper[4782]: I0202 10:43:29.797926 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 02 10:43:29 crc kubenswrapper[4782]: I0202 10:43:29.816145 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 02 10:43:29 crc kubenswrapper[4782]: I0202 10:43:29.859963 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 02 10:43:29 crc kubenswrapper[4782]: I0202 10:43:29.888275 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 02 10:43:29 crc kubenswrapper[4782]: I0202 10:43:29.914721 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 02 10:43:29 crc kubenswrapper[4782]: I0202 10:43:29.974284 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 02 10:43:29 crc kubenswrapper[4782]: I0202 10:43:29.982210 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 02 10:43:29 crc kubenswrapper[4782]: I0202 10:43:29.982926 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 02 10:43:30 crc kubenswrapper[4782]: I0202 10:43:30.041923 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 02 10:43:30 crc kubenswrapper[4782]: I0202 10:43:30.118137 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 02 10:43:30 crc kubenswrapper[4782]: I0202 10:43:30.124189 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 02 10:43:30 crc kubenswrapper[4782]: I0202 10:43:30.187743 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 02 10:43:30 crc kubenswrapper[4782]: I0202 10:43:30.209811 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 02 10:43:30 crc kubenswrapper[4782]: I0202 10:43:30.228404 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 02 10:43:30 crc kubenswrapper[4782]: I0202 10:43:30.316035 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 02 10:43:30 crc kubenswrapper[4782]: I0202 10:43:30.317474 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 02 10:43:30 crc kubenswrapper[4782]: I0202 10:43:30.344766 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 02 10:43:30 crc kubenswrapper[4782]: I0202 
10:43:30.482895 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 02 10:43:30 crc kubenswrapper[4782]: I0202 10:43:30.727566 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 02 10:43:30 crc kubenswrapper[4782]: I0202 10:43:30.750037 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 02 10:43:30 crc kubenswrapper[4782]: I0202 10:43:30.767319 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 02 10:43:30 crc kubenswrapper[4782]: I0202 10:43:30.819356 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Feb 02 10:43:30 crc kubenswrapper[4782]: I0202 10:43:30.844301 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 02 10:43:31 crc kubenswrapper[4782]: I0202 10:43:31.096023 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 02 10:43:31 crc kubenswrapper[4782]: I0202 10:43:31.195477 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 02 10:43:31 crc kubenswrapper[4782]: I0202 10:43:31.319905 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 02 10:43:31 crc kubenswrapper[4782]: I0202 10:43:31.321276 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 02 10:43:31 crc kubenswrapper[4782]: I0202 10:43:31.414770 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 02 10:43:31 crc kubenswrapper[4782]: I0202 10:43:31.428064 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 02 10:43:31 crc kubenswrapper[4782]: I0202 10:43:31.469866 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 02 10:43:31 crc kubenswrapper[4782]: I0202 10:43:31.494476 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Feb 02 10:43:31 crc kubenswrapper[4782]: I0202 10:43:31.501940 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 02 10:43:31 crc kubenswrapper[4782]: I0202 10:43:31.553589 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 02 10:43:31 crc kubenswrapper[4782]: I0202 10:43:31.710353 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 02 10:43:31 crc kubenswrapper[4782]: I0202 10:43:31.720746 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 02 10:43:31 crc kubenswrapper[4782]: I0202 10:43:31.736675 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 02 10:43:31 crc kubenswrapper[4782]: I0202 10:43:31.746676 4782 
reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 02 10:43:31 crc kubenswrapper[4782]: I0202 10:43:31.751286 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 02 10:43:31 crc kubenswrapper[4782]: I0202 10:43:31.761465 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 02 10:43:31 crc kubenswrapper[4782]: I0202 10:43:31.764209 4782 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 02 10:43:31 crc kubenswrapper[4782]: I0202 10:43:31.797236 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 02 10:43:31 crc kubenswrapper[4782]: I0202 10:43:31.915041 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 02 10:43:31 crc kubenswrapper[4782]: I0202 10:43:31.915468 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 02 10:43:31 crc kubenswrapper[4782]: I0202 10:43:31.995373 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 02 10:43:32 crc kubenswrapper[4782]: I0202 10:43:32.111489 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 02 10:43:32 crc kubenswrapper[4782]: I0202 10:43:32.166030 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 02 10:43:32 crc kubenswrapper[4782]: I0202 10:43:32.206258 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 02 10:43:32 crc kubenswrapper[4782]: I0202 10:43:32.232021 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 02 10:43:32 crc kubenswrapper[4782]: I0202 10:43:32.289307 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 02 10:43:32 crc kubenswrapper[4782]: I0202 10:43:32.307937 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 02 10:43:32 crc kubenswrapper[4782]: I0202 10:43:32.437126 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 02 10:43:32 crc kubenswrapper[4782]: I0202 10:43:32.473754 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 02 10:43:32 crc kubenswrapper[4782]: I0202 10:43:32.532892 4782 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Feb 02 10:43:32 crc kubenswrapper[4782]: I0202 10:43:32.592957 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 02 10:43:32 crc kubenswrapper[4782]: I0202 10:43:32.602611 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 02 10:43:32 crc kubenswrapper[4782]: I0202 10:43:32.607951 4782 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 02 10:43:32 crc kubenswrapper[4782]: I0202 10:43:32.648837 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 02 10:43:32 crc kubenswrapper[4782]: I0202 10:43:32.666510 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 02 10:43:32 crc kubenswrapper[4782]: I0202 10:43:32.691739 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 02 10:43:32 crc kubenswrapper[4782]: I0202 10:43:32.797443 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 02 10:43:32 crc kubenswrapper[4782]: I0202 10:43:32.825959 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 02 10:43:32 crc kubenswrapper[4782]: I0202 10:43:32.915788 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 02 10:43:32 crc kubenswrapper[4782]: I0202 10:43:32.958091 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 02 10:43:32 crc kubenswrapper[4782]: I0202 10:43:32.973258 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 02 10:43:33 crc kubenswrapper[4782]: I0202 10:43:33.008420 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 02 10:43:33 crc kubenswrapper[4782]: I0202 10:43:33.017764 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 02 10:43:33 crc kubenswrapper[4782]: I0202 10:43:33.050202 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 02 10:43:33 crc kubenswrapper[4782]: I0202 10:43:33.060247 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 02 10:43:33 crc kubenswrapper[4782]: I0202 10:43:33.191334 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 02 10:43:33 crc kubenswrapper[4782]: I0202 10:43:33.250781 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 02 10:43:33 crc kubenswrapper[4782]: I0202 10:43:33.370717 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 02 10:43:33 crc kubenswrapper[4782]: I0202 10:43:33.375987 4782 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Feb 02 10:43:33 crc kubenswrapper[4782]: I0202 10:43:33.376045 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get 
\"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Feb 02 10:43:33 crc kubenswrapper[4782]: I0202 10:43:33.376093 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 10:43:33 crc kubenswrapper[4782]: I0202 10:43:33.376759 4782 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"28ff8f809aa892efb59230ce281eba5df7dc1a64de78fc0b780249b88f330ba3"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container kube-controller-manager failed startup probe, will be restarted" Feb 02 10:43:33 crc kubenswrapper[4782]: I0202 10:43:33.376879 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" containerID="cri-o://28ff8f809aa892efb59230ce281eba5df7dc1a64de78fc0b780249b88f330ba3" gracePeriod=30 Feb 02 10:43:33 crc kubenswrapper[4782]: I0202 10:43:33.402771 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 02 10:43:33 crc kubenswrapper[4782]: I0202 10:43:33.490252 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 02 10:43:33 crc kubenswrapper[4782]: I0202 10:43:33.653812 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 02 10:43:33 crc kubenswrapper[4782]: I0202 10:43:33.695602 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 02 10:43:33 crc kubenswrapper[4782]: I0202 10:43:33.773071 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 02 10:43:33 crc kubenswrapper[4782]: I0202 10:43:33.825445 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 02 10:43:33 crc kubenswrapper[4782]: I0202 10:43:33.879688 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 02 10:43:33 crc kubenswrapper[4782]: I0202 10:43:33.912130 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 02 10:43:33 crc kubenswrapper[4782]: I0202 10:43:33.954387 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 02 10:43:34 crc kubenswrapper[4782]: I0202 10:43:34.005862 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 02 10:43:34 crc kubenswrapper[4782]: I0202 10:43:34.006516 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 02 10:43:34 crc kubenswrapper[4782]: I0202 10:43:34.112812 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 02 10:43:34 crc kubenswrapper[4782]: I0202 10:43:34.122407 4782 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-image-registry"/"image-registry-operator-tls" Feb 02 10:43:34 crc kubenswrapper[4782]: I0202 10:43:34.208739 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 02 10:43:34 crc kubenswrapper[4782]: I0202 10:43:34.274189 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 02 10:43:34 crc kubenswrapper[4782]: I0202 10:43:34.316894 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 02 10:43:34 crc kubenswrapper[4782]: I0202 10:43:34.369735 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 02 10:43:34 crc kubenswrapper[4782]: I0202 10:43:34.458926 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 02 10:43:34 crc kubenswrapper[4782]: I0202 10:43:34.568095 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 02 10:43:34 crc kubenswrapper[4782]: I0202 10:43:34.634953 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 02 10:43:34 crc kubenswrapper[4782]: I0202 10:43:34.699283 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 02 10:43:34 crc kubenswrapper[4782]: I0202 10:43:34.740713 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 02 10:43:34 crc kubenswrapper[4782]: I0202 10:43:34.770317 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 02 10:43:34 crc kubenswrapper[4782]: I0202 10:43:34.857585 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 02 10:43:34 crc kubenswrapper[4782]: I0202 10:43:34.874485 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 02 10:43:35 crc kubenswrapper[4782]: I0202 10:43:35.038584 4782 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 02 10:43:35 crc kubenswrapper[4782]: I0202 10:43:35.039080 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://058a9546cd9144aab6d700a39408bb0f48964160331f67c95dda6204a16a5fa1" gracePeriod=5 Feb 02 10:43:35 crc kubenswrapper[4782]: I0202 10:43:35.357340 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 02 10:43:35 crc kubenswrapper[4782]: I0202 10:43:35.408378 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 02 10:43:35 crc kubenswrapper[4782]: I0202 10:43:35.516808 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 02 10:43:35 crc kubenswrapper[4782]: I0202 10:43:35.535416 4782 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 02 10:43:35 crc kubenswrapper[4782]: I0202 10:43:35.650792 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 02 10:43:35 crc kubenswrapper[4782]: I0202 10:43:35.667854 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 02 10:43:35 crc kubenswrapper[4782]: I0202 10:43:35.682347 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 02 10:43:35 crc kubenswrapper[4782]: I0202 10:43:35.780126 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 02 10:43:35 crc kubenswrapper[4782]: I0202 10:43:35.780299 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 02 10:43:35 crc kubenswrapper[4782]: I0202 10:43:35.797035 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 02 10:43:35 crc kubenswrapper[4782]: I0202 10:43:35.800298 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 02 10:43:35 crc kubenswrapper[4782]: I0202 10:43:35.818002 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 02 10:43:35 crc kubenswrapper[4782]: I0202 10:43:35.896329 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 02 10:43:36 crc kubenswrapper[4782]: I0202 10:43:36.075130 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 02 10:43:36 crc kubenswrapper[4782]: I0202 10:43:36.170432 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 02 10:43:36 crc kubenswrapper[4782]: I0202 10:43:36.379086 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 02 10:43:36 crc kubenswrapper[4782]: I0202 10:43:36.453907 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 02 10:43:36 crc kubenswrapper[4782]: I0202 10:43:36.484424 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 02 10:43:36 crc kubenswrapper[4782]: I0202 10:43:36.495509 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Feb 02 10:43:36 crc kubenswrapper[4782]: I0202 10:43:36.869577 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 02 10:43:36 crc kubenswrapper[4782]: I0202 10:43:36.916375 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 02 10:43:36 crc kubenswrapper[4782]: I0202 10:43:36.938461 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 02 10:43:36 crc kubenswrapper[4782]: I0202 10:43:36.947007 4782 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 02 10:43:36 crc kubenswrapper[4782]: I0202 10:43:36.949679 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 02 10:43:37 crc kubenswrapper[4782]: I0202 10:43:37.028090 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 02 10:43:37 crc kubenswrapper[4782]: I0202 10:43:37.037843 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 02 10:43:37 crc kubenswrapper[4782]: I0202 10:43:37.042562 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 02 10:43:37 crc kubenswrapper[4782]: I0202 10:43:37.275841 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 02 10:43:37 crc kubenswrapper[4782]: I0202 10:43:37.459577 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 02 10:43:37 crc kubenswrapper[4782]: I0202 10:43:37.483197 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 02 10:43:37 crc kubenswrapper[4782]: I0202 10:43:37.494798 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 02 10:43:37 crc kubenswrapper[4782]: I0202 10:43:37.708559 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 02 10:43:37 crc kubenswrapper[4782]: I0202 10:43:37.801009 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 02 10:43:37 crc kubenswrapper[4782]: I0202 10:43:37.903393 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 02 10:43:38 crc kubenswrapper[4782]: I0202 10:43:38.129030 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 02 10:43:38 crc kubenswrapper[4782]: I0202 10:43:38.149337 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 02 10:43:38 crc kubenswrapper[4782]: I0202 10:43:38.329853 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 02 10:43:38 crc kubenswrapper[4782]: I0202 10:43:38.365698 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 02 10:43:38 crc kubenswrapper[4782]: I0202 10:43:38.400492 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 02 10:43:39 crc kubenswrapper[4782]: I0202 10:43:39.040424 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 02 10:43:39 crc kubenswrapper[4782]: I0202 10:43:39.118575 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 02 10:43:40 crc 
kubenswrapper[4782]: I0202 10:43:40.138927 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 02 10:43:40 crc kubenswrapper[4782]: I0202 10:43:40.209987 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 02 10:43:40 crc kubenswrapper[4782]: I0202 10:43:40.210072 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 10:43:40 crc kubenswrapper[4782]: I0202 10:43:40.274378 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 02 10:43:40 crc kubenswrapper[4782]: I0202 10:43:40.390976 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 02 10:43:40 crc kubenswrapper[4782]: I0202 10:43:40.391098 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 02 10:43:40 crc kubenswrapper[4782]: I0202 10:43:40.391116 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 10:43:40 crc kubenswrapper[4782]: I0202 10:43:40.391185 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 10:43:40 crc kubenswrapper[4782]: I0202 10:43:40.391200 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 10:43:40 crc kubenswrapper[4782]: I0202 10:43:40.391134 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 02 10:43:40 crc kubenswrapper[4782]: I0202 10:43:40.391446 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 02 10:43:40 crc kubenswrapper[4782]: I0202 10:43:40.392744 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 02 10:43:40 crc kubenswrapper[4782]: I0202 10:43:40.392822 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 10:43:40 crc kubenswrapper[4782]: I0202 10:43:40.393098 4782 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Feb 02 10:43:40 crc kubenswrapper[4782]: I0202 10:43:40.393117 4782 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 02 10:43:40 crc kubenswrapper[4782]: I0202 10:43:40.393125 4782 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Feb 02 10:43:40 crc kubenswrapper[4782]: I0202 10:43:40.393134 4782 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Feb 02 10:43:40 crc kubenswrapper[4782]: I0202 10:43:40.409914 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 10:43:40 crc kubenswrapper[4782]: I0202 10:43:40.494103 4782 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 02 10:43:40 crc kubenswrapper[4782]: I0202 10:43:40.622750 4782 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Feb 02 10:43:40 crc kubenswrapper[4782]: I0202 10:43:40.781862 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 02 10:43:40 crc kubenswrapper[4782]: I0202 10:43:40.781930 4782 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="058a9546cd9144aab6d700a39408bb0f48964160331f67c95dda6204a16a5fa1" exitCode=137 Feb 02 10:43:40 crc kubenswrapper[4782]: I0202 10:43:40.781980 4782 scope.go:117] "RemoveContainer" containerID="058a9546cd9144aab6d700a39408bb0f48964160331f67c95dda6204a16a5fa1" Feb 02 10:43:40 crc kubenswrapper[4782]: I0202 10:43:40.782080 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 10:43:40 crc kubenswrapper[4782]: I0202 10:43:40.803474 4782 scope.go:117] "RemoveContainer" containerID="058a9546cd9144aab6d700a39408bb0f48964160331f67c95dda6204a16a5fa1" Feb 02 10:43:40 crc kubenswrapper[4782]: E0202 10:43:40.804108 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"058a9546cd9144aab6d700a39408bb0f48964160331f67c95dda6204a16a5fa1\": container with ID starting with 058a9546cd9144aab6d700a39408bb0f48964160331f67c95dda6204a16a5fa1 not found: ID does not exist" containerID="058a9546cd9144aab6d700a39408bb0f48964160331f67c95dda6204a16a5fa1" Feb 02 10:43:40 crc kubenswrapper[4782]: I0202 10:43:40.804199 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"058a9546cd9144aab6d700a39408bb0f48964160331f67c95dda6204a16a5fa1"} err="failed to get container status \"058a9546cd9144aab6d700a39408bb0f48964160331f67c95dda6204a16a5fa1\": rpc error: code = NotFound desc = could not find container \"058a9546cd9144aab6d700a39408bb0f48964160331f67c95dda6204a16a5fa1\": container with ID starting with 058a9546cd9144aab6d700a39408bb0f48964160331f67c95dda6204a16a5fa1 not found: ID does not exist" Feb 02 10:43:40 crc kubenswrapper[4782]: I0202 10:43:40.830471 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Feb 02 10:43:40 crc kubenswrapper[4782]: I0202 10:43:40.830784 4782 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Feb 02 10:43:40 crc kubenswrapper[4782]: I0202 10:43:40.846397 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 02 10:43:40 crc kubenswrapper[4782]: I0202 10:43:40.846679 4782 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="f8fefb3a-d7fb-4f51-9ea1-0a686216c819" Feb 02 10:43:40 crc kubenswrapper[4782]: I0202 
10:43:40.851474 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 02 10:43:40 crc kubenswrapper[4782]: I0202 10:43:40.851525 4782 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="f8fefb3a-d7fb-4f51-9ea1-0a686216c819" Feb 02 10:43:52 crc kubenswrapper[4782]: I0202 10:43:52.847387 4782 generic.go:334] "Generic (PLEG): container finished" podID="83c24a27-fdbe-468f-b4cf-780c87b598ae" containerID="7766ba0f1792fbabdd4dfd1bd9f01fc89c47b35f57865ca551d6b825e4452bd0" exitCode=0 Feb 02 10:43:52 crc kubenswrapper[4782]: I0202 10:43:52.847485 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-dsb8s" event={"ID":"83c24a27-fdbe-468f-b4cf-780c87b598ae","Type":"ContainerDied","Data":"7766ba0f1792fbabdd4dfd1bd9f01fc89c47b35f57865ca551d6b825e4452bd0"} Feb 02 10:43:52 crc kubenswrapper[4782]: I0202 10:43:52.848333 4782 scope.go:117] "RemoveContainer" containerID="7766ba0f1792fbabdd4dfd1bd9f01fc89c47b35f57865ca551d6b825e4452bd0" Feb 02 10:43:53 crc kubenswrapper[4782]: I0202 10:43:53.614611 4782 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-marketplace/marketplace-operator-79b997595-dsb8s" Feb 02 10:43:53 crc kubenswrapper[4782]: I0202 10:43:53.614703 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-dsb8s" Feb 02 10:43:53 crc kubenswrapper[4782]: I0202 10:43:53.875964 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-dsb8s" event={"ID":"83c24a27-fdbe-468f-b4cf-780c87b598ae","Type":"ContainerStarted","Data":"7c6f28c6ab23e2b0cac464dda379a0db63254628f161ee4a1fc6725636dd5d18"} Feb 02 10:43:53 crc kubenswrapper[4782]: I0202 10:43:53.876270 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-dsb8s" Feb 02 10:43:53 crc kubenswrapper[4782]: I0202 10:43:53.880402 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-dsb8s" Feb 02 10:44:03 crc kubenswrapper[4782]: I0202 10:44:03.954299 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Feb 02 10:44:03 crc kubenswrapper[4782]: I0202 10:44:03.957033 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 02 10:44:03 crc kubenswrapper[4782]: I0202 10:44:03.957180 4782 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="28ff8f809aa892efb59230ce281eba5df7dc1a64de78fc0b780249b88f330ba3" exitCode=137 Feb 02 10:44:03 crc kubenswrapper[4782]: I0202 10:44:03.957301 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"28ff8f809aa892efb59230ce281eba5df7dc1a64de78fc0b780249b88f330ba3"} Feb 02 10:44:03 crc kubenswrapper[4782]: I0202 10:44:03.957405 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"f6323010df1c369418604188acdf1da1eb55bdb99241d2d9922d246fe008c4e0"} Feb 02 10:44:03 crc kubenswrapper[4782]: I0202 10:44:03.957496 4782 scope.go:117] "RemoveContainer" containerID="18f8ad22dcd04bba0dd895d584affcd359ac6221153a18ee542dee979b9efe25" Feb 02 10:44:04 crc kubenswrapper[4782]: I0202 10:44:04.964769 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Feb 02 10:44:13 crc kubenswrapper[4782]: I0202 10:44:13.375352 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 10:44:13 crc kubenswrapper[4782]: I0202 10:44:13.379212 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 10:44:13 crc kubenswrapper[4782]: I0202 10:44:13.400942 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 10:44:13 crc kubenswrapper[4782]: I0202 10:44:13.405782 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 10:44:20 crc kubenswrapper[4782]: I0202 10:44:20.694481 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5fc58ff67-mghl6"] Feb 02 10:44:20 crc kubenswrapper[4782]: I0202 10:44:20.695299 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-5fc58ff67-mghl6" podUID="1e30f31e-9e81-4b3f-a680-a84918f9e7ec" containerName="route-controller-manager" containerID="cri-o://9b3a89ab30ae80001691fec459871214ee3a74122b7f05bc4d16d051b4bde636" gracePeriod=30 Feb 02 10:44:20 crc kubenswrapper[4782]: I0202 10:44:20.700863 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-58b7b45f6f-2t475"] Feb 02 10:44:20 crc kubenswrapper[4782]: I0202 10:44:20.701112 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-58b7b45f6f-2t475" podUID="46d69997-45d2-4fc5-97fe-324abd43be7c" containerName="controller-manager" containerID="cri-o://31803234b3e6e5e02aacf6f32b0728560be1f2f91def8457a24f11c30c6f30d9" gracePeriod=30 Feb 02 10:44:21 crc kubenswrapper[4782]: I0202 10:44:21.059600 4782 generic.go:334] "Generic (PLEG): container finished" podID="1e30f31e-9e81-4b3f-a680-a84918f9e7ec" containerID="9b3a89ab30ae80001691fec459871214ee3a74122b7f05bc4d16d051b4bde636" exitCode=0 Feb 02 10:44:21 crc kubenswrapper[4782]: I0202 10:44:21.059954 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5fc58ff67-mghl6" event={"ID":"1e30f31e-9e81-4b3f-a680-a84918f9e7ec","Type":"ContainerDied","Data":"9b3a89ab30ae80001691fec459871214ee3a74122b7f05bc4d16d051b4bde636"} Feb 02 10:44:21 crc kubenswrapper[4782]: I0202 10:44:21.062004 4782 generic.go:334] "Generic (PLEG): container finished" podID="46d69997-45d2-4fc5-97fe-324abd43be7c" containerID="31803234b3e6e5e02aacf6f32b0728560be1f2f91def8457a24f11c30c6f30d9" exitCode=0 Feb 02 10:44:21 crc 
kubenswrapper[4782]: I0202 10:44:21.062051 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-58b7b45f6f-2t475" event={"ID":"46d69997-45d2-4fc5-97fe-324abd43be7c","Type":"ContainerDied","Data":"31803234b3e6e5e02aacf6f32b0728560be1f2f91def8457a24f11c30c6f30d9"} Feb 02 10:44:21 crc kubenswrapper[4782]: I0202 10:44:21.232768 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5fc58ff67-mghl6" Feb 02 10:44:21 crc kubenswrapper[4782]: I0202 10:44:21.239781 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-58b7b45f6f-2t475" Feb 02 10:44:21 crc kubenswrapper[4782]: I0202 10:44:21.269603 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e30f31e-9e81-4b3f-a680-a84918f9e7ec-config\") pod \"1e30f31e-9e81-4b3f-a680-a84918f9e7ec\" (UID: \"1e30f31e-9e81-4b3f-a680-a84918f9e7ec\") " Feb 02 10:44:21 crc kubenswrapper[4782]: I0202 10:44:21.269691 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/46d69997-45d2-4fc5-97fe-324abd43be7c-client-ca\") pod \"46d69997-45d2-4fc5-97fe-324abd43be7c\" (UID: \"46d69997-45d2-4fc5-97fe-324abd43be7c\") " Feb 02 10:44:21 crc kubenswrapper[4782]: I0202 10:44:21.269718 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/46d69997-45d2-4fc5-97fe-324abd43be7c-serving-cert\") pod \"46d69997-45d2-4fc5-97fe-324abd43be7c\" (UID: \"46d69997-45d2-4fc5-97fe-324abd43be7c\") " Feb 02 10:44:21 crc kubenswrapper[4782]: I0202 10:44:21.269750 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1e30f31e-9e81-4b3f-a680-a84918f9e7ec-serving-cert\") pod \"1e30f31e-9e81-4b3f-a680-a84918f9e7ec\" (UID: \"1e30f31e-9e81-4b3f-a680-a84918f9e7ec\") " Feb 02 10:44:21 crc kubenswrapper[4782]: I0202 10:44:21.269783 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-886qd\" (UniqueName: \"kubernetes.io/projected/46d69997-45d2-4fc5-97fe-324abd43be7c-kube-api-access-886qd\") pod \"46d69997-45d2-4fc5-97fe-324abd43be7c\" (UID: \"46d69997-45d2-4fc5-97fe-324abd43be7c\") " Feb 02 10:44:21 crc kubenswrapper[4782]: I0202 10:44:21.269815 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1e30f31e-9e81-4b3f-a680-a84918f9e7ec-client-ca\") pod \"1e30f31e-9e81-4b3f-a680-a84918f9e7ec\" (UID: \"1e30f31e-9e81-4b3f-a680-a84918f9e7ec\") " Feb 02 10:44:21 crc kubenswrapper[4782]: I0202 10:44:21.269840 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/46d69997-45d2-4fc5-97fe-324abd43be7c-proxy-ca-bundles\") pod \"46d69997-45d2-4fc5-97fe-324abd43be7c\" (UID: \"46d69997-45d2-4fc5-97fe-324abd43be7c\") " Feb 02 10:44:21 crc kubenswrapper[4782]: I0202 10:44:21.269880 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hd8hw\" (UniqueName: \"kubernetes.io/projected/1e30f31e-9e81-4b3f-a680-a84918f9e7ec-kube-api-access-hd8hw\") pod \"1e30f31e-9e81-4b3f-a680-a84918f9e7ec\" (UID: 
\"1e30f31e-9e81-4b3f-a680-a84918f9e7ec\") " Feb 02 10:44:21 crc kubenswrapper[4782]: I0202 10:44:21.269922 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46d69997-45d2-4fc5-97fe-324abd43be7c-config\") pod \"46d69997-45d2-4fc5-97fe-324abd43be7c\" (UID: \"46d69997-45d2-4fc5-97fe-324abd43be7c\") " Feb 02 10:44:21 crc kubenswrapper[4782]: I0202 10:44:21.271099 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1e30f31e-9e81-4b3f-a680-a84918f9e7ec-client-ca" (OuterVolumeSpecName: "client-ca") pod "1e30f31e-9e81-4b3f-a680-a84918f9e7ec" (UID: "1e30f31e-9e81-4b3f-a680-a84918f9e7ec"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:44:21 crc kubenswrapper[4782]: I0202 10:44:21.272055 4782 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1e30f31e-9e81-4b3f-a680-a84918f9e7ec-client-ca\") on node \"crc\" DevicePath \"\"" Feb 02 10:44:21 crc kubenswrapper[4782]: I0202 10:44:21.272883 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/46d69997-45d2-4fc5-97fe-324abd43be7c-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "46d69997-45d2-4fc5-97fe-324abd43be7c" (UID: "46d69997-45d2-4fc5-97fe-324abd43be7c"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:44:21 crc kubenswrapper[4782]: I0202 10:44:21.273595 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/46d69997-45d2-4fc5-97fe-324abd43be7c-config" (OuterVolumeSpecName: "config") pod "46d69997-45d2-4fc5-97fe-324abd43be7c" (UID: "46d69997-45d2-4fc5-97fe-324abd43be7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:44:21 crc kubenswrapper[4782]: I0202 10:44:21.274769 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1e30f31e-9e81-4b3f-a680-a84918f9e7ec-config" (OuterVolumeSpecName: "config") pod "1e30f31e-9e81-4b3f-a680-a84918f9e7ec" (UID: "1e30f31e-9e81-4b3f-a680-a84918f9e7ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:44:21 crc kubenswrapper[4782]: I0202 10:44:21.274092 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/46d69997-45d2-4fc5-97fe-324abd43be7c-client-ca" (OuterVolumeSpecName: "client-ca") pod "46d69997-45d2-4fc5-97fe-324abd43be7c" (UID: "46d69997-45d2-4fc5-97fe-324abd43be7c"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:44:21 crc kubenswrapper[4782]: I0202 10:44:21.275019 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e30f31e-9e81-4b3f-a680-a84918f9e7ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1e30f31e-9e81-4b3f-a680-a84918f9e7ec" (UID: "1e30f31e-9e81-4b3f-a680-a84918f9e7ec"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:44:21 crc kubenswrapper[4782]: I0202 10:44:21.279830 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e30f31e-9e81-4b3f-a680-a84918f9e7ec-kube-api-access-hd8hw" (OuterVolumeSpecName: "kube-api-access-hd8hw") pod "1e30f31e-9e81-4b3f-a680-a84918f9e7ec" (UID: "1e30f31e-9e81-4b3f-a680-a84918f9e7ec"). InnerVolumeSpecName "kube-api-access-hd8hw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:44:21 crc kubenswrapper[4782]: I0202 10:44:21.283874 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46d69997-45d2-4fc5-97fe-324abd43be7c-kube-api-access-886qd" (OuterVolumeSpecName: "kube-api-access-886qd") pod "46d69997-45d2-4fc5-97fe-324abd43be7c" (UID: "46d69997-45d2-4fc5-97fe-324abd43be7c"). InnerVolumeSpecName "kube-api-access-886qd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:44:21 crc kubenswrapper[4782]: I0202 10:44:21.285061 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46d69997-45d2-4fc5-97fe-324abd43be7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "46d69997-45d2-4fc5-97fe-324abd43be7c" (UID: "46d69997-45d2-4fc5-97fe-324abd43be7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:44:21 crc kubenswrapper[4782]: I0202 10:44:21.373754 4782 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/46d69997-45d2-4fc5-97fe-324abd43be7c-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 02 10:44:21 crc kubenswrapper[4782]: I0202 10:44:21.373799 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hd8hw\" (UniqueName: \"kubernetes.io/projected/1e30f31e-9e81-4b3f-a680-a84918f9e7ec-kube-api-access-hd8hw\") on node \"crc\" DevicePath \"\"" Feb 02 10:44:21 crc kubenswrapper[4782]: I0202 10:44:21.373817 4782 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46d69997-45d2-4fc5-97fe-324abd43be7c-config\") on node \"crc\" DevicePath \"\"" Feb 02 10:44:21 crc kubenswrapper[4782]: I0202 10:44:21.373831 4782 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e30f31e-9e81-4b3f-a680-a84918f9e7ec-config\") on node \"crc\" DevicePath \"\"" Feb 02 10:44:21 crc kubenswrapper[4782]: I0202 10:44:21.373842 4782 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/46d69997-45d2-4fc5-97fe-324abd43be7c-client-ca\") on node \"crc\" DevicePath \"\"" Feb 02 10:44:21 crc kubenswrapper[4782]: I0202 10:44:21.373852 4782 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/46d69997-45d2-4fc5-97fe-324abd43be7c-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 10:44:21 crc kubenswrapper[4782]: I0202 10:44:21.373862 4782 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1e30f31e-9e81-4b3f-a680-a84918f9e7ec-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 10:44:21 crc kubenswrapper[4782]: I0202 10:44:21.373875 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-886qd\" (UniqueName: \"kubernetes.io/projected/46d69997-45d2-4fc5-97fe-324abd43be7c-kube-api-access-886qd\") on node \"crc\" DevicePath \"\"" Feb 02 10:44:21 crc 
kubenswrapper[4782]: I0202 10:44:21.756016 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-d48c458cb-pbzkm"] Feb 02 10:44:21 crc kubenswrapper[4782]: E0202 10:44:21.756247 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 02 10:44:21 crc kubenswrapper[4782]: I0202 10:44:21.756259 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 02 10:44:21 crc kubenswrapper[4782]: E0202 10:44:21.756274 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46d69997-45d2-4fc5-97fe-324abd43be7c" containerName="controller-manager" Feb 02 10:44:21 crc kubenswrapper[4782]: I0202 10:44:21.756280 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="46d69997-45d2-4fc5-97fe-324abd43be7c" containerName="controller-manager" Feb 02 10:44:21 crc kubenswrapper[4782]: E0202 10:44:21.756290 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf2939f4-fa35-4f01-a896-2ddc746ac111" containerName="installer" Feb 02 10:44:21 crc kubenswrapper[4782]: I0202 10:44:21.756296 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf2939f4-fa35-4f01-a896-2ddc746ac111" containerName="installer" Feb 02 10:44:21 crc kubenswrapper[4782]: E0202 10:44:21.756306 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e30f31e-9e81-4b3f-a680-a84918f9e7ec" containerName="route-controller-manager" Feb 02 10:44:21 crc kubenswrapper[4782]: I0202 10:44:21.756312 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e30f31e-9e81-4b3f-a680-a84918f9e7ec" containerName="route-controller-manager" Feb 02 10:44:21 crc kubenswrapper[4782]: I0202 10:44:21.756452 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 02 10:44:21 crc kubenswrapper[4782]: I0202 10:44:21.756467 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e30f31e-9e81-4b3f-a680-a84918f9e7ec" containerName="route-controller-manager" Feb 02 10:44:21 crc kubenswrapper[4782]: I0202 10:44:21.756482 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf2939f4-fa35-4f01-a896-2ddc746ac111" containerName="installer" Feb 02 10:44:21 crc kubenswrapper[4782]: I0202 10:44:21.756494 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="46d69997-45d2-4fc5-97fe-324abd43be7c" containerName="controller-manager" Feb 02 10:44:21 crc kubenswrapper[4782]: I0202 10:44:21.756928 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-d48c458cb-pbzkm" Feb 02 10:44:21 crc kubenswrapper[4782]: I0202 10:44:21.769606 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-d48c458cb-pbzkm"] Feb 02 10:44:21 crc kubenswrapper[4782]: I0202 10:44:21.778186 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6grgm\" (UniqueName: \"kubernetes.io/projected/8f6cd214-bca2-452a-825d-cf4b07972e83-kube-api-access-6grgm\") pod \"controller-manager-d48c458cb-pbzkm\" (UID: \"8f6cd214-bca2-452a-825d-cf4b07972e83\") " pod="openshift-controller-manager/controller-manager-d48c458cb-pbzkm" Feb 02 10:44:21 crc kubenswrapper[4782]: I0202 10:44:21.778406 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f6cd214-bca2-452a-825d-cf4b07972e83-config\") pod \"controller-manager-d48c458cb-pbzkm\" (UID: \"8f6cd214-bca2-452a-825d-cf4b07972e83\") " pod="openshift-controller-manager/controller-manager-d48c458cb-pbzkm" Feb 02 10:44:21 crc kubenswrapper[4782]: I0202 10:44:21.778520 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8f6cd214-bca2-452a-825d-cf4b07972e83-client-ca\") pod \"controller-manager-d48c458cb-pbzkm\" (UID: \"8f6cd214-bca2-452a-825d-cf4b07972e83\") " pod="openshift-controller-manager/controller-manager-d48c458cb-pbzkm" Feb 02 10:44:21 crc kubenswrapper[4782]: I0202 10:44:21.778605 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8f6cd214-bca2-452a-825d-cf4b07972e83-proxy-ca-bundles\") pod \"controller-manager-d48c458cb-pbzkm\" (UID: \"8f6cd214-bca2-452a-825d-cf4b07972e83\") " pod="openshift-controller-manager/controller-manager-d48c458cb-pbzkm" Feb 02 10:44:21 crc kubenswrapper[4782]: I0202 10:44:21.778704 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8f6cd214-bca2-452a-825d-cf4b07972e83-serving-cert\") pod \"controller-manager-d48c458cb-pbzkm\" (UID: \"8f6cd214-bca2-452a-825d-cf4b07972e83\") " pod="openshift-controller-manager/controller-manager-d48c458cb-pbzkm" Feb 02 10:44:21 crc kubenswrapper[4782]: I0202 10:44:21.879382 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6grgm\" (UniqueName: \"kubernetes.io/projected/8f6cd214-bca2-452a-825d-cf4b07972e83-kube-api-access-6grgm\") pod \"controller-manager-d48c458cb-pbzkm\" (UID: \"8f6cd214-bca2-452a-825d-cf4b07972e83\") " pod="openshift-controller-manager/controller-manager-d48c458cb-pbzkm" Feb 02 10:44:21 crc kubenswrapper[4782]: I0202 10:44:21.879823 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f6cd214-bca2-452a-825d-cf4b07972e83-config\") pod \"controller-manager-d48c458cb-pbzkm\" (UID: \"8f6cd214-bca2-452a-825d-cf4b07972e83\") " pod="openshift-controller-manager/controller-manager-d48c458cb-pbzkm" Feb 02 10:44:21 crc kubenswrapper[4782]: I0202 10:44:21.880697 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8f6cd214-bca2-452a-825d-cf4b07972e83-client-ca\") pod 
\"controller-manager-d48c458cb-pbzkm\" (UID: \"8f6cd214-bca2-452a-825d-cf4b07972e83\") " pod="openshift-controller-manager/controller-manager-d48c458cb-pbzkm" Feb 02 10:44:21 crc kubenswrapper[4782]: I0202 10:44:21.880729 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8f6cd214-bca2-452a-825d-cf4b07972e83-proxy-ca-bundles\") pod \"controller-manager-d48c458cb-pbzkm\" (UID: \"8f6cd214-bca2-452a-825d-cf4b07972e83\") " pod="openshift-controller-manager/controller-manager-d48c458cb-pbzkm" Feb 02 10:44:21 crc kubenswrapper[4782]: I0202 10:44:21.880758 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8f6cd214-bca2-452a-825d-cf4b07972e83-serving-cert\") pod \"controller-manager-d48c458cb-pbzkm\" (UID: \"8f6cd214-bca2-452a-825d-cf4b07972e83\") " pod="openshift-controller-manager/controller-manager-d48c458cb-pbzkm" Feb 02 10:44:21 crc kubenswrapper[4782]: I0202 10:44:21.881305 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f6cd214-bca2-452a-825d-cf4b07972e83-config\") pod \"controller-manager-d48c458cb-pbzkm\" (UID: \"8f6cd214-bca2-452a-825d-cf4b07972e83\") " pod="openshift-controller-manager/controller-manager-d48c458cb-pbzkm" Feb 02 10:44:21 crc kubenswrapper[4782]: I0202 10:44:21.881486 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8f6cd214-bca2-452a-825d-cf4b07972e83-client-ca\") pod \"controller-manager-d48c458cb-pbzkm\" (UID: \"8f6cd214-bca2-452a-825d-cf4b07972e83\") " pod="openshift-controller-manager/controller-manager-d48c458cb-pbzkm" Feb 02 10:44:21 crc kubenswrapper[4782]: I0202 10:44:21.882680 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8f6cd214-bca2-452a-825d-cf4b07972e83-proxy-ca-bundles\") pod \"controller-manager-d48c458cb-pbzkm\" (UID: \"8f6cd214-bca2-452a-825d-cf4b07972e83\") " pod="openshift-controller-manager/controller-manager-d48c458cb-pbzkm" Feb 02 10:44:21 crc kubenswrapper[4782]: I0202 10:44:21.885771 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8f6cd214-bca2-452a-825d-cf4b07972e83-serving-cert\") pod \"controller-manager-d48c458cb-pbzkm\" (UID: \"8f6cd214-bca2-452a-825d-cf4b07972e83\") " pod="openshift-controller-manager/controller-manager-d48c458cb-pbzkm" Feb 02 10:44:21 crc kubenswrapper[4782]: I0202 10:44:21.899182 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6grgm\" (UniqueName: \"kubernetes.io/projected/8f6cd214-bca2-452a-825d-cf4b07972e83-kube-api-access-6grgm\") pod \"controller-manager-d48c458cb-pbzkm\" (UID: \"8f6cd214-bca2-452a-825d-cf4b07972e83\") " pod="openshift-controller-manager/controller-manager-d48c458cb-pbzkm" Feb 02 10:44:22 crc kubenswrapper[4782]: I0202 10:44:22.068527 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-58b7b45f6f-2t475" event={"ID":"46d69997-45d2-4fc5-97fe-324abd43be7c","Type":"ContainerDied","Data":"6d2418109eeeba4b0106f128a727272504228d5ce1b9780ebff9ed573127420d"} Feb 02 10:44:22 crc kubenswrapper[4782]: I0202 10:44:22.068896 4782 scope.go:117] "RemoveContainer" 
containerID="31803234b3e6e5e02aacf6f32b0728560be1f2f91def8457a24f11c30c6f30d9" Feb 02 10:44:22 crc kubenswrapper[4782]: I0202 10:44:22.068546 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-58b7b45f6f-2t475" Feb 02 10:44:22 crc kubenswrapper[4782]: I0202 10:44:22.070708 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5fc58ff67-mghl6" event={"ID":"1e30f31e-9e81-4b3f-a680-a84918f9e7ec","Type":"ContainerDied","Data":"944e504f7664a3445683cb50cf91015cca415c853c6ae2c6623b8ce4bf506ee0"} Feb 02 10:44:22 crc kubenswrapper[4782]: I0202 10:44:22.070751 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5fc58ff67-mghl6" Feb 02 10:44:22 crc kubenswrapper[4782]: I0202 10:44:22.077967 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-d48c458cb-pbzkm" Feb 02 10:44:22 crc kubenswrapper[4782]: I0202 10:44:22.085387 4782 scope.go:117] "RemoveContainer" containerID="9b3a89ab30ae80001691fec459871214ee3a74122b7f05bc4d16d051b4bde636" Feb 02 10:44:22 crc kubenswrapper[4782]: I0202 10:44:22.099911 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-58b7b45f6f-2t475"] Feb 02 10:44:22 crc kubenswrapper[4782]: I0202 10:44:22.105190 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-58b7b45f6f-2t475"] Feb 02 10:44:22 crc kubenswrapper[4782]: I0202 10:44:22.127177 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5fc58ff67-mghl6"] Feb 02 10:44:22 crc kubenswrapper[4782]: I0202 10:44:22.145879 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5fc58ff67-mghl6"] Feb 02 10:44:22 crc kubenswrapper[4782]: I0202 10:44:22.493546 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-d48c458cb-pbzkm"] Feb 02 10:44:22 crc kubenswrapper[4782]: W0202 10:44:22.502627 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8f6cd214_bca2_452a_825d_cf4b07972e83.slice/crio-570cce5fe0dff1b59e4f7acf454c500f5d6f86ed2664600b29e8b219ef83edb7 WatchSource:0}: Error finding container 570cce5fe0dff1b59e4f7acf454c500f5d6f86ed2664600b29e8b219ef83edb7: Status 404 returned error can't find the container with id 570cce5fe0dff1b59e4f7acf454c500f5d6f86ed2664600b29e8b219ef83edb7 Feb 02 10:44:22 crc kubenswrapper[4782]: I0202 10:44:22.762106 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-dc64fcccb-25wht"] Feb 02 10:44:22 crc kubenswrapper[4782]: I0202 10:44:22.762986 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-dc64fcccb-25wht" Feb 02 10:44:22 crc kubenswrapper[4782]: I0202 10:44:22.765591 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 02 10:44:22 crc kubenswrapper[4782]: I0202 10:44:22.765755 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 02 10:44:22 crc kubenswrapper[4782]: I0202 10:44:22.766249 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 02 10:44:22 crc kubenswrapper[4782]: I0202 10:44:22.766324 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 02 10:44:22 crc kubenswrapper[4782]: I0202 10:44:22.767531 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 02 10:44:22 crc kubenswrapper[4782]: I0202 10:44:22.767544 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 02 10:44:22 crc kubenswrapper[4782]: I0202 10:44:22.781745 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-dc64fcccb-25wht"] Feb 02 10:44:22 crc kubenswrapper[4782]: I0202 10:44:22.789002 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8z9cm\" (UniqueName: \"kubernetes.io/projected/46e9aaad-cb96-4f13-bc66-f88eacc38399-kube-api-access-8z9cm\") pod \"route-controller-manager-dc64fcccb-25wht\" (UID: \"46e9aaad-cb96-4f13-bc66-f88eacc38399\") " pod="openshift-route-controller-manager/route-controller-manager-dc64fcccb-25wht" Feb 02 10:44:22 crc kubenswrapper[4782]: I0202 10:44:22.789085 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/46e9aaad-cb96-4f13-bc66-f88eacc38399-serving-cert\") pod \"route-controller-manager-dc64fcccb-25wht\" (UID: \"46e9aaad-cb96-4f13-bc66-f88eacc38399\") " pod="openshift-route-controller-manager/route-controller-manager-dc64fcccb-25wht" Feb 02 10:44:22 crc kubenswrapper[4782]: I0202 10:44:22.789169 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46e9aaad-cb96-4f13-bc66-f88eacc38399-config\") pod \"route-controller-manager-dc64fcccb-25wht\" (UID: \"46e9aaad-cb96-4f13-bc66-f88eacc38399\") " pod="openshift-route-controller-manager/route-controller-manager-dc64fcccb-25wht" Feb 02 10:44:22 crc kubenswrapper[4782]: I0202 10:44:22.789197 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/46e9aaad-cb96-4f13-bc66-f88eacc38399-client-ca\") pod \"route-controller-manager-dc64fcccb-25wht\" (UID: \"46e9aaad-cb96-4f13-bc66-f88eacc38399\") " pod="openshift-route-controller-manager/route-controller-manager-dc64fcccb-25wht" Feb 02 10:44:22 crc kubenswrapper[4782]: I0202 10:44:22.836745 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e30f31e-9e81-4b3f-a680-a84918f9e7ec" path="/var/lib/kubelet/pods/1e30f31e-9e81-4b3f-a680-a84918f9e7ec/volumes" Feb 02 10:44:22 crc kubenswrapper[4782]: 
I0202 10:44:22.837514 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46d69997-45d2-4fc5-97fe-324abd43be7c" path="/var/lib/kubelet/pods/46d69997-45d2-4fc5-97fe-324abd43be7c/volumes" Feb 02 10:44:22 crc kubenswrapper[4782]: I0202 10:44:22.890846 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/46e9aaad-cb96-4f13-bc66-f88eacc38399-serving-cert\") pod \"route-controller-manager-dc64fcccb-25wht\" (UID: \"46e9aaad-cb96-4f13-bc66-f88eacc38399\") " pod="openshift-route-controller-manager/route-controller-manager-dc64fcccb-25wht" Feb 02 10:44:22 crc kubenswrapper[4782]: I0202 10:44:22.890923 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46e9aaad-cb96-4f13-bc66-f88eacc38399-config\") pod \"route-controller-manager-dc64fcccb-25wht\" (UID: \"46e9aaad-cb96-4f13-bc66-f88eacc38399\") " pod="openshift-route-controller-manager/route-controller-manager-dc64fcccb-25wht" Feb 02 10:44:22 crc kubenswrapper[4782]: I0202 10:44:22.890952 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/46e9aaad-cb96-4f13-bc66-f88eacc38399-client-ca\") pod \"route-controller-manager-dc64fcccb-25wht\" (UID: \"46e9aaad-cb96-4f13-bc66-f88eacc38399\") " pod="openshift-route-controller-manager/route-controller-manager-dc64fcccb-25wht" Feb 02 10:44:22 crc kubenswrapper[4782]: I0202 10:44:22.890983 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8z9cm\" (UniqueName: \"kubernetes.io/projected/46e9aaad-cb96-4f13-bc66-f88eacc38399-kube-api-access-8z9cm\") pod \"route-controller-manager-dc64fcccb-25wht\" (UID: \"46e9aaad-cb96-4f13-bc66-f88eacc38399\") " pod="openshift-route-controller-manager/route-controller-manager-dc64fcccb-25wht" Feb 02 10:44:22 crc kubenswrapper[4782]: I0202 10:44:22.892541 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/46e9aaad-cb96-4f13-bc66-f88eacc38399-client-ca\") pod \"route-controller-manager-dc64fcccb-25wht\" (UID: \"46e9aaad-cb96-4f13-bc66-f88eacc38399\") " pod="openshift-route-controller-manager/route-controller-manager-dc64fcccb-25wht" Feb 02 10:44:22 crc kubenswrapper[4782]: I0202 10:44:22.893277 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46e9aaad-cb96-4f13-bc66-f88eacc38399-config\") pod \"route-controller-manager-dc64fcccb-25wht\" (UID: \"46e9aaad-cb96-4f13-bc66-f88eacc38399\") " pod="openshift-route-controller-manager/route-controller-manager-dc64fcccb-25wht" Feb 02 10:44:22 crc kubenswrapper[4782]: I0202 10:44:22.898869 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/46e9aaad-cb96-4f13-bc66-f88eacc38399-serving-cert\") pod \"route-controller-manager-dc64fcccb-25wht\" (UID: \"46e9aaad-cb96-4f13-bc66-f88eacc38399\") " pod="openshift-route-controller-manager/route-controller-manager-dc64fcccb-25wht" Feb 02 10:44:22 crc kubenswrapper[4782]: I0202 10:44:22.922849 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8z9cm\" (UniqueName: \"kubernetes.io/projected/46e9aaad-cb96-4f13-bc66-f88eacc38399-kube-api-access-8z9cm\") pod \"route-controller-manager-dc64fcccb-25wht\" (UID: 
\"46e9aaad-cb96-4f13-bc66-f88eacc38399\") " pod="openshift-route-controller-manager/route-controller-manager-dc64fcccb-25wht" Feb 02 10:44:23 crc kubenswrapper[4782]: I0202 10:44:23.077462 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-d48c458cb-pbzkm" event={"ID":"8f6cd214-bca2-452a-825d-cf4b07972e83","Type":"ContainerStarted","Data":"f9d2ce5bce084aab4b72960678b5ce4040b976b46a53b08fa57b4f4008735731"} Feb 02 10:44:23 crc kubenswrapper[4782]: I0202 10:44:23.077517 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-d48c458cb-pbzkm" event={"ID":"8f6cd214-bca2-452a-825d-cf4b07972e83","Type":"ContainerStarted","Data":"570cce5fe0dff1b59e4f7acf454c500f5d6f86ed2664600b29e8b219ef83edb7"} Feb 02 10:44:23 crc kubenswrapper[4782]: I0202 10:44:23.077963 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-d48c458cb-pbzkm" Feb 02 10:44:23 crc kubenswrapper[4782]: I0202 10:44:23.083005 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-d48c458cb-pbzkm" Feb 02 10:44:23 crc kubenswrapper[4782]: I0202 10:44:23.085717 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-dc64fcccb-25wht" Feb 02 10:44:23 crc kubenswrapper[4782]: I0202 10:44:23.116471 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-d48c458cb-pbzkm" podStartSLOduration=3.116450823 podStartE2EDuration="3.116450823s" podCreationTimestamp="2026-02-02 10:44:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:44:23.111654353 +0000 UTC m=+342.995847069" watchObservedRunningTime="2026-02-02 10:44:23.116450823 +0000 UTC m=+343.000643539" Feb 02 10:44:23 crc kubenswrapper[4782]: I0202 10:44:23.398605 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-dc64fcccb-25wht"] Feb 02 10:44:24 crc kubenswrapper[4782]: I0202 10:44:24.087781 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-dc64fcccb-25wht" event={"ID":"46e9aaad-cb96-4f13-bc66-f88eacc38399","Type":"ContainerStarted","Data":"95aa230da32bcb1bf08dbbd63fdafe110f519061353eb56e2e29745ccbbdc49c"} Feb 02 10:44:24 crc kubenswrapper[4782]: I0202 10:44:24.088161 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-dc64fcccb-25wht" event={"ID":"46e9aaad-cb96-4f13-bc66-f88eacc38399","Type":"ContainerStarted","Data":"2f0d5845b18f9b34db9716ca13cd2e1236c9dbb6b300d2b1ce8ec998fb6301b9"} Feb 02 10:44:24 crc kubenswrapper[4782]: I0202 10:44:24.110476 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-dc64fcccb-25wht" podStartSLOduration=4.11045906 podStartE2EDuration="4.11045906s" podCreationTimestamp="2026-02-02 10:44:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:44:24.109123151 +0000 UTC m=+343.993315867" watchObservedRunningTime="2026-02-02 10:44:24.11045906 +0000 UTC m=+343.994651776" Feb 02 10:44:25 
crc kubenswrapper[4782]: I0202 10:44:25.092203 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-dc64fcccb-25wht" Feb 02 10:44:25 crc kubenswrapper[4782]: I0202 10:44:25.096408 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-dc64fcccb-25wht" Feb 02 10:44:35 crc kubenswrapper[4782]: I0202 10:44:35.831055 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xmt8t"] Feb 02 10:44:35 crc kubenswrapper[4782]: I0202 10:44:35.832079 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-xmt8t" podUID="213698f8-d1b6-489f-8fc4-a69583d4fc2e" containerName="registry-server" containerID="cri-o://6939dd2b86b314873208fc7be8f608a39a08ac73dd21d303f68aa3eaddde0aa0" gracePeriod=2 Feb 02 10:44:36 crc kubenswrapper[4782]: I0202 10:44:36.155714 4782 generic.go:334] "Generic (PLEG): container finished" podID="213698f8-d1b6-489f-8fc4-a69583d4fc2e" containerID="6939dd2b86b314873208fc7be8f608a39a08ac73dd21d303f68aa3eaddde0aa0" exitCode=0 Feb 02 10:44:36 crc kubenswrapper[4782]: I0202 10:44:36.155802 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xmt8t" event={"ID":"213698f8-d1b6-489f-8fc4-a69583d4fc2e","Type":"ContainerDied","Data":"6939dd2b86b314873208fc7be8f608a39a08ac73dd21d303f68aa3eaddde0aa0"} Feb 02 10:44:36 crc kubenswrapper[4782]: I0202 10:44:36.290941 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xmt8t" Feb 02 10:44:36 crc kubenswrapper[4782]: I0202 10:44:36.482192 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/213698f8-d1b6-489f-8fc4-a69583d4fc2e-catalog-content\") pod \"213698f8-d1b6-489f-8fc4-a69583d4fc2e\" (UID: \"213698f8-d1b6-489f-8fc4-a69583d4fc2e\") " Feb 02 10:44:36 crc kubenswrapper[4782]: I0202 10:44:36.482299 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h72b9\" (UniqueName: \"kubernetes.io/projected/213698f8-d1b6-489f-8fc4-a69583d4fc2e-kube-api-access-h72b9\") pod \"213698f8-d1b6-489f-8fc4-a69583d4fc2e\" (UID: \"213698f8-d1b6-489f-8fc4-a69583d4fc2e\") " Feb 02 10:44:36 crc kubenswrapper[4782]: I0202 10:44:36.482394 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/213698f8-d1b6-489f-8fc4-a69583d4fc2e-utilities\") pod \"213698f8-d1b6-489f-8fc4-a69583d4fc2e\" (UID: \"213698f8-d1b6-489f-8fc4-a69583d4fc2e\") " Feb 02 10:44:36 crc kubenswrapper[4782]: I0202 10:44:36.483496 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/213698f8-d1b6-489f-8fc4-a69583d4fc2e-utilities" (OuterVolumeSpecName: "utilities") pod "213698f8-d1b6-489f-8fc4-a69583d4fc2e" (UID: "213698f8-d1b6-489f-8fc4-a69583d4fc2e"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:44:36 crc kubenswrapper[4782]: I0202 10:44:36.490947 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/213698f8-d1b6-489f-8fc4-a69583d4fc2e-kube-api-access-h72b9" (OuterVolumeSpecName: "kube-api-access-h72b9") pod "213698f8-d1b6-489f-8fc4-a69583d4fc2e" (UID: "213698f8-d1b6-489f-8fc4-a69583d4fc2e"). InnerVolumeSpecName "kube-api-access-h72b9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:44:36 crc kubenswrapper[4782]: I0202 10:44:36.584354 4782 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/213698f8-d1b6-489f-8fc4-a69583d4fc2e-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 10:44:36 crc kubenswrapper[4782]: I0202 10:44:36.584387 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h72b9\" (UniqueName: \"kubernetes.io/projected/213698f8-d1b6-489f-8fc4-a69583d4fc2e-kube-api-access-h72b9\") on node \"crc\" DevicePath \"\"" Feb 02 10:44:36 crc kubenswrapper[4782]: I0202 10:44:36.601651 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/213698f8-d1b6-489f-8fc4-a69583d4fc2e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "213698f8-d1b6-489f-8fc4-a69583d4fc2e" (UID: "213698f8-d1b6-489f-8fc4-a69583d4fc2e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:44:36 crc kubenswrapper[4782]: I0202 10:44:36.687133 4782 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/213698f8-d1b6-489f-8fc4-a69583d4fc2e-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 10:44:37 crc kubenswrapper[4782]: I0202 10:44:37.163475 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xmt8t" event={"ID":"213698f8-d1b6-489f-8fc4-a69583d4fc2e","Type":"ContainerDied","Data":"c20f4c43562c9a26701d05b9c48459ab9215c0f89e3d7636a6006f20e7c4c9aa"} Feb 02 10:44:37 crc kubenswrapper[4782]: I0202 10:44:37.163526 4782 scope.go:117] "RemoveContainer" containerID="6939dd2b86b314873208fc7be8f608a39a08ac73dd21d303f68aa3eaddde0aa0" Feb 02 10:44:37 crc kubenswrapper[4782]: I0202 10:44:37.164709 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-xmt8t" Feb 02 10:44:37 crc kubenswrapper[4782]: I0202 10:44:37.180163 4782 scope.go:117] "RemoveContainer" containerID="66cc69f12fda395ec4c6082c4f43f38994cb00dd4a625be34b594e4f4b899617" Feb 02 10:44:37 crc kubenswrapper[4782]: I0202 10:44:37.181567 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xmt8t"] Feb 02 10:44:37 crc kubenswrapper[4782]: I0202 10:44:37.186564 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-xmt8t"] Feb 02 10:44:37 crc kubenswrapper[4782]: I0202 10:44:37.200126 4782 scope.go:117] "RemoveContainer" containerID="e85622bd784d09d56836a615239db13244c2d5b26841db53fd14e1ec1665771e" Feb 02 10:44:38 crc kubenswrapper[4782]: I0202 10:44:38.556888 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-d48c458cb-pbzkm"] Feb 02 10:44:38 crc kubenswrapper[4782]: I0202 10:44:38.557919 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-d48c458cb-pbzkm" podUID="8f6cd214-bca2-452a-825d-cf4b07972e83" containerName="controller-manager" containerID="cri-o://f9d2ce5bce084aab4b72960678b5ce4040b976b46a53b08fa57b4f4008735731" gracePeriod=30 Feb 02 10:44:38 crc kubenswrapper[4782]: I0202 10:44:38.828123 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="213698f8-d1b6-489f-8fc4-a69583d4fc2e" path="/var/lib/kubelet/pods/213698f8-d1b6-489f-8fc4-a69583d4fc2e/volumes" Feb 02 10:44:39 crc kubenswrapper[4782]: I0202 10:44:39.061936 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-d48c458cb-pbzkm" Feb 02 10:44:39 crc kubenswrapper[4782]: I0202 10:44:39.128365 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6grgm\" (UniqueName: \"kubernetes.io/projected/8f6cd214-bca2-452a-825d-cf4b07972e83-kube-api-access-6grgm\") pod \"8f6cd214-bca2-452a-825d-cf4b07972e83\" (UID: \"8f6cd214-bca2-452a-825d-cf4b07972e83\") " Feb 02 10:44:39 crc kubenswrapper[4782]: I0202 10:44:39.128425 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8f6cd214-bca2-452a-825d-cf4b07972e83-client-ca\") pod \"8f6cd214-bca2-452a-825d-cf4b07972e83\" (UID: \"8f6cd214-bca2-452a-825d-cf4b07972e83\") " Feb 02 10:44:39 crc kubenswrapper[4782]: I0202 10:44:39.128485 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8f6cd214-bca2-452a-825d-cf4b07972e83-proxy-ca-bundles\") pod \"8f6cd214-bca2-452a-825d-cf4b07972e83\" (UID: \"8f6cd214-bca2-452a-825d-cf4b07972e83\") " Feb 02 10:44:39 crc kubenswrapper[4782]: I0202 10:44:39.128511 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8f6cd214-bca2-452a-825d-cf4b07972e83-serving-cert\") pod \"8f6cd214-bca2-452a-825d-cf4b07972e83\" (UID: \"8f6cd214-bca2-452a-825d-cf4b07972e83\") " Feb 02 10:44:39 crc kubenswrapper[4782]: I0202 10:44:39.128532 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f6cd214-bca2-452a-825d-cf4b07972e83-config\") pod \"8f6cd214-bca2-452a-825d-cf4b07972e83\" (UID: 
\"8f6cd214-bca2-452a-825d-cf4b07972e83\") " Feb 02 10:44:39 crc kubenswrapper[4782]: I0202 10:44:39.129249 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f6cd214-bca2-452a-825d-cf4b07972e83-client-ca" (OuterVolumeSpecName: "client-ca") pod "8f6cd214-bca2-452a-825d-cf4b07972e83" (UID: "8f6cd214-bca2-452a-825d-cf4b07972e83"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:44:39 crc kubenswrapper[4782]: I0202 10:44:39.129296 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f6cd214-bca2-452a-825d-cf4b07972e83-config" (OuterVolumeSpecName: "config") pod "8f6cd214-bca2-452a-825d-cf4b07972e83" (UID: "8f6cd214-bca2-452a-825d-cf4b07972e83"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:44:39 crc kubenswrapper[4782]: I0202 10:44:39.130172 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f6cd214-bca2-452a-825d-cf4b07972e83-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "8f6cd214-bca2-452a-825d-cf4b07972e83" (UID: "8f6cd214-bca2-452a-825d-cf4b07972e83"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:44:39 crc kubenswrapper[4782]: I0202 10:44:39.133082 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f6cd214-bca2-452a-825d-cf4b07972e83-kube-api-access-6grgm" (OuterVolumeSpecName: "kube-api-access-6grgm") pod "8f6cd214-bca2-452a-825d-cf4b07972e83" (UID: "8f6cd214-bca2-452a-825d-cf4b07972e83"). InnerVolumeSpecName "kube-api-access-6grgm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:44:39 crc kubenswrapper[4782]: I0202 10:44:39.134130 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f6cd214-bca2-452a-825d-cf4b07972e83-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8f6cd214-bca2-452a-825d-cf4b07972e83" (UID: "8f6cd214-bca2-452a-825d-cf4b07972e83"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:44:39 crc kubenswrapper[4782]: I0202 10:44:39.175724 4782 generic.go:334] "Generic (PLEG): container finished" podID="8f6cd214-bca2-452a-825d-cf4b07972e83" containerID="f9d2ce5bce084aab4b72960678b5ce4040b976b46a53b08fa57b4f4008735731" exitCode=0 Feb 02 10:44:39 crc kubenswrapper[4782]: I0202 10:44:39.175789 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-d48c458cb-pbzkm" event={"ID":"8f6cd214-bca2-452a-825d-cf4b07972e83","Type":"ContainerDied","Data":"f9d2ce5bce084aab4b72960678b5ce4040b976b46a53b08fa57b4f4008735731"} Feb 02 10:44:39 crc kubenswrapper[4782]: I0202 10:44:39.175828 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-d48c458cb-pbzkm" event={"ID":"8f6cd214-bca2-452a-825d-cf4b07972e83","Type":"ContainerDied","Data":"570cce5fe0dff1b59e4f7acf454c500f5d6f86ed2664600b29e8b219ef83edb7"} Feb 02 10:44:39 crc kubenswrapper[4782]: I0202 10:44:39.175855 4782 scope.go:117] "RemoveContainer" containerID="f9d2ce5bce084aab4b72960678b5ce4040b976b46a53b08fa57b4f4008735731" Feb 02 10:44:39 crc kubenswrapper[4782]: I0202 10:44:39.175993 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-d48c458cb-pbzkm" Feb 02 10:44:39 crc kubenswrapper[4782]: I0202 10:44:39.193923 4782 scope.go:117] "RemoveContainer" containerID="f9d2ce5bce084aab4b72960678b5ce4040b976b46a53b08fa57b4f4008735731" Feb 02 10:44:39 crc kubenswrapper[4782]: E0202 10:44:39.194264 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f9d2ce5bce084aab4b72960678b5ce4040b976b46a53b08fa57b4f4008735731\": container with ID starting with f9d2ce5bce084aab4b72960678b5ce4040b976b46a53b08fa57b4f4008735731 not found: ID does not exist" containerID="f9d2ce5bce084aab4b72960678b5ce4040b976b46a53b08fa57b4f4008735731" Feb 02 10:44:39 crc kubenswrapper[4782]: I0202 10:44:39.194306 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f9d2ce5bce084aab4b72960678b5ce4040b976b46a53b08fa57b4f4008735731"} err="failed to get container status \"f9d2ce5bce084aab4b72960678b5ce4040b976b46a53b08fa57b4f4008735731\": rpc error: code = NotFound desc = could not find container \"f9d2ce5bce084aab4b72960678b5ce4040b976b46a53b08fa57b4f4008735731\": container with ID starting with f9d2ce5bce084aab4b72960678b5ce4040b976b46a53b08fa57b4f4008735731 not found: ID does not exist" Feb 02 10:44:39 crc kubenswrapper[4782]: I0202 10:44:39.207079 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-d48c458cb-pbzkm"] Feb 02 10:44:39 crc kubenswrapper[4782]: I0202 10:44:39.210589 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-d48c458cb-pbzkm"] Feb 02 10:44:39 crc kubenswrapper[4782]: I0202 10:44:39.230035 4782 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f6cd214-bca2-452a-825d-cf4b07972e83-config\") on node \"crc\" DevicePath \"\"" Feb 02 10:44:39 crc kubenswrapper[4782]: I0202 10:44:39.230070 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6grgm\" (UniqueName: \"kubernetes.io/projected/8f6cd214-bca2-452a-825d-cf4b07972e83-kube-api-access-6grgm\") on node \"crc\" DevicePath \"\"" Feb 02 10:44:39 crc kubenswrapper[4782]: I0202 10:44:39.230084 4782 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8f6cd214-bca2-452a-825d-cf4b07972e83-client-ca\") on node \"crc\" DevicePath \"\"" Feb 02 10:44:39 crc kubenswrapper[4782]: I0202 10:44:39.230095 4782 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8f6cd214-bca2-452a-825d-cf4b07972e83-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 02 10:44:39 crc kubenswrapper[4782]: I0202 10:44:39.230107 4782 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8f6cd214-bca2-452a-825d-cf4b07972e83-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 10:44:39 crc kubenswrapper[4782]: I0202 10:44:39.770853 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-69b5cb4cd4-2mp26"] Feb 02 10:44:39 crc kubenswrapper[4782]: E0202 10:44:39.772252 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="213698f8-d1b6-489f-8fc4-a69583d4fc2e" containerName="extract-content" Feb 02 10:44:39 crc kubenswrapper[4782]: I0202 10:44:39.772331 4782 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="213698f8-d1b6-489f-8fc4-a69583d4fc2e" containerName="extract-content" Feb 02 10:44:39 crc kubenswrapper[4782]: E0202 10:44:39.772393 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="213698f8-d1b6-489f-8fc4-a69583d4fc2e" containerName="extract-utilities" Feb 02 10:44:39 crc kubenswrapper[4782]: I0202 10:44:39.772504 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="213698f8-d1b6-489f-8fc4-a69583d4fc2e" containerName="extract-utilities" Feb 02 10:44:39 crc kubenswrapper[4782]: E0202 10:44:39.772615 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f6cd214-bca2-452a-825d-cf4b07972e83" containerName="controller-manager" Feb 02 10:44:39 crc kubenswrapper[4782]: I0202 10:44:39.772728 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f6cd214-bca2-452a-825d-cf4b07972e83" containerName="controller-manager" Feb 02 10:44:39 crc kubenswrapper[4782]: E0202 10:44:39.772983 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="213698f8-d1b6-489f-8fc4-a69583d4fc2e" containerName="registry-server" Feb 02 10:44:39 crc kubenswrapper[4782]: I0202 10:44:39.773044 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="213698f8-d1b6-489f-8fc4-a69583d4fc2e" containerName="registry-server" Feb 02 10:44:39 crc kubenswrapper[4782]: I0202 10:44:39.773206 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="213698f8-d1b6-489f-8fc4-a69583d4fc2e" containerName="registry-server" Feb 02 10:44:39 crc kubenswrapper[4782]: I0202 10:44:39.773282 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f6cd214-bca2-452a-825d-cf4b07972e83" containerName="controller-manager" Feb 02 10:44:39 crc kubenswrapper[4782]: I0202 10:44:39.774306 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-69b5cb4cd4-2mp26" Feb 02 10:44:39 crc kubenswrapper[4782]: I0202 10:44:39.777095 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 02 10:44:39 crc kubenswrapper[4782]: I0202 10:44:39.777168 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 02 10:44:39 crc kubenswrapper[4782]: I0202 10:44:39.777196 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 02 10:44:39 crc kubenswrapper[4782]: I0202 10:44:39.777238 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 02 10:44:39 crc kubenswrapper[4782]: I0202 10:44:39.777387 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 02 10:44:39 crc kubenswrapper[4782]: I0202 10:44:39.778392 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 02 10:44:39 crc kubenswrapper[4782]: I0202 10:44:39.787431 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 02 10:44:39 crc kubenswrapper[4782]: I0202 10:44:39.793423 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-69b5cb4cd4-2mp26"] Feb 02 10:44:39 crc kubenswrapper[4782]: I0202 10:44:39.837737 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/81f7507c-7ecb-41de-9fd5-937b5961db89-proxy-ca-bundles\") pod \"controller-manager-69b5cb4cd4-2mp26\" (UID: \"81f7507c-7ecb-41de-9fd5-937b5961db89\") " pod="openshift-controller-manager/controller-manager-69b5cb4cd4-2mp26" Feb 02 10:44:39 crc kubenswrapper[4782]: I0202 10:44:39.837848 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/81f7507c-7ecb-41de-9fd5-937b5961db89-serving-cert\") pod \"controller-manager-69b5cb4cd4-2mp26\" (UID: \"81f7507c-7ecb-41de-9fd5-937b5961db89\") " pod="openshift-controller-manager/controller-manager-69b5cb4cd4-2mp26" Feb 02 10:44:39 crc kubenswrapper[4782]: I0202 10:44:39.837908 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81f7507c-7ecb-41de-9fd5-937b5961db89-config\") pod \"controller-manager-69b5cb4cd4-2mp26\" (UID: \"81f7507c-7ecb-41de-9fd5-937b5961db89\") " pod="openshift-controller-manager/controller-manager-69b5cb4cd4-2mp26" Feb 02 10:44:39 crc kubenswrapper[4782]: I0202 10:44:39.837956 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/81f7507c-7ecb-41de-9fd5-937b5961db89-client-ca\") pod \"controller-manager-69b5cb4cd4-2mp26\" (UID: \"81f7507c-7ecb-41de-9fd5-937b5961db89\") " pod="openshift-controller-manager/controller-manager-69b5cb4cd4-2mp26" Feb 02 10:44:39 crc kubenswrapper[4782]: I0202 10:44:39.838005 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggwh4\" (UniqueName: 
\"kubernetes.io/projected/81f7507c-7ecb-41de-9fd5-937b5961db89-kube-api-access-ggwh4\") pod \"controller-manager-69b5cb4cd4-2mp26\" (UID: \"81f7507c-7ecb-41de-9fd5-937b5961db89\") " pod="openshift-controller-manager/controller-manager-69b5cb4cd4-2mp26" Feb 02 10:44:39 crc kubenswrapper[4782]: I0202 10:44:39.939171 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/81f7507c-7ecb-41de-9fd5-937b5961db89-client-ca\") pod \"controller-manager-69b5cb4cd4-2mp26\" (UID: \"81f7507c-7ecb-41de-9fd5-937b5961db89\") " pod="openshift-controller-manager/controller-manager-69b5cb4cd4-2mp26" Feb 02 10:44:39 crc kubenswrapper[4782]: I0202 10:44:39.939310 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ggwh4\" (UniqueName: \"kubernetes.io/projected/81f7507c-7ecb-41de-9fd5-937b5961db89-kube-api-access-ggwh4\") pod \"controller-manager-69b5cb4cd4-2mp26\" (UID: \"81f7507c-7ecb-41de-9fd5-937b5961db89\") " pod="openshift-controller-manager/controller-manager-69b5cb4cd4-2mp26" Feb 02 10:44:39 crc kubenswrapper[4782]: I0202 10:44:39.939370 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/81f7507c-7ecb-41de-9fd5-937b5961db89-proxy-ca-bundles\") pod \"controller-manager-69b5cb4cd4-2mp26\" (UID: \"81f7507c-7ecb-41de-9fd5-937b5961db89\") " pod="openshift-controller-manager/controller-manager-69b5cb4cd4-2mp26" Feb 02 10:44:39 crc kubenswrapper[4782]: I0202 10:44:39.939489 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/81f7507c-7ecb-41de-9fd5-937b5961db89-serving-cert\") pod \"controller-manager-69b5cb4cd4-2mp26\" (UID: \"81f7507c-7ecb-41de-9fd5-937b5961db89\") " pod="openshift-controller-manager/controller-manager-69b5cb4cd4-2mp26" Feb 02 10:44:39 crc kubenswrapper[4782]: I0202 10:44:39.939539 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81f7507c-7ecb-41de-9fd5-937b5961db89-config\") pod \"controller-manager-69b5cb4cd4-2mp26\" (UID: \"81f7507c-7ecb-41de-9fd5-937b5961db89\") " pod="openshift-controller-manager/controller-manager-69b5cb4cd4-2mp26" Feb 02 10:44:39 crc kubenswrapper[4782]: I0202 10:44:39.940257 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/81f7507c-7ecb-41de-9fd5-937b5961db89-client-ca\") pod \"controller-manager-69b5cb4cd4-2mp26\" (UID: \"81f7507c-7ecb-41de-9fd5-937b5961db89\") " pod="openshift-controller-manager/controller-manager-69b5cb4cd4-2mp26" Feb 02 10:44:39 crc kubenswrapper[4782]: I0202 10:44:39.940538 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/81f7507c-7ecb-41de-9fd5-937b5961db89-proxy-ca-bundles\") pod \"controller-manager-69b5cb4cd4-2mp26\" (UID: \"81f7507c-7ecb-41de-9fd5-937b5961db89\") " pod="openshift-controller-manager/controller-manager-69b5cb4cd4-2mp26" Feb 02 10:44:39 crc kubenswrapper[4782]: I0202 10:44:39.942236 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81f7507c-7ecb-41de-9fd5-937b5961db89-config\") pod \"controller-manager-69b5cb4cd4-2mp26\" (UID: \"81f7507c-7ecb-41de-9fd5-937b5961db89\") " 
pod="openshift-controller-manager/controller-manager-69b5cb4cd4-2mp26" Feb 02 10:44:39 crc kubenswrapper[4782]: I0202 10:44:39.958008 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/81f7507c-7ecb-41de-9fd5-937b5961db89-serving-cert\") pod \"controller-manager-69b5cb4cd4-2mp26\" (UID: \"81f7507c-7ecb-41de-9fd5-937b5961db89\") " pod="openshift-controller-manager/controller-manager-69b5cb4cd4-2mp26" Feb 02 10:44:39 crc kubenswrapper[4782]: I0202 10:44:39.962494 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ggwh4\" (UniqueName: \"kubernetes.io/projected/81f7507c-7ecb-41de-9fd5-937b5961db89-kube-api-access-ggwh4\") pod \"controller-manager-69b5cb4cd4-2mp26\" (UID: \"81f7507c-7ecb-41de-9fd5-937b5961db89\") " pod="openshift-controller-manager/controller-manager-69b5cb4cd4-2mp26" Feb 02 10:44:40 crc kubenswrapper[4782]: I0202 10:44:40.090382 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-69b5cb4cd4-2mp26" Feb 02 10:44:40 crc kubenswrapper[4782]: I0202 10:44:40.530699 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-69b5cb4cd4-2mp26"] Feb 02 10:44:40 crc kubenswrapper[4782]: I0202 10:44:40.829915 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f6cd214-bca2-452a-825d-cf4b07972e83" path="/var/lib/kubelet/pods/8f6cd214-bca2-452a-825d-cf4b07972e83/volumes" Feb 02 10:44:41 crc kubenswrapper[4782]: I0202 10:44:41.222053 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-69b5cb4cd4-2mp26" event={"ID":"81f7507c-7ecb-41de-9fd5-937b5961db89","Type":"ContainerStarted","Data":"d28a38552b32912efe369435160fbb3919332e8ea0da6319eb5d16dda7efed1f"} Feb 02 10:44:41 crc kubenswrapper[4782]: I0202 10:44:41.222773 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-69b5cb4cd4-2mp26" Feb 02 10:44:41 crc kubenswrapper[4782]: I0202 10:44:41.222798 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-69b5cb4cd4-2mp26" event={"ID":"81f7507c-7ecb-41de-9fd5-937b5961db89","Type":"ContainerStarted","Data":"f64721fccd7d689be4ab56aa0504d2d4f511727958f3c9c2c4660bc300d39cc5"} Feb 02 10:44:41 crc kubenswrapper[4782]: I0202 10:44:41.227563 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-69b5cb4cd4-2mp26" Feb 02 10:44:41 crc kubenswrapper[4782]: I0202 10:44:41.247253 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-69b5cb4cd4-2mp26" podStartSLOduration=3.247234778 podStartE2EDuration="3.247234778s" podCreationTimestamp="2026-02-02 10:44:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:44:41.244316743 +0000 UTC m=+361.128509459" watchObservedRunningTime="2026-02-02 10:44:41.247234778 +0000 UTC m=+361.131427484" Feb 02 10:44:44 crc kubenswrapper[4782]: I0202 10:44:44.240794 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-94nc9"] Feb 02 10:44:44 crc kubenswrapper[4782]: I0202 10:44:44.242195 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-94nc9" Feb 02 10:44:44 crc kubenswrapper[4782]: I0202 10:44:44.271372 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-94nc9"] Feb 02 10:44:44 crc kubenswrapper[4782]: I0202 10:44:44.396236 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/f8584c52-c370-4f38-9965-6938b9cd2892-registry-tls\") pod \"image-registry-66df7c8f76-94nc9\" (UID: \"f8584c52-c370-4f38-9965-6938b9cd2892\") " pod="openshift-image-registry/image-registry-66df7c8f76-94nc9" Feb 02 10:44:44 crc kubenswrapper[4782]: I0202 10:44:44.396293 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/f8584c52-c370-4f38-9965-6938b9cd2892-ca-trust-extracted\") pod \"image-registry-66df7c8f76-94nc9\" (UID: \"f8584c52-c370-4f38-9965-6938b9cd2892\") " pod="openshift-image-registry/image-registry-66df7c8f76-94nc9" Feb 02 10:44:44 crc kubenswrapper[4782]: I0202 10:44:44.396338 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f8584c52-c370-4f38-9965-6938b9cd2892-trusted-ca\") pod \"image-registry-66df7c8f76-94nc9\" (UID: \"f8584c52-c370-4f38-9965-6938b9cd2892\") " pod="openshift-image-registry/image-registry-66df7c8f76-94nc9" Feb 02 10:44:44 crc kubenswrapper[4782]: I0202 10:44:44.396359 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jhnk\" (UniqueName: \"kubernetes.io/projected/f8584c52-c370-4f38-9965-6938b9cd2892-kube-api-access-6jhnk\") pod \"image-registry-66df7c8f76-94nc9\" (UID: \"f8584c52-c370-4f38-9965-6938b9cd2892\") " pod="openshift-image-registry/image-registry-66df7c8f76-94nc9" Feb 02 10:44:44 crc kubenswrapper[4782]: I0202 10:44:44.396378 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/f8584c52-c370-4f38-9965-6938b9cd2892-registry-certificates\") pod \"image-registry-66df7c8f76-94nc9\" (UID: \"f8584c52-c370-4f38-9965-6938b9cd2892\") " pod="openshift-image-registry/image-registry-66df7c8f76-94nc9" Feb 02 10:44:44 crc kubenswrapper[4782]: I0202 10:44:44.396400 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f8584c52-c370-4f38-9965-6938b9cd2892-bound-sa-token\") pod \"image-registry-66df7c8f76-94nc9\" (UID: \"f8584c52-c370-4f38-9965-6938b9cd2892\") " pod="openshift-image-registry/image-registry-66df7c8f76-94nc9" Feb 02 10:44:44 crc kubenswrapper[4782]: I0202 10:44:44.396426 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-94nc9\" (UID: \"f8584c52-c370-4f38-9965-6938b9cd2892\") " pod="openshift-image-registry/image-registry-66df7c8f76-94nc9" Feb 02 10:44:44 crc kubenswrapper[4782]: I0202 10:44:44.396444 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: 
\"kubernetes.io/secret/f8584c52-c370-4f38-9965-6938b9cd2892-installation-pull-secrets\") pod \"image-registry-66df7c8f76-94nc9\" (UID: \"f8584c52-c370-4f38-9965-6938b9cd2892\") " pod="openshift-image-registry/image-registry-66df7c8f76-94nc9" Feb 02 10:44:44 crc kubenswrapper[4782]: I0202 10:44:44.424261 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-94nc9\" (UID: \"f8584c52-c370-4f38-9965-6938b9cd2892\") " pod="openshift-image-registry/image-registry-66df7c8f76-94nc9" Feb 02 10:44:44 crc kubenswrapper[4782]: I0202 10:44:44.499502 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/f8584c52-c370-4f38-9965-6938b9cd2892-ca-trust-extracted\") pod \"image-registry-66df7c8f76-94nc9\" (UID: \"f8584c52-c370-4f38-9965-6938b9cd2892\") " pod="openshift-image-registry/image-registry-66df7c8f76-94nc9" Feb 02 10:44:44 crc kubenswrapper[4782]: I0202 10:44:44.499603 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f8584c52-c370-4f38-9965-6938b9cd2892-trusted-ca\") pod \"image-registry-66df7c8f76-94nc9\" (UID: \"f8584c52-c370-4f38-9965-6938b9cd2892\") " pod="openshift-image-registry/image-registry-66df7c8f76-94nc9" Feb 02 10:44:44 crc kubenswrapper[4782]: I0202 10:44:44.499656 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6jhnk\" (UniqueName: \"kubernetes.io/projected/f8584c52-c370-4f38-9965-6938b9cd2892-kube-api-access-6jhnk\") pod \"image-registry-66df7c8f76-94nc9\" (UID: \"f8584c52-c370-4f38-9965-6938b9cd2892\") " pod="openshift-image-registry/image-registry-66df7c8f76-94nc9" Feb 02 10:44:44 crc kubenswrapper[4782]: I0202 10:44:44.499689 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/f8584c52-c370-4f38-9965-6938b9cd2892-registry-certificates\") pod \"image-registry-66df7c8f76-94nc9\" (UID: \"f8584c52-c370-4f38-9965-6938b9cd2892\") " pod="openshift-image-registry/image-registry-66df7c8f76-94nc9" Feb 02 10:44:44 crc kubenswrapper[4782]: I0202 10:44:44.499727 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f8584c52-c370-4f38-9965-6938b9cd2892-bound-sa-token\") pod \"image-registry-66df7c8f76-94nc9\" (UID: \"f8584c52-c370-4f38-9965-6938b9cd2892\") " pod="openshift-image-registry/image-registry-66df7c8f76-94nc9" Feb 02 10:44:44 crc kubenswrapper[4782]: I0202 10:44:44.499976 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/f8584c52-c370-4f38-9965-6938b9cd2892-installation-pull-secrets\") pod \"image-registry-66df7c8f76-94nc9\" (UID: \"f8584c52-c370-4f38-9965-6938b9cd2892\") " pod="openshift-image-registry/image-registry-66df7c8f76-94nc9" Feb 02 10:44:44 crc kubenswrapper[4782]: I0202 10:44:44.500081 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/f8584c52-c370-4f38-9965-6938b9cd2892-ca-trust-extracted\") pod \"image-registry-66df7c8f76-94nc9\" (UID: \"f8584c52-c370-4f38-9965-6938b9cd2892\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-94nc9" Feb 02 10:44:44 crc kubenswrapper[4782]: I0202 10:44:44.501236 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/f8584c52-c370-4f38-9965-6938b9cd2892-registry-certificates\") pod \"image-registry-66df7c8f76-94nc9\" (UID: \"f8584c52-c370-4f38-9965-6938b9cd2892\") " pod="openshift-image-registry/image-registry-66df7c8f76-94nc9" Feb 02 10:44:44 crc kubenswrapper[4782]: I0202 10:44:44.501321 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/f8584c52-c370-4f38-9965-6938b9cd2892-registry-tls\") pod \"image-registry-66df7c8f76-94nc9\" (UID: \"f8584c52-c370-4f38-9965-6938b9cd2892\") " pod="openshift-image-registry/image-registry-66df7c8f76-94nc9" Feb 02 10:44:44 crc kubenswrapper[4782]: I0202 10:44:44.503181 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f8584c52-c370-4f38-9965-6938b9cd2892-trusted-ca\") pod \"image-registry-66df7c8f76-94nc9\" (UID: \"f8584c52-c370-4f38-9965-6938b9cd2892\") " pod="openshift-image-registry/image-registry-66df7c8f76-94nc9" Feb 02 10:44:44 crc kubenswrapper[4782]: I0202 10:44:44.509597 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/f8584c52-c370-4f38-9965-6938b9cd2892-registry-tls\") pod \"image-registry-66df7c8f76-94nc9\" (UID: \"f8584c52-c370-4f38-9965-6938b9cd2892\") " pod="openshift-image-registry/image-registry-66df7c8f76-94nc9" Feb 02 10:44:44 crc kubenswrapper[4782]: I0202 10:44:44.509603 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/f8584c52-c370-4f38-9965-6938b9cd2892-installation-pull-secrets\") pod \"image-registry-66df7c8f76-94nc9\" (UID: \"f8584c52-c370-4f38-9965-6938b9cd2892\") " pod="openshift-image-registry/image-registry-66df7c8f76-94nc9" Feb 02 10:44:44 crc kubenswrapper[4782]: I0202 10:44:44.520507 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6jhnk\" (UniqueName: \"kubernetes.io/projected/f8584c52-c370-4f38-9965-6938b9cd2892-kube-api-access-6jhnk\") pod \"image-registry-66df7c8f76-94nc9\" (UID: \"f8584c52-c370-4f38-9965-6938b9cd2892\") " pod="openshift-image-registry/image-registry-66df7c8f76-94nc9" Feb 02 10:44:44 crc kubenswrapper[4782]: I0202 10:44:44.522280 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f8584c52-c370-4f38-9965-6938b9cd2892-bound-sa-token\") pod \"image-registry-66df7c8f76-94nc9\" (UID: \"f8584c52-c370-4f38-9965-6938b9cd2892\") " pod="openshift-image-registry/image-registry-66df7c8f76-94nc9" Feb 02 10:44:44 crc kubenswrapper[4782]: I0202 10:44:44.560237 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-94nc9" Feb 02 10:44:45 crc kubenswrapper[4782]: I0202 10:44:45.040428 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-94nc9"] Feb 02 10:44:45 crc kubenswrapper[4782]: I0202 10:44:45.241995 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-94nc9" event={"ID":"f8584c52-c370-4f38-9965-6938b9cd2892","Type":"ContainerStarted","Data":"9850426c326ae10a179fd5d9dbc57b29cf75a1754cb725c0c324ae48e8e1d56a"} Feb 02 10:44:45 crc kubenswrapper[4782]: I0202 10:44:45.242280 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-94nc9" event={"ID":"f8584c52-c370-4f38-9965-6938b9cd2892","Type":"ContainerStarted","Data":"29287eafff5fb685e0991101d7901dc013c298b37742e02278ffb2d1ba8fedbc"} Feb 02 10:44:45 crc kubenswrapper[4782]: I0202 10:44:45.242662 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-94nc9" Feb 02 10:44:45 crc kubenswrapper[4782]: I0202 10:44:45.274830 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-94nc9" podStartSLOduration=1.2748101090000001 podStartE2EDuration="1.274810109s" podCreationTimestamp="2026-02-02 10:44:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:44:45.261717958 +0000 UTC m=+365.145910674" watchObservedRunningTime="2026-02-02 10:44:45.274810109 +0000 UTC m=+365.159002835" Feb 02 10:44:52 crc kubenswrapper[4782]: I0202 10:44:52.951478 4782 patch_prober.go:28] interesting pod/machine-config-daemon-bhdgk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 10:44:52 crc kubenswrapper[4782]: I0202 10:44:52.952112 4782 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 10:45:00 crc kubenswrapper[4782]: I0202 10:45:00.201714 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500485-l8mbc"] Feb 02 10:45:00 crc kubenswrapper[4782]: I0202 10:45:00.203231 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500485-l8mbc" Feb 02 10:45:00 crc kubenswrapper[4782]: I0202 10:45:00.206541 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 02 10:45:00 crc kubenswrapper[4782]: I0202 10:45:00.207052 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 02 10:45:00 crc kubenswrapper[4782]: I0202 10:45:00.224294 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500485-l8mbc"] Feb 02 10:45:00 crc kubenswrapper[4782]: I0202 10:45:00.248348 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6fd9b99a-c3f7-4153-b2ac-769ca0ba88aa-config-volume\") pod \"collect-profiles-29500485-l8mbc\" (UID: \"6fd9b99a-c3f7-4153-b2ac-769ca0ba88aa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500485-l8mbc" Feb 02 10:45:00 crc kubenswrapper[4782]: I0202 10:45:00.253736 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6fd9b99a-c3f7-4153-b2ac-769ca0ba88aa-secret-volume\") pod \"collect-profiles-29500485-l8mbc\" (UID: \"6fd9b99a-c3f7-4153-b2ac-769ca0ba88aa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500485-l8mbc" Feb 02 10:45:00 crc kubenswrapper[4782]: I0202 10:45:00.254051 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v558t\" (UniqueName: \"kubernetes.io/projected/6fd9b99a-c3f7-4153-b2ac-769ca0ba88aa-kube-api-access-v558t\") pod \"collect-profiles-29500485-l8mbc\" (UID: \"6fd9b99a-c3f7-4153-b2ac-769ca0ba88aa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500485-l8mbc" Feb 02 10:45:00 crc kubenswrapper[4782]: I0202 10:45:00.355095 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v558t\" (UniqueName: \"kubernetes.io/projected/6fd9b99a-c3f7-4153-b2ac-769ca0ba88aa-kube-api-access-v558t\") pod \"collect-profiles-29500485-l8mbc\" (UID: \"6fd9b99a-c3f7-4153-b2ac-769ca0ba88aa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500485-l8mbc" Feb 02 10:45:00 crc kubenswrapper[4782]: I0202 10:45:00.355684 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6fd9b99a-c3f7-4153-b2ac-769ca0ba88aa-config-volume\") pod \"collect-profiles-29500485-l8mbc\" (UID: \"6fd9b99a-c3f7-4153-b2ac-769ca0ba88aa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500485-l8mbc" Feb 02 10:45:00 crc kubenswrapper[4782]: I0202 10:45:00.355877 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6fd9b99a-c3f7-4153-b2ac-769ca0ba88aa-secret-volume\") pod \"collect-profiles-29500485-l8mbc\" (UID: \"6fd9b99a-c3f7-4153-b2ac-769ca0ba88aa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500485-l8mbc" Feb 02 10:45:00 crc kubenswrapper[4782]: I0202 10:45:00.356957 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6fd9b99a-c3f7-4153-b2ac-769ca0ba88aa-config-volume\") pod 
\"collect-profiles-29500485-l8mbc\" (UID: \"6fd9b99a-c3f7-4153-b2ac-769ca0ba88aa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500485-l8mbc" Feb 02 10:45:00 crc kubenswrapper[4782]: I0202 10:45:00.374088 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6fd9b99a-c3f7-4153-b2ac-769ca0ba88aa-secret-volume\") pod \"collect-profiles-29500485-l8mbc\" (UID: \"6fd9b99a-c3f7-4153-b2ac-769ca0ba88aa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500485-l8mbc" Feb 02 10:45:00 crc kubenswrapper[4782]: I0202 10:45:00.374132 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v558t\" (UniqueName: \"kubernetes.io/projected/6fd9b99a-c3f7-4153-b2ac-769ca0ba88aa-kube-api-access-v558t\") pod \"collect-profiles-29500485-l8mbc\" (UID: \"6fd9b99a-c3f7-4153-b2ac-769ca0ba88aa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500485-l8mbc" Feb 02 10:45:00 crc kubenswrapper[4782]: I0202 10:45:00.521927 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500485-l8mbc" Feb 02 10:45:00 crc kubenswrapper[4782]: I0202 10:45:00.958741 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500485-l8mbc"] Feb 02 10:45:00 crc kubenswrapper[4782]: W0202 10:45:00.964701 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6fd9b99a_c3f7_4153_b2ac_769ca0ba88aa.slice/crio-db793699e90cb720b127ccccdc17ecd9823eb6a11729e1f908b4ec3673e9addf WatchSource:0}: Error finding container db793699e90cb720b127ccccdc17ecd9823eb6a11729e1f908b4ec3673e9addf: Status 404 returned error can't find the container with id db793699e90cb720b127ccccdc17ecd9823eb6a11729e1f908b4ec3673e9addf Feb 02 10:45:01 crc kubenswrapper[4782]: I0202 10:45:01.332932 4782 generic.go:334] "Generic (PLEG): container finished" podID="6fd9b99a-c3f7-4153-b2ac-769ca0ba88aa" containerID="dca388f48a923df889d89ab8317f39bb415b2f6f2849925bcccbaf4b1c7171f9" exitCode=0 Feb 02 10:45:01 crc kubenswrapper[4782]: I0202 10:45:01.333100 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500485-l8mbc" event={"ID":"6fd9b99a-c3f7-4153-b2ac-769ca0ba88aa","Type":"ContainerDied","Data":"dca388f48a923df889d89ab8317f39bb415b2f6f2849925bcccbaf4b1c7171f9"} Feb 02 10:45:01 crc kubenswrapper[4782]: I0202 10:45:01.333227 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500485-l8mbc" event={"ID":"6fd9b99a-c3f7-4153-b2ac-769ca0ba88aa","Type":"ContainerStarted","Data":"db793699e90cb720b127ccccdc17ecd9823eb6a11729e1f908b4ec3673e9addf"} Feb 02 10:45:02 crc kubenswrapper[4782]: I0202 10:45:02.658476 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500485-l8mbc" Feb 02 10:45:02 crc kubenswrapper[4782]: I0202 10:45:02.787206 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6fd9b99a-c3f7-4153-b2ac-769ca0ba88aa-config-volume\") pod \"6fd9b99a-c3f7-4153-b2ac-769ca0ba88aa\" (UID: \"6fd9b99a-c3f7-4153-b2ac-769ca0ba88aa\") " Feb 02 10:45:02 crc kubenswrapper[4782]: I0202 10:45:02.788071 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v558t\" (UniqueName: \"kubernetes.io/projected/6fd9b99a-c3f7-4153-b2ac-769ca0ba88aa-kube-api-access-v558t\") pod \"6fd9b99a-c3f7-4153-b2ac-769ca0ba88aa\" (UID: \"6fd9b99a-c3f7-4153-b2ac-769ca0ba88aa\") " Feb 02 10:45:02 crc kubenswrapper[4782]: I0202 10:45:02.788142 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6fd9b99a-c3f7-4153-b2ac-769ca0ba88aa-secret-volume\") pod \"6fd9b99a-c3f7-4153-b2ac-769ca0ba88aa\" (UID: \"6fd9b99a-c3f7-4153-b2ac-769ca0ba88aa\") " Feb 02 10:45:02 crc kubenswrapper[4782]: I0202 10:45:02.788005 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6fd9b99a-c3f7-4153-b2ac-769ca0ba88aa-config-volume" (OuterVolumeSpecName: "config-volume") pod "6fd9b99a-c3f7-4153-b2ac-769ca0ba88aa" (UID: "6fd9b99a-c3f7-4153-b2ac-769ca0ba88aa"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:45:02 crc kubenswrapper[4782]: I0202 10:45:02.792751 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6fd9b99a-c3f7-4153-b2ac-769ca0ba88aa-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "6fd9b99a-c3f7-4153-b2ac-769ca0ba88aa" (UID: "6fd9b99a-c3f7-4153-b2ac-769ca0ba88aa"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:45:02 crc kubenswrapper[4782]: I0202 10:45:02.794491 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6fd9b99a-c3f7-4153-b2ac-769ca0ba88aa-kube-api-access-v558t" (OuterVolumeSpecName: "kube-api-access-v558t") pod "6fd9b99a-c3f7-4153-b2ac-769ca0ba88aa" (UID: "6fd9b99a-c3f7-4153-b2ac-769ca0ba88aa"). InnerVolumeSpecName "kube-api-access-v558t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:45:02 crc kubenswrapper[4782]: I0202 10:45:02.889435 4782 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6fd9b99a-c3f7-4153-b2ac-769ca0ba88aa-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 02 10:45:02 crc kubenswrapper[4782]: I0202 10:45:02.889487 4782 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6fd9b99a-c3f7-4153-b2ac-769ca0ba88aa-config-volume\") on node \"crc\" DevicePath \"\"" Feb 02 10:45:02 crc kubenswrapper[4782]: I0202 10:45:02.889502 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v558t\" (UniqueName: \"kubernetes.io/projected/6fd9b99a-c3f7-4153-b2ac-769ca0ba88aa-kube-api-access-v558t\") on node \"crc\" DevicePath \"\"" Feb 02 10:45:03 crc kubenswrapper[4782]: I0202 10:45:03.352696 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500485-l8mbc" event={"ID":"6fd9b99a-c3f7-4153-b2ac-769ca0ba88aa","Type":"ContainerDied","Data":"db793699e90cb720b127ccccdc17ecd9823eb6a11729e1f908b4ec3673e9addf"} Feb 02 10:45:03 crc kubenswrapper[4782]: I0202 10:45:03.352761 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="db793699e90cb720b127ccccdc17ecd9823eb6a11729e1f908b4ec3673e9addf" Feb 02 10:45:03 crc kubenswrapper[4782]: I0202 10:45:03.352811 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500485-l8mbc" Feb 02 10:45:04 crc kubenswrapper[4782]: I0202 10:45:04.566303 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-94nc9" Feb 02 10:45:04 crc kubenswrapper[4782]: I0202 10:45:04.621086 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-jxz27"] Feb 02 10:45:22 crc kubenswrapper[4782]: I0202 10:45:22.422935 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8vzzf"] Feb 02 10:45:22 crc kubenswrapper[4782]: I0202 10:45:22.423689 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-8vzzf" podUID="cf4514b1-5bb5-4e13-8c79-9bdbf62d6cde" containerName="registry-server" containerID="cri-o://05b04de7aee036aad1bf2a35f7544132e21559dc426cdb8b9123b5342d1855f5" gracePeriod=30 Feb 02 10:45:22 crc kubenswrapper[4782]: I0202 10:45:22.442910 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-lxwg2"] Feb 02 10:45:22 crc kubenswrapper[4782]: I0202 10:45:22.446293 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-lxwg2" podUID="10039944-73fc-417b-925f-48a2985c277d" containerName="registry-server" containerID="cri-o://d9d48a2893d15bc0ee3b3feea15dabdcb7b5a71f1bd9719587995b71a75c1fb4" gracePeriod=30 Feb 02 10:45:22 crc kubenswrapper[4782]: I0202 10:45:22.469383 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-dsb8s"] Feb 02 10:45:22 crc kubenswrapper[4782]: I0202 10:45:22.469698 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-dsb8s" podUID="83c24a27-fdbe-468f-b4cf-780c87b598ae" 
Feb 02 10:45:22 crc kubenswrapper[4782]: I0202 10:45:22.469698 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-dsb8s" podUID="83c24a27-fdbe-468f-b4cf-780c87b598ae" containerName="marketplace-operator" containerID="cri-o://7c6f28c6ab23e2b0cac464dda379a0db63254628f161ee4a1fc6725636dd5d18" gracePeriod=30
Feb 02 10:45:22 crc kubenswrapper[4782]: I0202 10:45:22.474626 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8tk99"]
Feb 02 10:45:22 crc kubenswrapper[4782]: I0202 10:45:22.474932 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-8tk99" podUID="9beb5599-8c2d-4493-9561-cc2781d32052" containerName="registry-server" containerID="cri-o://27683de59fe1d475d11ccad4a4bc71fc78cce9124a8ba6ddadca22bd5b3b2c59" gracePeriod=30
Feb 02 10:45:22 crc kubenswrapper[4782]: I0202 10:45:22.487953 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-wv9v8"]
Feb 02 10:45:22 crc kubenswrapper[4782]: E0202 10:45:22.488251 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6fd9b99a-c3f7-4153-b2ac-769ca0ba88aa" containerName="collect-profiles"
Feb 02 10:45:22 crc kubenswrapper[4782]: I0202 10:45:22.488273 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="6fd9b99a-c3f7-4153-b2ac-769ca0ba88aa" containerName="collect-profiles"
Feb 02 10:45:22 crc kubenswrapper[4782]: I0202 10:45:22.488413 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="6fd9b99a-c3f7-4153-b2ac-769ca0ba88aa" containerName="collect-profiles"
Feb 02 10:45:22 crc kubenswrapper[4782]: I0202 10:45:22.488952 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-wv9v8"
Feb 02 10:45:22 crc kubenswrapper[4782]: I0202 10:45:22.495138 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-g65rt"]
Feb 02 10:45:22 crc kubenswrapper[4782]: I0202 10:45:22.495343 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-g65rt" podUID="d9a718cd-1b6d-483f-b995-938331c7e00e" containerName="registry-server" containerID="cri-o://e1cc76cbefa2853cb7c51972a0b447075f16dfeb15a018a5e6a336960194aac7" gracePeriod=30
Feb 02 10:45:22 crc kubenswrapper[4782]: I0202 10:45:22.503154 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-wv9v8"]
Feb 02 10:45:22 crc kubenswrapper[4782]: I0202 10:45:22.541508 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a044a9d0-6c97-46c4-980a-e5d9940e9f74-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-wv9v8\" (UID: \"a044a9d0-6c97-46c4-980a-e5d9940e9f74\") " pod="openshift-marketplace/marketplace-operator-79b997595-wv9v8"
Feb 02 10:45:22 crc kubenswrapper[4782]: I0202 10:45:22.541727 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/a044a9d0-6c97-46c4-980a-e5d9940e9f74-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-wv9v8\" (UID: \"a044a9d0-6c97-46c4-980a-e5d9940e9f74\") " pod="openshift-marketplace/marketplace-operator-79b997595-wv9v8"
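The cpu_manager/state_mem/memory_manager trio above records stale-state cleanup: before admitting a new pod, resource assignments still on record for containers of pods that no longer exist are dropped. A small Go sketch of that garbage-collection pass over an assignment map (all names invented, not the kubelet's internal state types):

package main

import "fmt"

// assignments maps podUID -> containerName -> a reserved CPU set, kept as a
// plain string here; the real managers keep richer state.
type assignments map[string]map[string]string

// removeStaleState deletes entries for pods that are no longer active,
// mirroring the "Deleted CPUSet assignment" lines in the log.
func removeStaleState(st assignments, active map[string]bool) {
	for podUID, containers := range st {
		if active[podUID] {
			continue
		}
		for name := range containers {
			fmt.Printf("RemoveStaleState: removing container podUID=%q containerName=%q\n", podUID, name)
			delete(containers, name)
		}
		delete(st, podUID)
	}
}

func main() {
	st := assignments{"6fd9b99a": {"collect-profiles": "0-1"}}
	removeStaleState(st, map[string]bool{}) // the collect-profiles pod is gone
}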
\"kubernetes.io/projected/a044a9d0-6c97-46c4-980a-e5d9940e9f74-kube-api-access-j6zbz\") pod \"marketplace-operator-79b997595-wv9v8\" (UID: \"a044a9d0-6c97-46c4-980a-e5d9940e9f74\") " pod="openshift-marketplace/marketplace-operator-79b997595-wv9v8" Feb 02 10:45:22 crc kubenswrapper[4782]: I0202 10:45:22.657247 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a044a9d0-6c97-46c4-980a-e5d9940e9f74-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-wv9v8\" (UID: \"a044a9d0-6c97-46c4-980a-e5d9940e9f74\") " pod="openshift-marketplace/marketplace-operator-79b997595-wv9v8" Feb 02 10:45:22 crc kubenswrapper[4782]: I0202 10:45:22.657320 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/a044a9d0-6c97-46c4-980a-e5d9940e9f74-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-wv9v8\" (UID: \"a044a9d0-6c97-46c4-980a-e5d9940e9f74\") " pod="openshift-marketplace/marketplace-operator-79b997595-wv9v8" Feb 02 10:45:22 crc kubenswrapper[4782]: I0202 10:45:22.657358 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j6zbz\" (UniqueName: \"kubernetes.io/projected/a044a9d0-6c97-46c4-980a-e5d9940e9f74-kube-api-access-j6zbz\") pod \"marketplace-operator-79b997595-wv9v8\" (UID: \"a044a9d0-6c97-46c4-980a-e5d9940e9f74\") " pod="openshift-marketplace/marketplace-operator-79b997595-wv9v8" Feb 02 10:45:22 crc kubenswrapper[4782]: I0202 10:45:22.659373 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a044a9d0-6c97-46c4-980a-e5d9940e9f74-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-wv9v8\" (UID: \"a044a9d0-6c97-46c4-980a-e5d9940e9f74\") " pod="openshift-marketplace/marketplace-operator-79b997595-wv9v8" Feb 02 10:45:22 crc kubenswrapper[4782]: I0202 10:45:22.670489 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/a044a9d0-6c97-46c4-980a-e5d9940e9f74-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-wv9v8\" (UID: \"a044a9d0-6c97-46c4-980a-e5d9940e9f74\") " pod="openshift-marketplace/marketplace-operator-79b997595-wv9v8" Feb 02 10:45:22 crc kubenswrapper[4782]: I0202 10:45:22.676306 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j6zbz\" (UniqueName: \"kubernetes.io/projected/a044a9d0-6c97-46c4-980a-e5d9940e9f74-kube-api-access-j6zbz\") pod \"marketplace-operator-79b997595-wv9v8\" (UID: \"a044a9d0-6c97-46c4-980a-e5d9940e9f74\") " pod="openshift-marketplace/marketplace-operator-79b997595-wv9v8" Feb 02 10:45:22 crc kubenswrapper[4782]: I0202 10:45:22.870054 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-wv9v8" Feb 02 10:45:22 crc kubenswrapper[4782]: I0202 10:45:22.889154 4782 util.go:48] "No ready sandbox for pod can be found. 
Feb 02 10:45:22 crc kubenswrapper[4782]: I0202 10:45:22.889154 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lxwg2"
Feb 02 10:45:22 crc kubenswrapper[4782]: I0202 10:45:22.951358 4782 patch_prober.go:28] interesting pod/machine-config-daemon-bhdgk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 02 10:45:22 crc kubenswrapper[4782]: I0202 10:45:22.951422 4782 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.062176 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10039944-73fc-417b-925f-48a2985c277d-utilities\") pod \"10039944-73fc-417b-925f-48a2985c277d\" (UID: \"10039944-73fc-417b-925f-48a2985c277d\") "
Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.062285 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-svwgz\" (UniqueName: \"kubernetes.io/projected/10039944-73fc-417b-925f-48a2985c277d-kube-api-access-svwgz\") pod \"10039944-73fc-417b-925f-48a2985c277d\" (UID: \"10039944-73fc-417b-925f-48a2985c277d\") "
Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.062336 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10039944-73fc-417b-925f-48a2985c277d-catalog-content\") pod \"10039944-73fc-417b-925f-48a2985c277d\" (UID: \"10039944-73fc-417b-925f-48a2985c277d\") "
Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.063201 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/10039944-73fc-417b-925f-48a2985c277d-utilities" (OuterVolumeSpecName: "utilities") pod "10039944-73fc-417b-925f-48a2985c277d" (UID: "10039944-73fc-417b-925f-48a2985c277d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.081089 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10039944-73fc-417b-925f-48a2985c277d-kube-api-access-svwgz" (OuterVolumeSpecName: "kube-api-access-svwgz") pod "10039944-73fc-417b-925f-48a2985c277d" (UID: "10039944-73fc-417b-925f-48a2985c277d"). InnerVolumeSpecName "kube-api-access-svwgz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.127173 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8vzzf"
Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.142502 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8tk99"
Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.143232 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-dsb8s"
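The probe failure above is, at bottom, an HTTP GET against the container's health endpoint, with a transport error such as "connection refused" counting as failure. That check can be approximated with nothing but the Go standard library; the endpoint and timeout below are taken from the log for illustration, and this is a sketch, not the kubelet's actual prober.

package main

import (
	"fmt"
	"net/http"
	"time"
)

// probeHTTP performs one liveness-style check: any transport error (such as
// "connect: connection refused") or non-2xx status is treated as a failure.
func probeHTTP(url string, timeout time.Duration) error {
	client := &http.Client{Timeout: timeout}
	resp, err := client.Get(url)
	if err != nil {
		return err // e.g. dial tcp 127.0.0.1:8798: connect: connection refused
	}
	defer resp.Body.Close()
	if resp.StatusCode < 200 || resp.StatusCode >= 300 {
		return fmt.Errorf("unhealthy status %d", resp.StatusCode)
	}
	return nil
}

func main() {
	if err := probeHTTP("http://127.0.0.1:8798/health", time.Second); err != nil {
		fmt.Println(`probeResult="failure" output:`, err)
	}
}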
Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.144518 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-g65rt"
Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.164137 4782 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10039944-73fc-417b-925f-48a2985c277d-utilities\") on node \"crc\" DevicePath \"\""
Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.164414 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-svwgz\" (UniqueName: \"kubernetes.io/projected/10039944-73fc-417b-925f-48a2985c277d-kube-api-access-svwgz\") on node \"crc\" DevicePath \"\""
Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.165464 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/10039944-73fc-417b-925f-48a2985c277d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "10039944-73fc-417b-925f-48a2985c277d" (UID: "10039944-73fc-417b-925f-48a2985c277d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.265064 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cv849\" (UniqueName: \"kubernetes.io/projected/83c24a27-fdbe-468f-b4cf-780c87b598ae-kube-api-access-cv849\") pod \"83c24a27-fdbe-468f-b4cf-780c87b598ae\" (UID: \"83c24a27-fdbe-468f-b4cf-780c87b598ae\") "
Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.265126 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf4514b1-5bb5-4e13-8c79-9bdbf62d6cde-utilities\") pod \"cf4514b1-5bb5-4e13-8c79-9bdbf62d6cde\" (UID: \"cf4514b1-5bb5-4e13-8c79-9bdbf62d6cde\") "
Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.265148 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf4514b1-5bb5-4e13-8c79-9bdbf62d6cde-catalog-content\") pod \"cf4514b1-5bb5-4e13-8c79-9bdbf62d6cde\" (UID: \"cf4514b1-5bb5-4e13-8c79-9bdbf62d6cde\") "
Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.265177 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9beb5599-8c2d-4493-9561-cc2781d32052-utilities\") pod \"9beb5599-8c2d-4493-9561-cc2781d32052\" (UID: \"9beb5599-8c2d-4493-9561-cc2781d32052\") "
Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.265200 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9beb5599-8c2d-4493-9561-cc2781d32052-catalog-content\") pod \"9beb5599-8c2d-4493-9561-cc2781d32052\" (UID: \"9beb5599-8c2d-4493-9561-cc2781d32052\") "
Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.265223 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kf98q\" (UniqueName: \"kubernetes.io/projected/cf4514b1-5bb5-4e13-8c79-9bdbf62d6cde-kube-api-access-kf98q\") pod \"cf4514b1-5bb5-4e13-8c79-9bdbf62d6cde\" (UID: \"cf4514b1-5bb5-4e13-8c79-9bdbf62d6cde\") "
Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.265240 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d9a718cd-1b6d-483f-b995-938331c7e00e-catalog-content\") pod \"d9a718cd-1b6d-483f-b995-938331c7e00e\" (UID: \"d9a718cd-1b6d-483f-b995-938331c7e00e\") "
Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.265268 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d9a718cd-1b6d-483f-b995-938331c7e00e-utilities\") pod \"d9a718cd-1b6d-483f-b995-938331c7e00e\" (UID: \"d9a718cd-1b6d-483f-b995-938331c7e00e\") "
Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.265292 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/83c24a27-fdbe-468f-b4cf-780c87b598ae-marketplace-operator-metrics\") pod \"83c24a27-fdbe-468f-b4cf-780c87b598ae\" (UID: \"83c24a27-fdbe-468f-b4cf-780c87b598ae\") "
Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.265318 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zlmlg\" (UniqueName: \"kubernetes.io/projected/9beb5599-8c2d-4493-9561-cc2781d32052-kube-api-access-zlmlg\") pod \"9beb5599-8c2d-4493-9561-cc2781d32052\" (UID: \"9beb5599-8c2d-4493-9561-cc2781d32052\") "
Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.265351 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9mqs8\" (UniqueName: \"kubernetes.io/projected/d9a718cd-1b6d-483f-b995-938331c7e00e-kube-api-access-9mqs8\") pod \"d9a718cd-1b6d-483f-b995-938331c7e00e\" (UID: \"d9a718cd-1b6d-483f-b995-938331c7e00e\") "
Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.265369 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/83c24a27-fdbe-468f-b4cf-780c87b598ae-marketplace-trusted-ca\") pod \"83c24a27-fdbe-468f-b4cf-780c87b598ae\" (UID: \"83c24a27-fdbe-468f-b4cf-780c87b598ae\") "
Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.265556 4782 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10039944-73fc-417b-925f-48a2985c277d-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.266067 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/83c24a27-fdbe-468f-b4cf-780c87b598ae-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "83c24a27-fdbe-468f-b4cf-780c87b598ae" (UID: "83c24a27-fdbe-468f-b4cf-780c87b598ae"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.267347 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d9a718cd-1b6d-483f-b995-938331c7e00e-utilities" (OuterVolumeSpecName: "utilities") pod "d9a718cd-1b6d-483f-b995-938331c7e00e" (UID: "d9a718cd-1b6d-483f-b995-938331c7e00e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.270049 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cf4514b1-5bb5-4e13-8c79-9bdbf62d6cde-utilities" (OuterVolumeSpecName: "utilities") pod "cf4514b1-5bb5-4e13-8c79-9bdbf62d6cde" (UID: "cf4514b1-5bb5-4e13-8c79-9bdbf62d6cde"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.271620 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83c24a27-fdbe-468f-b4cf-780c87b598ae-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "83c24a27-fdbe-468f-b4cf-780c87b598ae" (UID: "83c24a27-fdbe-468f-b4cf-780c87b598ae"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.271628 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9beb5599-8c2d-4493-9561-cc2781d32052-utilities" (OuterVolumeSpecName: "utilities") pod "9beb5599-8c2d-4493-9561-cc2781d32052" (UID: "9beb5599-8c2d-4493-9561-cc2781d32052"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.274377 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9beb5599-8c2d-4493-9561-cc2781d32052-kube-api-access-zlmlg" (OuterVolumeSpecName: "kube-api-access-zlmlg") pod "9beb5599-8c2d-4493-9561-cc2781d32052" (UID: "9beb5599-8c2d-4493-9561-cc2781d32052"). InnerVolumeSpecName "kube-api-access-zlmlg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.276540 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9a718cd-1b6d-483f-b995-938331c7e00e-kube-api-access-9mqs8" (OuterVolumeSpecName: "kube-api-access-9mqs8") pod "d9a718cd-1b6d-483f-b995-938331c7e00e" (UID: "d9a718cd-1b6d-483f-b995-938331c7e00e"). InnerVolumeSpecName "kube-api-access-9mqs8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.290599 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf4514b1-5bb5-4e13-8c79-9bdbf62d6cde-kube-api-access-kf98q" (OuterVolumeSpecName: "kube-api-access-kf98q") pod "cf4514b1-5bb5-4e13-8c79-9bdbf62d6cde" (UID: "cf4514b1-5bb5-4e13-8c79-9bdbf62d6cde"). InnerVolumeSpecName "kube-api-access-kf98q". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.297386 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83c24a27-fdbe-468f-b4cf-780c87b598ae-kube-api-access-cv849" (OuterVolumeSpecName: "kube-api-access-cv849") pod "83c24a27-fdbe-468f-b4cf-780c87b598ae" (UID: "83c24a27-fdbe-468f-b4cf-780c87b598ae"). InnerVolumeSpecName "kube-api-access-cv849". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.312340 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9beb5599-8c2d-4493-9561-cc2781d32052-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9beb5599-8c2d-4493-9561-cc2781d32052" (UID: "9beb5599-8c2d-4493-9561-cc2781d32052"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.333487 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cf4514b1-5bb5-4e13-8c79-9bdbf62d6cde-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cf4514b1-5bb5-4e13-8c79-9bdbf62d6cde" (UID: "cf4514b1-5bb5-4e13-8c79-9bdbf62d6cde"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.366376 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9mqs8\" (UniqueName: \"kubernetes.io/projected/d9a718cd-1b6d-483f-b995-938331c7e00e-kube-api-access-9mqs8\") on node \"crc\" DevicePath \"\""
Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.366405 4782 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/83c24a27-fdbe-468f-b4cf-780c87b598ae-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\""
Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.366414 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cv849\" (UniqueName: \"kubernetes.io/projected/83c24a27-fdbe-468f-b4cf-780c87b598ae-kube-api-access-cv849\") on node \"crc\" DevicePath \"\""
Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.366423 4782 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf4514b1-5bb5-4e13-8c79-9bdbf62d6cde-utilities\") on node \"crc\" DevicePath \"\""
Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.366431 4782 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf4514b1-5bb5-4e13-8c79-9bdbf62d6cde-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.366440 4782 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9beb5599-8c2d-4493-9561-cc2781d32052-utilities\") on node \"crc\" DevicePath \"\""
Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.366448 4782 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9beb5599-8c2d-4493-9561-cc2781d32052-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.366459 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kf98q\" (UniqueName: \"kubernetes.io/projected/cf4514b1-5bb5-4e13-8c79-9bdbf62d6cde-kube-api-access-kf98q\") on node \"crc\" DevicePath \"\""
Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.366467 4782 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d9a718cd-1b6d-483f-b995-938331c7e00e-utilities\") on node \"crc\" DevicePath \"\""
Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.366475 4782 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/83c24a27-fdbe-468f-b4cf-780c87b598ae-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\""
Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.366484 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zlmlg\" (UniqueName: \"kubernetes.io/projected/9beb5599-8c2d-4493-9561-cc2781d32052-kube-api-access-zlmlg\") on node \"crc\" DevicePath \"\""
Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.384400 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d9a718cd-1b6d-483f-b995-938331c7e00e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d9a718cd-1b6d-483f-b995-938331c7e00e" (UID: "d9a718cd-1b6d-483f-b995-938331c7e00e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.438445 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-wv9v8"]
Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.467856 4782 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d9a718cd-1b6d-483f-b995-938331c7e00e-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.476084 4782 generic.go:334] "Generic (PLEG): container finished" podID="9beb5599-8c2d-4493-9561-cc2781d32052" containerID="27683de59fe1d475d11ccad4a4bc71fc78cce9124a8ba6ddadca22bd5b3b2c59" exitCode=0
Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.476159 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8tk99" event={"ID":"9beb5599-8c2d-4493-9561-cc2781d32052","Type":"ContainerDied","Data":"27683de59fe1d475d11ccad4a4bc71fc78cce9124a8ba6ddadca22bd5b3b2c59"}
Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.476190 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8tk99" event={"ID":"9beb5599-8c2d-4493-9561-cc2781d32052","Type":"ContainerDied","Data":"568ce8fc0d55d9c475927a100a13079ea3c32843e1f085a43192f2b40f052173"}
Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.476207 4782 scope.go:117] "RemoveContainer" containerID="27683de59fe1d475d11ccad4a4bc71fc78cce9124a8ba6ddadca22bd5b3b2c59"
Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.476318 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8tk99"
Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.490513 4782 generic.go:334] "Generic (PLEG): container finished" podID="83c24a27-fdbe-468f-b4cf-780c87b598ae" containerID="7c6f28c6ab23e2b0cac464dda379a0db63254628f161ee4a1fc6725636dd5d18" exitCode=0
Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.490618 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-dsb8s" event={"ID":"83c24a27-fdbe-468f-b4cf-780c87b598ae","Type":"ContainerDied","Data":"7c6f28c6ab23e2b0cac464dda379a0db63254628f161ee4a1fc6725636dd5d18"}
Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.490668 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-dsb8s" event={"ID":"83c24a27-fdbe-468f-b4cf-780c87b598ae","Type":"ContainerDied","Data":"01e32062d069a57210dfb3c4675630b56cd608a941ddd39a5505e8107646b05b"}
Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.490759 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-dsb8s"
Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.504767 4782 scope.go:117] "RemoveContainer" containerID="47a422f0cc0a1728dbd32be9c84459b33e85f5951ae7977459fff5ec301546e5"
Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.506242 4782 generic.go:334] "Generic (PLEG): container finished" podID="cf4514b1-5bb5-4e13-8c79-9bdbf62d6cde" containerID="05b04de7aee036aad1bf2a35f7544132e21559dc426cdb8b9123b5342d1855f5" exitCode=0
Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.506314 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8vzzf"
Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.506322 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8vzzf" event={"ID":"cf4514b1-5bb5-4e13-8c79-9bdbf62d6cde","Type":"ContainerDied","Data":"05b04de7aee036aad1bf2a35f7544132e21559dc426cdb8b9123b5342d1855f5"}
Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.506361 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8vzzf" event={"ID":"cf4514b1-5bb5-4e13-8c79-9bdbf62d6cde","Type":"ContainerDied","Data":"2157f695d84a6bf7a7c1d517b9438fd49370964e625a05cdc501c630222fe141"}
Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.511532 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-wv9v8" event={"ID":"a044a9d0-6c97-46c4-980a-e5d9940e9f74","Type":"ContainerStarted","Data":"e304c06aa518c2e00ee4a2c8b84b1d6d99652e36ab28422e86846e294cf20f53"}
Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.530699 4782 generic.go:334] "Generic (PLEG): container finished" podID="10039944-73fc-417b-925f-48a2985c277d" containerID="d9d48a2893d15bc0ee3b3feea15dabdcb7b5a71f1bd9719587995b71a75c1fb4" exitCode=0
Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.530779 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lxwg2" event={"ID":"10039944-73fc-417b-925f-48a2985c277d","Type":"ContainerDied","Data":"d9d48a2893d15bc0ee3b3feea15dabdcb7b5a71f1bd9719587995b71a75c1fb4"}
Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.530829 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lxwg2" event={"ID":"10039944-73fc-417b-925f-48a2985c277d","Type":"ContainerDied","Data":"1277c93f9cf96ac2b46fd7682341d99a2d1a3ea302f1d88526054e535369a8b5"}
Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.530938 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lxwg2"
Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.536718 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8tk99"]
Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.544297 4782 generic.go:334] "Generic (PLEG): container finished" podID="d9a718cd-1b6d-483f-b995-938331c7e00e" containerID="e1cc76cbefa2853cb7c51972a0b447075f16dfeb15a018a5e6a336960194aac7" exitCode=0
Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.544338 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g65rt" event={"ID":"d9a718cd-1b6d-483f-b995-938331c7e00e","Type":"ContainerDied","Data":"e1cc76cbefa2853cb7c51972a0b447075f16dfeb15a018a5e6a336960194aac7"}
Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.544362 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g65rt" event={"ID":"d9a718cd-1b6d-483f-b995-938331c7e00e","Type":"ContainerDied","Data":"4bfe0b83f6a780843d1b621b1e211e51cf466967683c4406c5ee8fe51515e3e6"}
Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.544422 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-g65rt"
Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.546228 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-8tk99"]
Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.546238 4782 scope.go:117] "RemoveContainer" containerID="26a60990edb2535483d2ce67fefae5ee030fc62b28d11ee8aacbf346a5be05e1"
Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.549609 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-dsb8s"]
Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.561554 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-dsb8s"]
Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.582059 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-lxwg2"]
Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.586289 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-lxwg2"]
Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.588797 4782 scope.go:117] "RemoveContainer" containerID="27683de59fe1d475d11ccad4a4bc71fc78cce9124a8ba6ddadca22bd5b3b2c59"
Feb 02 10:45:23 crc kubenswrapper[4782]: E0202 10:45:23.590990 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"27683de59fe1d475d11ccad4a4bc71fc78cce9124a8ba6ddadca22bd5b3b2c59\": container with ID starting with 27683de59fe1d475d11ccad4a4bc71fc78cce9124a8ba6ddadca22bd5b3b2c59 not found: ID does not exist" containerID="27683de59fe1d475d11ccad4a4bc71fc78cce9124a8ba6ddadca22bd5b3b2c59"
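The long run of RemoveContainer / "ContainerStatus from runtime service failed" / "DeleteContainer returned error" triples that follows shows cleanup tolerating containers that are already gone: a NotFound from the runtime is logged and the loop moves on, which keeps deletion idempotent. A Go sketch of that pattern with a sentinel error (names invented, standing in for the runtime's NotFound code):

package main

import (
	"errors"
	"fmt"
)

// errNotFound stands in for the runtime's NotFound gRPC status.
var errNotFound = errors.New("container not found: ID does not exist")

// removeContainer deletes a container but treats NotFound as already done,
// so a second cleanup pass over the same ID cannot fail.
func removeContainer(id string, remove func(string) error) error {
	if err := remove(id); err != nil {
		if errors.Is(err, errNotFound) {
			fmt.Printf("DeleteContainer returned error for %s: %v (ignored)\n", id, err)
			return nil // already gone: nothing left to do
		}
		return err
	}
	return nil
}

func main() {
	gone := func(string) error { return errNotFound }
	if err := removeContainer("27683de5", gone); err != nil {
		fmt.Println("unexpected:", err)
	}
}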
Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.591054 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"27683de59fe1d475d11ccad4a4bc71fc78cce9124a8ba6ddadca22bd5b3b2c59"} err="failed to get container status \"27683de59fe1d475d11ccad4a4bc71fc78cce9124a8ba6ddadca22bd5b3b2c59\": rpc error: code = NotFound desc = could not find container \"27683de59fe1d475d11ccad4a4bc71fc78cce9124a8ba6ddadca22bd5b3b2c59\": container with ID starting with 27683de59fe1d475d11ccad4a4bc71fc78cce9124a8ba6ddadca22bd5b3b2c59 not found: ID does not exist"
Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.591089 4782 scope.go:117] "RemoveContainer" containerID="47a422f0cc0a1728dbd32be9c84459b33e85f5951ae7977459fff5ec301546e5"
Feb 02 10:45:23 crc kubenswrapper[4782]: E0202 10:45:23.593542 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"47a422f0cc0a1728dbd32be9c84459b33e85f5951ae7977459fff5ec301546e5\": container with ID starting with 47a422f0cc0a1728dbd32be9c84459b33e85f5951ae7977459fff5ec301546e5 not found: ID does not exist" containerID="47a422f0cc0a1728dbd32be9c84459b33e85f5951ae7977459fff5ec301546e5"
Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.593586 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"47a422f0cc0a1728dbd32be9c84459b33e85f5951ae7977459fff5ec301546e5"} err="failed to get container status \"47a422f0cc0a1728dbd32be9c84459b33e85f5951ae7977459fff5ec301546e5\": rpc error: code = NotFound desc = could not find container \"47a422f0cc0a1728dbd32be9c84459b33e85f5951ae7977459fff5ec301546e5\": container with ID starting with 47a422f0cc0a1728dbd32be9c84459b33e85f5951ae7977459fff5ec301546e5 not found: ID does not exist"
Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.593619 4782 scope.go:117] "RemoveContainer" containerID="26a60990edb2535483d2ce67fefae5ee030fc62b28d11ee8aacbf346a5be05e1"
Feb 02 10:45:23 crc kubenswrapper[4782]: E0202 10:45:23.594684 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"26a60990edb2535483d2ce67fefae5ee030fc62b28d11ee8aacbf346a5be05e1\": container with ID starting with 26a60990edb2535483d2ce67fefae5ee030fc62b28d11ee8aacbf346a5be05e1 not found: ID does not exist" containerID="26a60990edb2535483d2ce67fefae5ee030fc62b28d11ee8aacbf346a5be05e1"
Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.594707 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"26a60990edb2535483d2ce67fefae5ee030fc62b28d11ee8aacbf346a5be05e1"} err="failed to get container status \"26a60990edb2535483d2ce67fefae5ee030fc62b28d11ee8aacbf346a5be05e1\": rpc error: code = NotFound desc = could not find container \"26a60990edb2535483d2ce67fefae5ee030fc62b28d11ee8aacbf346a5be05e1\": container with ID starting with 26a60990edb2535483d2ce67fefae5ee030fc62b28d11ee8aacbf346a5be05e1 not found: ID does not exist"
Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.594723 4782 scope.go:117] "RemoveContainer" containerID="7c6f28c6ab23e2b0cac464dda379a0db63254628f161ee4a1fc6725636dd5d18"
Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.595461 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8vzzf"]
Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.607195 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-8vzzf"]
Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.643365 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-g65rt"]
Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.643798 4782 scope.go:117] "RemoveContainer" containerID="7766ba0f1792fbabdd4dfd1bd9f01fc89c47b35f57865ca551d6b825e4452bd0"
Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.649258 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-g65rt"]
Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.663170 4782 scope.go:117] "RemoveContainer" containerID="7c6f28c6ab23e2b0cac464dda379a0db63254628f161ee4a1fc6725636dd5d18"
Feb 02 10:45:23 crc kubenswrapper[4782]: E0202 10:45:23.664007 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7c6f28c6ab23e2b0cac464dda379a0db63254628f161ee4a1fc6725636dd5d18\": container with ID starting with 7c6f28c6ab23e2b0cac464dda379a0db63254628f161ee4a1fc6725636dd5d18 not found: ID does not exist" containerID="7c6f28c6ab23e2b0cac464dda379a0db63254628f161ee4a1fc6725636dd5d18"
Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.664043 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c6f28c6ab23e2b0cac464dda379a0db63254628f161ee4a1fc6725636dd5d18"} err="failed to get container status \"7c6f28c6ab23e2b0cac464dda379a0db63254628f161ee4a1fc6725636dd5d18\": rpc error: code = NotFound desc = could not find container \"7c6f28c6ab23e2b0cac464dda379a0db63254628f161ee4a1fc6725636dd5d18\": container with ID starting with 7c6f28c6ab23e2b0cac464dda379a0db63254628f161ee4a1fc6725636dd5d18 not found: ID does not exist"
Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.664068 4782 scope.go:117] "RemoveContainer" containerID="7766ba0f1792fbabdd4dfd1bd9f01fc89c47b35f57865ca551d6b825e4452bd0"
Feb 02 10:45:23 crc kubenswrapper[4782]: E0202 10:45:23.664337 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7766ba0f1792fbabdd4dfd1bd9f01fc89c47b35f57865ca551d6b825e4452bd0\": container with ID starting with 7766ba0f1792fbabdd4dfd1bd9f01fc89c47b35f57865ca551d6b825e4452bd0 not found: ID does not exist" containerID="7766ba0f1792fbabdd4dfd1bd9f01fc89c47b35f57865ca551d6b825e4452bd0"
Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.664375 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7766ba0f1792fbabdd4dfd1bd9f01fc89c47b35f57865ca551d6b825e4452bd0"} err="failed to get container status \"7766ba0f1792fbabdd4dfd1bd9f01fc89c47b35f57865ca551d6b825e4452bd0\": rpc error: code = NotFound desc = could not find container \"7766ba0f1792fbabdd4dfd1bd9f01fc89c47b35f57865ca551d6b825e4452bd0\": container with ID starting with 7766ba0f1792fbabdd4dfd1bd9f01fc89c47b35f57865ca551d6b825e4452bd0 not found: ID does not exist"
Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.664395 4782 scope.go:117] "RemoveContainer" containerID="05b04de7aee036aad1bf2a35f7544132e21559dc426cdb8b9123b5342d1855f5"
Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.677662 4782 scope.go:117] "RemoveContainer" containerID="00e2092af389b03680966cc8e710d0d6f79d522f8f8be602fad0b6a82b7428dd"
Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.704161 4782 scope.go:117] "RemoveContainer" containerID="a62af3fc6fe01245144104d3fe6fdbfa8c11138189c86d32c606e302f89c3d87"
Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.716331 4782 scope.go:117] "RemoveContainer" containerID="05b04de7aee036aad1bf2a35f7544132e21559dc426cdb8b9123b5342d1855f5"
Feb 02 10:45:23 crc kubenswrapper[4782]: E0202 10:45:23.718017 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"05b04de7aee036aad1bf2a35f7544132e21559dc426cdb8b9123b5342d1855f5\": container with ID starting with 05b04de7aee036aad1bf2a35f7544132e21559dc426cdb8b9123b5342d1855f5 not found: ID does not exist" containerID="05b04de7aee036aad1bf2a35f7544132e21559dc426cdb8b9123b5342d1855f5"
Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.718054 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"05b04de7aee036aad1bf2a35f7544132e21559dc426cdb8b9123b5342d1855f5"} err="failed to get container status \"05b04de7aee036aad1bf2a35f7544132e21559dc426cdb8b9123b5342d1855f5\": rpc error: code = NotFound desc = could not find container \"05b04de7aee036aad1bf2a35f7544132e21559dc426cdb8b9123b5342d1855f5\": container with ID starting with 05b04de7aee036aad1bf2a35f7544132e21559dc426cdb8b9123b5342d1855f5 not found: ID does not exist"
Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.718076 4782 scope.go:117] "RemoveContainer" containerID="00e2092af389b03680966cc8e710d0d6f79d522f8f8be602fad0b6a82b7428dd"
Feb 02 10:45:23 crc kubenswrapper[4782]: E0202 10:45:23.718386 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"00e2092af389b03680966cc8e710d0d6f79d522f8f8be602fad0b6a82b7428dd\": container with ID starting with 00e2092af389b03680966cc8e710d0d6f79d522f8f8be602fad0b6a82b7428dd not found: ID does not exist" containerID="00e2092af389b03680966cc8e710d0d6f79d522f8f8be602fad0b6a82b7428dd"
Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.718411 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"00e2092af389b03680966cc8e710d0d6f79d522f8f8be602fad0b6a82b7428dd"} err="failed to get container status \"00e2092af389b03680966cc8e710d0d6f79d522f8f8be602fad0b6a82b7428dd\": rpc error: code = NotFound desc = could not find container \"00e2092af389b03680966cc8e710d0d6f79d522f8f8be602fad0b6a82b7428dd\": container with ID starting with 00e2092af389b03680966cc8e710d0d6f79d522f8f8be602fad0b6a82b7428dd not found: ID does not exist"
Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.718424 4782 scope.go:117] "RemoveContainer" containerID="a62af3fc6fe01245144104d3fe6fdbfa8c11138189c86d32c606e302f89c3d87"
Feb 02 10:45:23 crc kubenswrapper[4782]: E0202 10:45:23.718656 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a62af3fc6fe01245144104d3fe6fdbfa8c11138189c86d32c606e302f89c3d87\": container with ID starting with a62af3fc6fe01245144104d3fe6fdbfa8c11138189c86d32c606e302f89c3d87 not found: ID does not exist" containerID="a62af3fc6fe01245144104d3fe6fdbfa8c11138189c86d32c606e302f89c3d87"
Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.718676 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a62af3fc6fe01245144104d3fe6fdbfa8c11138189c86d32c606e302f89c3d87"} err="failed to get container status \"a62af3fc6fe01245144104d3fe6fdbfa8c11138189c86d32c606e302f89c3d87\": rpc error: code = NotFound desc = could not find container \"a62af3fc6fe01245144104d3fe6fdbfa8c11138189c86d32c606e302f89c3d87\": container with ID starting with a62af3fc6fe01245144104d3fe6fdbfa8c11138189c86d32c606e302f89c3d87 not found: ID does not exist"
Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.718687 4782 scope.go:117] "RemoveContainer" containerID="d9d48a2893d15bc0ee3b3feea15dabdcb7b5a71f1bd9719587995b71a75c1fb4"
Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.731946 4782 scope.go:117] "RemoveContainer" containerID="c44bb7cb77d92459b486b13776f87d325c996f8a9b36d06145f90ec0d4cb47f7"
Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.745766 4782 scope.go:117] "RemoveContainer" containerID="9da626531f7c4e48058eb7295c3b8546c7d0b9c6e3e487d7b3b44cfe31605b9e"
Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.766579 4782 scope.go:117] "RemoveContainer" containerID="d9d48a2893d15bc0ee3b3feea15dabdcb7b5a71f1bd9719587995b71a75c1fb4"
Feb 02 10:45:23 crc kubenswrapper[4782]: E0202 10:45:23.767124 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d9d48a2893d15bc0ee3b3feea15dabdcb7b5a71f1bd9719587995b71a75c1fb4\": container with ID starting with d9d48a2893d15bc0ee3b3feea15dabdcb7b5a71f1bd9719587995b71a75c1fb4 not found: ID does not exist" containerID="d9d48a2893d15bc0ee3b3feea15dabdcb7b5a71f1bd9719587995b71a75c1fb4"
Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.767164 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d9d48a2893d15bc0ee3b3feea15dabdcb7b5a71f1bd9719587995b71a75c1fb4"} err="failed to get container status \"d9d48a2893d15bc0ee3b3feea15dabdcb7b5a71f1bd9719587995b71a75c1fb4\": rpc error: code = NotFound desc = could not find container \"d9d48a2893d15bc0ee3b3feea15dabdcb7b5a71f1bd9719587995b71a75c1fb4\": container with ID starting with d9d48a2893d15bc0ee3b3feea15dabdcb7b5a71f1bd9719587995b71a75c1fb4 not found: ID does not exist"
Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.767190 4782 scope.go:117] "RemoveContainer" containerID="c44bb7cb77d92459b486b13776f87d325c996f8a9b36d06145f90ec0d4cb47f7"
Feb 02 10:45:23 crc kubenswrapper[4782]: E0202 10:45:23.767464 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c44bb7cb77d92459b486b13776f87d325c996f8a9b36d06145f90ec0d4cb47f7\": container with ID starting with c44bb7cb77d92459b486b13776f87d325c996f8a9b36d06145f90ec0d4cb47f7 not found: ID does not exist" containerID="c44bb7cb77d92459b486b13776f87d325c996f8a9b36d06145f90ec0d4cb47f7"
Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.767494 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c44bb7cb77d92459b486b13776f87d325c996f8a9b36d06145f90ec0d4cb47f7"} err="failed to get container status \"c44bb7cb77d92459b486b13776f87d325c996f8a9b36d06145f90ec0d4cb47f7\": rpc error: code = NotFound desc = could not find container \"c44bb7cb77d92459b486b13776f87d325c996f8a9b36d06145f90ec0d4cb47f7\": container with ID starting with c44bb7cb77d92459b486b13776f87d325c996f8a9b36d06145f90ec0d4cb47f7 not found: ID does not exist"
Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.767514 4782 scope.go:117] "RemoveContainer" containerID="9da626531f7c4e48058eb7295c3b8546c7d0b9c6e3e487d7b3b44cfe31605b9e"
Feb 02 10:45:23 crc kubenswrapper[4782]: E0202 10:45:23.767814 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9da626531f7c4e48058eb7295c3b8546c7d0b9c6e3e487d7b3b44cfe31605b9e\": container with ID starting with 9da626531f7c4e48058eb7295c3b8546c7d0b9c6e3e487d7b3b44cfe31605b9e not found: ID does not exist" containerID="9da626531f7c4e48058eb7295c3b8546c7d0b9c6e3e487d7b3b44cfe31605b9e"
Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.767838 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9da626531f7c4e48058eb7295c3b8546c7d0b9c6e3e487d7b3b44cfe31605b9e"} err="failed to get container status \"9da626531f7c4e48058eb7295c3b8546c7d0b9c6e3e487d7b3b44cfe31605b9e\": rpc error: code = NotFound desc = could not find container \"9da626531f7c4e48058eb7295c3b8546c7d0b9c6e3e487d7b3b44cfe31605b9e\": container with ID starting with 9da626531f7c4e48058eb7295c3b8546c7d0b9c6e3e487d7b3b44cfe31605b9e not found: ID does not exist"
Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.767852 4782 scope.go:117] "RemoveContainer" containerID="e1cc76cbefa2853cb7c51972a0b447075f16dfeb15a018a5e6a336960194aac7"
Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.792199 4782 scope.go:117] "RemoveContainer" containerID="46f7bc4b2322a3c4c9b51dde44681dba7d41425a72707d63ce7bf6b09fa67469"
Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.806433 4782 scope.go:117] "RemoveContainer" containerID="79123b63701b446131df97ad86c3dc50da583013affff9da434e4160cc37422a"
Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.824037 4782 scope.go:117] "RemoveContainer" containerID="e1cc76cbefa2853cb7c51972a0b447075f16dfeb15a018a5e6a336960194aac7"
Feb 02 10:45:23 crc kubenswrapper[4782]: E0202 10:45:23.825706 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e1cc76cbefa2853cb7c51972a0b447075f16dfeb15a018a5e6a336960194aac7\": container with ID starting with e1cc76cbefa2853cb7c51972a0b447075f16dfeb15a018a5e6a336960194aac7 not found: ID does not exist" containerID="e1cc76cbefa2853cb7c51972a0b447075f16dfeb15a018a5e6a336960194aac7"
Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.826365 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e1cc76cbefa2853cb7c51972a0b447075f16dfeb15a018a5e6a336960194aac7"} err="failed to get container status \"e1cc76cbefa2853cb7c51972a0b447075f16dfeb15a018a5e6a336960194aac7\": rpc error: code = NotFound desc = could not find container \"e1cc76cbefa2853cb7c51972a0b447075f16dfeb15a018a5e6a336960194aac7\": container with ID starting with e1cc76cbefa2853cb7c51972a0b447075f16dfeb15a018a5e6a336960194aac7 not found: ID does not exist"
Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.826448 4782 scope.go:117] "RemoveContainer" containerID="46f7bc4b2322a3c4c9b51dde44681dba7d41425a72707d63ce7bf6b09fa67469"
Feb 02 10:45:23 crc kubenswrapper[4782]: E0202 10:45:23.827950 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"46f7bc4b2322a3c4c9b51dde44681dba7d41425a72707d63ce7bf6b09fa67469\": container with ID starting with 46f7bc4b2322a3c4c9b51dde44681dba7d41425a72707d63ce7bf6b09fa67469 not found: ID does not exist" containerID="46f7bc4b2322a3c4c9b51dde44681dba7d41425a72707d63ce7bf6b09fa67469"
Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.827983 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"46f7bc4b2322a3c4c9b51dde44681dba7d41425a72707d63ce7bf6b09fa67469"} err="failed to get container status \"46f7bc4b2322a3c4c9b51dde44681dba7d41425a72707d63ce7bf6b09fa67469\": rpc error: code = NotFound desc = could not find container \"46f7bc4b2322a3c4c9b51dde44681dba7d41425a72707d63ce7bf6b09fa67469\": container with ID starting with 46f7bc4b2322a3c4c9b51dde44681dba7d41425a72707d63ce7bf6b09fa67469 not found: ID does not exist"
Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.828031 4782 scope.go:117] "RemoveContainer" containerID="79123b63701b446131df97ad86c3dc50da583013affff9da434e4160cc37422a"
Feb 02 10:45:23 crc kubenswrapper[4782]: E0202 10:45:23.828250 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"79123b63701b446131df97ad86c3dc50da583013affff9da434e4160cc37422a\": container with ID starting with 79123b63701b446131df97ad86c3dc50da583013affff9da434e4160cc37422a not found: ID does not exist" containerID="79123b63701b446131df97ad86c3dc50da583013affff9da434e4160cc37422a"
Feb 02 10:45:23 crc kubenswrapper[4782]: I0202 10:45:23.828279 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"79123b63701b446131df97ad86c3dc50da583013affff9da434e4160cc37422a"} err="failed to get container status \"79123b63701b446131df97ad86c3dc50da583013affff9da434e4160cc37422a\": rpc error: code = NotFound desc = could not find container \"79123b63701b446131df97ad86c3dc50da583013affff9da434e4160cc37422a\": container with ID starting with 79123b63701b446131df97ad86c3dc50da583013affff9da434e4160cc37422a not found: ID does not exist"
Feb 02 10:45:24 crc kubenswrapper[4782]: I0202 10:45:24.239399 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-vnt75"]
Feb 02 10:45:24 crc kubenswrapper[4782]: E0202 10:45:24.240109 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf4514b1-5bb5-4e13-8c79-9bdbf62d6cde" containerName="extract-utilities"
Feb 02 10:45:24 crc kubenswrapper[4782]: I0202 10:45:24.240191 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf4514b1-5bb5-4e13-8c79-9bdbf62d6cde" containerName="extract-utilities"
Feb 02 10:45:24 crc kubenswrapper[4782]: E0202 10:45:24.240270 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf4514b1-5bb5-4e13-8c79-9bdbf62d6cde" containerName="registry-server"
Feb 02 10:45:24 crc kubenswrapper[4782]: I0202 10:45:24.240328 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf4514b1-5bb5-4e13-8c79-9bdbf62d6cde" containerName="registry-server"
Feb 02 10:45:24 crc kubenswrapper[4782]: E0202 10:45:24.240397 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9beb5599-8c2d-4493-9561-cc2781d32052" containerName="extract-utilities"
Feb 02 10:45:24 crc kubenswrapper[4782]: I0202 10:45:24.240460 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="9beb5599-8c2d-4493-9561-cc2781d32052" containerName="extract-utilities"
Feb 02 10:45:24 crc kubenswrapper[4782]: E0202 10:45:24.240518 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9a718cd-1b6d-483f-b995-938331c7e00e" containerName="extract-utilities"
Feb 02 10:45:24 crc kubenswrapper[4782]: I0202 10:45:24.240617 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9a718cd-1b6d-483f-b995-938331c7e00e" containerName="extract-utilities"
Feb 02 10:45:24 crc kubenswrapper[4782]: E0202 10:45:24.240717 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf4514b1-5bb5-4e13-8c79-9bdbf62d6cde" containerName="extract-content"
Feb 02 10:45:24 crc kubenswrapper[4782]: I0202 10:45:24.240775 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf4514b1-5bb5-4e13-8c79-9bdbf62d6cde" containerName="extract-content"
Feb 02 10:45:24 crc kubenswrapper[4782]: E0202 10:45:24.240835 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10039944-73fc-417b-925f-48a2985c277d" containerName="registry-server"
10:45:24 crc kubenswrapper[4782]: I0202 10:45:24.240913 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="10039944-73fc-417b-925f-48a2985c277d" containerName="registry-server" Feb 02 10:45:24 crc kubenswrapper[4782]: E0202 10:45:24.240982 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10039944-73fc-417b-925f-48a2985c277d" containerName="extract-content" Feb 02 10:45:24 crc kubenswrapper[4782]: I0202 10:45:24.241394 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="10039944-73fc-417b-925f-48a2985c277d" containerName="extract-content" Feb 02 10:45:24 crc kubenswrapper[4782]: E0202 10:45:24.241455 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10039944-73fc-417b-925f-48a2985c277d" containerName="extract-utilities" Feb 02 10:45:24 crc kubenswrapper[4782]: I0202 10:45:24.241522 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="10039944-73fc-417b-925f-48a2985c277d" containerName="extract-utilities" Feb 02 10:45:24 crc kubenswrapper[4782]: E0202 10:45:24.241583 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83c24a27-fdbe-468f-b4cf-780c87b598ae" containerName="marketplace-operator" Feb 02 10:45:24 crc kubenswrapper[4782]: I0202 10:45:24.241664 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="83c24a27-fdbe-468f-b4cf-780c87b598ae" containerName="marketplace-operator" Feb 02 10:45:24 crc kubenswrapper[4782]: E0202 10:45:24.241737 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83c24a27-fdbe-468f-b4cf-780c87b598ae" containerName="marketplace-operator" Feb 02 10:45:24 crc kubenswrapper[4782]: I0202 10:45:24.241902 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="83c24a27-fdbe-468f-b4cf-780c87b598ae" containerName="marketplace-operator" Feb 02 10:45:24 crc kubenswrapper[4782]: E0202 10:45:24.241985 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9beb5599-8c2d-4493-9561-cc2781d32052" containerName="registry-server" Feb 02 10:45:24 crc kubenswrapper[4782]: I0202 10:45:24.242041 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="9beb5599-8c2d-4493-9561-cc2781d32052" containerName="registry-server" Feb 02 10:45:24 crc kubenswrapper[4782]: E0202 10:45:24.242098 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9a718cd-1b6d-483f-b995-938331c7e00e" containerName="registry-server" Feb 02 10:45:24 crc kubenswrapper[4782]: I0202 10:45:24.242159 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9a718cd-1b6d-483f-b995-938331c7e00e" containerName="registry-server" Feb 02 10:45:24 crc kubenswrapper[4782]: E0202 10:45:24.242222 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9beb5599-8c2d-4493-9561-cc2781d32052" containerName="extract-content" Feb 02 10:45:24 crc kubenswrapper[4782]: I0202 10:45:24.242276 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="9beb5599-8c2d-4493-9561-cc2781d32052" containerName="extract-content" Feb 02 10:45:24 crc kubenswrapper[4782]: E0202 10:45:24.242339 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9a718cd-1b6d-483f-b995-938331c7e00e" containerName="extract-content" Feb 02 10:45:24 crc kubenswrapper[4782]: I0202 10:45:24.242398 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9a718cd-1b6d-483f-b995-938331c7e00e" containerName="extract-content" Feb 02 10:45:24 crc kubenswrapper[4782]: I0202 10:45:24.242547 4782 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="83c24a27-fdbe-468f-b4cf-780c87b598ae" containerName="marketplace-operator" Feb 02 10:45:24 crc kubenswrapper[4782]: I0202 10:45:24.242615 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf4514b1-5bb5-4e13-8c79-9bdbf62d6cde" containerName="registry-server" Feb 02 10:45:24 crc kubenswrapper[4782]: I0202 10:45:24.242705 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="10039944-73fc-417b-925f-48a2985c277d" containerName="registry-server" Feb 02 10:45:24 crc kubenswrapper[4782]: I0202 10:45:24.242766 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="83c24a27-fdbe-468f-b4cf-780c87b598ae" containerName="marketplace-operator" Feb 02 10:45:24 crc kubenswrapper[4782]: I0202 10:45:24.242825 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="d9a718cd-1b6d-483f-b995-938331c7e00e" containerName="registry-server" Feb 02 10:45:24 crc kubenswrapper[4782]: I0202 10:45:24.242905 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="9beb5599-8c2d-4493-9561-cc2781d32052" containerName="registry-server" Feb 02 10:45:24 crc kubenswrapper[4782]: I0202 10:45:24.244026 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vnt75" Feb 02 10:45:24 crc kubenswrapper[4782]: I0202 10:45:24.249123 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vnt75"] Feb 02 10:45:24 crc kubenswrapper[4782]: I0202 10:45:24.249177 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 02 10:45:24 crc kubenswrapper[4782]: I0202 10:45:24.386292 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c80d3e09-03c8-40f0-a4dd-474da2b5d31d-catalog-content\") pod \"certified-operators-vnt75\" (UID: \"c80d3e09-03c8-40f0-a4dd-474da2b5d31d\") " pod="openshift-marketplace/certified-operators-vnt75" Feb 02 10:45:24 crc kubenswrapper[4782]: I0202 10:45:24.386380 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mz9l7\" (UniqueName: \"kubernetes.io/projected/c80d3e09-03c8-40f0-a4dd-474da2b5d31d-kube-api-access-mz9l7\") pod \"certified-operators-vnt75\" (UID: \"c80d3e09-03c8-40f0-a4dd-474da2b5d31d\") " pod="openshift-marketplace/certified-operators-vnt75" Feb 02 10:45:24 crc kubenswrapper[4782]: I0202 10:45:24.386422 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c80d3e09-03c8-40f0-a4dd-474da2b5d31d-utilities\") pod \"certified-operators-vnt75\" (UID: \"c80d3e09-03c8-40f0-a4dd-474da2b5d31d\") " pod="openshift-marketplace/certified-operators-vnt75" Feb 02 10:45:24 crc kubenswrapper[4782]: I0202 10:45:24.487995 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c80d3e09-03c8-40f0-a4dd-474da2b5d31d-catalog-content\") pod \"certified-operators-vnt75\" (UID: \"c80d3e09-03c8-40f0-a4dd-474da2b5d31d\") " pod="openshift-marketplace/certified-operators-vnt75" Feb 02 10:45:24 crc kubenswrapper[4782]: I0202 10:45:24.488661 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/c80d3e09-03c8-40f0-a4dd-474da2b5d31d-catalog-content\") pod \"certified-operators-vnt75\" (UID: \"c80d3e09-03c8-40f0-a4dd-474da2b5d31d\") " pod="openshift-marketplace/certified-operators-vnt75" Feb 02 10:45:24 crc kubenswrapper[4782]: I0202 10:45:24.488896 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mz9l7\" (UniqueName: \"kubernetes.io/projected/c80d3e09-03c8-40f0-a4dd-474da2b5d31d-kube-api-access-mz9l7\") pod \"certified-operators-vnt75\" (UID: \"c80d3e09-03c8-40f0-a4dd-474da2b5d31d\") " pod="openshift-marketplace/certified-operators-vnt75" Feb 02 10:45:24 crc kubenswrapper[4782]: I0202 10:45:24.489060 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c80d3e09-03c8-40f0-a4dd-474da2b5d31d-utilities\") pod \"certified-operators-vnt75\" (UID: \"c80d3e09-03c8-40f0-a4dd-474da2b5d31d\") " pod="openshift-marketplace/certified-operators-vnt75" Feb 02 10:45:24 crc kubenswrapper[4782]: I0202 10:45:24.489425 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c80d3e09-03c8-40f0-a4dd-474da2b5d31d-utilities\") pod \"certified-operators-vnt75\" (UID: \"c80d3e09-03c8-40f0-a4dd-474da2b5d31d\") " pod="openshift-marketplace/certified-operators-vnt75" Feb 02 10:45:24 crc kubenswrapper[4782]: I0202 10:45:24.507955 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mz9l7\" (UniqueName: \"kubernetes.io/projected/c80d3e09-03c8-40f0-a4dd-474da2b5d31d-kube-api-access-mz9l7\") pod \"certified-operators-vnt75\" (UID: \"c80d3e09-03c8-40f0-a4dd-474da2b5d31d\") " pod="openshift-marketplace/certified-operators-vnt75" Feb 02 10:45:24 crc kubenswrapper[4782]: I0202 10:45:24.550266 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-wv9v8" event={"ID":"a044a9d0-6c97-46c4-980a-e5d9940e9f74","Type":"ContainerStarted","Data":"ad9b3800da8e1c1a2fe4f015cda465403fab8ed21fbb019ddbe39b3cb0e75736"} Feb 02 10:45:24 crc kubenswrapper[4782]: I0202 10:45:24.550526 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-wv9v8" Feb 02 10:45:24 crc kubenswrapper[4782]: I0202 10:45:24.555697 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-wv9v8" Feb 02 10:45:24 crc kubenswrapper[4782]: I0202 10:45:24.561141 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vnt75" Feb 02 10:45:24 crc kubenswrapper[4782]: I0202 10:45:24.569930 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-wv9v8" podStartSLOduration=2.569912184 podStartE2EDuration="2.569912184s" podCreationTimestamp="2026-02-02 10:45:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:45:24.569034208 +0000 UTC m=+404.453226934" watchObservedRunningTime="2026-02-02 10:45:24.569912184 +0000 UTC m=+404.454104920" Feb 02 10:45:24 crc kubenswrapper[4782]: I0202 10:45:24.812988 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vnt75"] Feb 02 10:45:24 crc kubenswrapper[4782]: I0202 10:45:24.830285 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="10039944-73fc-417b-925f-48a2985c277d" path="/var/lib/kubelet/pods/10039944-73fc-417b-925f-48a2985c277d/volumes" Feb 02 10:45:24 crc kubenswrapper[4782]: I0202 10:45:24.830982 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83c24a27-fdbe-468f-b4cf-780c87b598ae" path="/var/lib/kubelet/pods/83c24a27-fdbe-468f-b4cf-780c87b598ae/volumes" Feb 02 10:45:24 crc kubenswrapper[4782]: I0202 10:45:24.831450 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9beb5599-8c2d-4493-9561-cc2781d32052" path="/var/lib/kubelet/pods/9beb5599-8c2d-4493-9561-cc2781d32052/volumes" Feb 02 10:45:24 crc kubenswrapper[4782]: I0202 10:45:24.832900 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf4514b1-5bb5-4e13-8c79-9bdbf62d6cde" path="/var/lib/kubelet/pods/cf4514b1-5bb5-4e13-8c79-9bdbf62d6cde/volumes" Feb 02 10:45:24 crc kubenswrapper[4782]: I0202 10:45:24.833548 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d9a718cd-1b6d-483f-b995-938331c7e00e" path="/var/lib/kubelet/pods/d9a718cd-1b6d-483f-b995-938331c7e00e/volumes" Feb 02 10:45:25 crc kubenswrapper[4782]: I0202 10:45:25.557219 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-g864k"] Feb 02 10:45:25 crc kubenswrapper[4782]: I0202 10:45:25.566581 4782 generic.go:334] "Generic (PLEG): container finished" podID="c80d3e09-03c8-40f0-a4dd-474da2b5d31d" containerID="61f516602f3731fe476be806c116fc90e024d8989ff3f7d5fed5a62cf9542b16" exitCode=0 Feb 02 10:45:25 crc kubenswrapper[4782]: I0202 10:45:25.573383 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vnt75" event={"ID":"c80d3e09-03c8-40f0-a4dd-474da2b5d31d","Type":"ContainerDied","Data":"61f516602f3731fe476be806c116fc90e024d8989ff3f7d5fed5a62cf9542b16"} Feb 02 10:45:25 crc kubenswrapper[4782]: I0202 10:45:25.573426 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-g864k"] Feb 02 10:45:25 crc kubenswrapper[4782]: I0202 10:45:25.573448 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vnt75" event={"ID":"c80d3e09-03c8-40f0-a4dd-474da2b5d31d","Type":"ContainerStarted","Data":"ff5089fa53f99c39368573b21860998154a9938a6ddfbd8ae9c816812abef8f9"} Feb 02 10:45:25 crc kubenswrapper[4782]: I0202 10:45:25.573556 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-g864k" Feb 02 10:45:25 crc kubenswrapper[4782]: I0202 10:45:25.578197 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 02 10:45:25 crc kubenswrapper[4782]: I0202 10:45:25.705692 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jssss\" (UniqueName: \"kubernetes.io/projected/9e046c0e-cea4-45b0-8952-1fc5edb01ff5-kube-api-access-jssss\") pod \"redhat-marketplace-g864k\" (UID: \"9e046c0e-cea4-45b0-8952-1fc5edb01ff5\") " pod="openshift-marketplace/redhat-marketplace-g864k" Feb 02 10:45:25 crc kubenswrapper[4782]: I0202 10:45:25.705746 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e046c0e-cea4-45b0-8952-1fc5edb01ff5-catalog-content\") pod \"redhat-marketplace-g864k\" (UID: \"9e046c0e-cea4-45b0-8952-1fc5edb01ff5\") " pod="openshift-marketplace/redhat-marketplace-g864k" Feb 02 10:45:25 crc kubenswrapper[4782]: I0202 10:45:25.705865 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e046c0e-cea4-45b0-8952-1fc5edb01ff5-utilities\") pod \"redhat-marketplace-g864k\" (UID: \"9e046c0e-cea4-45b0-8952-1fc5edb01ff5\") " pod="openshift-marketplace/redhat-marketplace-g864k" Feb 02 10:45:25 crc kubenswrapper[4782]: I0202 10:45:25.806426 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e046c0e-cea4-45b0-8952-1fc5edb01ff5-utilities\") pod \"redhat-marketplace-g864k\" (UID: \"9e046c0e-cea4-45b0-8952-1fc5edb01ff5\") " pod="openshift-marketplace/redhat-marketplace-g864k" Feb 02 10:45:25 crc kubenswrapper[4782]: I0202 10:45:25.806498 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jssss\" (UniqueName: \"kubernetes.io/projected/9e046c0e-cea4-45b0-8952-1fc5edb01ff5-kube-api-access-jssss\") pod \"redhat-marketplace-g864k\" (UID: \"9e046c0e-cea4-45b0-8952-1fc5edb01ff5\") " pod="openshift-marketplace/redhat-marketplace-g864k" Feb 02 10:45:25 crc kubenswrapper[4782]: I0202 10:45:25.806529 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e046c0e-cea4-45b0-8952-1fc5edb01ff5-catalog-content\") pod \"redhat-marketplace-g864k\" (UID: \"9e046c0e-cea4-45b0-8952-1fc5edb01ff5\") " pod="openshift-marketplace/redhat-marketplace-g864k" Feb 02 10:45:25 crc kubenswrapper[4782]: I0202 10:45:25.807024 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e046c0e-cea4-45b0-8952-1fc5edb01ff5-utilities\") pod \"redhat-marketplace-g864k\" (UID: \"9e046c0e-cea4-45b0-8952-1fc5edb01ff5\") " pod="openshift-marketplace/redhat-marketplace-g864k" Feb 02 10:45:25 crc kubenswrapper[4782]: I0202 10:45:25.807141 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e046c0e-cea4-45b0-8952-1fc5edb01ff5-catalog-content\") pod \"redhat-marketplace-g864k\" (UID: \"9e046c0e-cea4-45b0-8952-1fc5edb01ff5\") " pod="openshift-marketplace/redhat-marketplace-g864k" Feb 02 10:45:25 crc kubenswrapper[4782]: I0202 10:45:25.849065 4782 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-jssss\" (UniqueName: \"kubernetes.io/projected/9e046c0e-cea4-45b0-8952-1fc5edb01ff5-kube-api-access-jssss\") pod \"redhat-marketplace-g864k\" (UID: \"9e046c0e-cea4-45b0-8952-1fc5edb01ff5\") " pod="openshift-marketplace/redhat-marketplace-g864k" Feb 02 10:45:25 crc kubenswrapper[4782]: I0202 10:45:25.900579 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-g864k" Feb 02 10:45:26 crc kubenswrapper[4782]: I0202 10:45:26.364862 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-g864k"] Feb 02 10:45:26 crc kubenswrapper[4782]: I0202 10:45:26.438840 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-h2hxh"] Feb 02 10:45:26 crc kubenswrapper[4782]: I0202 10:45:26.440715 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-h2hxh" Feb 02 10:45:26 crc kubenswrapper[4782]: I0202 10:45:26.442974 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 02 10:45:26 crc kubenswrapper[4782]: I0202 10:45:26.476001 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-h2hxh"] Feb 02 10:45:26 crc kubenswrapper[4782]: I0202 10:45:26.577301 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g864k" event={"ID":"9e046c0e-cea4-45b0-8952-1fc5edb01ff5","Type":"ContainerStarted","Data":"afeaf889e775c84c90e62264625391eafe4337fd0e7df118964f365a29a6f1bc"} Feb 02 10:45:26 crc kubenswrapper[4782]: I0202 10:45:26.580684 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vnt75" event={"ID":"c80d3e09-03c8-40f0-a4dd-474da2b5d31d","Type":"ContainerStarted","Data":"fc52609a3149da16c5a38619b760aa6093547c958583a21e9f45474fe3f71d2b"} Feb 02 10:45:26 crc kubenswrapper[4782]: I0202 10:45:26.616179 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe57942f-8b6f-4400-8ed5-6fb054a514bf-catalog-content\") pod \"redhat-operators-h2hxh\" (UID: \"fe57942f-8b6f-4400-8ed5-6fb054a514bf\") " pod="openshift-marketplace/redhat-operators-h2hxh" Feb 02 10:45:26 crc kubenswrapper[4782]: I0202 10:45:26.616269 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe57942f-8b6f-4400-8ed5-6fb054a514bf-utilities\") pod \"redhat-operators-h2hxh\" (UID: \"fe57942f-8b6f-4400-8ed5-6fb054a514bf\") " pod="openshift-marketplace/redhat-operators-h2hxh" Feb 02 10:45:26 crc kubenswrapper[4782]: I0202 10:45:26.616322 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6h4g\" (UniqueName: \"kubernetes.io/projected/fe57942f-8b6f-4400-8ed5-6fb054a514bf-kube-api-access-z6h4g\") pod \"redhat-operators-h2hxh\" (UID: \"fe57942f-8b6f-4400-8ed5-6fb054a514bf\") " pod="openshift-marketplace/redhat-operators-h2hxh" Feb 02 10:45:26 crc kubenswrapper[4782]: I0202 10:45:26.717229 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe57942f-8b6f-4400-8ed5-6fb054a514bf-catalog-content\") pod \"redhat-operators-h2hxh\" 
(UID: \"fe57942f-8b6f-4400-8ed5-6fb054a514bf\") " pod="openshift-marketplace/redhat-operators-h2hxh" Feb 02 10:45:26 crc kubenswrapper[4782]: I0202 10:45:26.717543 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe57942f-8b6f-4400-8ed5-6fb054a514bf-utilities\") pod \"redhat-operators-h2hxh\" (UID: \"fe57942f-8b6f-4400-8ed5-6fb054a514bf\") " pod="openshift-marketplace/redhat-operators-h2hxh" Feb 02 10:45:26 crc kubenswrapper[4782]: I0202 10:45:26.717674 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z6h4g\" (UniqueName: \"kubernetes.io/projected/fe57942f-8b6f-4400-8ed5-6fb054a514bf-kube-api-access-z6h4g\") pod \"redhat-operators-h2hxh\" (UID: \"fe57942f-8b6f-4400-8ed5-6fb054a514bf\") " pod="openshift-marketplace/redhat-operators-h2hxh" Feb 02 10:45:26 crc kubenswrapper[4782]: I0202 10:45:26.717716 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe57942f-8b6f-4400-8ed5-6fb054a514bf-catalog-content\") pod \"redhat-operators-h2hxh\" (UID: \"fe57942f-8b6f-4400-8ed5-6fb054a514bf\") " pod="openshift-marketplace/redhat-operators-h2hxh" Feb 02 10:45:26 crc kubenswrapper[4782]: I0202 10:45:26.718614 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe57942f-8b6f-4400-8ed5-6fb054a514bf-utilities\") pod \"redhat-operators-h2hxh\" (UID: \"fe57942f-8b6f-4400-8ed5-6fb054a514bf\") " pod="openshift-marketplace/redhat-operators-h2hxh" Feb 02 10:45:26 crc kubenswrapper[4782]: I0202 10:45:26.736877 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z6h4g\" (UniqueName: \"kubernetes.io/projected/fe57942f-8b6f-4400-8ed5-6fb054a514bf-kube-api-access-z6h4g\") pod \"redhat-operators-h2hxh\" (UID: \"fe57942f-8b6f-4400-8ed5-6fb054a514bf\") " pod="openshift-marketplace/redhat-operators-h2hxh" Feb 02 10:45:26 crc kubenswrapper[4782]: I0202 10:45:26.770038 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-h2hxh" Feb 02 10:45:28 crc kubenswrapper[4782]: I0202 10:45:27.213647 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-h2hxh"] Feb 02 10:45:28 crc kubenswrapper[4782]: I0202 10:45:27.586198 4782 generic.go:334] "Generic (PLEG): container finished" podID="fe57942f-8b6f-4400-8ed5-6fb054a514bf" containerID="f9e4261b90f87b00dd611e23eb7f14a088abac96b23ddc99dd09cf3667f26c8a" exitCode=0 Feb 02 10:45:28 crc kubenswrapper[4782]: I0202 10:45:27.586303 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h2hxh" event={"ID":"fe57942f-8b6f-4400-8ed5-6fb054a514bf","Type":"ContainerDied","Data":"f9e4261b90f87b00dd611e23eb7f14a088abac96b23ddc99dd09cf3667f26c8a"} Feb 02 10:45:28 crc kubenswrapper[4782]: I0202 10:45:27.586623 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h2hxh" event={"ID":"fe57942f-8b6f-4400-8ed5-6fb054a514bf","Type":"ContainerStarted","Data":"2787194a0614bc3148c3f8072417b5f370332a751d21f01a014fe0bfe3996685"} Feb 02 10:45:28 crc kubenswrapper[4782]: I0202 10:45:27.588333 4782 generic.go:334] "Generic (PLEG): container finished" podID="c80d3e09-03c8-40f0-a4dd-474da2b5d31d" containerID="fc52609a3149da16c5a38619b760aa6093547c958583a21e9f45474fe3f71d2b" exitCode=0 Feb 02 10:45:28 crc kubenswrapper[4782]: I0202 10:45:27.588384 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vnt75" event={"ID":"c80d3e09-03c8-40f0-a4dd-474da2b5d31d","Type":"ContainerDied","Data":"fc52609a3149da16c5a38619b760aa6093547c958583a21e9f45474fe3f71d2b"} Feb 02 10:45:28 crc kubenswrapper[4782]: I0202 10:45:27.589827 4782 generic.go:334] "Generic (PLEG): container finished" podID="9e046c0e-cea4-45b0-8952-1fc5edb01ff5" containerID="8611d0e0a18f4a738d6f6d216f194b7b2b1c4072e2a2b0191656b491c454306f" exitCode=0 Feb 02 10:45:28 crc kubenswrapper[4782]: I0202 10:45:27.589851 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g864k" event={"ID":"9e046c0e-cea4-45b0-8952-1fc5edb01ff5","Type":"ContainerDied","Data":"8611d0e0a18f4a738d6f6d216f194b7b2b1c4072e2a2b0191656b491c454306f"} Feb 02 10:45:28 crc kubenswrapper[4782]: I0202 10:45:27.845205 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-qsk6j"] Feb 02 10:45:28 crc kubenswrapper[4782]: I0202 10:45:27.848738 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-qsk6j" Feb 02 10:45:28 crc kubenswrapper[4782]: I0202 10:45:27.851208 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 02 10:45:28 crc kubenswrapper[4782]: I0202 10:45:27.852609 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qsk6j"] Feb 02 10:45:28 crc kubenswrapper[4782]: I0202 10:45:28.036062 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thrjg\" (UniqueName: \"kubernetes.io/projected/a435172a-875e-47e1-8c17-fad9fe2a0baf-kube-api-access-thrjg\") pod \"community-operators-qsk6j\" (UID: \"a435172a-875e-47e1-8c17-fad9fe2a0baf\") " pod="openshift-marketplace/community-operators-qsk6j" Feb 02 10:45:28 crc kubenswrapper[4782]: I0202 10:45:28.036344 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a435172a-875e-47e1-8c17-fad9fe2a0baf-catalog-content\") pod \"community-operators-qsk6j\" (UID: \"a435172a-875e-47e1-8c17-fad9fe2a0baf\") " pod="openshift-marketplace/community-operators-qsk6j" Feb 02 10:45:28 crc kubenswrapper[4782]: I0202 10:45:28.036468 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a435172a-875e-47e1-8c17-fad9fe2a0baf-utilities\") pod \"community-operators-qsk6j\" (UID: \"a435172a-875e-47e1-8c17-fad9fe2a0baf\") " pod="openshift-marketplace/community-operators-qsk6j" Feb 02 10:45:28 crc kubenswrapper[4782]: I0202 10:45:28.138089 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a435172a-875e-47e1-8c17-fad9fe2a0baf-catalog-content\") pod \"community-operators-qsk6j\" (UID: \"a435172a-875e-47e1-8c17-fad9fe2a0baf\") " pod="openshift-marketplace/community-operators-qsk6j" Feb 02 10:45:28 crc kubenswrapper[4782]: I0202 10:45:28.138749 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a435172a-875e-47e1-8c17-fad9fe2a0baf-utilities\") pod \"community-operators-qsk6j\" (UID: \"a435172a-875e-47e1-8c17-fad9fe2a0baf\") " pod="openshift-marketplace/community-operators-qsk6j" Feb 02 10:45:28 crc kubenswrapper[4782]: I0202 10:45:28.138792 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-thrjg\" (UniqueName: \"kubernetes.io/projected/a435172a-875e-47e1-8c17-fad9fe2a0baf-kube-api-access-thrjg\") pod \"community-operators-qsk6j\" (UID: \"a435172a-875e-47e1-8c17-fad9fe2a0baf\") " pod="openshift-marketplace/community-operators-qsk6j" Feb 02 10:45:28 crc kubenswrapper[4782]: I0202 10:45:28.138801 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a435172a-875e-47e1-8c17-fad9fe2a0baf-catalog-content\") pod \"community-operators-qsk6j\" (UID: \"a435172a-875e-47e1-8c17-fad9fe2a0baf\") " pod="openshift-marketplace/community-operators-qsk6j" Feb 02 10:45:28 crc kubenswrapper[4782]: I0202 10:45:28.140278 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a435172a-875e-47e1-8c17-fad9fe2a0baf-utilities\") pod \"community-operators-qsk6j\" (UID: 
\"a435172a-875e-47e1-8c17-fad9fe2a0baf\") " pod="openshift-marketplace/community-operators-qsk6j" Feb 02 10:45:28 crc kubenswrapper[4782]: I0202 10:45:28.167472 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-thrjg\" (UniqueName: \"kubernetes.io/projected/a435172a-875e-47e1-8c17-fad9fe2a0baf-kube-api-access-thrjg\") pod \"community-operators-qsk6j\" (UID: \"a435172a-875e-47e1-8c17-fad9fe2a0baf\") " pod="openshift-marketplace/community-operators-qsk6j" Feb 02 10:45:28 crc kubenswrapper[4782]: I0202 10:45:28.176145 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qsk6j" Feb 02 10:45:28 crc kubenswrapper[4782]: I0202 10:45:28.597315 4782 generic.go:334] "Generic (PLEG): container finished" podID="9e046c0e-cea4-45b0-8952-1fc5edb01ff5" containerID="0f87ef34a4c142ee9578627591df9988cc347d3787bbd82c2fe49f783a811331" exitCode=0 Feb 02 10:45:28 crc kubenswrapper[4782]: I0202 10:45:28.597408 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g864k" event={"ID":"9e046c0e-cea4-45b0-8952-1fc5edb01ff5","Type":"ContainerDied","Data":"0f87ef34a4c142ee9578627591df9988cc347d3787bbd82c2fe49f783a811331"} Feb 02 10:45:28 crc kubenswrapper[4782]: I0202 10:45:28.611087 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h2hxh" event={"ID":"fe57942f-8b6f-4400-8ed5-6fb054a514bf","Type":"ContainerStarted","Data":"c25a1063092a0a0420687ef450e7dc52ad1df8a8413a4d89dee1f84103f7cabf"} Feb 02 10:45:28 crc kubenswrapper[4782]: I0202 10:45:28.619368 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vnt75" event={"ID":"c80d3e09-03c8-40f0-a4dd-474da2b5d31d","Type":"ContainerStarted","Data":"efbad1331c2e169e99b07c3e8d7e0fbdb4d636fc3efabac4cbf098cbb5737308"} Feb 02 10:45:28 crc kubenswrapper[4782]: I0202 10:45:28.659756 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qsk6j"] Feb 02 10:45:28 crc kubenswrapper[4782]: W0202 10:45:28.662494 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda435172a_875e_47e1_8c17_fad9fe2a0baf.slice/crio-05aaa695eb00f561e816b44a427614ebb18292a239e8a7416e3550beaecf0a47 WatchSource:0}: Error finding container 05aaa695eb00f561e816b44a427614ebb18292a239e8a7416e3550beaecf0a47: Status 404 returned error can't find the container with id 05aaa695eb00f561e816b44a427614ebb18292a239e8a7416e3550beaecf0a47 Feb 02 10:45:28 crc kubenswrapper[4782]: I0202 10:45:28.669041 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-vnt75" podStartSLOduration=2.099065735 podStartE2EDuration="4.669025696s" podCreationTimestamp="2026-02-02 10:45:24 +0000 UTC" firstStartedPulling="2026-02-02 10:45:25.569321495 +0000 UTC m=+405.453514211" lastFinishedPulling="2026-02-02 10:45:28.139281456 +0000 UTC m=+408.023474172" observedRunningTime="2026-02-02 10:45:28.667327376 +0000 UTC m=+408.551520092" watchObservedRunningTime="2026-02-02 10:45:28.669025696 +0000 UTC m=+408.553218402" Feb 02 10:45:29 crc kubenswrapper[4782]: I0202 10:45:29.625780 4782 generic.go:334] "Generic (PLEG): container finished" podID="fe57942f-8b6f-4400-8ed5-6fb054a514bf" containerID="c25a1063092a0a0420687ef450e7dc52ad1df8a8413a4d89dee1f84103f7cabf" exitCode=0 Feb 02 10:45:29 crc kubenswrapper[4782]: 
I0202 10:45:29.625874 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h2hxh" event={"ID":"fe57942f-8b6f-4400-8ed5-6fb054a514bf","Type":"ContainerDied","Data":"c25a1063092a0a0420687ef450e7dc52ad1df8a8413a4d89dee1f84103f7cabf"} Feb 02 10:45:29 crc kubenswrapper[4782]: I0202 10:45:29.627523 4782 generic.go:334] "Generic (PLEG): container finished" podID="a435172a-875e-47e1-8c17-fad9fe2a0baf" containerID="84b8ee0c8c9e2b8ec98cf705db5b59c5234a4d37ef17d79b9bc5c6142f49f253" exitCode=0 Feb 02 10:45:29 crc kubenswrapper[4782]: I0202 10:45:29.627598 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qsk6j" event={"ID":"a435172a-875e-47e1-8c17-fad9fe2a0baf","Type":"ContainerDied","Data":"84b8ee0c8c9e2b8ec98cf705db5b59c5234a4d37ef17d79b9bc5c6142f49f253"} Feb 02 10:45:29 crc kubenswrapper[4782]: I0202 10:45:29.627620 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qsk6j" event={"ID":"a435172a-875e-47e1-8c17-fad9fe2a0baf","Type":"ContainerStarted","Data":"05aaa695eb00f561e816b44a427614ebb18292a239e8a7416e3550beaecf0a47"} Feb 02 10:45:29 crc kubenswrapper[4782]: I0202 10:45:29.631782 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g864k" event={"ID":"9e046c0e-cea4-45b0-8952-1fc5edb01ff5","Type":"ContainerStarted","Data":"c63e40b9e6d47fcb72990ea699c85cc90b96c9ad32d91f1259e6471f557a5f90"} Feb 02 10:45:29 crc kubenswrapper[4782]: I0202 10:45:29.676800 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-g864k" podStartSLOduration=2.8348502399999997 podStartE2EDuration="4.676776291s" podCreationTimestamp="2026-02-02 10:45:25 +0000 UTC" firstStartedPulling="2026-02-02 10:45:27.590973484 +0000 UTC m=+407.475166200" lastFinishedPulling="2026-02-02 10:45:29.432899535 +0000 UTC m=+409.317092251" observedRunningTime="2026-02-02 10:45:29.672750123 +0000 UTC m=+409.556942849" watchObservedRunningTime="2026-02-02 10:45:29.676776291 +0000 UTC m=+409.560969007" Feb 02 10:45:29 crc kubenswrapper[4782]: I0202 10:45:29.696343 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-jxz27" podUID="4877e80d-a6fe-4503-a64c-398815efa1e0" containerName="registry" containerID="cri-o://9a008fac97312f4f6086015007e8d88cd2830bbd1da80845daf3cde642284642" gracePeriod=30 Feb 02 10:45:30 crc kubenswrapper[4782]: I0202 10:45:30.638023 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h2hxh" event={"ID":"fe57942f-8b6f-4400-8ed5-6fb054a514bf","Type":"ContainerStarted","Data":"d97812050a5a3a18f199c753639f735caa0c1f383b60507a26e47f8e24a93519"} Feb 02 10:45:30 crc kubenswrapper[4782]: I0202 10:45:30.640061 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qsk6j" event={"ID":"a435172a-875e-47e1-8c17-fad9fe2a0baf","Type":"ContainerStarted","Data":"b669957a2b5cac24387adaca579bdd46f439a001c4236c44963844410702fef3"} Feb 02 10:45:30 crc kubenswrapper[4782]: I0202 10:45:30.642493 4782 generic.go:334] "Generic (PLEG): container finished" podID="4877e80d-a6fe-4503-a64c-398815efa1e0" containerID="9a008fac97312f4f6086015007e8d88cd2830bbd1da80845daf3cde642284642" exitCode=0 Feb 02 10:45:30 crc kubenswrapper[4782]: I0202 10:45:30.643074 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-image-registry/image-registry-697d97f7c8-jxz27" event={"ID":"4877e80d-a6fe-4503-a64c-398815efa1e0","Type":"ContainerDied","Data":"9a008fac97312f4f6086015007e8d88cd2830bbd1da80845daf3cde642284642"} Feb 02 10:45:30 crc kubenswrapper[4782]: I0202 10:45:30.669837 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-h2hxh" podStartSLOduration=2.156567142 podStartE2EDuration="4.669820375s" podCreationTimestamp="2026-02-02 10:45:26 +0000 UTC" firstStartedPulling="2026-02-02 10:45:27.587800201 +0000 UTC m=+407.471992917" lastFinishedPulling="2026-02-02 10:45:30.101053434 +0000 UTC m=+409.985246150" observedRunningTime="2026-02-02 10:45:30.666714534 +0000 UTC m=+410.550907250" watchObservedRunningTime="2026-02-02 10:45:30.669820375 +0000 UTC m=+410.554013091" Feb 02 10:45:30 crc kubenswrapper[4782]: I0202 10:45:30.704164 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-jxz27" Feb 02 10:45:30 crc kubenswrapper[4782]: I0202 10:45:30.872937 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/4877e80d-a6fe-4503-a64c-398815efa1e0-registry-tls\") pod \"4877e80d-a6fe-4503-a64c-398815efa1e0\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " Feb 02 10:45:30 crc kubenswrapper[4782]: I0202 10:45:30.873332 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f5khp\" (UniqueName: \"kubernetes.io/projected/4877e80d-a6fe-4503-a64c-398815efa1e0-kube-api-access-f5khp\") pod \"4877e80d-a6fe-4503-a64c-398815efa1e0\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " Feb 02 10:45:30 crc kubenswrapper[4782]: I0202 10:45:30.873376 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/4877e80d-a6fe-4503-a64c-398815efa1e0-ca-trust-extracted\") pod \"4877e80d-a6fe-4503-a64c-398815efa1e0\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " Feb 02 10:45:30 crc kubenswrapper[4782]: I0202 10:45:30.873419 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/4877e80d-a6fe-4503-a64c-398815efa1e0-installation-pull-secrets\") pod \"4877e80d-a6fe-4503-a64c-398815efa1e0\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " Feb 02 10:45:30 crc kubenswrapper[4782]: I0202 10:45:30.873441 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4877e80d-a6fe-4503-a64c-398815efa1e0-trusted-ca\") pod \"4877e80d-a6fe-4503-a64c-398815efa1e0\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " Feb 02 10:45:30 crc kubenswrapper[4782]: I0202 10:45:30.873616 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"4877e80d-a6fe-4503-a64c-398815efa1e0\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " Feb 02 10:45:30 crc kubenswrapper[4782]: I0202 10:45:30.873639 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4877e80d-a6fe-4503-a64c-398815efa1e0-bound-sa-token\") pod \"4877e80d-a6fe-4503-a64c-398815efa1e0\" (UID: 
\"4877e80d-a6fe-4503-a64c-398815efa1e0\") " Feb 02 10:45:30 crc kubenswrapper[4782]: I0202 10:45:30.873685 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/4877e80d-a6fe-4503-a64c-398815efa1e0-registry-certificates\") pod \"4877e80d-a6fe-4503-a64c-398815efa1e0\" (UID: \"4877e80d-a6fe-4503-a64c-398815efa1e0\") " Feb 02 10:45:30 crc kubenswrapper[4782]: I0202 10:45:30.874205 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4877e80d-a6fe-4503-a64c-398815efa1e0-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "4877e80d-a6fe-4503-a64c-398815efa1e0" (UID: "4877e80d-a6fe-4503-a64c-398815efa1e0"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:45:30 crc kubenswrapper[4782]: I0202 10:45:30.874322 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4877e80d-a6fe-4503-a64c-398815efa1e0-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "4877e80d-a6fe-4503-a64c-398815efa1e0" (UID: "4877e80d-a6fe-4503-a64c-398815efa1e0"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:45:30 crc kubenswrapper[4782]: I0202 10:45:30.879318 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4877e80d-a6fe-4503-a64c-398815efa1e0-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "4877e80d-a6fe-4503-a64c-398815efa1e0" (UID: "4877e80d-a6fe-4503-a64c-398815efa1e0"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:45:30 crc kubenswrapper[4782]: I0202 10:45:30.879925 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4877e80d-a6fe-4503-a64c-398815efa1e0-kube-api-access-f5khp" (OuterVolumeSpecName: "kube-api-access-f5khp") pod "4877e80d-a6fe-4503-a64c-398815efa1e0" (UID: "4877e80d-a6fe-4503-a64c-398815efa1e0"). InnerVolumeSpecName "kube-api-access-f5khp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:45:30 crc kubenswrapper[4782]: I0202 10:45:30.880297 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4877e80d-a6fe-4503-a64c-398815efa1e0-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "4877e80d-a6fe-4503-a64c-398815efa1e0" (UID: "4877e80d-a6fe-4503-a64c-398815efa1e0"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:45:30 crc kubenswrapper[4782]: I0202 10:45:30.886011 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4877e80d-a6fe-4503-a64c-398815efa1e0-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "4877e80d-a6fe-4503-a64c-398815efa1e0" (UID: "4877e80d-a6fe-4503-a64c-398815efa1e0"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:45:30 crc kubenswrapper[4782]: I0202 10:45:30.904387 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4877e80d-a6fe-4503-a64c-398815efa1e0-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "4877e80d-a6fe-4503-a64c-398815efa1e0" (UID: "4877e80d-a6fe-4503-a64c-398815efa1e0"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:45:30 crc kubenswrapper[4782]: I0202 10:45:30.910077 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "4877e80d-a6fe-4503-a64c-398815efa1e0" (UID: "4877e80d-a6fe-4503-a64c-398815efa1e0"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 02 10:45:30 crc kubenswrapper[4782]: I0202 10:45:30.974804 4782 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4877e80d-a6fe-4503-a64c-398815efa1e0-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 02 10:45:30 crc kubenswrapper[4782]: I0202 10:45:30.974855 4782 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/4877e80d-a6fe-4503-a64c-398815efa1e0-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 02 10:45:30 crc kubenswrapper[4782]: I0202 10:45:30.974870 4782 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/4877e80d-a6fe-4503-a64c-398815efa1e0-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 02 10:45:30 crc kubenswrapper[4782]: I0202 10:45:30.974882 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f5khp\" (UniqueName: \"kubernetes.io/projected/4877e80d-a6fe-4503-a64c-398815efa1e0-kube-api-access-f5khp\") on node \"crc\" DevicePath \"\"" Feb 02 10:45:30 crc kubenswrapper[4782]: I0202 10:45:30.974893 4782 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/4877e80d-a6fe-4503-a64c-398815efa1e0-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 02 10:45:30 crc kubenswrapper[4782]: I0202 10:45:30.974903 4782 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/4877e80d-a6fe-4503-a64c-398815efa1e0-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 02 10:45:30 crc kubenswrapper[4782]: I0202 10:45:30.974913 4782 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4877e80d-a6fe-4503-a64c-398815efa1e0-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 02 10:45:31 crc kubenswrapper[4782]: I0202 10:45:31.649289 4782 generic.go:334] "Generic (PLEG): container finished" podID="a435172a-875e-47e1-8c17-fad9fe2a0baf" containerID="b669957a2b5cac24387adaca579bdd46f439a001c4236c44963844410702fef3" exitCode=0 Feb 02 10:45:31 crc kubenswrapper[4782]: I0202 10:45:31.649391 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qsk6j" event={"ID":"a435172a-875e-47e1-8c17-fad9fe2a0baf","Type":"ContainerDied","Data":"b669957a2b5cac24387adaca579bdd46f439a001c4236c44963844410702fef3"} Feb 02 10:45:31 crc kubenswrapper[4782]: I0202 10:45:31.651938 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-jxz27" Feb 02 10:45:31 crc kubenswrapper[4782]: I0202 10:45:31.652025 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-jxz27" event={"ID":"4877e80d-a6fe-4503-a64c-398815efa1e0","Type":"ContainerDied","Data":"a55c72e5f15ff42bfcfbbd5f83cbfe22e092ae45221bb6158bb15a9d235221ed"} Feb 02 10:45:31 crc kubenswrapper[4782]: I0202 10:45:31.652077 4782 scope.go:117] "RemoveContainer" containerID="9a008fac97312f4f6086015007e8d88cd2830bbd1da80845daf3cde642284642" Feb 02 10:45:31 crc kubenswrapper[4782]: I0202 10:45:31.689812 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-jxz27"] Feb 02 10:45:31 crc kubenswrapper[4782]: I0202 10:45:31.695997 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-jxz27"] Feb 02 10:45:32 crc kubenswrapper[4782]: I0202 10:45:32.658299 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qsk6j" event={"ID":"a435172a-875e-47e1-8c17-fad9fe2a0baf","Type":"ContainerStarted","Data":"9ab967490199a05243dc87fa6db99670429507a31ad469d631692722f93b54e6"} Feb 02 10:45:32 crc kubenswrapper[4782]: I0202 10:45:32.676405 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-qsk6j" podStartSLOduration=3.179938633 podStartE2EDuration="5.676388564s" podCreationTimestamp="2026-02-02 10:45:27 +0000 UTC" firstStartedPulling="2026-02-02 10:45:29.630148707 +0000 UTC m=+409.514341423" lastFinishedPulling="2026-02-02 10:45:32.126598638 +0000 UTC m=+412.010791354" observedRunningTime="2026-02-02 10:45:32.674077286 +0000 UTC m=+412.558270012" watchObservedRunningTime="2026-02-02 10:45:32.676388564 +0000 UTC m=+412.560581280" Feb 02 10:45:32 crc kubenswrapper[4782]: I0202 10:45:32.827795 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4877e80d-a6fe-4503-a64c-398815efa1e0" path="/var/lib/kubelet/pods/4877e80d-a6fe-4503-a64c-398815efa1e0/volumes" Feb 02 10:45:34 crc kubenswrapper[4782]: I0202 10:45:34.561329 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-vnt75" Feb 02 10:45:34 crc kubenswrapper[4782]: I0202 10:45:34.561692 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-vnt75" Feb 02 10:45:34 crc kubenswrapper[4782]: I0202 10:45:34.606004 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-vnt75" Feb 02 10:45:34 crc kubenswrapper[4782]: I0202 10:45:34.721498 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-vnt75" Feb 02 10:45:35 crc kubenswrapper[4782]: I0202 10:45:35.901355 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-g864k" Feb 02 10:45:35 crc kubenswrapper[4782]: I0202 10:45:35.902283 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-g864k" Feb 02 10:45:35 crc kubenswrapper[4782]: I0202 10:45:35.952606 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-g864k" Feb 02 10:45:36 crc kubenswrapper[4782]: I0202 
10:45:36.716127 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-g864k" Feb 02 10:45:36 crc kubenswrapper[4782]: I0202 10:45:36.771080 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-h2hxh" Feb 02 10:45:36 crc kubenswrapper[4782]: I0202 10:45:36.771150 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-h2hxh" Feb 02 10:45:36 crc kubenswrapper[4782]: I0202 10:45:36.808236 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-h2hxh" Feb 02 10:45:37 crc kubenswrapper[4782]: I0202 10:45:37.728656 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-h2hxh" Feb 02 10:45:38 crc kubenswrapper[4782]: I0202 10:45:38.177352 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-qsk6j" Feb 02 10:45:38 crc kubenswrapper[4782]: I0202 10:45:38.177404 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-qsk6j" Feb 02 10:45:38 crc kubenswrapper[4782]: I0202 10:45:38.216523 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-qsk6j" Feb 02 10:45:38 crc kubenswrapper[4782]: I0202 10:45:38.728131 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-qsk6j" Feb 02 10:45:52 crc kubenswrapper[4782]: I0202 10:45:52.951021 4782 patch_prober.go:28] interesting pod/machine-config-daemon-bhdgk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 10:45:52 crc kubenswrapper[4782]: I0202 10:45:52.951601 4782 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 10:45:52 crc kubenswrapper[4782]: I0202 10:45:52.951665 4782 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" Feb 02 10:45:52 crc kubenswrapper[4782]: I0202 10:45:52.952234 4782 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"68181eab99dccd23b4af9f91ccc576ac3321f9b931dcb6edbebeb0694cfecf25"} pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 10:45:52 crc kubenswrapper[4782]: I0202 10:45:52.952293 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" containerName="machine-config-daemon" containerID="cri-o://68181eab99dccd23b4af9f91ccc576ac3321f9b931dcb6edbebeb0694cfecf25" gracePeriod=600 Feb 02 10:45:53 crc kubenswrapper[4782]: I0202 10:45:53.758788 4782 generic.go:334] "Generic (PLEG): container finished" 
podID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" containerID="68181eab99dccd23b4af9f91ccc576ac3321f9b931dcb6edbebeb0694cfecf25" exitCode=0 Feb 02 10:45:53 crc kubenswrapper[4782]: I0202 10:45:53.758882 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" event={"ID":"7919e98f-cc47-4f3c-9c53-6313850ea7b8","Type":"ContainerDied","Data":"68181eab99dccd23b4af9f91ccc576ac3321f9b931dcb6edbebeb0694cfecf25"} Feb 02 10:45:53 crc kubenswrapper[4782]: I0202 10:45:53.759297 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" event={"ID":"7919e98f-cc47-4f3c-9c53-6313850ea7b8","Type":"ContainerStarted","Data":"f6754467e1adce39d1aaf093b6b8963c3db110696bed13c171a5267d1c658dfc"} Feb 02 10:45:53 crc kubenswrapper[4782]: I0202 10:45:53.759325 4782 scope.go:117] "RemoveContainer" containerID="362bf5c8cbdc4831ca6ceecf9323bad1217467a40b0e8dd491098c1ae4f42810" Feb 02 10:48:22 crc kubenswrapper[4782]: I0202 10:48:22.951524 4782 patch_prober.go:28] interesting pod/machine-config-daemon-bhdgk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 10:48:22 crc kubenswrapper[4782]: I0202 10:48:22.952080 4782 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 10:48:52 crc kubenswrapper[4782]: I0202 10:48:52.951199 4782 patch_prober.go:28] interesting pod/machine-config-daemon-bhdgk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 10:48:52 crc kubenswrapper[4782]: I0202 10:48:52.951880 4782 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 10:49:22 crc kubenswrapper[4782]: I0202 10:49:22.951364 4782 patch_prober.go:28] interesting pod/machine-config-daemon-bhdgk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 10:49:22 crc kubenswrapper[4782]: I0202 10:49:22.952837 4782 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 10:49:22 crc kubenswrapper[4782]: I0202 10:49:22.952940 4782 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" Feb 02 10:49:22 crc kubenswrapper[4782]: I0202 10:49:22.953603 4782 kuberuntime_manager.go:1027] "Message for 
Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f6754467e1adce39d1aaf093b6b8963c3db110696bed13c171a5267d1c658dfc"} pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 10:49:22 crc kubenswrapper[4782]: I0202 10:49:22.953685 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" containerName="machine-config-daemon" containerID="cri-o://f6754467e1adce39d1aaf093b6b8963c3db110696bed13c171a5267d1c658dfc" gracePeriod=600 Feb 02 10:49:23 crc kubenswrapper[4782]: I0202 10:49:23.903098 4782 generic.go:334] "Generic (PLEG): container finished" podID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" containerID="f6754467e1adce39d1aaf093b6b8963c3db110696bed13c171a5267d1c658dfc" exitCode=0 Feb 02 10:49:23 crc kubenswrapper[4782]: I0202 10:49:23.903202 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" event={"ID":"7919e98f-cc47-4f3c-9c53-6313850ea7b8","Type":"ContainerDied","Data":"f6754467e1adce39d1aaf093b6b8963c3db110696bed13c171a5267d1c658dfc"} Feb 02 10:49:23 crc kubenswrapper[4782]: I0202 10:49:23.903598 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" event={"ID":"7919e98f-cc47-4f3c-9c53-6313850ea7b8","Type":"ContainerStarted","Data":"2dc043efe5736739c3acc8fe9716ce3a52d3c218a415682bfde40984fdbbbf0c"} Feb 02 10:49:23 crc kubenswrapper[4782]: I0202 10:49:23.903623 4782 scope.go:117] "RemoveContainer" containerID="68181eab99dccd23b4af9f91ccc576ac3321f9b931dcb6edbebeb0694cfecf25" Feb 02 10:50:14 crc kubenswrapper[4782]: I0202 10:50:14.236388 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-jdfqk"] Feb 02 10:50:14 crc kubenswrapper[4782]: E0202 10:50:14.237039 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4877e80d-a6fe-4503-a64c-398815efa1e0" containerName="registry" Feb 02 10:50:14 crc kubenswrapper[4782]: I0202 10:50:14.237052 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="4877e80d-a6fe-4503-a64c-398815efa1e0" containerName="registry" Feb 02 10:50:14 crc kubenswrapper[4782]: I0202 10:50:14.237136 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="4877e80d-a6fe-4503-a64c-398815efa1e0" containerName="registry" Feb 02 10:50:14 crc kubenswrapper[4782]: I0202 10:50:14.237480 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-jdfqk" Feb 02 10:50:14 crc kubenswrapper[4782]: I0202 10:50:14.243769 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-vcnls"] Feb 02 10:50:14 crc kubenswrapper[4782]: I0202 10:50:14.245490 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-858654f9db-vcnls" Feb 02 10:50:14 crc kubenswrapper[4782]: I0202 10:50:14.258574 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Feb 02 10:50:14 crc kubenswrapper[4782]: I0202 10:50:14.258940 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Feb 02 10:50:14 crc kubenswrapper[4782]: I0202 10:50:14.259105 4782 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-rrcfn" Feb 02 10:50:14 crc kubenswrapper[4782]: I0202 10:50:14.268276 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-vcnls"] Feb 02 10:50:14 crc kubenswrapper[4782]: I0202 10:50:14.271403 4782 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-69dk7" Feb 02 10:50:14 crc kubenswrapper[4782]: I0202 10:50:14.291937 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-jdfqk"] Feb 02 10:50:14 crc kubenswrapper[4782]: I0202 10:50:14.292461 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hn2p\" (UniqueName: \"kubernetes.io/projected/9890a2a1-2fba-4553-87eb-0b70bdc93730-kube-api-access-4hn2p\") pod \"cert-manager-858654f9db-vcnls\" (UID: \"9890a2a1-2fba-4553-87eb-0b70bdc93730\") " pod="cert-manager/cert-manager-858654f9db-vcnls" Feb 02 10:50:14 crc kubenswrapper[4782]: I0202 10:50:14.292543 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-znrzr\" (UniqueName: \"kubernetes.io/projected/49141326-2954-4715-aaa9-86641ac21fa9-kube-api-access-znrzr\") pod \"cert-manager-cainjector-cf98fcc89-jdfqk\" (UID: \"49141326-2954-4715-aaa9-86641ac21fa9\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-jdfqk" Feb 02 10:50:14 crc kubenswrapper[4782]: I0202 10:50:14.301975 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-9h9rr"] Feb 02 10:50:14 crc kubenswrapper[4782]: I0202 10:50:14.302874 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-9h9rr" Feb 02 10:50:14 crc kubenswrapper[4782]: I0202 10:50:14.306452 4782 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-pbkcb" Feb 02 10:50:14 crc kubenswrapper[4782]: I0202 10:50:14.314980 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-9h9rr"] Feb 02 10:50:14 crc kubenswrapper[4782]: I0202 10:50:14.393507 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qq9ws\" (UniqueName: \"kubernetes.io/projected/d3ae0a8e-231d-4be5-aa1e-ac35dfbabe4a-kube-api-access-qq9ws\") pod \"cert-manager-webhook-687f57d79b-9h9rr\" (UID: \"d3ae0a8e-231d-4be5-aa1e-ac35dfbabe4a\") " pod="cert-manager/cert-manager-webhook-687f57d79b-9h9rr" Feb 02 10:50:14 crc kubenswrapper[4782]: I0202 10:50:14.393567 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4hn2p\" (UniqueName: \"kubernetes.io/projected/9890a2a1-2fba-4553-87eb-0b70bdc93730-kube-api-access-4hn2p\") pod \"cert-manager-858654f9db-vcnls\" (UID: \"9890a2a1-2fba-4553-87eb-0b70bdc93730\") " pod="cert-manager/cert-manager-858654f9db-vcnls" Feb 02 10:50:14 crc kubenswrapper[4782]: I0202 10:50:14.393608 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-znrzr\" (UniqueName: \"kubernetes.io/projected/49141326-2954-4715-aaa9-86641ac21fa9-kube-api-access-znrzr\") pod \"cert-manager-cainjector-cf98fcc89-jdfqk\" (UID: \"49141326-2954-4715-aaa9-86641ac21fa9\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-jdfqk" Feb 02 10:50:14 crc kubenswrapper[4782]: I0202 10:50:14.415109 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-znrzr\" (UniqueName: \"kubernetes.io/projected/49141326-2954-4715-aaa9-86641ac21fa9-kube-api-access-znrzr\") pod \"cert-manager-cainjector-cf98fcc89-jdfqk\" (UID: \"49141326-2954-4715-aaa9-86641ac21fa9\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-jdfqk" Feb 02 10:50:14 crc kubenswrapper[4782]: I0202 10:50:14.415185 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4hn2p\" (UniqueName: \"kubernetes.io/projected/9890a2a1-2fba-4553-87eb-0b70bdc93730-kube-api-access-4hn2p\") pod \"cert-manager-858654f9db-vcnls\" (UID: \"9890a2a1-2fba-4553-87eb-0b70bdc93730\") " pod="cert-manager/cert-manager-858654f9db-vcnls" Feb 02 10:50:14 crc kubenswrapper[4782]: I0202 10:50:14.495085 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qq9ws\" (UniqueName: \"kubernetes.io/projected/d3ae0a8e-231d-4be5-aa1e-ac35dfbabe4a-kube-api-access-qq9ws\") pod \"cert-manager-webhook-687f57d79b-9h9rr\" (UID: \"d3ae0a8e-231d-4be5-aa1e-ac35dfbabe4a\") " pod="cert-manager/cert-manager-webhook-687f57d79b-9h9rr" Feb 02 10:50:14 crc kubenswrapper[4782]: I0202 10:50:14.517200 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qq9ws\" (UniqueName: \"kubernetes.io/projected/d3ae0a8e-231d-4be5-aa1e-ac35dfbabe4a-kube-api-access-qq9ws\") pod \"cert-manager-webhook-687f57d79b-9h9rr\" (UID: \"d3ae0a8e-231d-4be5-aa1e-ac35dfbabe4a\") " pod="cert-manager/cert-manager-webhook-687f57d79b-9h9rr" Feb 02 10:50:14 crc kubenswrapper[4782]: I0202 10:50:14.563294 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-jdfqk" Feb 02 10:50:14 crc kubenswrapper[4782]: I0202 10:50:14.590211 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-vcnls" Feb 02 10:50:14 crc kubenswrapper[4782]: I0202 10:50:14.619143 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-9h9rr" Feb 02 10:50:14 crc kubenswrapper[4782]: I0202 10:50:14.941122 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-9h9rr"] Feb 02 10:50:14 crc kubenswrapper[4782]: W0202 10:50:14.948210 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd3ae0a8e_231d_4be5_aa1e_ac35dfbabe4a.slice/crio-343e38fd9583ccda08f3cd35e81f105cdf5d7f9d72f6fb69bf292fdb84dc3486 WatchSource:0}: Error finding container 343e38fd9583ccda08f3cd35e81f105cdf5d7f9d72f6fb69bf292fdb84dc3486: Status 404 returned error can't find the container with id 343e38fd9583ccda08f3cd35e81f105cdf5d7f9d72f6fb69bf292fdb84dc3486 Feb 02 10:50:14 crc kubenswrapper[4782]: I0202 10:50:14.950881 4782 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 02 10:50:15 crc kubenswrapper[4782]: I0202 10:50:15.037457 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-jdfqk"] Feb 02 10:50:15 crc kubenswrapper[4782]: I0202 10:50:15.044026 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-vcnls"] Feb 02 10:50:15 crc kubenswrapper[4782]: W0202 10:50:15.050128 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9890a2a1_2fba_4553_87eb_0b70bdc93730.slice/crio-048aee2dddc375f67743a24499dbd9e88dcd89da987d0715d6244e523b6aad7a WatchSource:0}: Error finding container 048aee2dddc375f67743a24499dbd9e88dcd89da987d0715d6244e523b6aad7a: Status 404 returned error can't find the container with id 048aee2dddc375f67743a24499dbd9e88dcd89da987d0715d6244e523b6aad7a Feb 02 10:50:15 crc kubenswrapper[4782]: I0202 10:50:15.187820 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-9h9rr" event={"ID":"d3ae0a8e-231d-4be5-aa1e-ac35dfbabe4a","Type":"ContainerStarted","Data":"343e38fd9583ccda08f3cd35e81f105cdf5d7f9d72f6fb69bf292fdb84dc3486"} Feb 02 10:50:15 crc kubenswrapper[4782]: I0202 10:50:15.188660 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-jdfqk" event={"ID":"49141326-2954-4715-aaa9-86641ac21fa9","Type":"ContainerStarted","Data":"bdc99273e475184f0654c5ba31ce5697adfa1718bffcd1ef5c777b079a52d243"} Feb 02 10:50:15 crc kubenswrapper[4782]: I0202 10:50:15.189499 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-vcnls" event={"ID":"9890a2a1-2fba-4553-87eb-0b70bdc93730","Type":"ContainerStarted","Data":"048aee2dddc375f67743a24499dbd9e88dcd89da987d0715d6244e523b6aad7a"} Feb 02 10:50:19 crc kubenswrapper[4782]: I0202 10:50:19.211790 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-vcnls" event={"ID":"9890a2a1-2fba-4553-87eb-0b70bdc93730","Type":"ContainerStarted","Data":"a22776d6b0780defeafd9f3d25867a3920ffe35dea12b0ad3f3730a8ba4093bc"} Feb 02 10:50:19 crc 
kubenswrapper[4782]: I0202 10:50:19.214163 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-9h9rr" event={"ID":"d3ae0a8e-231d-4be5-aa1e-ac35dfbabe4a","Type":"ContainerStarted","Data":"d44a928ff3ed9741cc47878fdbf1670147f808e1f7af38695ee1a61aa60ed2d9"} Feb 02 10:50:19 crc kubenswrapper[4782]: I0202 10:50:19.214192 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-9h9rr" Feb 02 10:50:19 crc kubenswrapper[4782]: I0202 10:50:19.215687 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-jdfqk" event={"ID":"49141326-2954-4715-aaa9-86641ac21fa9","Type":"ContainerStarted","Data":"1f0178062cf20cda3075d9c6fa639b92518c00383d4805cdd887f2d8ec38fa99"} Feb 02 10:50:19 crc kubenswrapper[4782]: I0202 10:50:19.232593 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-vcnls" podStartSLOduration=2.070846003 podStartE2EDuration="5.232572617s" podCreationTimestamp="2026-02-02 10:50:14 +0000 UTC" firstStartedPulling="2026-02-02 10:50:15.051592699 +0000 UTC m=+694.935785415" lastFinishedPulling="2026-02-02 10:50:18.213319313 +0000 UTC m=+698.097512029" observedRunningTime="2026-02-02 10:50:19.231380473 +0000 UTC m=+699.115573209" watchObservedRunningTime="2026-02-02 10:50:19.232572617 +0000 UTC m=+699.116765333" Feb 02 10:50:19 crc kubenswrapper[4782]: I0202 10:50:19.253569 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-9h9rr" podStartSLOduration=1.994345738 podStartE2EDuration="5.253547999s" podCreationTimestamp="2026-02-02 10:50:14 +0000 UTC" firstStartedPulling="2026-02-02 10:50:14.950621952 +0000 UTC m=+694.834814668" lastFinishedPulling="2026-02-02 10:50:18.209824213 +0000 UTC m=+698.094016929" observedRunningTime="2026-02-02 10:50:19.250181612 +0000 UTC m=+699.134374348" watchObservedRunningTime="2026-02-02 10:50:19.253547999 +0000 UTC m=+699.137740725" Feb 02 10:50:19 crc kubenswrapper[4782]: I0202 10:50:19.310847 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-jdfqk" podStartSLOduration=2.086588714 podStartE2EDuration="5.310822402s" podCreationTimestamp="2026-02-02 10:50:14 +0000 UTC" firstStartedPulling="2026-02-02 10:50:15.046870233 +0000 UTC m=+694.931062949" lastFinishedPulling="2026-02-02 10:50:18.271103921 +0000 UTC m=+698.155296637" observedRunningTime="2026-02-02 10:50:19.300530057 +0000 UTC m=+699.184722773" watchObservedRunningTime="2026-02-02 10:50:19.310822402 +0000 UTC m=+699.195015118" Feb 02 10:50:24 crc kubenswrapper[4782]: I0202 10:50:24.623312 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-9h9rr" Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.354901 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-prbrn"] Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.357172 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" podUID="2642ee4e-c16a-4e6e-9654-a67666f1bff8" containerName="ovn-controller" containerID="cri-o://f7645944a876f5f8139960a66e91d19644df23b8bfbdb64c9d14eec7298ac150" gracePeriod=30 Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.357286 4782 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" podUID="2642ee4e-c16a-4e6e-9654-a67666f1bff8" containerName="northd" containerID="cri-o://189c170f233964d95aebd085e016304a66076971b44f050d41d755305d11743f" gracePeriod=30 Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.357220 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" podUID="2642ee4e-c16a-4e6e-9654-a67666f1bff8" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://371a331afaa5bd7d90cd98ba3363865d54d8cac621ba37a40e51f0cddb4eb02b" gracePeriod=30 Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.357536 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" podUID="2642ee4e-c16a-4e6e-9654-a67666f1bff8" containerName="kube-rbac-proxy-node" containerID="cri-o://7728a9bcd84ff5e2fad4910e321a0461ab043b453efe05ddc36d018dba38315a" gracePeriod=30 Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.357623 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" podUID="2642ee4e-c16a-4e6e-9654-a67666f1bff8" containerName="ovn-acl-logging" containerID="cri-o://540e6587db608dea5be7483bfc3a5c4d44da51ecf3e9bfe12550aa8bf5a74fac" gracePeriod=30 Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.357595 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" podUID="2642ee4e-c16a-4e6e-9654-a67666f1bff8" containerName="sbdb" containerID="cri-o://344eacf0d6239238cbf13b94f80d88a36589057a177a0f7a3d5629f7975c02d3" gracePeriod=30 Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.357219 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" podUID="2642ee4e-c16a-4e6e-9654-a67666f1bff8" containerName="nbdb" containerID="cri-o://b6886dd3d4ca2fac733bc3cc105d17aff972828ba91abf5373769f272e5454ee" gracePeriod=30 Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.402622 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" podUID="2642ee4e-c16a-4e6e-9654-a67666f1bff8" containerName="ovnkube-controller" containerID="cri-o://81049d5e41dffab57f45208f4ffca5c6ef978d399f1eb8cf944ec8e64e71bc5b" gracePeriod=30 Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.536680 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-prbrn_2642ee4e-c16a-4e6e-9654-a67666f1bff8/ovnkube-controller/3.log" Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.538915 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-prbrn_2642ee4e-c16a-4e6e-9654-a67666f1bff8/ovn-acl-logging/0.log" Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.539363 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-prbrn_2642ee4e-c16a-4e6e-9654-a67666f1bff8/ovn-controller/0.log" Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.539699 4782 generic.go:334] "Generic (PLEG): container finished" podID="2642ee4e-c16a-4e6e-9654-a67666f1bff8" containerID="81049d5e41dffab57f45208f4ffca5c6ef978d399f1eb8cf944ec8e64e71bc5b" exitCode=0 Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.539721 4782 generic.go:334] "Generic (PLEG): container finished" 
podID="2642ee4e-c16a-4e6e-9654-a67666f1bff8" containerID="371a331afaa5bd7d90cd98ba3363865d54d8cac621ba37a40e51f0cddb4eb02b" exitCode=0 Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.539728 4782 generic.go:334] "Generic (PLEG): container finished" podID="2642ee4e-c16a-4e6e-9654-a67666f1bff8" containerID="7728a9bcd84ff5e2fad4910e321a0461ab043b453efe05ddc36d018dba38315a" exitCode=0 Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.539736 4782 generic.go:334] "Generic (PLEG): container finished" podID="2642ee4e-c16a-4e6e-9654-a67666f1bff8" containerID="540e6587db608dea5be7483bfc3a5c4d44da51ecf3e9bfe12550aa8bf5a74fac" exitCode=143 Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.539744 4782 generic.go:334] "Generic (PLEG): container finished" podID="2642ee4e-c16a-4e6e-9654-a67666f1bff8" containerID="f7645944a876f5f8139960a66e91d19644df23b8bfbdb64c9d14eec7298ac150" exitCode=143 Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.539735 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" event={"ID":"2642ee4e-c16a-4e6e-9654-a67666f1bff8","Type":"ContainerDied","Data":"81049d5e41dffab57f45208f4ffca5c6ef978d399f1eb8cf944ec8e64e71bc5b"} Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.539829 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" event={"ID":"2642ee4e-c16a-4e6e-9654-a67666f1bff8","Type":"ContainerDied","Data":"371a331afaa5bd7d90cd98ba3363865d54d8cac621ba37a40e51f0cddb4eb02b"} Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.539855 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" event={"ID":"2642ee4e-c16a-4e6e-9654-a67666f1bff8","Type":"ContainerDied","Data":"7728a9bcd84ff5e2fad4910e321a0461ab043b453efe05ddc36d018dba38315a"} Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.539920 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" event={"ID":"2642ee4e-c16a-4e6e-9654-a67666f1bff8","Type":"ContainerDied","Data":"540e6587db608dea5be7483bfc3a5c4d44da51ecf3e9bfe12550aa8bf5a74fac"} Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.539936 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" event={"ID":"2642ee4e-c16a-4e6e-9654-a67666f1bff8","Type":"ContainerDied","Data":"f7645944a876f5f8139960a66e91d19644df23b8bfbdb64c9d14eec7298ac150"} Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.539886 4782 scope.go:117] "RemoveContainer" containerID="697e13df65c6182d51c322accad67b62474eb9c869cb328aa09bc10e419af952" Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.541785 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-fsqgq_04d9744a-e730-45b4-9f0c-bbb5b02cd311/kube-multus/2.log" Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.542304 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-fsqgq_04d9744a-e730-45b4-9f0c-bbb5b02cd311/kube-multus/1.log" Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.542340 4782 generic.go:334] "Generic (PLEG): container finished" podID="04d9744a-e730-45b4-9f0c-bbb5b02cd311" containerID="4fde6ad054eb082a082a2907b3951afa7c993e3cd3c0464f51b2ceec9802143d" exitCode=2 Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.542362 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-fsqgq" 
event={"ID":"04d9744a-e730-45b4-9f0c-bbb5b02cd311","Type":"ContainerDied","Data":"4fde6ad054eb082a082a2907b3951afa7c993e3cd3c0464f51b2ceec9802143d"} Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.542805 4782 scope.go:117] "RemoveContainer" containerID="4fde6ad054eb082a082a2907b3951afa7c993e3cd3c0464f51b2ceec9802143d" Feb 02 10:50:39 crc kubenswrapper[4782]: E0202 10:50:39.543104 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-fsqgq_openshift-multus(04d9744a-e730-45b4-9f0c-bbb5b02cd311)\"" pod="openshift-multus/multus-fsqgq" podUID="04d9744a-e730-45b4-9f0c-bbb5b02cd311" Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.579722 4782 scope.go:117] "RemoveContainer" containerID="b95cef2b56d3accf4543313f016af02ffe4af02c759d6f688c31f7d9749e0aad" Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.735385 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-prbrn_2642ee4e-c16a-4e6e-9654-a67666f1bff8/ovn-acl-logging/0.log" Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.736187 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-prbrn_2642ee4e-c16a-4e6e-9654-a67666f1bff8/ovn-controller/0.log" Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.736612 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.801034 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-zlv8v"] Feb 02 10:50:39 crc kubenswrapper[4782]: E0202 10:50:39.801346 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2642ee4e-c16a-4e6e-9654-a67666f1bff8" containerName="ovn-acl-logging" Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.801372 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="2642ee4e-c16a-4e6e-9654-a67666f1bff8" containerName="ovn-acl-logging" Feb 02 10:50:39 crc kubenswrapper[4782]: E0202 10:50:39.801399 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2642ee4e-c16a-4e6e-9654-a67666f1bff8" containerName="kube-rbac-proxy-ovn-metrics" Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.801406 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="2642ee4e-c16a-4e6e-9654-a67666f1bff8" containerName="kube-rbac-proxy-ovn-metrics" Feb 02 10:50:39 crc kubenswrapper[4782]: E0202 10:50:39.801414 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2642ee4e-c16a-4e6e-9654-a67666f1bff8" containerName="ovnkube-controller" Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.801420 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="2642ee4e-c16a-4e6e-9654-a67666f1bff8" containerName="ovnkube-controller" Feb 02 10:50:39 crc kubenswrapper[4782]: E0202 10:50:39.801433 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2642ee4e-c16a-4e6e-9654-a67666f1bff8" containerName="northd" Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.801441 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="2642ee4e-c16a-4e6e-9654-a67666f1bff8" containerName="northd" Feb 02 10:50:39 crc kubenswrapper[4782]: E0202 10:50:39.801451 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2642ee4e-c16a-4e6e-9654-a67666f1bff8" containerName="nbdb" Feb 02 10:50:39 crc 
kubenswrapper[4782]: I0202 10:50:39.801457 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="2642ee4e-c16a-4e6e-9654-a67666f1bff8" containerName="nbdb" Feb 02 10:50:39 crc kubenswrapper[4782]: E0202 10:50:39.801469 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2642ee4e-c16a-4e6e-9654-a67666f1bff8" containerName="sbdb" Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.801475 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="2642ee4e-c16a-4e6e-9654-a67666f1bff8" containerName="sbdb" Feb 02 10:50:39 crc kubenswrapper[4782]: E0202 10:50:39.801486 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2642ee4e-c16a-4e6e-9654-a67666f1bff8" containerName="ovnkube-controller" Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.801494 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="2642ee4e-c16a-4e6e-9654-a67666f1bff8" containerName="ovnkube-controller" Feb 02 10:50:39 crc kubenswrapper[4782]: E0202 10:50:39.801503 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2642ee4e-c16a-4e6e-9654-a67666f1bff8" containerName="kube-rbac-proxy-node" Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.801511 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="2642ee4e-c16a-4e6e-9654-a67666f1bff8" containerName="kube-rbac-proxy-node" Feb 02 10:50:39 crc kubenswrapper[4782]: E0202 10:50:39.801520 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2642ee4e-c16a-4e6e-9654-a67666f1bff8" containerName="ovnkube-controller" Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.801526 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="2642ee4e-c16a-4e6e-9654-a67666f1bff8" containerName="ovnkube-controller" Feb 02 10:50:39 crc kubenswrapper[4782]: E0202 10:50:39.801535 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2642ee4e-c16a-4e6e-9654-a67666f1bff8" containerName="ovn-controller" Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.801541 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="2642ee4e-c16a-4e6e-9654-a67666f1bff8" containerName="ovn-controller" Feb 02 10:50:39 crc kubenswrapper[4782]: E0202 10:50:39.801546 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2642ee4e-c16a-4e6e-9654-a67666f1bff8" containerName="ovnkube-controller" Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.801551 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="2642ee4e-c16a-4e6e-9654-a67666f1bff8" containerName="ovnkube-controller" Feb 02 10:50:39 crc kubenswrapper[4782]: E0202 10:50:39.801562 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2642ee4e-c16a-4e6e-9654-a67666f1bff8" containerName="kubecfg-setup" Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.801575 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="2642ee4e-c16a-4e6e-9654-a67666f1bff8" containerName="kubecfg-setup" Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.801722 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="2642ee4e-c16a-4e6e-9654-a67666f1bff8" containerName="ovn-acl-logging" Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.801732 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="2642ee4e-c16a-4e6e-9654-a67666f1bff8" containerName="ovnkube-controller" Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.801741 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="2642ee4e-c16a-4e6e-9654-a67666f1bff8" containerName="ovnkube-controller" 
Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.801749 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="2642ee4e-c16a-4e6e-9654-a67666f1bff8" containerName="nbdb"
Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.801758 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="2642ee4e-c16a-4e6e-9654-a67666f1bff8" containerName="sbdb"
Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.801768 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="2642ee4e-c16a-4e6e-9654-a67666f1bff8" containerName="kube-rbac-proxy-ovn-metrics"
Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.801775 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="2642ee4e-c16a-4e6e-9654-a67666f1bff8" containerName="ovnkube-controller"
Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.801783 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="2642ee4e-c16a-4e6e-9654-a67666f1bff8" containerName="northd"
Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.801791 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="2642ee4e-c16a-4e6e-9654-a67666f1bff8" containerName="kube-rbac-proxy-node"
Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.801797 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="2642ee4e-c16a-4e6e-9654-a67666f1bff8" containerName="ovn-controller"
Feb 02 10:50:39 crc kubenswrapper[4782]: E0202 10:50:39.801897 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2642ee4e-c16a-4e6e-9654-a67666f1bff8" containerName="ovnkube-controller"
Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.801904 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="2642ee4e-c16a-4e6e-9654-a67666f1bff8" containerName="ovnkube-controller"
Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.801997 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="2642ee4e-c16a-4e6e-9654-a67666f1bff8" containerName="ovnkube-controller"
Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.802007 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="2642ee4e-c16a-4e6e-9654-a67666f1bff8" containerName="ovnkube-controller"
Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.803886 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-zlv8v"
Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.841033 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2642ee4e-c16a-4e6e-9654-a67666f1bff8-run-openvswitch\") pod \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\" (UID: \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\") "
Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.841103 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2642ee4e-c16a-4e6e-9654-a67666f1bff8-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "2642ee4e-c16a-4e6e-9654-a67666f1bff8" (UID: "2642ee4e-c16a-4e6e-9654-a67666f1bff8"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.841128 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2642ee4e-c16a-4e6e-9654-a67666f1bff8-host-slash\") pod \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\" (UID: \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\") "
Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.841149 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2642ee4e-c16a-4e6e-9654-a67666f1bff8-host-slash" (OuterVolumeSpecName: "host-slash") pod "2642ee4e-c16a-4e6e-9654-a67666f1bff8" (UID: "2642ee4e-c16a-4e6e-9654-a67666f1bff8"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.841156 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/2642ee4e-c16a-4e6e-9654-a67666f1bff8-run-ovn\") pod \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\" (UID: \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\") "
Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.841174 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2642ee4e-c16a-4e6e-9654-a67666f1bff8-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "2642ee4e-c16a-4e6e-9654-a67666f1bff8" (UID: "2642ee4e-c16a-4e6e-9654-a67666f1bff8"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.841188 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2642ee4e-c16a-4e6e-9654-a67666f1bff8-env-overrides\") pod \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\" (UID: \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\") "
Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.841213 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2642ee4e-c16a-4e6e-9654-a67666f1bff8-host-run-netns\") pod \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\" (UID: \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\") "
Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.841237 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g8flt\" (UniqueName: \"kubernetes.io/projected/2642ee4e-c16a-4e6e-9654-a67666f1bff8-kube-api-access-g8flt\") pod \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\" (UID: \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\") "
Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.841263 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2642ee4e-c16a-4e6e-9654-a67666f1bff8-etc-openvswitch\") pod \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\" (UID: \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\") "
Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.841288 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2642ee4e-c16a-4e6e-9654-a67666f1bff8-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "2642ee4e-c16a-4e6e-9654-a67666f1bff8" (UID: "2642ee4e-c16a-4e6e-9654-a67666f1bff8"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.841307 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/2642ee4e-c16a-4e6e-9654-a67666f1bff8-log-socket\") pod \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\" (UID: \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\") "
Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.841319 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2642ee4e-c16a-4e6e-9654-a67666f1bff8-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "2642ee4e-c16a-4e6e-9654-a67666f1bff8" (UID: "2642ee4e-c16a-4e6e-9654-a67666f1bff8"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.841331 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2642ee4e-c16a-4e6e-9654-a67666f1bff8-host-run-ovn-kubernetes\") pod \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\" (UID: \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\") "
Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.841379 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2642ee4e-c16a-4e6e-9654-a67666f1bff8-host-var-lib-cni-networks-ovn-kubernetes\") pod \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\" (UID: \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\") "
Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.841411 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2642ee4e-c16a-4e6e-9654-a67666f1bff8-var-lib-openvswitch\") pod \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\" (UID: \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\") "
Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.841339 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2642ee4e-c16a-4e6e-9654-a67666f1bff8-log-socket" (OuterVolumeSpecName: "log-socket") pod "2642ee4e-c16a-4e6e-9654-a67666f1bff8" (UID: "2642ee4e-c16a-4e6e-9654-a67666f1bff8"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.841354 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2642ee4e-c16a-4e6e-9654-a67666f1bff8-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "2642ee4e-c16a-4e6e-9654-a67666f1bff8" (UID: "2642ee4e-c16a-4e6e-9654-a67666f1bff8"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.841454 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/2642ee4e-c16a-4e6e-9654-a67666f1bff8-systemd-units\") pod \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\" (UID: \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\") "
Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.841459 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2642ee4e-c16a-4e6e-9654-a67666f1bff8-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "2642ee4e-c16a-4e6e-9654-a67666f1bff8" (UID: "2642ee4e-c16a-4e6e-9654-a67666f1bff8"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.841478 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2642ee4e-c16a-4e6e-9654-a67666f1bff8-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "2642ee4e-c16a-4e6e-9654-a67666f1bff8" (UID: "2642ee4e-c16a-4e6e-9654-a67666f1bff8"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.841477 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/2642ee4e-c16a-4e6e-9654-a67666f1bff8-host-kubelet\") pod \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\" (UID: \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\") "
Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.841521 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2642ee4e-c16a-4e6e-9654-a67666f1bff8-ovnkube-config\") pod \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\" (UID: \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\") "
Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.841537 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/2642ee4e-c16a-4e6e-9654-a67666f1bff8-ovnkube-script-lib\") pod \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\" (UID: \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\") "
Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.841560 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2642ee4e-c16a-4e6e-9654-a67666f1bff8-host-cni-bin\") pod \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\" (UID: \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\") "
Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.841585 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/2642ee4e-c16a-4e6e-9654-a67666f1bff8-run-systemd\") pod \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\" (UID: \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\") "
Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.841600 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2642ee4e-c16a-4e6e-9654-a67666f1bff8-host-cni-netd\") pod \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\" (UID: \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\") "
Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.841618 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/2642ee4e-c16a-4e6e-9654-a67666f1bff8-node-log\") pod \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\" (UID: \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\") "
Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.841654 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2642ee4e-c16a-4e6e-9654-a67666f1bff8-ovn-node-metrics-cert\") pod \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\" (UID: \"2642ee4e-c16a-4e6e-9654-a67666f1bff8\") "
Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.841818 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2c8c681f-aeb6-4a76-ac30-9be1d209865c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-zlv8v\" (UID: \"2c8c681f-aeb6-4a76-ac30-9be1d209865c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zlv8v"
Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.841841 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2c8c681f-aeb6-4a76-ac30-9be1d209865c-host-run-ovn-kubernetes\") pod \"ovnkube-node-zlv8v\" (UID: \"2c8c681f-aeb6-4a76-ac30-9be1d209865c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zlv8v"
Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.841870 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2c8c681f-aeb6-4a76-ac30-9be1d209865c-ovnkube-config\") pod \"ovnkube-node-zlv8v\" (UID: \"2c8c681f-aeb6-4a76-ac30-9be1d209865c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zlv8v"
Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.841903 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2c8c681f-aeb6-4a76-ac30-9be1d209865c-env-overrides\") pod \"ovnkube-node-zlv8v\" (UID: \"2c8c681f-aeb6-4a76-ac30-9be1d209865c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zlv8v"
Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.841932 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/2c8c681f-aeb6-4a76-ac30-9be1d209865c-ovnkube-script-lib\") pod \"ovnkube-node-zlv8v\" (UID: \"2c8c681f-aeb6-4a76-ac30-9be1d209865c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zlv8v"
Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.841961 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/2c8c681f-aeb6-4a76-ac30-9be1d209865c-systemd-units\") pod \"ovnkube-node-zlv8v\" (UID: \"2c8c681f-aeb6-4a76-ac30-9be1d209865c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zlv8v"
Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.842001 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2c8c681f-aeb6-4a76-ac30-9be1d209865c-etc-openvswitch\") pod \"ovnkube-node-zlv8v\" (UID: \"2c8c681f-aeb6-4a76-ac30-9be1d209865c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zlv8v"
Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.842037 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2c8c681f-aeb6-4a76-ac30-9be1d209865c-ovn-node-metrics-cert\") pod \"ovnkube-node-zlv8v\" (UID: \"2c8c681f-aeb6-4a76-ac30-9be1d209865c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zlv8v"
Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.842084 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2c8c681f-aeb6-4a76-ac30-9be1d209865c-run-openvswitch\") pod \"ovnkube-node-zlv8v\" (UID: \"2c8c681f-aeb6-4a76-ac30-9be1d209865c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zlv8v"
Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.842102 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2c8c681f-aeb6-4a76-ac30-9be1d209865c-host-run-netns\") pod \"ovnkube-node-zlv8v\" (UID: \"2c8c681f-aeb6-4a76-ac30-9be1d209865c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zlv8v"
Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.842118 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/2c8c681f-aeb6-4a76-ac30-9be1d209865c-run-systemd\") pod \"ovnkube-node-zlv8v\" (UID: \"2c8c681f-aeb6-4a76-ac30-9be1d209865c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zlv8v"
Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.842142 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/2c8c681f-aeb6-4a76-ac30-9be1d209865c-node-log\") pod \"ovnkube-node-zlv8v\" (UID: \"2c8c681f-aeb6-4a76-ac30-9be1d209865c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zlv8v"
Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.842157 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2c8c681f-aeb6-4a76-ac30-9be1d209865c-var-lib-openvswitch\") pod \"ovnkube-node-zlv8v\" (UID: \"2c8c681f-aeb6-4a76-ac30-9be1d209865c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zlv8v"
Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.842174 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2c8c681f-aeb6-4a76-ac30-9be1d209865c-host-slash\") pod \"ovnkube-node-zlv8v\" (UID: \"2c8c681f-aeb6-4a76-ac30-9be1d209865c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zlv8v"
Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.842203 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/2c8c681f-aeb6-4a76-ac30-9be1d209865c-host-kubelet\") pod \"ovnkube-node-zlv8v\" (UID: \"2c8c681f-aeb6-4a76-ac30-9be1d209865c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zlv8v"
Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.842216 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2c8c681f-aeb6-4a76-ac30-9be1d209865c-host-cni-netd\") pod \"ovnkube-node-zlv8v\" (UID: \"2c8c681f-aeb6-4a76-ac30-9be1d209865c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zlv8v"
Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.842234 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/2c8c681f-aeb6-4a76-ac30-9be1d209865c-run-ovn\") pod \"ovnkube-node-zlv8v\" (UID: \"2c8c681f-aeb6-4a76-ac30-9be1d209865c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zlv8v"
Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.842309 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2c8c681f-aeb6-4a76-ac30-9be1d209865c-host-cni-bin\") pod \"ovnkube-node-zlv8v\" (UID: \"2c8c681f-aeb6-4a76-ac30-9be1d209865c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zlv8v"
Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.841535 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2642ee4e-c16a-4e6e-9654-a67666f1bff8-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "2642ee4e-c16a-4e6e-9654-a67666f1bff8" (UID: "2642ee4e-c16a-4e6e-9654-a67666f1bff8"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.841561 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2642ee4e-c16a-4e6e-9654-a67666f1bff8-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "2642ee4e-c16a-4e6e-9654-a67666f1bff8" (UID: "2642ee4e-c16a-4e6e-9654-a67666f1bff8"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.841843 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2642ee4e-c16a-4e6e-9654-a67666f1bff8-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "2642ee4e-c16a-4e6e-9654-a67666f1bff8" (UID: "2642ee4e-c16a-4e6e-9654-a67666f1bff8"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.842161 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2642ee4e-c16a-4e6e-9654-a67666f1bff8-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "2642ee4e-c16a-4e6e-9654-a67666f1bff8" (UID: "2642ee4e-c16a-4e6e-9654-a67666f1bff8"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.842362 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dlmqq\" (UniqueName: \"kubernetes.io/projected/2c8c681f-aeb6-4a76-ac30-9be1d209865c-kube-api-access-dlmqq\") pod \"ovnkube-node-zlv8v\" (UID: \"2c8c681f-aeb6-4a76-ac30-9be1d209865c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zlv8v"
Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.842409 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2642ee4e-c16a-4e6e-9654-a67666f1bff8-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "2642ee4e-c16a-4e6e-9654-a67666f1bff8" (UID: "2642ee4e-c16a-4e6e-9654-a67666f1bff8"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.842434 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2642ee4e-c16a-4e6e-9654-a67666f1bff8-node-log" (OuterVolumeSpecName: "node-log") pod "2642ee4e-c16a-4e6e-9654-a67666f1bff8" (UID: "2642ee4e-c16a-4e6e-9654-a67666f1bff8"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.842463 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2642ee4e-c16a-4e6e-9654-a67666f1bff8-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "2642ee4e-c16a-4e6e-9654-a67666f1bff8" (UID: "2642ee4e-c16a-4e6e-9654-a67666f1bff8"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.842538 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/2c8c681f-aeb6-4a76-ac30-9be1d209865c-log-socket\") pod \"ovnkube-node-zlv8v\" (UID: \"2c8c681f-aeb6-4a76-ac30-9be1d209865c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zlv8v"
Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.842571 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2642ee4e-c16a-4e6e-9654-a67666f1bff8-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "2642ee4e-c16a-4e6e-9654-a67666f1bff8" (UID: "2642ee4e-c16a-4e6e-9654-a67666f1bff8"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.843588 4782 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/2642ee4e-c16a-4e6e-9654-a67666f1bff8-log-socket\") on node \"crc\" DevicePath \"\""
Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.843614 4782 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2642ee4e-c16a-4e6e-9654-a67666f1bff8-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\""
Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.843633 4782 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2642ee4e-c16a-4e6e-9654-a67666f1bff8-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\""
Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.843808 4782 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2642ee4e-c16a-4e6e-9654-a67666f1bff8-var-lib-openvswitch\") on node \"crc\" DevicePath \"\""
Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.843838 4782 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/2642ee4e-c16a-4e6e-9654-a67666f1bff8-systemd-units\") on node \"crc\" DevicePath \"\""
Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.843875 4782 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/2642ee4e-c16a-4e6e-9654-a67666f1bff8-host-kubelet\") on node \"crc\" DevicePath \"\""
Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.843893 4782 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2642ee4e-c16a-4e6e-9654-a67666f1bff8-ovnkube-config\") on node \"crc\" DevicePath \"\""
Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.843904 4782 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/2642ee4e-c16a-4e6e-9654-a67666f1bff8-ovnkube-script-lib\") on node \"crc\" DevicePath \"\""
Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.843914 4782 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2642ee4e-c16a-4e6e-9654-a67666f1bff8-host-cni-netd\") on node \"crc\" DevicePath \"\""
Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.843923 4782 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/2642ee4e-c16a-4e6e-9654-a67666f1bff8-node-log\") on node \"crc\" DevicePath \"\""
Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.843932 4782 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2642ee4e-c16a-4e6e-9654-a67666f1bff8-run-openvswitch\") on node \"crc\" DevicePath \"\""
Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.843941 4782 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2642ee4e-c16a-4e6e-9654-a67666f1bff8-host-slash\") on node \"crc\" DevicePath \"\""
Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.843951 4782 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/2642ee4e-c16a-4e6e-9654-a67666f1bff8-run-ovn\") on node \"crc\" DevicePath \"\""
Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.843960 4782 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2642ee4e-c16a-4e6e-9654-a67666f1bff8-env-overrides\") on node \"crc\" DevicePath \"\""
Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.843970 4782 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2642ee4e-c16a-4e6e-9654-a67666f1bff8-host-run-netns\") on node \"crc\" DevicePath \"\""
Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.843979 4782 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2642ee4e-c16a-4e6e-9654-a67666f1bff8-etc-openvswitch\") on node \"crc\" DevicePath \"\""
Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.847775 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2642ee4e-c16a-4e6e-9654-a67666f1bff8-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "2642ee4e-c16a-4e6e-9654-a67666f1bff8" (UID: "2642ee4e-c16a-4e6e-9654-a67666f1bff8"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.847970 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2642ee4e-c16a-4e6e-9654-a67666f1bff8-kube-api-access-g8flt" (OuterVolumeSpecName: "kube-api-access-g8flt") pod "2642ee4e-c16a-4e6e-9654-a67666f1bff8" (UID: "2642ee4e-c16a-4e6e-9654-a67666f1bff8"). InnerVolumeSpecName "kube-api-access-g8flt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.858130 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2642ee4e-c16a-4e6e-9654-a67666f1bff8-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "2642ee4e-c16a-4e6e-9654-a67666f1bff8" (UID: "2642ee4e-c16a-4e6e-9654-a67666f1bff8"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.948093 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/2c8c681f-aeb6-4a76-ac30-9be1d209865c-systemd-units\") pod \"ovnkube-node-zlv8v\" (UID: \"2c8c681f-aeb6-4a76-ac30-9be1d209865c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zlv8v"
Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.948192 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2c8c681f-aeb6-4a76-ac30-9be1d209865c-etc-openvswitch\") pod \"ovnkube-node-zlv8v\" (UID: \"2c8c681f-aeb6-4a76-ac30-9be1d209865c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zlv8v"
Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.948245 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2c8c681f-aeb6-4a76-ac30-9be1d209865c-ovn-node-metrics-cert\") pod \"ovnkube-node-zlv8v\" (UID: \"2c8c681f-aeb6-4a76-ac30-9be1d209865c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zlv8v"
Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.948248 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/2c8c681f-aeb6-4a76-ac30-9be1d209865c-systemd-units\") pod \"ovnkube-node-zlv8v\" (UID: \"2c8c681f-aeb6-4a76-ac30-9be1d209865c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zlv8v"
Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.948312 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2c8c681f-aeb6-4a76-ac30-9be1d209865c-host-run-netns\") pod \"ovnkube-node-zlv8v\" (UID: \"2c8c681f-aeb6-4a76-ac30-9be1d209865c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zlv8v"
Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.948318 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2c8c681f-aeb6-4a76-ac30-9be1d209865c-etc-openvswitch\") pod \"ovnkube-node-zlv8v\" (UID: \"2c8c681f-aeb6-4a76-ac30-9be1d209865c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zlv8v"
Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.948375 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2c8c681f-aeb6-4a76-ac30-9be1d209865c-host-run-netns\") pod \"ovnkube-node-zlv8v\" (UID: \"2c8c681f-aeb6-4a76-ac30-9be1d209865c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zlv8v"
Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.948398 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2c8c681f-aeb6-4a76-ac30-9be1d209865c-run-openvswitch\") pod \"ovnkube-node-zlv8v\" (UID: \"2c8c681f-aeb6-4a76-ac30-9be1d209865c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zlv8v"
Feb 
02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.948451 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2c8c681f-aeb6-4a76-ac30-9be1d209865c-run-openvswitch\") pod \"ovnkube-node-zlv8v\" (UID: \"2c8c681f-aeb6-4a76-ac30-9be1d209865c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zlv8v" Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.948467 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/2c8c681f-aeb6-4a76-ac30-9be1d209865c-run-systemd\") pod \"ovnkube-node-zlv8v\" (UID: \"2c8c681f-aeb6-4a76-ac30-9be1d209865c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zlv8v" Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.948516 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/2c8c681f-aeb6-4a76-ac30-9be1d209865c-run-systemd\") pod \"ovnkube-node-zlv8v\" (UID: \"2c8c681f-aeb6-4a76-ac30-9be1d209865c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zlv8v" Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.948519 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/2c8c681f-aeb6-4a76-ac30-9be1d209865c-node-log\") pod \"ovnkube-node-zlv8v\" (UID: \"2c8c681f-aeb6-4a76-ac30-9be1d209865c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zlv8v" Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.948570 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/2c8c681f-aeb6-4a76-ac30-9be1d209865c-node-log\") pod \"ovnkube-node-zlv8v\" (UID: \"2c8c681f-aeb6-4a76-ac30-9be1d209865c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zlv8v" Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.948593 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2c8c681f-aeb6-4a76-ac30-9be1d209865c-host-slash\") pod \"ovnkube-node-zlv8v\" (UID: \"2c8c681f-aeb6-4a76-ac30-9be1d209865c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zlv8v" Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.948633 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2c8c681f-aeb6-4a76-ac30-9be1d209865c-var-lib-openvswitch\") pod \"ovnkube-node-zlv8v\" (UID: \"2c8c681f-aeb6-4a76-ac30-9be1d209865c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zlv8v" Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.948704 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2c8c681f-aeb6-4a76-ac30-9be1d209865c-var-lib-openvswitch\") pod \"ovnkube-node-zlv8v\" (UID: \"2c8c681f-aeb6-4a76-ac30-9be1d209865c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zlv8v" Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.948632 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2c8c681f-aeb6-4a76-ac30-9be1d209865c-host-slash\") pod \"ovnkube-node-zlv8v\" (UID: \"2c8c681f-aeb6-4a76-ac30-9be1d209865c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zlv8v" Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.948765 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/2c8c681f-aeb6-4a76-ac30-9be1d209865c-host-kubelet\") pod \"ovnkube-node-zlv8v\" (UID: \"2c8c681f-aeb6-4a76-ac30-9be1d209865c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zlv8v" Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.948719 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/2c8c681f-aeb6-4a76-ac30-9be1d209865c-host-kubelet\") pod \"ovnkube-node-zlv8v\" (UID: \"2c8c681f-aeb6-4a76-ac30-9be1d209865c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zlv8v" Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.948809 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2c8c681f-aeb6-4a76-ac30-9be1d209865c-host-cni-netd\") pod \"ovnkube-node-zlv8v\" (UID: \"2c8c681f-aeb6-4a76-ac30-9be1d209865c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zlv8v" Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.948836 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/2c8c681f-aeb6-4a76-ac30-9be1d209865c-run-ovn\") pod \"ovnkube-node-zlv8v\" (UID: \"2c8c681f-aeb6-4a76-ac30-9be1d209865c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zlv8v" Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.948885 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2c8c681f-aeb6-4a76-ac30-9be1d209865c-host-cni-bin\") pod \"ovnkube-node-zlv8v\" (UID: \"2c8c681f-aeb6-4a76-ac30-9be1d209865c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zlv8v" Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.948915 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dlmqq\" (UniqueName: \"kubernetes.io/projected/2c8c681f-aeb6-4a76-ac30-9be1d209865c-kube-api-access-dlmqq\") pod \"ovnkube-node-zlv8v\" (UID: \"2c8c681f-aeb6-4a76-ac30-9be1d209865c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zlv8v" Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.948955 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/2c8c681f-aeb6-4a76-ac30-9be1d209865c-log-socket\") pod \"ovnkube-node-zlv8v\" (UID: \"2c8c681f-aeb6-4a76-ac30-9be1d209865c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zlv8v" Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.948943 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2c8c681f-aeb6-4a76-ac30-9be1d209865c-host-cni-netd\") pod \"ovnkube-node-zlv8v\" (UID: \"2c8c681f-aeb6-4a76-ac30-9be1d209865c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zlv8v" Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.948995 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2c8c681f-aeb6-4a76-ac30-9be1d209865c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-zlv8v\" (UID: \"2c8c681f-aeb6-4a76-ac30-9be1d209865c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zlv8v" Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.949010 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2c8c681f-aeb6-4a76-ac30-9be1d209865c-host-cni-bin\") pod 
\"ovnkube-node-zlv8v\" (UID: \"2c8c681f-aeb6-4a76-ac30-9be1d209865c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zlv8v" Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.949038 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/2c8c681f-aeb6-4a76-ac30-9be1d209865c-log-socket\") pod \"ovnkube-node-zlv8v\" (UID: \"2c8c681f-aeb6-4a76-ac30-9be1d209865c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zlv8v" Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.949020 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2c8c681f-aeb6-4a76-ac30-9be1d209865c-host-run-ovn-kubernetes\") pod \"ovnkube-node-zlv8v\" (UID: \"2c8c681f-aeb6-4a76-ac30-9be1d209865c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zlv8v" Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.949120 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2c8c681f-aeb6-4a76-ac30-9be1d209865c-ovnkube-config\") pod \"ovnkube-node-zlv8v\" (UID: \"2c8c681f-aeb6-4a76-ac30-9be1d209865c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zlv8v" Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.948963 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/2c8c681f-aeb6-4a76-ac30-9be1d209865c-run-ovn\") pod \"ovnkube-node-zlv8v\" (UID: \"2c8c681f-aeb6-4a76-ac30-9be1d209865c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zlv8v" Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.949076 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2c8c681f-aeb6-4a76-ac30-9be1d209865c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-zlv8v\" (UID: \"2c8c681f-aeb6-4a76-ac30-9be1d209865c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zlv8v" Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.949165 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2c8c681f-aeb6-4a76-ac30-9be1d209865c-env-overrides\") pod \"ovnkube-node-zlv8v\" (UID: \"2c8c681f-aeb6-4a76-ac30-9be1d209865c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zlv8v" Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.949241 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/2c8c681f-aeb6-4a76-ac30-9be1d209865c-ovnkube-script-lib\") pod \"ovnkube-node-zlv8v\" (UID: \"2c8c681f-aeb6-4a76-ac30-9be1d209865c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zlv8v" Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.949362 4782 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/2642ee4e-c16a-4e6e-9654-a67666f1bff8-run-systemd\") on node \"crc\" DevicePath \"\"" Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.949379 4782 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2642ee4e-c16a-4e6e-9654-a67666f1bff8-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.949395 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g8flt\" (UniqueName: 
\"kubernetes.io/projected/2642ee4e-c16a-4e6e-9654-a67666f1bff8-kube-api-access-g8flt\") on node \"crc\" DevicePath \"\"" Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.949408 4782 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2642ee4e-c16a-4e6e-9654-a67666f1bff8-host-cni-bin\") on node \"crc\" DevicePath \"\"" Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.949046 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2c8c681f-aeb6-4a76-ac30-9be1d209865c-host-run-ovn-kubernetes\") pod \"ovnkube-node-zlv8v\" (UID: \"2c8c681f-aeb6-4a76-ac30-9be1d209865c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zlv8v" Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.950083 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2c8c681f-aeb6-4a76-ac30-9be1d209865c-ovnkube-config\") pod \"ovnkube-node-zlv8v\" (UID: \"2c8c681f-aeb6-4a76-ac30-9be1d209865c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zlv8v" Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.950076 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2c8c681f-aeb6-4a76-ac30-9be1d209865c-env-overrides\") pod \"ovnkube-node-zlv8v\" (UID: \"2c8c681f-aeb6-4a76-ac30-9be1d209865c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zlv8v" Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.950486 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/2c8c681f-aeb6-4a76-ac30-9be1d209865c-ovnkube-script-lib\") pod \"ovnkube-node-zlv8v\" (UID: \"2c8c681f-aeb6-4a76-ac30-9be1d209865c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zlv8v" Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.954595 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2c8c681f-aeb6-4a76-ac30-9be1d209865c-ovn-node-metrics-cert\") pod \"ovnkube-node-zlv8v\" (UID: \"2c8c681f-aeb6-4a76-ac30-9be1d209865c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zlv8v" Feb 02 10:50:39 crc kubenswrapper[4782]: I0202 10:50:39.965504 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dlmqq\" (UniqueName: \"kubernetes.io/projected/2c8c681f-aeb6-4a76-ac30-9be1d209865c-kube-api-access-dlmqq\") pod \"ovnkube-node-zlv8v\" (UID: \"2c8c681f-aeb6-4a76-ac30-9be1d209865c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zlv8v" Feb 02 10:50:40 crc kubenswrapper[4782]: I0202 10:50:40.119324 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-zlv8v" Feb 02 10:50:40 crc kubenswrapper[4782]: W0202 10:50:40.138868 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2c8c681f_aeb6_4a76_ac30_9be1d209865c.slice/crio-e4838ceb038c678bd934ea1865cc87e697b80fc8ab172e82b37011f28f99c5eb WatchSource:0}: Error finding container e4838ceb038c678bd934ea1865cc87e697b80fc8ab172e82b37011f28f99c5eb: Status 404 returned error can't find the container with id e4838ceb038c678bd934ea1865cc87e697b80fc8ab172e82b37011f28f99c5eb Feb 02 10:50:40 crc kubenswrapper[4782]: I0202 10:50:40.548297 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-fsqgq_04d9744a-e730-45b4-9f0c-bbb5b02cd311/kube-multus/2.log" Feb 02 10:50:40 crc kubenswrapper[4782]: I0202 10:50:40.550527 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zlv8v" event={"ID":"2c8c681f-aeb6-4a76-ac30-9be1d209865c","Type":"ContainerStarted","Data":"e4838ceb038c678bd934ea1865cc87e697b80fc8ab172e82b37011f28f99c5eb"} Feb 02 10:50:40 crc kubenswrapper[4782]: I0202 10:50:40.553613 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-prbrn_2642ee4e-c16a-4e6e-9654-a67666f1bff8/ovn-acl-logging/0.log" Feb 02 10:50:40 crc kubenswrapper[4782]: I0202 10:50:40.554022 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-prbrn_2642ee4e-c16a-4e6e-9654-a67666f1bff8/ovn-controller/0.log" Feb 02 10:50:40 crc kubenswrapper[4782]: I0202 10:50:40.554319 4782 generic.go:334] "Generic (PLEG): container finished" podID="2642ee4e-c16a-4e6e-9654-a67666f1bff8" containerID="344eacf0d6239238cbf13b94f80d88a36589057a177a0f7a3d5629f7975c02d3" exitCode=0 Feb 02 10:50:40 crc kubenswrapper[4782]: I0202 10:50:40.554346 4782 generic.go:334] "Generic (PLEG): container finished" podID="2642ee4e-c16a-4e6e-9654-a67666f1bff8" containerID="b6886dd3d4ca2fac733bc3cc105d17aff972828ba91abf5373769f272e5454ee" exitCode=0 Feb 02 10:50:40 crc kubenswrapper[4782]: I0202 10:50:40.554355 4782 generic.go:334] "Generic (PLEG): container finished" podID="2642ee4e-c16a-4e6e-9654-a67666f1bff8" containerID="189c170f233964d95aebd085e016304a66076971b44f050d41d755305d11743f" exitCode=0 Feb 02 10:50:40 crc kubenswrapper[4782]: I0202 10:50:40.554375 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" event={"ID":"2642ee4e-c16a-4e6e-9654-a67666f1bff8","Type":"ContainerDied","Data":"344eacf0d6239238cbf13b94f80d88a36589057a177a0f7a3d5629f7975c02d3"} Feb 02 10:50:40 crc kubenswrapper[4782]: I0202 10:50:40.554393 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" event={"ID":"2642ee4e-c16a-4e6e-9654-a67666f1bff8","Type":"ContainerDied","Data":"b6886dd3d4ca2fac733bc3cc105d17aff972828ba91abf5373769f272e5454ee"} Feb 02 10:50:40 crc kubenswrapper[4782]: I0202 10:50:40.554402 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" event={"ID":"2642ee4e-c16a-4e6e-9654-a67666f1bff8","Type":"ContainerDied","Data":"189c170f233964d95aebd085e016304a66076971b44f050d41d755305d11743f"} Feb 02 10:50:40 crc kubenswrapper[4782]: I0202 10:50:40.554411 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" 
event={"ID":"2642ee4e-c16a-4e6e-9654-a67666f1bff8","Type":"ContainerDied","Data":"db6a9af8a980d743bf0b991e52f1aa50a4a04f4b9f2306a972866beef0456ce6"} Feb 02 10:50:40 crc kubenswrapper[4782]: I0202 10:50:40.554426 4782 scope.go:117] "RemoveContainer" containerID="81049d5e41dffab57f45208f4ffca5c6ef978d399f1eb8cf944ec8e64e71bc5b" Feb 02 10:50:40 crc kubenswrapper[4782]: I0202 10:50:40.554564 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-prbrn" Feb 02 10:50:40 crc kubenswrapper[4782]: I0202 10:50:40.574152 4782 scope.go:117] "RemoveContainer" containerID="344eacf0d6239238cbf13b94f80d88a36589057a177a0f7a3d5629f7975c02d3" Feb 02 10:50:40 crc kubenswrapper[4782]: I0202 10:50:40.590832 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-prbrn"] Feb 02 10:50:40 crc kubenswrapper[4782]: I0202 10:50:40.596019 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-prbrn"] Feb 02 10:50:40 crc kubenswrapper[4782]: I0202 10:50:40.603534 4782 scope.go:117] "RemoveContainer" containerID="b6886dd3d4ca2fac733bc3cc105d17aff972828ba91abf5373769f272e5454ee" Feb 02 10:50:40 crc kubenswrapper[4782]: I0202 10:50:40.622824 4782 scope.go:117] "RemoveContainer" containerID="189c170f233964d95aebd085e016304a66076971b44f050d41d755305d11743f" Feb 02 10:50:40 crc kubenswrapper[4782]: I0202 10:50:40.635829 4782 scope.go:117] "RemoveContainer" containerID="371a331afaa5bd7d90cd98ba3363865d54d8cac621ba37a40e51f0cddb4eb02b" Feb 02 10:50:40 crc kubenswrapper[4782]: I0202 10:50:40.647170 4782 scope.go:117] "RemoveContainer" containerID="7728a9bcd84ff5e2fad4910e321a0461ab043b453efe05ddc36d018dba38315a" Feb 02 10:50:40 crc kubenswrapper[4782]: I0202 10:50:40.659960 4782 scope.go:117] "RemoveContainer" containerID="540e6587db608dea5be7483bfc3a5c4d44da51ecf3e9bfe12550aa8bf5a74fac" Feb 02 10:50:40 crc kubenswrapper[4782]: I0202 10:50:40.672981 4782 scope.go:117] "RemoveContainer" containerID="f7645944a876f5f8139960a66e91d19644df23b8bfbdb64c9d14eec7298ac150" Feb 02 10:50:40 crc kubenswrapper[4782]: I0202 10:50:40.685692 4782 scope.go:117] "RemoveContainer" containerID="c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343" Feb 02 10:50:40 crc kubenswrapper[4782]: I0202 10:50:40.703563 4782 scope.go:117] "RemoveContainer" containerID="81049d5e41dffab57f45208f4ffca5c6ef978d399f1eb8cf944ec8e64e71bc5b" Feb 02 10:50:40 crc kubenswrapper[4782]: E0202 10:50:40.704470 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"81049d5e41dffab57f45208f4ffca5c6ef978d399f1eb8cf944ec8e64e71bc5b\": container with ID starting with 81049d5e41dffab57f45208f4ffca5c6ef978d399f1eb8cf944ec8e64e71bc5b not found: ID does not exist" containerID="81049d5e41dffab57f45208f4ffca5c6ef978d399f1eb8cf944ec8e64e71bc5b" Feb 02 10:50:40 crc kubenswrapper[4782]: I0202 10:50:40.704524 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"81049d5e41dffab57f45208f4ffca5c6ef978d399f1eb8cf944ec8e64e71bc5b"} err="failed to get container status \"81049d5e41dffab57f45208f4ffca5c6ef978d399f1eb8cf944ec8e64e71bc5b\": rpc error: code = NotFound desc = could not find container \"81049d5e41dffab57f45208f4ffca5c6ef978d399f1eb8cf944ec8e64e71bc5b\": container with ID starting with 81049d5e41dffab57f45208f4ffca5c6ef978d399f1eb8cf944ec8e64e71bc5b not found: ID does not exist" Feb 02 10:50:40 crc 
kubenswrapper[4782]: I0202 10:50:40.704552 4782 scope.go:117] "RemoveContainer" containerID="344eacf0d6239238cbf13b94f80d88a36589057a177a0f7a3d5629f7975c02d3" Feb 02 10:50:40 crc kubenswrapper[4782]: E0202 10:50:40.705143 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"344eacf0d6239238cbf13b94f80d88a36589057a177a0f7a3d5629f7975c02d3\": container with ID starting with 344eacf0d6239238cbf13b94f80d88a36589057a177a0f7a3d5629f7975c02d3 not found: ID does not exist" containerID="344eacf0d6239238cbf13b94f80d88a36589057a177a0f7a3d5629f7975c02d3" Feb 02 10:50:40 crc kubenswrapper[4782]: I0202 10:50:40.705192 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"344eacf0d6239238cbf13b94f80d88a36589057a177a0f7a3d5629f7975c02d3"} err="failed to get container status \"344eacf0d6239238cbf13b94f80d88a36589057a177a0f7a3d5629f7975c02d3\": rpc error: code = NotFound desc = could not find container \"344eacf0d6239238cbf13b94f80d88a36589057a177a0f7a3d5629f7975c02d3\": container with ID starting with 344eacf0d6239238cbf13b94f80d88a36589057a177a0f7a3d5629f7975c02d3 not found: ID does not exist" Feb 02 10:50:40 crc kubenswrapper[4782]: I0202 10:50:40.705205 4782 scope.go:117] "RemoveContainer" containerID="b6886dd3d4ca2fac733bc3cc105d17aff972828ba91abf5373769f272e5454ee" Feb 02 10:50:40 crc kubenswrapper[4782]: E0202 10:50:40.705512 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b6886dd3d4ca2fac733bc3cc105d17aff972828ba91abf5373769f272e5454ee\": container with ID starting with b6886dd3d4ca2fac733bc3cc105d17aff972828ba91abf5373769f272e5454ee not found: ID does not exist" containerID="b6886dd3d4ca2fac733bc3cc105d17aff972828ba91abf5373769f272e5454ee" Feb 02 10:50:40 crc kubenswrapper[4782]: I0202 10:50:40.705533 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b6886dd3d4ca2fac733bc3cc105d17aff972828ba91abf5373769f272e5454ee"} err="failed to get container status \"b6886dd3d4ca2fac733bc3cc105d17aff972828ba91abf5373769f272e5454ee\": rpc error: code = NotFound desc = could not find container \"b6886dd3d4ca2fac733bc3cc105d17aff972828ba91abf5373769f272e5454ee\": container with ID starting with b6886dd3d4ca2fac733bc3cc105d17aff972828ba91abf5373769f272e5454ee not found: ID does not exist" Feb 02 10:50:40 crc kubenswrapper[4782]: I0202 10:50:40.705572 4782 scope.go:117] "RemoveContainer" containerID="189c170f233964d95aebd085e016304a66076971b44f050d41d755305d11743f" Feb 02 10:50:40 crc kubenswrapper[4782]: E0202 10:50:40.705933 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"189c170f233964d95aebd085e016304a66076971b44f050d41d755305d11743f\": container with ID starting with 189c170f233964d95aebd085e016304a66076971b44f050d41d755305d11743f not found: ID does not exist" containerID="189c170f233964d95aebd085e016304a66076971b44f050d41d755305d11743f" Feb 02 10:50:40 crc kubenswrapper[4782]: I0202 10:50:40.705977 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"189c170f233964d95aebd085e016304a66076971b44f050d41d755305d11743f"} err="failed to get container status \"189c170f233964d95aebd085e016304a66076971b44f050d41d755305d11743f\": rpc error: code = NotFound desc = could not find container 
\"189c170f233964d95aebd085e016304a66076971b44f050d41d755305d11743f\": container with ID starting with 189c170f233964d95aebd085e016304a66076971b44f050d41d755305d11743f not found: ID does not exist" Feb 02 10:50:40 crc kubenswrapper[4782]: I0202 10:50:40.705993 4782 scope.go:117] "RemoveContainer" containerID="371a331afaa5bd7d90cd98ba3363865d54d8cac621ba37a40e51f0cddb4eb02b" Feb 02 10:50:40 crc kubenswrapper[4782]: E0202 10:50:40.706258 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"371a331afaa5bd7d90cd98ba3363865d54d8cac621ba37a40e51f0cddb4eb02b\": container with ID starting with 371a331afaa5bd7d90cd98ba3363865d54d8cac621ba37a40e51f0cddb4eb02b not found: ID does not exist" containerID="371a331afaa5bd7d90cd98ba3363865d54d8cac621ba37a40e51f0cddb4eb02b" Feb 02 10:50:40 crc kubenswrapper[4782]: I0202 10:50:40.706282 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"371a331afaa5bd7d90cd98ba3363865d54d8cac621ba37a40e51f0cddb4eb02b"} err="failed to get container status \"371a331afaa5bd7d90cd98ba3363865d54d8cac621ba37a40e51f0cddb4eb02b\": rpc error: code = NotFound desc = could not find container \"371a331afaa5bd7d90cd98ba3363865d54d8cac621ba37a40e51f0cddb4eb02b\": container with ID starting with 371a331afaa5bd7d90cd98ba3363865d54d8cac621ba37a40e51f0cddb4eb02b not found: ID does not exist" Feb 02 10:50:40 crc kubenswrapper[4782]: I0202 10:50:40.706325 4782 scope.go:117] "RemoveContainer" containerID="7728a9bcd84ff5e2fad4910e321a0461ab043b453efe05ddc36d018dba38315a" Feb 02 10:50:40 crc kubenswrapper[4782]: E0202 10:50:40.706537 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7728a9bcd84ff5e2fad4910e321a0461ab043b453efe05ddc36d018dba38315a\": container with ID starting with 7728a9bcd84ff5e2fad4910e321a0461ab043b453efe05ddc36d018dba38315a not found: ID does not exist" containerID="7728a9bcd84ff5e2fad4910e321a0461ab043b453efe05ddc36d018dba38315a" Feb 02 10:50:40 crc kubenswrapper[4782]: I0202 10:50:40.706579 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7728a9bcd84ff5e2fad4910e321a0461ab043b453efe05ddc36d018dba38315a"} err="failed to get container status \"7728a9bcd84ff5e2fad4910e321a0461ab043b453efe05ddc36d018dba38315a\": rpc error: code = NotFound desc = could not find container \"7728a9bcd84ff5e2fad4910e321a0461ab043b453efe05ddc36d018dba38315a\": container with ID starting with 7728a9bcd84ff5e2fad4910e321a0461ab043b453efe05ddc36d018dba38315a not found: ID does not exist" Feb 02 10:50:40 crc kubenswrapper[4782]: I0202 10:50:40.706603 4782 scope.go:117] "RemoveContainer" containerID="540e6587db608dea5be7483bfc3a5c4d44da51ecf3e9bfe12550aa8bf5a74fac" Feb 02 10:50:40 crc kubenswrapper[4782]: E0202 10:50:40.706916 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"540e6587db608dea5be7483bfc3a5c4d44da51ecf3e9bfe12550aa8bf5a74fac\": container with ID starting with 540e6587db608dea5be7483bfc3a5c4d44da51ecf3e9bfe12550aa8bf5a74fac not found: ID does not exist" containerID="540e6587db608dea5be7483bfc3a5c4d44da51ecf3e9bfe12550aa8bf5a74fac" Feb 02 10:50:40 crc kubenswrapper[4782]: I0202 10:50:40.706933 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"540e6587db608dea5be7483bfc3a5c4d44da51ecf3e9bfe12550aa8bf5a74fac"} 
err="failed to get container status \"540e6587db608dea5be7483bfc3a5c4d44da51ecf3e9bfe12550aa8bf5a74fac\": rpc error: code = NotFound desc = could not find container \"540e6587db608dea5be7483bfc3a5c4d44da51ecf3e9bfe12550aa8bf5a74fac\": container with ID starting with 540e6587db608dea5be7483bfc3a5c4d44da51ecf3e9bfe12550aa8bf5a74fac not found: ID does not exist" Feb 02 10:50:40 crc kubenswrapper[4782]: I0202 10:50:40.706964 4782 scope.go:117] "RemoveContainer" containerID="f7645944a876f5f8139960a66e91d19644df23b8bfbdb64c9d14eec7298ac150" Feb 02 10:50:40 crc kubenswrapper[4782]: E0202 10:50:40.707173 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f7645944a876f5f8139960a66e91d19644df23b8bfbdb64c9d14eec7298ac150\": container with ID starting with f7645944a876f5f8139960a66e91d19644df23b8bfbdb64c9d14eec7298ac150 not found: ID does not exist" containerID="f7645944a876f5f8139960a66e91d19644df23b8bfbdb64c9d14eec7298ac150" Feb 02 10:50:40 crc kubenswrapper[4782]: I0202 10:50:40.707192 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7645944a876f5f8139960a66e91d19644df23b8bfbdb64c9d14eec7298ac150"} err="failed to get container status \"f7645944a876f5f8139960a66e91d19644df23b8bfbdb64c9d14eec7298ac150\": rpc error: code = NotFound desc = could not find container \"f7645944a876f5f8139960a66e91d19644df23b8bfbdb64c9d14eec7298ac150\": container with ID starting with f7645944a876f5f8139960a66e91d19644df23b8bfbdb64c9d14eec7298ac150 not found: ID does not exist" Feb 02 10:50:40 crc kubenswrapper[4782]: I0202 10:50:40.707203 4782 scope.go:117] "RemoveContainer" containerID="c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343" Feb 02 10:50:40 crc kubenswrapper[4782]: E0202 10:50:40.707483 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343\": container with ID starting with c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343 not found: ID does not exist" containerID="c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343" Feb 02 10:50:40 crc kubenswrapper[4782]: I0202 10:50:40.707500 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343"} err="failed to get container status \"c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343\": rpc error: code = NotFound desc = could not find container \"c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343\": container with ID starting with c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343 not found: ID does not exist" Feb 02 10:50:40 crc kubenswrapper[4782]: I0202 10:50:40.707511 4782 scope.go:117] "RemoveContainer" containerID="81049d5e41dffab57f45208f4ffca5c6ef978d399f1eb8cf944ec8e64e71bc5b" Feb 02 10:50:40 crc kubenswrapper[4782]: I0202 10:50:40.707781 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"81049d5e41dffab57f45208f4ffca5c6ef978d399f1eb8cf944ec8e64e71bc5b"} err="failed to get container status \"81049d5e41dffab57f45208f4ffca5c6ef978d399f1eb8cf944ec8e64e71bc5b\": rpc error: code = NotFound desc = could not find container \"81049d5e41dffab57f45208f4ffca5c6ef978d399f1eb8cf944ec8e64e71bc5b\": container with ID starting with 
81049d5e41dffab57f45208f4ffca5c6ef978d399f1eb8cf944ec8e64e71bc5b not found: ID does not exist" Feb 02 10:50:40 crc kubenswrapper[4782]: I0202 10:50:40.707796 4782 scope.go:117] "RemoveContainer" containerID="344eacf0d6239238cbf13b94f80d88a36589057a177a0f7a3d5629f7975c02d3" Feb 02 10:50:40 crc kubenswrapper[4782]: I0202 10:50:40.708001 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"344eacf0d6239238cbf13b94f80d88a36589057a177a0f7a3d5629f7975c02d3"} err="failed to get container status \"344eacf0d6239238cbf13b94f80d88a36589057a177a0f7a3d5629f7975c02d3\": rpc error: code = NotFound desc = could not find container \"344eacf0d6239238cbf13b94f80d88a36589057a177a0f7a3d5629f7975c02d3\": container with ID starting with 344eacf0d6239238cbf13b94f80d88a36589057a177a0f7a3d5629f7975c02d3 not found: ID does not exist" Feb 02 10:50:40 crc kubenswrapper[4782]: I0202 10:50:40.708127 4782 scope.go:117] "RemoveContainer" containerID="b6886dd3d4ca2fac733bc3cc105d17aff972828ba91abf5373769f272e5454ee" Feb 02 10:50:40 crc kubenswrapper[4782]: I0202 10:50:40.708504 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b6886dd3d4ca2fac733bc3cc105d17aff972828ba91abf5373769f272e5454ee"} err="failed to get container status \"b6886dd3d4ca2fac733bc3cc105d17aff972828ba91abf5373769f272e5454ee\": rpc error: code = NotFound desc = could not find container \"b6886dd3d4ca2fac733bc3cc105d17aff972828ba91abf5373769f272e5454ee\": container with ID starting with b6886dd3d4ca2fac733bc3cc105d17aff972828ba91abf5373769f272e5454ee not found: ID does not exist" Feb 02 10:50:40 crc kubenswrapper[4782]: I0202 10:50:40.708520 4782 scope.go:117] "RemoveContainer" containerID="189c170f233964d95aebd085e016304a66076971b44f050d41d755305d11743f" Feb 02 10:50:40 crc kubenswrapper[4782]: I0202 10:50:40.708878 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"189c170f233964d95aebd085e016304a66076971b44f050d41d755305d11743f"} err="failed to get container status \"189c170f233964d95aebd085e016304a66076971b44f050d41d755305d11743f\": rpc error: code = NotFound desc = could not find container \"189c170f233964d95aebd085e016304a66076971b44f050d41d755305d11743f\": container with ID starting with 189c170f233964d95aebd085e016304a66076971b44f050d41d755305d11743f not found: ID does not exist" Feb 02 10:50:40 crc kubenswrapper[4782]: I0202 10:50:40.708915 4782 scope.go:117] "RemoveContainer" containerID="371a331afaa5bd7d90cd98ba3363865d54d8cac621ba37a40e51f0cddb4eb02b" Feb 02 10:50:40 crc kubenswrapper[4782]: I0202 10:50:40.709123 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"371a331afaa5bd7d90cd98ba3363865d54d8cac621ba37a40e51f0cddb4eb02b"} err="failed to get container status \"371a331afaa5bd7d90cd98ba3363865d54d8cac621ba37a40e51f0cddb4eb02b\": rpc error: code = NotFound desc = could not find container \"371a331afaa5bd7d90cd98ba3363865d54d8cac621ba37a40e51f0cddb4eb02b\": container with ID starting with 371a331afaa5bd7d90cd98ba3363865d54d8cac621ba37a40e51f0cddb4eb02b not found: ID does not exist" Feb 02 10:50:40 crc kubenswrapper[4782]: I0202 10:50:40.709209 4782 scope.go:117] "RemoveContainer" containerID="7728a9bcd84ff5e2fad4910e321a0461ab043b453efe05ddc36d018dba38315a" Feb 02 10:50:40 crc kubenswrapper[4782]: I0202 10:50:40.709511 4782 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"7728a9bcd84ff5e2fad4910e321a0461ab043b453efe05ddc36d018dba38315a"} err="failed to get container status \"7728a9bcd84ff5e2fad4910e321a0461ab043b453efe05ddc36d018dba38315a\": rpc error: code = NotFound desc = could not find container \"7728a9bcd84ff5e2fad4910e321a0461ab043b453efe05ddc36d018dba38315a\": container with ID starting with 7728a9bcd84ff5e2fad4910e321a0461ab043b453efe05ddc36d018dba38315a not found: ID does not exist" Feb 02 10:50:40 crc kubenswrapper[4782]: I0202 10:50:40.709593 4782 scope.go:117] "RemoveContainer" containerID="540e6587db608dea5be7483bfc3a5c4d44da51ecf3e9bfe12550aa8bf5a74fac" Feb 02 10:50:40 crc kubenswrapper[4782]: I0202 10:50:40.709869 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"540e6587db608dea5be7483bfc3a5c4d44da51ecf3e9bfe12550aa8bf5a74fac"} err="failed to get container status \"540e6587db608dea5be7483bfc3a5c4d44da51ecf3e9bfe12550aa8bf5a74fac\": rpc error: code = NotFound desc = could not find container \"540e6587db608dea5be7483bfc3a5c4d44da51ecf3e9bfe12550aa8bf5a74fac\": container with ID starting with 540e6587db608dea5be7483bfc3a5c4d44da51ecf3e9bfe12550aa8bf5a74fac not found: ID does not exist" Feb 02 10:50:40 crc kubenswrapper[4782]: I0202 10:50:40.709910 4782 scope.go:117] "RemoveContainer" containerID="f7645944a876f5f8139960a66e91d19644df23b8bfbdb64c9d14eec7298ac150" Feb 02 10:50:40 crc kubenswrapper[4782]: I0202 10:50:40.710160 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7645944a876f5f8139960a66e91d19644df23b8bfbdb64c9d14eec7298ac150"} err="failed to get container status \"f7645944a876f5f8139960a66e91d19644df23b8bfbdb64c9d14eec7298ac150\": rpc error: code = NotFound desc = could not find container \"f7645944a876f5f8139960a66e91d19644df23b8bfbdb64c9d14eec7298ac150\": container with ID starting with f7645944a876f5f8139960a66e91d19644df23b8bfbdb64c9d14eec7298ac150 not found: ID does not exist" Feb 02 10:50:40 crc kubenswrapper[4782]: I0202 10:50:40.710243 4782 scope.go:117] "RemoveContainer" containerID="c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343" Feb 02 10:50:40 crc kubenswrapper[4782]: I0202 10:50:40.710507 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343"} err="failed to get container status \"c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343\": rpc error: code = NotFound desc = could not find container \"c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343\": container with ID starting with c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343 not found: ID does not exist" Feb 02 10:50:40 crc kubenswrapper[4782]: I0202 10:50:40.710531 4782 scope.go:117] "RemoveContainer" containerID="81049d5e41dffab57f45208f4ffca5c6ef978d399f1eb8cf944ec8e64e71bc5b" Feb 02 10:50:40 crc kubenswrapper[4782]: I0202 10:50:40.710811 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"81049d5e41dffab57f45208f4ffca5c6ef978d399f1eb8cf944ec8e64e71bc5b"} err="failed to get container status \"81049d5e41dffab57f45208f4ffca5c6ef978d399f1eb8cf944ec8e64e71bc5b\": rpc error: code = NotFound desc = could not find container \"81049d5e41dffab57f45208f4ffca5c6ef978d399f1eb8cf944ec8e64e71bc5b\": container with ID starting with 81049d5e41dffab57f45208f4ffca5c6ef978d399f1eb8cf944ec8e64e71bc5b not found: ID does not exist" Feb 
02 10:50:40 crc kubenswrapper[4782]: I0202 10:50:40.710837 4782 scope.go:117] "RemoveContainer" containerID="344eacf0d6239238cbf13b94f80d88a36589057a177a0f7a3d5629f7975c02d3" Feb 02 10:50:40 crc kubenswrapper[4782]: I0202 10:50:40.711105 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"344eacf0d6239238cbf13b94f80d88a36589057a177a0f7a3d5629f7975c02d3"} err="failed to get container status \"344eacf0d6239238cbf13b94f80d88a36589057a177a0f7a3d5629f7975c02d3\": rpc error: code = NotFound desc = could not find container \"344eacf0d6239238cbf13b94f80d88a36589057a177a0f7a3d5629f7975c02d3\": container with ID starting with 344eacf0d6239238cbf13b94f80d88a36589057a177a0f7a3d5629f7975c02d3 not found: ID does not exist" Feb 02 10:50:40 crc kubenswrapper[4782]: I0202 10:50:40.711183 4782 scope.go:117] "RemoveContainer" containerID="b6886dd3d4ca2fac733bc3cc105d17aff972828ba91abf5373769f272e5454ee" Feb 02 10:50:40 crc kubenswrapper[4782]: I0202 10:50:40.711614 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b6886dd3d4ca2fac733bc3cc105d17aff972828ba91abf5373769f272e5454ee"} err="failed to get container status \"b6886dd3d4ca2fac733bc3cc105d17aff972828ba91abf5373769f272e5454ee\": rpc error: code = NotFound desc = could not find container \"b6886dd3d4ca2fac733bc3cc105d17aff972828ba91abf5373769f272e5454ee\": container with ID starting with b6886dd3d4ca2fac733bc3cc105d17aff972828ba91abf5373769f272e5454ee not found: ID does not exist" Feb 02 10:50:40 crc kubenswrapper[4782]: I0202 10:50:40.711666 4782 scope.go:117] "RemoveContainer" containerID="189c170f233964d95aebd085e016304a66076971b44f050d41d755305d11743f" Feb 02 10:50:40 crc kubenswrapper[4782]: I0202 10:50:40.711881 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"189c170f233964d95aebd085e016304a66076971b44f050d41d755305d11743f"} err="failed to get container status \"189c170f233964d95aebd085e016304a66076971b44f050d41d755305d11743f\": rpc error: code = NotFound desc = could not find container \"189c170f233964d95aebd085e016304a66076971b44f050d41d755305d11743f\": container with ID starting with 189c170f233964d95aebd085e016304a66076971b44f050d41d755305d11743f not found: ID does not exist" Feb 02 10:50:40 crc kubenswrapper[4782]: I0202 10:50:40.711947 4782 scope.go:117] "RemoveContainer" containerID="371a331afaa5bd7d90cd98ba3363865d54d8cac621ba37a40e51f0cddb4eb02b" Feb 02 10:50:40 crc kubenswrapper[4782]: I0202 10:50:40.712345 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"371a331afaa5bd7d90cd98ba3363865d54d8cac621ba37a40e51f0cddb4eb02b"} err="failed to get container status \"371a331afaa5bd7d90cd98ba3363865d54d8cac621ba37a40e51f0cddb4eb02b\": rpc error: code = NotFound desc = could not find container \"371a331afaa5bd7d90cd98ba3363865d54d8cac621ba37a40e51f0cddb4eb02b\": container with ID starting with 371a331afaa5bd7d90cd98ba3363865d54d8cac621ba37a40e51f0cddb4eb02b not found: ID does not exist" Feb 02 10:50:40 crc kubenswrapper[4782]: I0202 10:50:40.712360 4782 scope.go:117] "RemoveContainer" containerID="7728a9bcd84ff5e2fad4910e321a0461ab043b453efe05ddc36d018dba38315a" Feb 02 10:50:40 crc kubenswrapper[4782]: I0202 10:50:40.712618 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7728a9bcd84ff5e2fad4910e321a0461ab043b453efe05ddc36d018dba38315a"} err="failed to get container status 
\"7728a9bcd84ff5e2fad4910e321a0461ab043b453efe05ddc36d018dba38315a\": rpc error: code = NotFound desc = could not find container \"7728a9bcd84ff5e2fad4910e321a0461ab043b453efe05ddc36d018dba38315a\": container with ID starting with 7728a9bcd84ff5e2fad4910e321a0461ab043b453efe05ddc36d018dba38315a not found: ID does not exist" Feb 02 10:50:40 crc kubenswrapper[4782]: I0202 10:50:40.712672 4782 scope.go:117] "RemoveContainer" containerID="540e6587db608dea5be7483bfc3a5c4d44da51ecf3e9bfe12550aa8bf5a74fac" Feb 02 10:50:40 crc kubenswrapper[4782]: I0202 10:50:40.713355 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"540e6587db608dea5be7483bfc3a5c4d44da51ecf3e9bfe12550aa8bf5a74fac"} err="failed to get container status \"540e6587db608dea5be7483bfc3a5c4d44da51ecf3e9bfe12550aa8bf5a74fac\": rpc error: code = NotFound desc = could not find container \"540e6587db608dea5be7483bfc3a5c4d44da51ecf3e9bfe12550aa8bf5a74fac\": container with ID starting with 540e6587db608dea5be7483bfc3a5c4d44da51ecf3e9bfe12550aa8bf5a74fac not found: ID does not exist" Feb 02 10:50:40 crc kubenswrapper[4782]: I0202 10:50:40.713443 4782 scope.go:117] "RemoveContainer" containerID="f7645944a876f5f8139960a66e91d19644df23b8bfbdb64c9d14eec7298ac150" Feb 02 10:50:40 crc kubenswrapper[4782]: I0202 10:50:40.713937 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7645944a876f5f8139960a66e91d19644df23b8bfbdb64c9d14eec7298ac150"} err="failed to get container status \"f7645944a876f5f8139960a66e91d19644df23b8bfbdb64c9d14eec7298ac150\": rpc error: code = NotFound desc = could not find container \"f7645944a876f5f8139960a66e91d19644df23b8bfbdb64c9d14eec7298ac150\": container with ID starting with f7645944a876f5f8139960a66e91d19644df23b8bfbdb64c9d14eec7298ac150 not found: ID does not exist" Feb 02 10:50:40 crc kubenswrapper[4782]: I0202 10:50:40.713985 4782 scope.go:117] "RemoveContainer" containerID="c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343" Feb 02 10:50:40 crc kubenswrapper[4782]: I0202 10:50:40.714478 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343"} err="failed to get container status \"c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343\": rpc error: code = NotFound desc = could not find container \"c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343\": container with ID starting with c9fd886f1d358a2e18e02a8761c3ad75fb5a9ce2d67cd5760bb3dabc31df2343 not found: ID does not exist" Feb 02 10:50:40 crc kubenswrapper[4782]: I0202 10:50:40.828094 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2642ee4e-c16a-4e6e-9654-a67666f1bff8" path="/var/lib/kubelet/pods/2642ee4e-c16a-4e6e-9654-a67666f1bff8/volumes" Feb 02 10:50:41 crc kubenswrapper[4782]: I0202 10:50:41.561591 4782 generic.go:334] "Generic (PLEG): container finished" podID="2c8c681f-aeb6-4a76-ac30-9be1d209865c" containerID="5ce490632e0e45fd9754c7134d7ed0e71a0d338a0cf7b4881b6d2561654a0c06" exitCode=0 Feb 02 10:50:41 crc kubenswrapper[4782]: I0202 10:50:41.562116 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zlv8v" event={"ID":"2c8c681f-aeb6-4a76-ac30-9be1d209865c","Type":"ContainerDied","Data":"5ce490632e0e45fd9754c7134d7ed0e71a0d338a0cf7b4881b6d2561654a0c06"} Feb 02 10:50:42 crc kubenswrapper[4782]: I0202 10:50:42.570334 4782 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zlv8v" event={"ID":"2c8c681f-aeb6-4a76-ac30-9be1d209865c","Type":"ContainerStarted","Data":"aac2f263520be40956e4a6ea16a75574100028c7646dafc0b19277dc0ec03cd0"} Feb 02 10:50:42 crc kubenswrapper[4782]: I0202 10:50:42.570678 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zlv8v" event={"ID":"2c8c681f-aeb6-4a76-ac30-9be1d209865c","Type":"ContainerStarted","Data":"7343456d655a488d530594b46b16922b6875c61d12de8cf3cf349fffb8a151aa"} Feb 02 10:50:42 crc kubenswrapper[4782]: I0202 10:50:42.570688 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zlv8v" event={"ID":"2c8c681f-aeb6-4a76-ac30-9be1d209865c","Type":"ContainerStarted","Data":"f2bfd3065549730109c035db682820c5a4ab2a5beba6316120a466df0e63896f"} Feb 02 10:50:42 crc kubenswrapper[4782]: I0202 10:50:42.570699 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zlv8v" event={"ID":"2c8c681f-aeb6-4a76-ac30-9be1d209865c","Type":"ContainerStarted","Data":"5abcf4333986001688fc89c1cfa5270cd27600afc7c104877fd9352aa945c10a"} Feb 02 10:50:42 crc kubenswrapper[4782]: I0202 10:50:42.570707 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zlv8v" event={"ID":"2c8c681f-aeb6-4a76-ac30-9be1d209865c","Type":"ContainerStarted","Data":"884bb97ccd4cf2ced83d472174c0609fa08c31d64d6d89e21a927903b25e59eb"} Feb 02 10:50:42 crc kubenswrapper[4782]: I0202 10:50:42.570716 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zlv8v" event={"ID":"2c8c681f-aeb6-4a76-ac30-9be1d209865c","Type":"ContainerStarted","Data":"53aeaf82f022d482ed01e2e5a8f28a4d5c73360f84f6dd8469caa5c7683a0e7e"} Feb 02 10:50:45 crc kubenswrapper[4782]: I0202 10:50:45.592797 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zlv8v" event={"ID":"2c8c681f-aeb6-4a76-ac30-9be1d209865c","Type":"ContainerStarted","Data":"5756d801fb9c220d67548519b3924fcd0c55d6119c0c928303ef3b85ce7bcc14"} Feb 02 10:50:47 crc kubenswrapper[4782]: I0202 10:50:47.607786 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zlv8v" event={"ID":"2c8c681f-aeb6-4a76-ac30-9be1d209865c","Type":"ContainerStarted","Data":"1eecee32e70d66f7c75c72f0a87c431487d944eed52f36fe31af53a38518ddf7"} Feb 02 10:50:47 crc kubenswrapper[4782]: I0202 10:50:47.608225 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-zlv8v" Feb 02 10:50:47 crc kubenswrapper[4782]: I0202 10:50:47.649786 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-zlv8v" Feb 02 10:50:47 crc kubenswrapper[4782]: I0202 10:50:47.681316 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-zlv8v" podStartSLOduration=8.681290323 podStartE2EDuration="8.681290323s" podCreationTimestamp="2026-02-02 10:50:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:50:47.64485594 +0000 UTC m=+727.529048656" watchObservedRunningTime="2026-02-02 10:50:47.681290323 +0000 UTC m=+727.565483059" Feb 02 10:50:48 crc kubenswrapper[4782]: I0202 10:50:48.612927 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-ovn-kubernetes/ovnkube-node-zlv8v" Feb 02 10:50:48 crc kubenswrapper[4782]: I0202 10:50:48.613291 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-zlv8v" Feb 02 10:50:48 crc kubenswrapper[4782]: I0202 10:50:48.646446 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-zlv8v" Feb 02 10:50:53 crc kubenswrapper[4782]: I0202 10:50:53.820972 4782 scope.go:117] "RemoveContainer" containerID="4fde6ad054eb082a082a2907b3951afa7c993e3cd3c0464f51b2ceec9802143d" Feb 02 10:50:53 crc kubenswrapper[4782]: E0202 10:50:53.821936 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-fsqgq_openshift-multus(04d9744a-e730-45b4-9f0c-bbb5b02cd311)\"" pod="openshift-multus/multus-fsqgq" podUID="04d9744a-e730-45b4-9f0c-bbb5b02cd311" Feb 02 10:51:01 crc kubenswrapper[4782]: I0202 10:51:01.917146 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713l46mq"] Feb 02 10:51:01 crc kubenswrapper[4782]: I0202 10:51:01.918580 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713l46mq" Feb 02 10:51:01 crc kubenswrapper[4782]: I0202 10:51:01.920274 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 02 10:51:01 crc kubenswrapper[4782]: I0202 10:51:01.926595 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c86f666c-8701-45f8-a488-85b4052a02db-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713l46mq\" (UID: \"c86f666c-8701-45f8-a488-85b4052a02db\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713l46mq" Feb 02 10:51:01 crc kubenswrapper[4782]: I0202 10:51:01.926629 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c86f666c-8701-45f8-a488-85b4052a02db-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713l46mq\" (UID: \"c86f666c-8701-45f8-a488-85b4052a02db\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713l46mq" Feb 02 10:51:01 crc kubenswrapper[4782]: I0202 10:51:01.926792 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jshv8\" (UniqueName: \"kubernetes.io/projected/c86f666c-8701-45f8-a488-85b4052a02db-kube-api-access-jshv8\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713l46mq\" (UID: \"c86f666c-8701-45f8-a488-85b4052a02db\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713l46mq" Feb 02 10:51:01 crc kubenswrapper[4782]: I0202 10:51:01.930581 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713l46mq"] Feb 02 10:51:02 crc kubenswrapper[4782]: I0202 10:51:02.027964 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jshv8\" (UniqueName: \"kubernetes.io/projected/c86f666c-8701-45f8-a488-85b4052a02db-kube-api-access-jshv8\") 
pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713l46mq\" (UID: \"c86f666c-8701-45f8-a488-85b4052a02db\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713l46mq" Feb 02 10:51:02 crc kubenswrapper[4782]: I0202 10:51:02.028020 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c86f666c-8701-45f8-a488-85b4052a02db-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713l46mq\" (UID: \"c86f666c-8701-45f8-a488-85b4052a02db\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713l46mq" Feb 02 10:51:02 crc kubenswrapper[4782]: I0202 10:51:02.028036 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c86f666c-8701-45f8-a488-85b4052a02db-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713l46mq\" (UID: \"c86f666c-8701-45f8-a488-85b4052a02db\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713l46mq" Feb 02 10:51:02 crc kubenswrapper[4782]: I0202 10:51:02.028530 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c86f666c-8701-45f8-a488-85b4052a02db-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713l46mq\" (UID: \"c86f666c-8701-45f8-a488-85b4052a02db\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713l46mq" Feb 02 10:51:02 crc kubenswrapper[4782]: I0202 10:51:02.028711 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c86f666c-8701-45f8-a488-85b4052a02db-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713l46mq\" (UID: \"c86f666c-8701-45f8-a488-85b4052a02db\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713l46mq" Feb 02 10:51:02 crc kubenswrapper[4782]: I0202 10:51:02.046250 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jshv8\" (UniqueName: \"kubernetes.io/projected/c86f666c-8701-45f8-a488-85b4052a02db-kube-api-access-jshv8\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713l46mq\" (UID: \"c86f666c-8701-45f8-a488-85b4052a02db\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713l46mq" Feb 02 10:51:02 crc kubenswrapper[4782]: I0202 10:51:02.235877 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713l46mq" Feb 02 10:51:02 crc kubenswrapper[4782]: E0202 10:51:02.263333 4782 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713l46mq_openshift-marketplace_c86f666c-8701-45f8-a488-85b4052a02db_0(43d2604d5801646e2768a60b82a4271be395b500e931b67164b46554c5edc66d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 02 10:51:02 crc kubenswrapper[4782]: E0202 10:51:02.263417 4782 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713l46mq_openshift-marketplace_c86f666c-8701-45f8-a488-85b4052a02db_0(43d2604d5801646e2768a60b82a4271be395b500e931b67164b46554c5edc66d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713l46mq"
Feb 02 10:51:02 crc kubenswrapper[4782]: E0202 10:51:02.263437 4782 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713l46mq_openshift-marketplace_c86f666c-8701-45f8-a488-85b4052a02db_0(43d2604d5801646e2768a60b82a4271be395b500e931b67164b46554c5edc66d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713l46mq"
Feb 02 10:51:02 crc kubenswrapper[4782]: E0202 10:51:02.263474 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713l46mq_openshift-marketplace(c86f666c-8701-45f8-a488-85b4052a02db)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713l46mq_openshift-marketplace(c86f666c-8701-45f8-a488-85b4052a02db)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713l46mq_openshift-marketplace_c86f666c-8701-45f8-a488-85b4052a02db_0(43d2604d5801646e2768a60b82a4271be395b500e931b67164b46554c5edc66d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713l46mq" podUID="c86f666c-8701-45f8-a488-85b4052a02db"
Feb 02 10:51:02 crc kubenswrapper[4782]: I0202 10:51:02.703912 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713l46mq"
Feb 02 10:51:02 crc kubenswrapper[4782]: I0202 10:51:02.704619 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713l46mq"
Feb 02 10:51:02 crc kubenswrapper[4782]: E0202 10:51:02.723817 4782 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713l46mq_openshift-marketplace_c86f666c-8701-45f8-a488-85b4052a02db_0(f6d321dad0ec615e0a46fbd3ba18cce981142e98586d4612f492b03bb2e66d01): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Feb 02 10:51:02 crc kubenswrapper[4782]: E0202 10:51:02.723879 4782 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713l46mq_openshift-marketplace_c86f666c-8701-45f8-a488-85b4052a02db_0(f6d321dad0ec615e0a46fbd3ba18cce981142e98586d4612f492b03bb2e66d01): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713l46mq"
Feb 02 10:51:02 crc kubenswrapper[4782]: E0202 10:51:02.723910 4782 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713l46mq_openshift-marketplace_c86f666c-8701-45f8-a488-85b4052a02db_0(f6d321dad0ec615e0a46fbd3ba18cce981142e98586d4612f492b03bb2e66d01): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713l46mq"
Feb 02 10:51:02 crc kubenswrapper[4782]: E0202 10:51:02.723957 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713l46mq_openshift-marketplace(c86f666c-8701-45f8-a488-85b4052a02db)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713l46mq_openshift-marketplace(c86f666c-8701-45f8-a488-85b4052a02db)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713l46mq_openshift-marketplace_c86f666c-8701-45f8-a488-85b4052a02db_0(f6d321dad0ec615e0a46fbd3ba18cce981142e98586d4612f492b03bb2e66d01): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713l46mq" podUID="c86f666c-8701-45f8-a488-85b4052a02db"
Feb 02 10:51:07 crc kubenswrapper[4782]: I0202 10:51:07.820998 4782 scope.go:117] "RemoveContainer" containerID="4fde6ad054eb082a082a2907b3951afa7c993e3cd3c0464f51b2ceec9802143d"
Feb 02 10:51:08 crc kubenswrapper[4782]: I0202 10:51:08.739038 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-fsqgq_04d9744a-e730-45b4-9f0c-bbb5b02cd311/kube-multus/2.log"
Feb 02 10:51:08 crc kubenswrapper[4782]: I0202 10:51:08.739450 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-fsqgq" event={"ID":"04d9744a-e730-45b4-9f0c-bbb5b02cd311","Type":"ContainerStarted","Data":"ffd35c81492028424ba964f22ddd18326ce64e1a4f31005f5449e7599e8c0b1e"}
Feb 02 10:51:10 crc kubenswrapper[4782]: I0202 10:51:10.141330 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-zlv8v"
Feb 02 10:51:13 crc kubenswrapper[4782]: I0202 10:51:13.820438 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713l46mq"
Feb 02 10:51:13 crc kubenswrapper[4782]: I0202 10:51:13.821034 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713l46mq"
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713l46mq" Feb 02 10:51:13 crc kubenswrapper[4782]: I0202 10:51:13.995017 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713l46mq"] Feb 02 10:51:14 crc kubenswrapper[4782]: W0202 10:51:14.002863 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc86f666c_8701_45f8_a488_85b4052a02db.slice/crio-36b1e2f62b968997e86c599d96f46bcb33a3f8ae9a0f128c3f48a2d60d564e38 WatchSource:0}: Error finding container 36b1e2f62b968997e86c599d96f46bcb33a3f8ae9a0f128c3f48a2d60d564e38: Status 404 returned error can't find the container with id 36b1e2f62b968997e86c599d96f46bcb33a3f8ae9a0f128c3f48a2d60d564e38 Feb 02 10:51:14 crc kubenswrapper[4782]: I0202 10:51:14.783758 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713l46mq" event={"ID":"c86f666c-8701-45f8-a488-85b4052a02db","Type":"ContainerStarted","Data":"ba6e55759dbdc6b8180045454740c177751f8882ce6ee7422fdeb19d17838ef2"} Feb 02 10:51:14 crc kubenswrapper[4782]: I0202 10:51:14.783805 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713l46mq" event={"ID":"c86f666c-8701-45f8-a488-85b4052a02db","Type":"ContainerStarted","Data":"36b1e2f62b968997e86c599d96f46bcb33a3f8ae9a0f128c3f48a2d60d564e38"} Feb 02 10:51:15 crc kubenswrapper[4782]: I0202 10:51:15.790780 4782 generic.go:334] "Generic (PLEG): container finished" podID="c86f666c-8701-45f8-a488-85b4052a02db" containerID="ba6e55759dbdc6b8180045454740c177751f8882ce6ee7422fdeb19d17838ef2" exitCode=0 Feb 02 10:51:15 crc kubenswrapper[4782]: I0202 10:51:15.790849 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713l46mq" event={"ID":"c86f666c-8701-45f8-a488-85b4052a02db","Type":"ContainerDied","Data":"ba6e55759dbdc6b8180045454740c177751f8882ce6ee7422fdeb19d17838ef2"} Feb 02 10:51:17 crc kubenswrapper[4782]: I0202 10:51:17.806960 4782 generic.go:334] "Generic (PLEG): container finished" podID="c86f666c-8701-45f8-a488-85b4052a02db" containerID="629031d46e3e0518550e214f208634a173a18e028d11714a15d92236ab28b3b2" exitCode=0 Feb 02 10:51:17 crc kubenswrapper[4782]: I0202 10:51:17.807088 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713l46mq" event={"ID":"c86f666c-8701-45f8-a488-85b4052a02db","Type":"ContainerDied","Data":"629031d46e3e0518550e214f208634a173a18e028d11714a15d92236ab28b3b2"} Feb 02 10:51:18 crc kubenswrapper[4782]: I0202 10:51:18.818409 4782 generic.go:334] "Generic (PLEG): container finished" podID="c86f666c-8701-45f8-a488-85b4052a02db" containerID="f92f038867a3ca5b5c1ca0c6dfe77d3d8810d5279cc2137514daf33b95ebb100" exitCode=0 Feb 02 10:51:18 crc kubenswrapper[4782]: I0202 10:51:18.818490 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713l46mq" event={"ID":"c86f666c-8701-45f8-a488-85b4052a02db","Type":"ContainerDied","Data":"f92f038867a3ca5b5c1ca0c6dfe77d3d8810d5279cc2137514daf33b95ebb100"} Feb 02 10:51:20 crc kubenswrapper[4782]: I0202 10:51:20.050216 4782 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713l46mq" Feb 02 10:51:20 crc kubenswrapper[4782]: I0202 10:51:20.156002 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c86f666c-8701-45f8-a488-85b4052a02db-util\") pod \"c86f666c-8701-45f8-a488-85b4052a02db\" (UID: \"c86f666c-8701-45f8-a488-85b4052a02db\") " Feb 02 10:51:20 crc kubenswrapper[4782]: I0202 10:51:20.156043 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jshv8\" (UniqueName: \"kubernetes.io/projected/c86f666c-8701-45f8-a488-85b4052a02db-kube-api-access-jshv8\") pod \"c86f666c-8701-45f8-a488-85b4052a02db\" (UID: \"c86f666c-8701-45f8-a488-85b4052a02db\") " Feb 02 10:51:20 crc kubenswrapper[4782]: I0202 10:51:20.156070 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c86f666c-8701-45f8-a488-85b4052a02db-bundle\") pod \"c86f666c-8701-45f8-a488-85b4052a02db\" (UID: \"c86f666c-8701-45f8-a488-85b4052a02db\") " Feb 02 10:51:20 crc kubenswrapper[4782]: I0202 10:51:20.157612 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c86f666c-8701-45f8-a488-85b4052a02db-bundle" (OuterVolumeSpecName: "bundle") pod "c86f666c-8701-45f8-a488-85b4052a02db" (UID: "c86f666c-8701-45f8-a488-85b4052a02db"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:51:20 crc kubenswrapper[4782]: I0202 10:51:20.164439 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c86f666c-8701-45f8-a488-85b4052a02db-kube-api-access-jshv8" (OuterVolumeSpecName: "kube-api-access-jshv8") pod "c86f666c-8701-45f8-a488-85b4052a02db" (UID: "c86f666c-8701-45f8-a488-85b4052a02db"). InnerVolumeSpecName "kube-api-access-jshv8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:51:20 crc kubenswrapper[4782]: I0202 10:51:20.177268 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c86f666c-8701-45f8-a488-85b4052a02db-util" (OuterVolumeSpecName: "util") pod "c86f666c-8701-45f8-a488-85b4052a02db" (UID: "c86f666c-8701-45f8-a488-85b4052a02db"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:51:20 crc kubenswrapper[4782]: I0202 10:51:20.257208 4782 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c86f666c-8701-45f8-a488-85b4052a02db-util\") on node \"crc\" DevicePath \"\"" Feb 02 10:51:20 crc kubenswrapper[4782]: I0202 10:51:20.257241 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jshv8\" (UniqueName: \"kubernetes.io/projected/c86f666c-8701-45f8-a488-85b4052a02db-kube-api-access-jshv8\") on node \"crc\" DevicePath \"\"" Feb 02 10:51:20 crc kubenswrapper[4782]: I0202 10:51:20.257253 4782 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c86f666c-8701-45f8-a488-85b4052a02db-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 10:51:20 crc kubenswrapper[4782]: I0202 10:51:20.833383 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713l46mq" event={"ID":"c86f666c-8701-45f8-a488-85b4052a02db","Type":"ContainerDied","Data":"36b1e2f62b968997e86c599d96f46bcb33a3f8ae9a0f128c3f48a2d60d564e38"} Feb 02 10:51:20 crc kubenswrapper[4782]: I0202 10:51:20.833423 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="36b1e2f62b968997e86c599d96f46bcb33a3f8ae9a0f128c3f48a2d60d564e38" Feb 02 10:51:20 crc kubenswrapper[4782]: I0202 10:51:20.833496 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713l46mq" Feb 02 10:51:23 crc kubenswrapper[4782]: I0202 10:51:23.560590 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-pfjs6"] Feb 02 10:51:23 crc kubenswrapper[4782]: E0202 10:51:23.561064 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c86f666c-8701-45f8-a488-85b4052a02db" containerName="extract" Feb 02 10:51:23 crc kubenswrapper[4782]: I0202 10:51:23.561077 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="c86f666c-8701-45f8-a488-85b4052a02db" containerName="extract" Feb 02 10:51:23 crc kubenswrapper[4782]: E0202 10:51:23.561087 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c86f666c-8701-45f8-a488-85b4052a02db" containerName="util" Feb 02 10:51:23 crc kubenswrapper[4782]: I0202 10:51:23.561093 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="c86f666c-8701-45f8-a488-85b4052a02db" containerName="util" Feb 02 10:51:23 crc kubenswrapper[4782]: E0202 10:51:23.561105 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c86f666c-8701-45f8-a488-85b4052a02db" containerName="pull" Feb 02 10:51:23 crc kubenswrapper[4782]: I0202 10:51:23.561111 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="c86f666c-8701-45f8-a488-85b4052a02db" containerName="pull" Feb 02 10:51:23 crc kubenswrapper[4782]: I0202 10:51:23.561199 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="c86f666c-8701-45f8-a488-85b4052a02db" containerName="extract" Feb 02 10:51:23 crc kubenswrapper[4782]: I0202 10:51:23.561576 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-pfjs6" Feb 02 10:51:23 crc kubenswrapper[4782]: I0202 10:51:23.564728 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Feb 02 10:51:23 crc kubenswrapper[4782]: I0202 10:51:23.564828 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Feb 02 10:51:23 crc kubenswrapper[4782]: I0202 10:51:23.565091 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-gfhmk" Feb 02 10:51:23 crc kubenswrapper[4782]: I0202 10:51:23.583413 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-pfjs6"] Feb 02 10:51:23 crc kubenswrapper[4782]: I0202 10:51:23.592828 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8xwb\" (UniqueName: \"kubernetes.io/projected/371da653-9a38-424f-9069-14e251c45e1b-kube-api-access-r8xwb\") pod \"nmstate-operator-646758c888-pfjs6\" (UID: \"371da653-9a38-424f-9069-14e251c45e1b\") " pod="openshift-nmstate/nmstate-operator-646758c888-pfjs6" Feb 02 10:51:23 crc kubenswrapper[4782]: I0202 10:51:23.694374 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r8xwb\" (UniqueName: \"kubernetes.io/projected/371da653-9a38-424f-9069-14e251c45e1b-kube-api-access-r8xwb\") pod \"nmstate-operator-646758c888-pfjs6\" (UID: \"371da653-9a38-424f-9069-14e251c45e1b\") " pod="openshift-nmstate/nmstate-operator-646758c888-pfjs6" Feb 02 10:51:23 crc kubenswrapper[4782]: I0202 10:51:23.719652 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r8xwb\" (UniqueName: \"kubernetes.io/projected/371da653-9a38-424f-9069-14e251c45e1b-kube-api-access-r8xwb\") pod \"nmstate-operator-646758c888-pfjs6\" (UID: \"371da653-9a38-424f-9069-14e251c45e1b\") " pod="openshift-nmstate/nmstate-operator-646758c888-pfjs6" Feb 02 10:51:23 crc kubenswrapper[4782]: I0202 10:51:23.875382 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-pfjs6" Feb 02 10:51:24 crc kubenswrapper[4782]: I0202 10:51:24.100057 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-pfjs6"] Feb 02 10:51:24 crc kubenswrapper[4782]: I0202 10:51:24.857009 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-pfjs6" event={"ID":"371da653-9a38-424f-9069-14e251c45e1b","Type":"ContainerStarted","Data":"0f44dd75a2ed5b556d822e41b4a5da8f95665d539bdd870f6fbb7e6dcd51265b"} Feb 02 10:51:26 crc kubenswrapper[4782]: I0202 10:51:26.872598 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-pfjs6" event={"ID":"371da653-9a38-424f-9069-14e251c45e1b","Type":"ContainerStarted","Data":"762c53de67f3c15bcb882523e6fea111f9c62a41cff007eb7a56edcc79553d3c"} Feb 02 10:51:26 crc kubenswrapper[4782]: I0202 10:51:26.891416 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-646758c888-pfjs6" podStartSLOduration=1.649359283 podStartE2EDuration="3.891397749s" podCreationTimestamp="2026-02-02 10:51:23 +0000 UTC" firstStartedPulling="2026-02-02 10:51:24.105750175 +0000 UTC m=+763.989942891" lastFinishedPulling="2026-02-02 10:51:26.347788651 +0000 UTC m=+766.231981357" observedRunningTime="2026-02-02 10:51:26.88968543 +0000 UTC m=+766.773878146" watchObservedRunningTime="2026-02-02 10:51:26.891397749 +0000 UTC m=+766.775590465" Feb 02 10:51:31 crc kubenswrapper[4782]: I0202 10:51:31.645824 4782 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 02 10:51:32 crc kubenswrapper[4782]: I0202 10:51:32.335205 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-djhxz"] Feb 02 10:51:32 crc kubenswrapper[4782]: I0202 10:51:32.336373 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-djhxz" Feb 02 10:51:32 crc kubenswrapper[4782]: I0202 10:51:32.341791 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-jd8tl" Feb 02 10:51:32 crc kubenswrapper[4782]: I0202 10:51:32.343677 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-jpc2k"] Feb 02 10:51:32 crc kubenswrapper[4782]: I0202 10:51:32.344679 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-jpc2k" Feb 02 10:51:32 crc kubenswrapper[4782]: I0202 10:51:32.346401 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Feb 02 10:51:32 crc kubenswrapper[4782]: I0202 10:51:32.358491 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-jpc2k"] Feb 02 10:51:32 crc kubenswrapper[4782]: I0202 10:51:32.364397 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-djhxz"] Feb 02 10:51:32 crc kubenswrapper[4782]: I0202 10:51:32.397435 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-wjctm"] Feb 02 10:51:32 crc kubenswrapper[4782]: I0202 10:51:32.398796 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-wjctm" Feb 02 10:51:32 crc kubenswrapper[4782]: I0202 10:51:32.514468 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vs4dm\" (UniqueName: \"kubernetes.io/projected/cbf5ad9f-00e3-4b3b-b9b3-37b49e909c7a-kube-api-access-vs4dm\") pod \"nmstate-webhook-8474b5b9d8-jpc2k\" (UID: \"cbf5ad9f-00e3-4b3b-b9b3-37b49e909c7a\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-jpc2k" Feb 02 10:51:32 crc kubenswrapper[4782]: I0202 10:51:32.514586 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/3cf88c2a-32c2-4bd3-8832-b480fbfd1afe-dbus-socket\") pod \"nmstate-handler-wjctm\" (UID: \"3cf88c2a-32c2-4bd3-8832-b480fbfd1afe\") " pod="openshift-nmstate/nmstate-handler-wjctm" Feb 02 10:51:32 crc kubenswrapper[4782]: I0202 10:51:32.514708 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltvwh\" (UniqueName: \"kubernetes.io/projected/a30862c2-daa1-42d6-8815-aabc8387e789-kube-api-access-ltvwh\") pod \"nmstate-metrics-54757c584b-djhxz\" (UID: \"a30862c2-daa1-42d6-8815-aabc8387e789\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-djhxz" Feb 02 10:51:32 crc kubenswrapper[4782]: I0202 10:51:32.514749 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7m98t\" (UniqueName: \"kubernetes.io/projected/3cf88c2a-32c2-4bd3-8832-b480fbfd1afe-kube-api-access-7m98t\") pod \"nmstate-handler-wjctm\" (UID: \"3cf88c2a-32c2-4bd3-8832-b480fbfd1afe\") " pod="openshift-nmstate/nmstate-handler-wjctm" Feb 02 10:51:32 crc kubenswrapper[4782]: I0202 10:51:32.514792 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/3cf88c2a-32c2-4bd3-8832-b480fbfd1afe-nmstate-lock\") pod \"nmstate-handler-wjctm\" (UID: \"3cf88c2a-32c2-4bd3-8832-b480fbfd1afe\") " pod="openshift-nmstate/nmstate-handler-wjctm" Feb 02 10:51:32 crc kubenswrapper[4782]: I0202 10:51:32.514866 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/3cf88c2a-32c2-4bd3-8832-b480fbfd1afe-ovs-socket\") pod \"nmstate-handler-wjctm\" (UID: \"3cf88c2a-32c2-4bd3-8832-b480fbfd1afe\") " pod="openshift-nmstate/nmstate-handler-wjctm" Feb 02 10:51:32 crc kubenswrapper[4782]: I0202 10:51:32.514912 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/cbf5ad9f-00e3-4b3b-b9b3-37b49e909c7a-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-jpc2k\" (UID: \"cbf5ad9f-00e3-4b3b-b9b3-37b49e909c7a\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-jpc2k" Feb 02 10:51:32 crc kubenswrapper[4782]: I0202 10:51:32.528129 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-5zmc7"] Feb 02 10:51:32 crc kubenswrapper[4782]: I0202 10:51:32.528957 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-5zmc7" Feb 02 10:51:32 crc kubenswrapper[4782]: I0202 10:51:32.531322 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-rrkgq" Feb 02 10:51:32 crc kubenswrapper[4782]: I0202 10:51:32.531322 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Feb 02 10:51:32 crc kubenswrapper[4782]: I0202 10:51:32.531789 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Feb 02 10:51:32 crc kubenswrapper[4782]: I0202 10:51:32.571485 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-5zmc7"] Feb 02 10:51:32 crc kubenswrapper[4782]: I0202 10:51:32.615996 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/3cf88c2a-32c2-4bd3-8832-b480fbfd1afe-ovs-socket\") pod \"nmstate-handler-wjctm\" (UID: \"3cf88c2a-32c2-4bd3-8832-b480fbfd1afe\") " pod="openshift-nmstate/nmstate-handler-wjctm" Feb 02 10:51:32 crc kubenswrapper[4782]: I0202 10:51:32.616060 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wz8rk\" (UniqueName: \"kubernetes.io/projected/00048f8e-9669-413d-b215-6a787d5270c0-kube-api-access-wz8rk\") pod \"nmstate-console-plugin-7754f76f8b-5zmc7\" (UID: \"00048f8e-9669-413d-b215-6a787d5270c0\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-5zmc7" Feb 02 10:51:32 crc kubenswrapper[4782]: I0202 10:51:32.616094 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/00048f8e-9669-413d-b215-6a787d5270c0-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-5zmc7\" (UID: \"00048f8e-9669-413d-b215-6a787d5270c0\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-5zmc7" Feb 02 10:51:32 crc kubenswrapper[4782]: I0202 10:51:32.616120 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/cbf5ad9f-00e3-4b3b-b9b3-37b49e909c7a-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-jpc2k\" (UID: \"cbf5ad9f-00e3-4b3b-b9b3-37b49e909c7a\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-jpc2k" Feb 02 10:51:32 crc kubenswrapper[4782]: I0202 10:51:32.616155 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vs4dm\" (UniqueName: \"kubernetes.io/projected/cbf5ad9f-00e3-4b3b-b9b3-37b49e909c7a-kube-api-access-vs4dm\") pod \"nmstate-webhook-8474b5b9d8-jpc2k\" (UID: \"cbf5ad9f-00e3-4b3b-b9b3-37b49e909c7a\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-jpc2k" Feb 02 10:51:32 crc kubenswrapper[4782]: I0202 10:51:32.616183 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/3cf88c2a-32c2-4bd3-8832-b480fbfd1afe-dbus-socket\") pod \"nmstate-handler-wjctm\" (UID: \"3cf88c2a-32c2-4bd3-8832-b480fbfd1afe\") " pod="openshift-nmstate/nmstate-handler-wjctm" Feb 02 10:51:32 crc kubenswrapper[4782]: I0202 10:51:32.616234 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ltvwh\" (UniqueName: \"kubernetes.io/projected/a30862c2-daa1-42d6-8815-aabc8387e789-kube-api-access-ltvwh\") pod \"nmstate-metrics-54757c584b-djhxz\" (UID: 
\"a30862c2-daa1-42d6-8815-aabc8387e789\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-djhxz" Feb 02 10:51:32 crc kubenswrapper[4782]: I0202 10:51:32.616263 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/00048f8e-9669-413d-b215-6a787d5270c0-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-5zmc7\" (UID: \"00048f8e-9669-413d-b215-6a787d5270c0\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-5zmc7" Feb 02 10:51:32 crc kubenswrapper[4782]: I0202 10:51:32.616293 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7m98t\" (UniqueName: \"kubernetes.io/projected/3cf88c2a-32c2-4bd3-8832-b480fbfd1afe-kube-api-access-7m98t\") pod \"nmstate-handler-wjctm\" (UID: \"3cf88c2a-32c2-4bd3-8832-b480fbfd1afe\") " pod="openshift-nmstate/nmstate-handler-wjctm" Feb 02 10:51:32 crc kubenswrapper[4782]: I0202 10:51:32.616322 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/3cf88c2a-32c2-4bd3-8832-b480fbfd1afe-nmstate-lock\") pod \"nmstate-handler-wjctm\" (UID: \"3cf88c2a-32c2-4bd3-8832-b480fbfd1afe\") " pod="openshift-nmstate/nmstate-handler-wjctm" Feb 02 10:51:32 crc kubenswrapper[4782]: I0202 10:51:32.616396 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/3cf88c2a-32c2-4bd3-8832-b480fbfd1afe-nmstate-lock\") pod \"nmstate-handler-wjctm\" (UID: \"3cf88c2a-32c2-4bd3-8832-b480fbfd1afe\") " pod="openshift-nmstate/nmstate-handler-wjctm" Feb 02 10:51:32 crc kubenswrapper[4782]: I0202 10:51:32.616478 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/3cf88c2a-32c2-4bd3-8832-b480fbfd1afe-ovs-socket\") pod \"nmstate-handler-wjctm\" (UID: \"3cf88c2a-32c2-4bd3-8832-b480fbfd1afe\") " pod="openshift-nmstate/nmstate-handler-wjctm" Feb 02 10:51:32 crc kubenswrapper[4782]: E0202 10:51:32.616572 4782 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Feb 02 10:51:32 crc kubenswrapper[4782]: E0202 10:51:32.616650 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cbf5ad9f-00e3-4b3b-b9b3-37b49e909c7a-tls-key-pair podName:cbf5ad9f-00e3-4b3b-b9b3-37b49e909c7a nodeName:}" failed. No retries permitted until 2026-02-02 10:51:33.116602271 +0000 UTC m=+773.000794987 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/cbf5ad9f-00e3-4b3b-b9b3-37b49e909c7a-tls-key-pair") pod "nmstate-webhook-8474b5b9d8-jpc2k" (UID: "cbf5ad9f-00e3-4b3b-b9b3-37b49e909c7a") : secret "openshift-nmstate-webhook" not found Feb 02 10:51:32 crc kubenswrapper[4782]: I0202 10:51:32.617270 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/3cf88c2a-32c2-4bd3-8832-b480fbfd1afe-dbus-socket\") pod \"nmstate-handler-wjctm\" (UID: \"3cf88c2a-32c2-4bd3-8832-b480fbfd1afe\") " pod="openshift-nmstate/nmstate-handler-wjctm" Feb 02 10:51:32 crc kubenswrapper[4782]: I0202 10:51:32.637704 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7m98t\" (UniqueName: \"kubernetes.io/projected/3cf88c2a-32c2-4bd3-8832-b480fbfd1afe-kube-api-access-7m98t\") pod \"nmstate-handler-wjctm\" (UID: \"3cf88c2a-32c2-4bd3-8832-b480fbfd1afe\") " pod="openshift-nmstate/nmstate-handler-wjctm" Feb 02 10:51:32 crc kubenswrapper[4782]: I0202 10:51:32.659462 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vs4dm\" (UniqueName: \"kubernetes.io/projected/cbf5ad9f-00e3-4b3b-b9b3-37b49e909c7a-kube-api-access-vs4dm\") pod \"nmstate-webhook-8474b5b9d8-jpc2k\" (UID: \"cbf5ad9f-00e3-4b3b-b9b3-37b49e909c7a\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-jpc2k" Feb 02 10:51:32 crc kubenswrapper[4782]: I0202 10:51:32.664394 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ltvwh\" (UniqueName: \"kubernetes.io/projected/a30862c2-daa1-42d6-8815-aabc8387e789-kube-api-access-ltvwh\") pod \"nmstate-metrics-54757c584b-djhxz\" (UID: \"a30862c2-daa1-42d6-8815-aabc8387e789\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-djhxz" Feb 02 10:51:32 crc kubenswrapper[4782]: I0202 10:51:32.668590 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-djhxz" Feb 02 10:51:32 crc kubenswrapper[4782]: I0202 10:51:32.716809 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wz8rk\" (UniqueName: \"kubernetes.io/projected/00048f8e-9669-413d-b215-6a787d5270c0-kube-api-access-wz8rk\") pod \"nmstate-console-plugin-7754f76f8b-5zmc7\" (UID: \"00048f8e-9669-413d-b215-6a787d5270c0\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-5zmc7" Feb 02 10:51:32 crc kubenswrapper[4782]: I0202 10:51:32.716853 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/00048f8e-9669-413d-b215-6a787d5270c0-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-5zmc7\" (UID: \"00048f8e-9669-413d-b215-6a787d5270c0\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-5zmc7" Feb 02 10:51:32 crc kubenswrapper[4782]: I0202 10:51:32.716918 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/00048f8e-9669-413d-b215-6a787d5270c0-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-5zmc7\" (UID: \"00048f8e-9669-413d-b215-6a787d5270c0\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-5zmc7" Feb 02 10:51:32 crc kubenswrapper[4782]: I0202 10:51:32.718133 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/00048f8e-9669-413d-b215-6a787d5270c0-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-5zmc7\" (UID: \"00048f8e-9669-413d-b215-6a787d5270c0\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-5zmc7" Feb 02 10:51:32 crc kubenswrapper[4782]: I0202 10:51:32.720381 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/00048f8e-9669-413d-b215-6a787d5270c0-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-5zmc7\" (UID: \"00048f8e-9669-413d-b215-6a787d5270c0\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-5zmc7" Feb 02 10:51:32 crc kubenswrapper[4782]: I0202 10:51:32.729184 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-wjctm" Feb 02 10:51:32 crc kubenswrapper[4782]: I0202 10:51:32.741246 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wz8rk\" (UniqueName: \"kubernetes.io/projected/00048f8e-9669-413d-b215-6a787d5270c0-kube-api-access-wz8rk\") pod \"nmstate-console-plugin-7754f76f8b-5zmc7\" (UID: \"00048f8e-9669-413d-b215-6a787d5270c0\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-5zmc7" Feb 02 10:51:32 crc kubenswrapper[4782]: I0202 10:51:32.772994 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-595664cbc7-qhdgt"] Feb 02 10:51:32 crc kubenswrapper[4782]: I0202 10:51:32.773763 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-595664cbc7-qhdgt" Feb 02 10:51:32 crc kubenswrapper[4782]: I0202 10:51:32.796577 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-595664cbc7-qhdgt"] Feb 02 10:51:32 crc kubenswrapper[4782]: I0202 10:51:32.817935 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7cbf3206-6442-45fd-a75d-3d47f579b2f7-console-oauth-config\") pod \"console-595664cbc7-qhdgt\" (UID: \"7cbf3206-6442-45fd-a75d-3d47f579b2f7\") " pod="openshift-console/console-595664cbc7-qhdgt" Feb 02 10:51:32 crc kubenswrapper[4782]: I0202 10:51:32.817990 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7cbf3206-6442-45fd-a75d-3d47f579b2f7-service-ca\") pod \"console-595664cbc7-qhdgt\" (UID: \"7cbf3206-6442-45fd-a75d-3d47f579b2f7\") " pod="openshift-console/console-595664cbc7-qhdgt" Feb 02 10:51:32 crc kubenswrapper[4782]: I0202 10:51:32.818020 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7cbf3206-6442-45fd-a75d-3d47f579b2f7-console-config\") pod \"console-595664cbc7-qhdgt\" (UID: \"7cbf3206-6442-45fd-a75d-3d47f579b2f7\") " pod="openshift-console/console-595664cbc7-qhdgt" Feb 02 10:51:32 crc kubenswrapper[4782]: I0202 10:51:32.818057 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8q8kg\" (UniqueName: \"kubernetes.io/projected/7cbf3206-6442-45fd-a75d-3d47f579b2f7-kube-api-access-8q8kg\") pod \"console-595664cbc7-qhdgt\" (UID: \"7cbf3206-6442-45fd-a75d-3d47f579b2f7\") " pod="openshift-console/console-595664cbc7-qhdgt" Feb 02 10:51:32 crc kubenswrapper[4782]: I0202 10:51:32.818078 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7cbf3206-6442-45fd-a75d-3d47f579b2f7-console-serving-cert\") pod \"console-595664cbc7-qhdgt\" (UID: \"7cbf3206-6442-45fd-a75d-3d47f579b2f7\") " pod="openshift-console/console-595664cbc7-qhdgt" Feb 02 10:51:32 crc kubenswrapper[4782]: I0202 10:51:32.818131 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7cbf3206-6442-45fd-a75d-3d47f579b2f7-oauth-serving-cert\") pod \"console-595664cbc7-qhdgt\" (UID: \"7cbf3206-6442-45fd-a75d-3d47f579b2f7\") " pod="openshift-console/console-595664cbc7-qhdgt" Feb 02 10:51:32 crc kubenswrapper[4782]: I0202 10:51:32.818153 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7cbf3206-6442-45fd-a75d-3d47f579b2f7-trusted-ca-bundle\") pod \"console-595664cbc7-qhdgt\" (UID: \"7cbf3206-6442-45fd-a75d-3d47f579b2f7\") " pod="openshift-console/console-595664cbc7-qhdgt" Feb 02 10:51:32 crc kubenswrapper[4782]: I0202 10:51:32.847308 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-5zmc7" Feb 02 10:51:32 crc kubenswrapper[4782]: I0202 10:51:32.918805 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-wjctm" event={"ID":"3cf88c2a-32c2-4bd3-8832-b480fbfd1afe","Type":"ContainerStarted","Data":"a71dcffdeb4c13a3e0e1af77aade9350bdb5e901d02c9c27aef428120219f775"} Feb 02 10:51:32 crc kubenswrapper[4782]: I0202 10:51:32.918836 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7cbf3206-6442-45fd-a75d-3d47f579b2f7-service-ca\") pod \"console-595664cbc7-qhdgt\" (UID: \"7cbf3206-6442-45fd-a75d-3d47f579b2f7\") " pod="openshift-console/console-595664cbc7-qhdgt" Feb 02 10:51:32 crc kubenswrapper[4782]: I0202 10:51:32.918868 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7cbf3206-6442-45fd-a75d-3d47f579b2f7-console-config\") pod \"console-595664cbc7-qhdgt\" (UID: \"7cbf3206-6442-45fd-a75d-3d47f579b2f7\") " pod="openshift-console/console-595664cbc7-qhdgt" Feb 02 10:51:32 crc kubenswrapper[4782]: I0202 10:51:32.918899 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8q8kg\" (UniqueName: \"kubernetes.io/projected/7cbf3206-6442-45fd-a75d-3d47f579b2f7-kube-api-access-8q8kg\") pod \"console-595664cbc7-qhdgt\" (UID: \"7cbf3206-6442-45fd-a75d-3d47f579b2f7\") " pod="openshift-console/console-595664cbc7-qhdgt" Feb 02 10:51:32 crc kubenswrapper[4782]: I0202 10:51:32.918918 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7cbf3206-6442-45fd-a75d-3d47f579b2f7-console-serving-cert\") pod \"console-595664cbc7-qhdgt\" (UID: \"7cbf3206-6442-45fd-a75d-3d47f579b2f7\") " pod="openshift-console/console-595664cbc7-qhdgt" Feb 02 10:51:32 crc kubenswrapper[4782]: I0202 10:51:32.918959 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7cbf3206-6442-45fd-a75d-3d47f579b2f7-oauth-serving-cert\") pod \"console-595664cbc7-qhdgt\" (UID: \"7cbf3206-6442-45fd-a75d-3d47f579b2f7\") " pod="openshift-console/console-595664cbc7-qhdgt" Feb 02 10:51:32 crc kubenswrapper[4782]: I0202 10:51:32.918975 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7cbf3206-6442-45fd-a75d-3d47f579b2f7-trusted-ca-bundle\") pod \"console-595664cbc7-qhdgt\" (UID: \"7cbf3206-6442-45fd-a75d-3d47f579b2f7\") " pod="openshift-console/console-595664cbc7-qhdgt" Feb 02 10:51:32 crc kubenswrapper[4782]: I0202 10:51:32.919007 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7cbf3206-6442-45fd-a75d-3d47f579b2f7-console-oauth-config\") pod \"console-595664cbc7-qhdgt\" (UID: \"7cbf3206-6442-45fd-a75d-3d47f579b2f7\") " pod="openshift-console/console-595664cbc7-qhdgt" Feb 02 10:51:32 crc kubenswrapper[4782]: I0202 10:51:32.919758 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7cbf3206-6442-45fd-a75d-3d47f579b2f7-service-ca\") pod \"console-595664cbc7-qhdgt\" (UID: \"7cbf3206-6442-45fd-a75d-3d47f579b2f7\") " pod="openshift-console/console-595664cbc7-qhdgt" Feb 02 10:51:32 
crc kubenswrapper[4782]: I0202 10:51:32.921235 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7cbf3206-6442-45fd-a75d-3d47f579b2f7-console-config\") pod \"console-595664cbc7-qhdgt\" (UID: \"7cbf3206-6442-45fd-a75d-3d47f579b2f7\") " pod="openshift-console/console-595664cbc7-qhdgt" Feb 02 10:51:32 crc kubenswrapper[4782]: I0202 10:51:32.921718 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7cbf3206-6442-45fd-a75d-3d47f579b2f7-trusted-ca-bundle\") pod \"console-595664cbc7-qhdgt\" (UID: \"7cbf3206-6442-45fd-a75d-3d47f579b2f7\") " pod="openshift-console/console-595664cbc7-qhdgt" Feb 02 10:51:32 crc kubenswrapper[4782]: I0202 10:51:32.922243 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7cbf3206-6442-45fd-a75d-3d47f579b2f7-oauth-serving-cert\") pod \"console-595664cbc7-qhdgt\" (UID: \"7cbf3206-6442-45fd-a75d-3d47f579b2f7\") " pod="openshift-console/console-595664cbc7-qhdgt" Feb 02 10:51:32 crc kubenswrapper[4782]: I0202 10:51:32.924791 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7cbf3206-6442-45fd-a75d-3d47f579b2f7-console-serving-cert\") pod \"console-595664cbc7-qhdgt\" (UID: \"7cbf3206-6442-45fd-a75d-3d47f579b2f7\") " pod="openshift-console/console-595664cbc7-qhdgt" Feb 02 10:51:32 crc kubenswrapper[4782]: I0202 10:51:32.925573 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7cbf3206-6442-45fd-a75d-3d47f579b2f7-console-oauth-config\") pod \"console-595664cbc7-qhdgt\" (UID: \"7cbf3206-6442-45fd-a75d-3d47f579b2f7\") " pod="openshift-console/console-595664cbc7-qhdgt" Feb 02 10:51:32 crc kubenswrapper[4782]: I0202 10:51:32.947682 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8q8kg\" (UniqueName: \"kubernetes.io/projected/7cbf3206-6442-45fd-a75d-3d47f579b2f7-kube-api-access-8q8kg\") pod \"console-595664cbc7-qhdgt\" (UID: \"7cbf3206-6442-45fd-a75d-3d47f579b2f7\") " pod="openshift-console/console-595664cbc7-qhdgt" Feb 02 10:51:32 crc kubenswrapper[4782]: I0202 10:51:32.993095 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-djhxz"] Feb 02 10:51:32 crc kubenswrapper[4782]: W0202 10:51:32.998185 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda30862c2_daa1_42d6_8815_aabc8387e789.slice/crio-71a5f8b358766d118b5aac82d8bcde713b4175a9c0fcc5d016b5234877b8cc1c WatchSource:0}: Error finding container 71a5f8b358766d118b5aac82d8bcde713b4175a9c0fcc5d016b5234877b8cc1c: Status 404 returned error can't find the container with id 71a5f8b358766d118b5aac82d8bcde713b4175a9c0fcc5d016b5234877b8cc1c Feb 02 10:51:33 crc kubenswrapper[4782]: I0202 10:51:33.092597 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-595664cbc7-qhdgt" Feb 02 10:51:33 crc kubenswrapper[4782]: I0202 10:51:33.120813 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/cbf5ad9f-00e3-4b3b-b9b3-37b49e909c7a-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-jpc2k\" (UID: \"cbf5ad9f-00e3-4b3b-b9b3-37b49e909c7a\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-jpc2k" Feb 02 10:51:33 crc kubenswrapper[4782]: I0202 10:51:33.124680 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/cbf5ad9f-00e3-4b3b-b9b3-37b49e909c7a-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-jpc2k\" (UID: \"cbf5ad9f-00e3-4b3b-b9b3-37b49e909c7a\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-jpc2k" Feb 02 10:51:33 crc kubenswrapper[4782]: I0202 10:51:33.261255 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-595664cbc7-qhdgt"] Feb 02 10:51:33 crc kubenswrapper[4782]: W0202 10:51:33.267766 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7cbf3206_6442_45fd_a75d_3d47f579b2f7.slice/crio-de54ab543e77af4d8099f55bbc661ee9471c8d6ee4ed277179ae5459361f9bc6 WatchSource:0}: Error finding container de54ab543e77af4d8099f55bbc661ee9471c8d6ee4ed277179ae5459361f9bc6: Status 404 returned error can't find the container with id de54ab543e77af4d8099f55bbc661ee9471c8d6ee4ed277179ae5459361f9bc6 Feb 02 10:51:33 crc kubenswrapper[4782]: I0202 10:51:33.280277 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-jpc2k" Feb 02 10:51:33 crc kubenswrapper[4782]: I0202 10:51:33.303177 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-5zmc7"] Feb 02 10:51:33 crc kubenswrapper[4782]: W0202 10:51:33.324292 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod00048f8e_9669_413d_b215_6a787d5270c0.slice/crio-fed3a1813de494bf11a79edd6889849ec4a50320e8231abb042467d1e2a2f570 WatchSource:0}: Error finding container fed3a1813de494bf11a79edd6889849ec4a50320e8231abb042467d1e2a2f570: Status 404 returned error can't find the container with id fed3a1813de494bf11a79edd6889849ec4a50320e8231abb042467d1e2a2f570 Feb 02 10:51:33 crc kubenswrapper[4782]: I0202 10:51:33.477766 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-jpc2k"] Feb 02 10:51:33 crc kubenswrapper[4782]: I0202 10:51:33.924477 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-5zmc7" event={"ID":"00048f8e-9669-413d-b215-6a787d5270c0","Type":"ContainerStarted","Data":"fed3a1813de494bf11a79edd6889849ec4a50320e8231abb042467d1e2a2f570"} Feb 02 10:51:33 crc kubenswrapper[4782]: I0202 10:51:33.925506 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-djhxz" event={"ID":"a30862c2-daa1-42d6-8815-aabc8387e789","Type":"ContainerStarted","Data":"71a5f8b358766d118b5aac82d8bcde713b4175a9c0fcc5d016b5234877b8cc1c"} Feb 02 10:51:33 crc kubenswrapper[4782]: I0202 10:51:33.927526 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-595664cbc7-qhdgt" 
event={"ID":"7cbf3206-6442-45fd-a75d-3d47f579b2f7","Type":"ContainerStarted","Data":"e52b9f87e0d0b14037a16d805a0678c32be7d06872e8d1b75c34a6183d08595d"} Feb 02 10:51:33 crc kubenswrapper[4782]: I0202 10:51:33.927573 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-595664cbc7-qhdgt" event={"ID":"7cbf3206-6442-45fd-a75d-3d47f579b2f7","Type":"ContainerStarted","Data":"de54ab543e77af4d8099f55bbc661ee9471c8d6ee4ed277179ae5459361f9bc6"} Feb 02 10:51:33 crc kubenswrapper[4782]: I0202 10:51:33.928527 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-jpc2k" event={"ID":"cbf5ad9f-00e3-4b3b-b9b3-37b49e909c7a","Type":"ContainerStarted","Data":"436d0675473f76cfa68423a2c317248aedd4d06aca69b5fdd653c1d1a7cf4a9b"} Feb 02 10:51:33 crc kubenswrapper[4782]: I0202 10:51:33.954252 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-595664cbc7-qhdgt" podStartSLOduration=1.954235054 podStartE2EDuration="1.954235054s" podCreationTimestamp="2026-02-02 10:51:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:51:33.951535756 +0000 UTC m=+773.835728472" watchObservedRunningTime="2026-02-02 10:51:33.954235054 +0000 UTC m=+773.838427800" Feb 02 10:51:36 crc kubenswrapper[4782]: I0202 10:51:36.957570 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-5zmc7" event={"ID":"00048f8e-9669-413d-b215-6a787d5270c0","Type":"ContainerStarted","Data":"6f0690fcc12f010bcece1a95166689701dd5002a69db0b41dc27fd99226f8a8d"} Feb 02 10:51:36 crc kubenswrapper[4782]: I0202 10:51:36.960314 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-wjctm" event={"ID":"3cf88c2a-32c2-4bd3-8832-b480fbfd1afe","Type":"ContainerStarted","Data":"e7123dfee0613507431c34aae9d14d2379c0940c74a78ca7a73bae106eee75d0"} Feb 02 10:51:36 crc kubenswrapper[4782]: I0202 10:51:36.960446 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-wjctm" Feb 02 10:51:36 crc kubenswrapper[4782]: I0202 10:51:36.962119 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-djhxz" event={"ID":"a30862c2-daa1-42d6-8815-aabc8387e789","Type":"ContainerStarted","Data":"de01a463285691edddbbd2764b18eb42839959c6a2ba12dc542863b806587d72"} Feb 02 10:51:36 crc kubenswrapper[4782]: I0202 10:51:36.963354 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-jpc2k" event={"ID":"cbf5ad9f-00e3-4b3b-b9b3-37b49e909c7a","Type":"ContainerStarted","Data":"989eb95748fcb05882a334056d6b25c40bc9071e9b98fe2d87c90bbd091ae8b2"} Feb 02 10:51:36 crc kubenswrapper[4782]: I0202 10:51:36.963889 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-jpc2k" Feb 02 10:51:36 crc kubenswrapper[4782]: I0202 10:51:36.980782 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-5zmc7" podStartSLOduration=2.435103086 podStartE2EDuration="4.980764811s" podCreationTimestamp="2026-02-02 10:51:32 +0000 UTC" firstStartedPulling="2026-02-02 10:51:33.327241229 +0000 UTC m=+773.211433935" lastFinishedPulling="2026-02-02 10:51:35.872902944 +0000 UTC m=+775.757095660" observedRunningTime="2026-02-02 
10:51:36.970454266 +0000 UTC m=+776.854646982" watchObservedRunningTime="2026-02-02 10:51:36.980764811 +0000 UTC m=+776.864957527" Feb 02 10:51:36 crc kubenswrapper[4782]: I0202 10:51:36.995689 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-jpc2k" podStartSLOduration=2.576814422 podStartE2EDuration="4.995665838s" podCreationTimestamp="2026-02-02 10:51:32 +0000 UTC" firstStartedPulling="2026-02-02 10:51:33.48624138 +0000 UTC m=+773.370434096" lastFinishedPulling="2026-02-02 10:51:35.905092796 +0000 UTC m=+775.789285512" observedRunningTime="2026-02-02 10:51:36.993586088 +0000 UTC m=+776.877778804" watchObservedRunningTime="2026-02-02 10:51:36.995665838 +0000 UTC m=+776.879858554" Feb 02 10:51:37 crc kubenswrapper[4782]: I0202 10:51:37.015873 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-wjctm" podStartSLOduration=1.922558996 podStartE2EDuration="5.015858565s" podCreationTimestamp="2026-02-02 10:51:32 +0000 UTC" firstStartedPulling="2026-02-02 10:51:32.784013882 +0000 UTC m=+772.668206598" lastFinishedPulling="2026-02-02 10:51:35.877313451 +0000 UTC m=+775.761506167" observedRunningTime="2026-02-02 10:51:37.012444288 +0000 UTC m=+776.896637004" watchObservedRunningTime="2026-02-02 10:51:37.015858565 +0000 UTC m=+776.900051281" Feb 02 10:51:38 crc kubenswrapper[4782]: I0202 10:51:38.979658 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-djhxz" event={"ID":"a30862c2-daa1-42d6-8815-aabc8387e789","Type":"ContainerStarted","Data":"df8a5c5a46740e81ef06ab89c4d49fe2c539d350c88951e66d8be6ed4c08a9c5"} Feb 02 10:51:38 crc kubenswrapper[4782]: I0202 10:51:38.995491 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-54757c584b-djhxz" podStartSLOduration=1.5740391919999999 podStartE2EDuration="6.995472271s" podCreationTimestamp="2026-02-02 10:51:32 +0000 UTC" firstStartedPulling="2026-02-02 10:51:33.000609851 +0000 UTC m=+772.884802577" lastFinishedPulling="2026-02-02 10:51:38.42204294 +0000 UTC m=+778.306235656" observedRunningTime="2026-02-02 10:51:38.993274858 +0000 UTC m=+778.877467604" watchObservedRunningTime="2026-02-02 10:51:38.995472271 +0000 UTC m=+778.879664987" Feb 02 10:51:42 crc kubenswrapper[4782]: I0202 10:51:42.765741 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-wjctm" Feb 02 10:51:43 crc kubenswrapper[4782]: I0202 10:51:43.093562 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-595664cbc7-qhdgt" Feb 02 10:51:43 crc kubenswrapper[4782]: I0202 10:51:43.093836 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-595664cbc7-qhdgt" Feb 02 10:51:43 crc kubenswrapper[4782]: I0202 10:51:43.098183 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-595664cbc7-qhdgt" Feb 02 10:51:44 crc kubenswrapper[4782]: I0202 10:51:44.008724 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-595664cbc7-qhdgt" Feb 02 10:51:44 crc kubenswrapper[4782]: I0202 10:51:44.088988 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-sf9m8"] Feb 02 10:51:52 crc kubenswrapper[4782]: I0202 10:51:52.951504 4782 patch_prober.go:28] interesting 
Feb 02 10:51:52 crc kubenswrapper[4782]: I0202 10:51:52.951823 4782 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 02 10:51:53 crc kubenswrapper[4782]: I0202 10:51:53.290047 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-jpc2k"
Feb 02 10:52:06 crc kubenswrapper[4782]: I0202 10:52:06.008728 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcgnpv6"]
Feb 02 10:52:06 crc kubenswrapper[4782]: I0202 10:52:06.010555 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcgnpv6"
Feb 02 10:52:06 crc kubenswrapper[4782]: I0202 10:52:06.012393 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc"
Feb 02 10:52:06 crc kubenswrapper[4782]: I0202 10:52:06.027518 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcgnpv6"]
Feb 02 10:52:06 crc kubenswrapper[4782]: I0202 10:52:06.067409 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/499d9fd2-e479-4774-ad4b-aaefa3ac9026-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcgnpv6\" (UID: \"499d9fd2-e479-4774-ad4b-aaefa3ac9026\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcgnpv6"
Feb 02 10:52:06 crc kubenswrapper[4782]: I0202 10:52:06.067455 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/499d9fd2-e479-4774-ad4b-aaefa3ac9026-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcgnpv6\" (UID: \"499d9fd2-e479-4774-ad4b-aaefa3ac9026\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcgnpv6"
Feb 02 10:52:06 crc kubenswrapper[4782]: I0202 10:52:06.067481 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c566z\" (UniqueName: \"kubernetes.io/projected/499d9fd2-e479-4774-ad4b-aaefa3ac9026-kube-api-access-c566z\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcgnpv6\" (UID: \"499d9fd2-e479-4774-ad4b-aaefa3ac9026\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcgnpv6"
Feb 02 10:52:06 crc kubenswrapper[4782]: I0202 10:52:06.168323 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c566z\" (UniqueName: \"kubernetes.io/projected/499d9fd2-e479-4774-ad4b-aaefa3ac9026-kube-api-access-c566z\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcgnpv6\" (UID: \"499d9fd2-e479-4774-ad4b-aaefa3ac9026\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcgnpv6"
Feb 02 10:52:06 crc kubenswrapper[4782]: I0202 10:52:06.168448 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/499d9fd2-e479-4774-ad4b-aaefa3ac9026-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcgnpv6\" (UID: \"499d9fd2-e479-4774-ad4b-aaefa3ac9026\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcgnpv6"
Feb 02 10:52:06 crc kubenswrapper[4782]: I0202 10:52:06.168485 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/499d9fd2-e479-4774-ad4b-aaefa3ac9026-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcgnpv6\" (UID: \"499d9fd2-e479-4774-ad4b-aaefa3ac9026\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcgnpv6"
Feb 02 10:52:06 crc kubenswrapper[4782]: I0202 10:52:06.168975 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/499d9fd2-e479-4774-ad4b-aaefa3ac9026-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcgnpv6\" (UID: \"499d9fd2-e479-4774-ad4b-aaefa3ac9026\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcgnpv6"
Feb 02 10:52:06 crc kubenswrapper[4782]: I0202 10:52:06.168998 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/499d9fd2-e479-4774-ad4b-aaefa3ac9026-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcgnpv6\" (UID: \"499d9fd2-e479-4774-ad4b-aaefa3ac9026\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcgnpv6"
Feb 02 10:52:06 crc kubenswrapper[4782]: I0202 10:52:06.188676 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c566z\" (UniqueName: \"kubernetes.io/projected/499d9fd2-e479-4774-ad4b-aaefa3ac9026-kube-api-access-c566z\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcgnpv6\" (UID: \"499d9fd2-e479-4774-ad4b-aaefa3ac9026\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcgnpv6"
Feb 02 10:52:06 crc kubenswrapper[4782]: I0202 10:52:06.327125 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcgnpv6"
Feb 02 10:52:06 crc kubenswrapper[4782]: I0202 10:52:06.517606 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcgnpv6"]
Feb 02 10:52:07 crc kubenswrapper[4782]: I0202 10:52:07.141814 4782 generic.go:334] "Generic (PLEG): container finished" podID="499d9fd2-e479-4774-ad4b-aaefa3ac9026" containerID="0469cc9b81c04188639a62db67580dcb6eff0a9ec2ce428ea2ea4d74ade63f63" exitCode=0
Feb 02 10:52:07 crc kubenswrapper[4782]: I0202 10:52:07.141858 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcgnpv6" event={"ID":"499d9fd2-e479-4774-ad4b-aaefa3ac9026","Type":"ContainerDied","Data":"0469cc9b81c04188639a62db67580dcb6eff0a9ec2ce428ea2ea4d74ade63f63"}
Feb 02 10:52:07 crc kubenswrapper[4782]: I0202 10:52:07.141884 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcgnpv6" event={"ID":"499d9fd2-e479-4774-ad4b-aaefa3ac9026","Type":"ContainerStarted","Data":"f7fc72918ffce23fb6b318f086b25ec3564f4c9b6e4fd0bad204d7e25d878f2f"}
Feb 02 10:52:09 crc kubenswrapper[4782]: I0202 10:52:09.133300 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-sf9m8" podUID="76afda26-696c-4996-bc58-1c928e4fa92a" containerName="console" containerID="cri-o://8561d8543b7dc1a4f75138ec4a65ca5430bac9f43d26c49205d0ba1b811aacee" gracePeriod=15
Feb 02 10:52:09 crc kubenswrapper[4782]: I0202 10:52:09.156726 4782 generic.go:334] "Generic (PLEG): container finished" podID="499d9fd2-e479-4774-ad4b-aaefa3ac9026" containerID="a8f8070e7407c219db31a43339188edbfa511a91d6df0ee046c8e116c7be5f24" exitCode=0
Feb 02 10:52:09 crc kubenswrapper[4782]: I0202 10:52:09.156784 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcgnpv6" event={"ID":"499d9fd2-e479-4774-ad4b-aaefa3ac9026","Type":"ContainerDied","Data":"a8f8070e7407c219db31a43339188edbfa511a91d6df0ee046c8e116c7be5f24"}
Feb 02 10:52:09 crc kubenswrapper[4782]: I0202 10:52:09.530594 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-sf9m8_76afda26-696c-4996-bc58-1c928e4fa92a/console/0.log"
Feb 02 10:52:09 crc kubenswrapper[4782]: I0202 10:52:09.530691 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-sf9m8"
Feb 02 10:52:09 crc kubenswrapper[4782]: I0202 10:52:09.582785 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-xnj2n"]
Feb 02 10:52:09 crc kubenswrapper[4782]: E0202 10:52:09.583149 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76afda26-696c-4996-bc58-1c928e4fa92a" containerName="console"
Feb 02 10:52:09 crc kubenswrapper[4782]: I0202 10:52:09.583162 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="76afda26-696c-4996-bc58-1c928e4fa92a" containerName="console"
Feb 02 10:52:09 crc kubenswrapper[4782]: I0202 10:52:09.583266 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="76afda26-696c-4996-bc58-1c928e4fa92a" containerName="console"
Feb 02 10:52:09 crc kubenswrapper[4782]: I0202 10:52:09.584125 4782 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/redhat-operators-xnj2n" Feb 02 10:52:09 crc kubenswrapper[4782]: I0202 10:52:09.588257 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xnj2n"] Feb 02 10:52:09 crc kubenswrapper[4782]: I0202 10:52:09.615085 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/76afda26-696c-4996-bc58-1c928e4fa92a-oauth-serving-cert\") pod \"76afda26-696c-4996-bc58-1c928e4fa92a\" (UID: \"76afda26-696c-4996-bc58-1c928e4fa92a\") " Feb 02 10:52:09 crc kubenswrapper[4782]: I0202 10:52:09.615141 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/76afda26-696c-4996-bc58-1c928e4fa92a-console-serving-cert\") pod \"76afda26-696c-4996-bc58-1c928e4fa92a\" (UID: \"76afda26-696c-4996-bc58-1c928e4fa92a\") " Feb 02 10:52:09 crc kubenswrapper[4782]: I0202 10:52:09.615219 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/76afda26-696c-4996-bc58-1c928e4fa92a-service-ca\") pod \"76afda26-696c-4996-bc58-1c928e4fa92a\" (UID: \"76afda26-696c-4996-bc58-1c928e4fa92a\") " Feb 02 10:52:09 crc kubenswrapper[4782]: I0202 10:52:09.615273 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/76afda26-696c-4996-bc58-1c928e4fa92a-trusted-ca-bundle\") pod \"76afda26-696c-4996-bc58-1c928e4fa92a\" (UID: \"76afda26-696c-4996-bc58-1c928e4fa92a\") " Feb 02 10:52:09 crc kubenswrapper[4782]: I0202 10:52:09.615296 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/76afda26-696c-4996-bc58-1c928e4fa92a-console-oauth-config\") pod \"76afda26-696c-4996-bc58-1c928e4fa92a\" (UID: \"76afda26-696c-4996-bc58-1c928e4fa92a\") " Feb 02 10:52:09 crc kubenswrapper[4782]: I0202 10:52:09.615318 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bklv5\" (UniqueName: \"kubernetes.io/projected/76afda26-696c-4996-bc58-1c928e4fa92a-kube-api-access-bklv5\") pod \"76afda26-696c-4996-bc58-1c928e4fa92a\" (UID: \"76afda26-696c-4996-bc58-1c928e4fa92a\") " Feb 02 10:52:09 crc kubenswrapper[4782]: I0202 10:52:09.615390 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/76afda26-696c-4996-bc58-1c928e4fa92a-console-config\") pod \"76afda26-696c-4996-bc58-1c928e4fa92a\" (UID: \"76afda26-696c-4996-bc58-1c928e4fa92a\") " Feb 02 10:52:09 crc kubenswrapper[4782]: I0202 10:52:09.615586 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wfw4x\" (UniqueName: \"kubernetes.io/projected/5b8a020d-5ce8-4a2e-b4dc-9a1c77990019-kube-api-access-wfw4x\") pod \"redhat-operators-xnj2n\" (UID: \"5b8a020d-5ce8-4a2e-b4dc-9a1c77990019\") " pod="openshift-marketplace/redhat-operators-xnj2n" Feb 02 10:52:09 crc kubenswrapper[4782]: I0202 10:52:09.615611 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b8a020d-5ce8-4a2e-b4dc-9a1c77990019-utilities\") pod \"redhat-operators-xnj2n\" (UID: \"5b8a020d-5ce8-4a2e-b4dc-9a1c77990019\") " 
pod="openshift-marketplace/redhat-operators-xnj2n" Feb 02 10:52:09 crc kubenswrapper[4782]: I0202 10:52:09.615710 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b8a020d-5ce8-4a2e-b4dc-9a1c77990019-catalog-content\") pod \"redhat-operators-xnj2n\" (UID: \"5b8a020d-5ce8-4a2e-b4dc-9a1c77990019\") " pod="openshift-marketplace/redhat-operators-xnj2n" Feb 02 10:52:09 crc kubenswrapper[4782]: I0202 10:52:09.616026 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/76afda26-696c-4996-bc58-1c928e4fa92a-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "76afda26-696c-4996-bc58-1c928e4fa92a" (UID: "76afda26-696c-4996-bc58-1c928e4fa92a"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:52:09 crc kubenswrapper[4782]: I0202 10:52:09.616063 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/76afda26-696c-4996-bc58-1c928e4fa92a-service-ca" (OuterVolumeSpecName: "service-ca") pod "76afda26-696c-4996-bc58-1c928e4fa92a" (UID: "76afda26-696c-4996-bc58-1c928e4fa92a"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:52:09 crc kubenswrapper[4782]: I0202 10:52:09.616321 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/76afda26-696c-4996-bc58-1c928e4fa92a-console-config" (OuterVolumeSpecName: "console-config") pod "76afda26-696c-4996-bc58-1c928e4fa92a" (UID: "76afda26-696c-4996-bc58-1c928e4fa92a"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:52:09 crc kubenswrapper[4782]: I0202 10:52:09.616578 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/76afda26-696c-4996-bc58-1c928e4fa92a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "76afda26-696c-4996-bc58-1c928e4fa92a" (UID: "76afda26-696c-4996-bc58-1c928e4fa92a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:52:09 crc kubenswrapper[4782]: I0202 10:52:09.623941 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76afda26-696c-4996-bc58-1c928e4fa92a-kube-api-access-bklv5" (OuterVolumeSpecName: "kube-api-access-bklv5") pod "76afda26-696c-4996-bc58-1c928e4fa92a" (UID: "76afda26-696c-4996-bc58-1c928e4fa92a"). InnerVolumeSpecName "kube-api-access-bklv5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:52:09 crc kubenswrapper[4782]: I0202 10:52:09.636754 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76afda26-696c-4996-bc58-1c928e4fa92a-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "76afda26-696c-4996-bc58-1c928e4fa92a" (UID: "76afda26-696c-4996-bc58-1c928e4fa92a"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:52:09 crc kubenswrapper[4782]: I0202 10:52:09.640101 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76afda26-696c-4996-bc58-1c928e4fa92a-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "76afda26-696c-4996-bc58-1c928e4fa92a" (UID: "76afda26-696c-4996-bc58-1c928e4fa92a"). 
InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:52:09 crc kubenswrapper[4782]: I0202 10:52:09.716904 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b8a020d-5ce8-4a2e-b4dc-9a1c77990019-catalog-content\") pod \"redhat-operators-xnj2n\" (UID: \"5b8a020d-5ce8-4a2e-b4dc-9a1c77990019\") " pod="openshift-marketplace/redhat-operators-xnj2n" Feb 02 10:52:09 crc kubenswrapper[4782]: I0202 10:52:09.717436 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wfw4x\" (UniqueName: \"kubernetes.io/projected/5b8a020d-5ce8-4a2e-b4dc-9a1c77990019-kube-api-access-wfw4x\") pod \"redhat-operators-xnj2n\" (UID: \"5b8a020d-5ce8-4a2e-b4dc-9a1c77990019\") " pod="openshift-marketplace/redhat-operators-xnj2n" Feb 02 10:52:09 crc kubenswrapper[4782]: I0202 10:52:09.717469 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b8a020d-5ce8-4a2e-b4dc-9a1c77990019-utilities\") pod \"redhat-operators-xnj2n\" (UID: \"5b8a020d-5ce8-4a2e-b4dc-9a1c77990019\") " pod="openshift-marketplace/redhat-operators-xnj2n" Feb 02 10:52:09 crc kubenswrapper[4782]: I0202 10:52:09.717538 4782 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/76afda26-696c-4996-bc58-1c928e4fa92a-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 10:52:09 crc kubenswrapper[4782]: I0202 10:52:09.717557 4782 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/76afda26-696c-4996-bc58-1c928e4fa92a-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 10:52:09 crc kubenswrapper[4782]: I0202 10:52:09.717543 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b8a020d-5ce8-4a2e-b4dc-9a1c77990019-catalog-content\") pod \"redhat-operators-xnj2n\" (UID: \"5b8a020d-5ce8-4a2e-b4dc-9a1c77990019\") " pod="openshift-marketplace/redhat-operators-xnj2n" Feb 02 10:52:09 crc kubenswrapper[4782]: I0202 10:52:09.717569 4782 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/76afda26-696c-4996-bc58-1c928e4fa92a-service-ca\") on node \"crc\" DevicePath \"\"" Feb 02 10:52:09 crc kubenswrapper[4782]: I0202 10:52:09.717611 4782 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/76afda26-696c-4996-bc58-1c928e4fa92a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 10:52:09 crc kubenswrapper[4782]: I0202 10:52:09.717623 4782 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/76afda26-696c-4996-bc58-1c928e4fa92a-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 02 10:52:09 crc kubenswrapper[4782]: I0202 10:52:09.717658 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bklv5\" (UniqueName: \"kubernetes.io/projected/76afda26-696c-4996-bc58-1c928e4fa92a-kube-api-access-bklv5\") on node \"crc\" DevicePath \"\"" Feb 02 10:52:09 crc kubenswrapper[4782]: I0202 10:52:09.717675 4782 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/76afda26-696c-4996-bc58-1c928e4fa92a-console-config\") on node \"crc\" 
DevicePath \"\"" Feb 02 10:52:09 crc kubenswrapper[4782]: I0202 10:52:09.717968 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b8a020d-5ce8-4a2e-b4dc-9a1c77990019-utilities\") pod \"redhat-operators-xnj2n\" (UID: \"5b8a020d-5ce8-4a2e-b4dc-9a1c77990019\") " pod="openshift-marketplace/redhat-operators-xnj2n" Feb 02 10:52:09 crc kubenswrapper[4782]: I0202 10:52:09.745950 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wfw4x\" (UniqueName: \"kubernetes.io/projected/5b8a020d-5ce8-4a2e-b4dc-9a1c77990019-kube-api-access-wfw4x\") pod \"redhat-operators-xnj2n\" (UID: \"5b8a020d-5ce8-4a2e-b4dc-9a1c77990019\") " pod="openshift-marketplace/redhat-operators-xnj2n" Feb 02 10:52:09 crc kubenswrapper[4782]: I0202 10:52:09.902206 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xnj2n" Feb 02 10:52:10 crc kubenswrapper[4782]: I0202 10:52:10.143887 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xnj2n"] Feb 02 10:52:10 crc kubenswrapper[4782]: I0202 10:52:10.166313 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xnj2n" event={"ID":"5b8a020d-5ce8-4a2e-b4dc-9a1c77990019","Type":"ContainerStarted","Data":"76b887cf6cdadf5ce40b5ae85a2c309ce7496129ec1523601bbfaf633d9f7f8d"} Feb 02 10:52:10 crc kubenswrapper[4782]: I0202 10:52:10.170734 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-sf9m8_76afda26-696c-4996-bc58-1c928e4fa92a/console/0.log" Feb 02 10:52:10 crc kubenswrapper[4782]: I0202 10:52:10.170815 4782 generic.go:334] "Generic (PLEG): container finished" podID="76afda26-696c-4996-bc58-1c928e4fa92a" containerID="8561d8543b7dc1a4f75138ec4a65ca5430bac9f43d26c49205d0ba1b811aacee" exitCode=2 Feb 02 10:52:10 crc kubenswrapper[4782]: I0202 10:52:10.170846 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-sf9m8" Feb 02 10:52:10 crc kubenswrapper[4782]: I0202 10:52:10.170888 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-sf9m8" event={"ID":"76afda26-696c-4996-bc58-1c928e4fa92a","Type":"ContainerDied","Data":"8561d8543b7dc1a4f75138ec4a65ca5430bac9f43d26c49205d0ba1b811aacee"} Feb 02 10:52:10 crc kubenswrapper[4782]: I0202 10:52:10.170941 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-sf9m8" event={"ID":"76afda26-696c-4996-bc58-1c928e4fa92a","Type":"ContainerDied","Data":"a512fcceae6cfaeaad197794b5b6c708f15cf79898b3102381c39333768e348a"} Feb 02 10:52:10 crc kubenswrapper[4782]: I0202 10:52:10.170963 4782 scope.go:117] "RemoveContainer" containerID="8561d8543b7dc1a4f75138ec4a65ca5430bac9f43d26c49205d0ba1b811aacee" Feb 02 10:52:10 crc kubenswrapper[4782]: I0202 10:52:10.182329 4782 generic.go:334] "Generic (PLEG): container finished" podID="499d9fd2-e479-4774-ad4b-aaefa3ac9026" containerID="4de6411a4afb22cf2605ed9637f2740d5b4eae8aba99b0e5f2cfc322dd434901" exitCode=0 Feb 02 10:52:10 crc kubenswrapper[4782]: I0202 10:52:10.182379 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcgnpv6" event={"ID":"499d9fd2-e479-4774-ad4b-aaefa3ac9026","Type":"ContainerDied","Data":"4de6411a4afb22cf2605ed9637f2740d5b4eae8aba99b0e5f2cfc322dd434901"} Feb 02 10:52:10 crc kubenswrapper[4782]: I0202 10:52:10.203729 4782 scope.go:117] "RemoveContainer" containerID="8561d8543b7dc1a4f75138ec4a65ca5430bac9f43d26c49205d0ba1b811aacee" Feb 02 10:52:10 crc kubenswrapper[4782]: E0202 10:52:10.205527 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8561d8543b7dc1a4f75138ec4a65ca5430bac9f43d26c49205d0ba1b811aacee\": container with ID starting with 8561d8543b7dc1a4f75138ec4a65ca5430bac9f43d26c49205d0ba1b811aacee not found: ID does not exist" containerID="8561d8543b7dc1a4f75138ec4a65ca5430bac9f43d26c49205d0ba1b811aacee" Feb 02 10:52:10 crc kubenswrapper[4782]: I0202 10:52:10.205554 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8561d8543b7dc1a4f75138ec4a65ca5430bac9f43d26c49205d0ba1b811aacee"} err="failed to get container status \"8561d8543b7dc1a4f75138ec4a65ca5430bac9f43d26c49205d0ba1b811aacee\": rpc error: code = NotFound desc = could not find container \"8561d8543b7dc1a4f75138ec4a65ca5430bac9f43d26c49205d0ba1b811aacee\": container with ID starting with 8561d8543b7dc1a4f75138ec4a65ca5430bac9f43d26c49205d0ba1b811aacee not found: ID does not exist" Feb 02 10:52:10 crc kubenswrapper[4782]: I0202 10:52:10.232711 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-sf9m8"] Feb 02 10:52:10 crc kubenswrapper[4782]: I0202 10:52:10.236431 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-sf9m8"] Feb 02 10:52:10 crc kubenswrapper[4782]: E0202 10:52:10.237714 4782 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod76afda26_696c_4996_bc58_1c928e4fa92a.slice\": RecentStats: unable to find data in memory cache]" Feb 02 10:52:10 crc kubenswrapper[4782]: I0202 10:52:10.837320 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="76afda26-696c-4996-bc58-1c928e4fa92a" path="/var/lib/kubelet/pods/76afda26-696c-4996-bc58-1c928e4fa92a/volumes" Feb 02 10:52:11 crc kubenswrapper[4782]: I0202 10:52:11.188476 4782 generic.go:334] "Generic (PLEG): container finished" podID="5b8a020d-5ce8-4a2e-b4dc-9a1c77990019" containerID="24ab8a82d0d49b356b82171d00afb071c4d564bc362b0bf0aecea02d6576fbc8" exitCode=0 Feb 02 10:52:11 crc kubenswrapper[4782]: I0202 10:52:11.188794 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xnj2n" event={"ID":"5b8a020d-5ce8-4a2e-b4dc-9a1c77990019","Type":"ContainerDied","Data":"24ab8a82d0d49b356b82171d00afb071c4d564bc362b0bf0aecea02d6576fbc8"} Feb 02 10:52:11 crc kubenswrapper[4782]: I0202 10:52:11.428768 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcgnpv6" Feb 02 10:52:11 crc kubenswrapper[4782]: I0202 10:52:11.447103 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c566z\" (UniqueName: \"kubernetes.io/projected/499d9fd2-e479-4774-ad4b-aaefa3ac9026-kube-api-access-c566z\") pod \"499d9fd2-e479-4774-ad4b-aaefa3ac9026\" (UID: \"499d9fd2-e479-4774-ad4b-aaefa3ac9026\") " Feb 02 10:52:11 crc kubenswrapper[4782]: I0202 10:52:11.447223 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/499d9fd2-e479-4774-ad4b-aaefa3ac9026-bundle\") pod \"499d9fd2-e479-4774-ad4b-aaefa3ac9026\" (UID: \"499d9fd2-e479-4774-ad4b-aaefa3ac9026\") " Feb 02 10:52:11 crc kubenswrapper[4782]: I0202 10:52:11.447357 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/499d9fd2-e479-4774-ad4b-aaefa3ac9026-util\") pod \"499d9fd2-e479-4774-ad4b-aaefa3ac9026\" (UID: \"499d9fd2-e479-4774-ad4b-aaefa3ac9026\") " Feb 02 10:52:11 crc kubenswrapper[4782]: I0202 10:52:11.449851 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/499d9fd2-e479-4774-ad4b-aaefa3ac9026-bundle" (OuterVolumeSpecName: "bundle") pod "499d9fd2-e479-4774-ad4b-aaefa3ac9026" (UID: "499d9fd2-e479-4774-ad4b-aaefa3ac9026"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:52:11 crc kubenswrapper[4782]: I0202 10:52:11.454170 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/499d9fd2-e479-4774-ad4b-aaefa3ac9026-kube-api-access-c566z" (OuterVolumeSpecName: "kube-api-access-c566z") pod "499d9fd2-e479-4774-ad4b-aaefa3ac9026" (UID: "499d9fd2-e479-4774-ad4b-aaefa3ac9026"). InnerVolumeSpecName "kube-api-access-c566z". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:52:11 crc kubenswrapper[4782]: I0202 10:52:11.475213 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/499d9fd2-e479-4774-ad4b-aaefa3ac9026-util" (OuterVolumeSpecName: "util") pod "499d9fd2-e479-4774-ad4b-aaefa3ac9026" (UID: "499d9fd2-e479-4774-ad4b-aaefa3ac9026"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:52:11 crc kubenswrapper[4782]: I0202 10:52:11.550279 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c566z\" (UniqueName: \"kubernetes.io/projected/499d9fd2-e479-4774-ad4b-aaefa3ac9026-kube-api-access-c566z\") on node \"crc\" DevicePath \"\"" Feb 02 10:52:11 crc kubenswrapper[4782]: I0202 10:52:11.550543 4782 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/499d9fd2-e479-4774-ad4b-aaefa3ac9026-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 10:52:11 crc kubenswrapper[4782]: I0202 10:52:11.550658 4782 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/499d9fd2-e479-4774-ad4b-aaefa3ac9026-util\") on node \"crc\" DevicePath \"\"" Feb 02 10:52:12 crc kubenswrapper[4782]: I0202 10:52:12.197342 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcgnpv6" event={"ID":"499d9fd2-e479-4774-ad4b-aaefa3ac9026","Type":"ContainerDied","Data":"f7fc72918ffce23fb6b318f086b25ec3564f4c9b6e4fd0bad204d7e25d878f2f"} Feb 02 10:52:12 crc kubenswrapper[4782]: I0202 10:52:12.197724 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f7fc72918ffce23fb6b318f086b25ec3564f4c9b6e4fd0bad204d7e25d878f2f" Feb 02 10:52:12 crc kubenswrapper[4782]: I0202 10:52:12.197448 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcgnpv6" Feb 02 10:52:13 crc kubenswrapper[4782]: I0202 10:52:13.205494 4782 generic.go:334] "Generic (PLEG): container finished" podID="5b8a020d-5ce8-4a2e-b4dc-9a1c77990019" containerID="a41fa6b53e6433abd8feb7c1fb83c9e5f2d2b6e7a6cb522205c9133d59fadd0c" exitCode=0 Feb 02 10:52:13 crc kubenswrapper[4782]: I0202 10:52:13.205543 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xnj2n" event={"ID":"5b8a020d-5ce8-4a2e-b4dc-9a1c77990019","Type":"ContainerDied","Data":"a41fa6b53e6433abd8feb7c1fb83c9e5f2d2b6e7a6cb522205c9133d59fadd0c"} Feb 02 10:52:14 crc kubenswrapper[4782]: I0202 10:52:14.213052 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xnj2n" event={"ID":"5b8a020d-5ce8-4a2e-b4dc-9a1c77990019","Type":"ContainerStarted","Data":"b3ce92496912caa7212c166a38024b5de02bb8bb1fab216548ac07041e011691"} Feb 02 10:52:14 crc kubenswrapper[4782]: I0202 10:52:14.247943 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-xnj2n" podStartSLOduration=2.592860101 podStartE2EDuration="5.247924678s" podCreationTimestamp="2026-02-02 10:52:09 +0000 UTC" firstStartedPulling="2026-02-02 10:52:11.191548875 +0000 UTC m=+811.075741591" lastFinishedPulling="2026-02-02 10:52:13.846613452 +0000 UTC m=+813.730806168" observedRunningTime="2026-02-02 10:52:14.244189321 +0000 UTC m=+814.128382057" watchObservedRunningTime="2026-02-02 10:52:14.247924678 +0000 UTC m=+814.132117394" Feb 02 10:52:19 crc kubenswrapper[4782]: I0202 10:52:19.902868 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-xnj2n" Feb 02 10:52:19 crc kubenswrapper[4782]: I0202 10:52:19.904532 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-xnj2n" Feb 02 
10:52:20 crc kubenswrapper[4782]: I0202 10:52:20.958211 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-xnj2n" podUID="5b8a020d-5ce8-4a2e-b4dc-9a1c77990019" containerName="registry-server" probeResult="failure" output=< Feb 02 10:52:20 crc kubenswrapper[4782]: timeout: failed to connect service ":50051" within 1s Feb 02 10:52:20 crc kubenswrapper[4782]: > Feb 02 10:52:22 crc kubenswrapper[4782]: I0202 10:52:22.719117 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-75c875dcc7-xxjwm"] Feb 02 10:52:22 crc kubenswrapper[4782]: E0202 10:52:22.720127 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="499d9fd2-e479-4774-ad4b-aaefa3ac9026" containerName="util" Feb 02 10:52:22 crc kubenswrapper[4782]: I0202 10:52:22.720220 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="499d9fd2-e479-4774-ad4b-aaefa3ac9026" containerName="util" Feb 02 10:52:22 crc kubenswrapper[4782]: E0202 10:52:22.720286 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="499d9fd2-e479-4774-ad4b-aaefa3ac9026" containerName="extract" Feb 02 10:52:22 crc kubenswrapper[4782]: I0202 10:52:22.720334 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="499d9fd2-e479-4774-ad4b-aaefa3ac9026" containerName="extract" Feb 02 10:52:22 crc kubenswrapper[4782]: E0202 10:52:22.720382 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="499d9fd2-e479-4774-ad4b-aaefa3ac9026" containerName="pull" Feb 02 10:52:22 crc kubenswrapper[4782]: I0202 10:52:22.720434 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="499d9fd2-e479-4774-ad4b-aaefa3ac9026" containerName="pull" Feb 02 10:52:22 crc kubenswrapper[4782]: I0202 10:52:22.720575 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="499d9fd2-e479-4774-ad4b-aaefa3ac9026" containerName="extract" Feb 02 10:52:22 crc kubenswrapper[4782]: I0202 10:52:22.721037 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-75c875dcc7-xxjwm" Feb 02 10:52:22 crc kubenswrapper[4782]: I0202 10:52:22.724438 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Feb 02 10:52:22 crc kubenswrapper[4782]: I0202 10:52:22.725028 4782 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-hdptp" Feb 02 10:52:22 crc kubenswrapper[4782]: I0202 10:52:22.725265 4782 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Feb 02 10:52:22 crc kubenswrapper[4782]: I0202 10:52:22.725367 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Feb 02 10:52:22 crc kubenswrapper[4782]: I0202 10:52:22.725377 4782 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Feb 02 10:52:22 crc kubenswrapper[4782]: I0202 10:52:22.747758 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-75c875dcc7-xxjwm"] Feb 02 10:52:22 crc kubenswrapper[4782]: I0202 10:52:22.893704 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/46c800cc-f0c4-4bb1-9714-0f9e5f904bc9-webhook-cert\") pod \"metallb-operator-controller-manager-75c875dcc7-xxjwm\" (UID: \"46c800cc-f0c4-4bb1-9714-0f9e5f904bc9\") " pod="metallb-system/metallb-operator-controller-manager-75c875dcc7-xxjwm" Feb 02 10:52:22 crc kubenswrapper[4782]: I0202 10:52:22.893775 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/46c800cc-f0c4-4bb1-9714-0f9e5f904bc9-apiservice-cert\") pod \"metallb-operator-controller-manager-75c875dcc7-xxjwm\" (UID: \"46c800cc-f0c4-4bb1-9714-0f9e5f904bc9\") " pod="metallb-system/metallb-operator-controller-manager-75c875dcc7-xxjwm" Feb 02 10:52:22 crc kubenswrapper[4782]: I0202 10:52:22.893824 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjnrs\" (UniqueName: \"kubernetes.io/projected/46c800cc-f0c4-4bb1-9714-0f9e5f904bc9-kube-api-access-bjnrs\") pod \"metallb-operator-controller-manager-75c875dcc7-xxjwm\" (UID: \"46c800cc-f0c4-4bb1-9714-0f9e5f904bc9\") " pod="metallb-system/metallb-operator-controller-manager-75c875dcc7-xxjwm" Feb 02 10:52:22 crc kubenswrapper[4782]: I0202 10:52:22.951126 4782 patch_prober.go:28] interesting pod/machine-config-daemon-bhdgk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 10:52:22 crc kubenswrapper[4782]: I0202 10:52:22.951174 4782 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 10:52:22 crc kubenswrapper[4782]: I0202 10:52:22.994904 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bjnrs\" (UniqueName: 
\"kubernetes.io/projected/46c800cc-f0c4-4bb1-9714-0f9e5f904bc9-kube-api-access-bjnrs\") pod \"metallb-operator-controller-manager-75c875dcc7-xxjwm\" (UID: \"46c800cc-f0c4-4bb1-9714-0f9e5f904bc9\") " pod="metallb-system/metallb-operator-controller-manager-75c875dcc7-xxjwm" Feb 02 10:52:22 crc kubenswrapper[4782]: I0202 10:52:22.994960 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/46c800cc-f0c4-4bb1-9714-0f9e5f904bc9-webhook-cert\") pod \"metallb-operator-controller-manager-75c875dcc7-xxjwm\" (UID: \"46c800cc-f0c4-4bb1-9714-0f9e5f904bc9\") " pod="metallb-system/metallb-operator-controller-manager-75c875dcc7-xxjwm" Feb 02 10:52:22 crc kubenswrapper[4782]: I0202 10:52:22.995001 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/46c800cc-f0c4-4bb1-9714-0f9e5f904bc9-apiservice-cert\") pod \"metallb-operator-controller-manager-75c875dcc7-xxjwm\" (UID: \"46c800cc-f0c4-4bb1-9714-0f9e5f904bc9\") " pod="metallb-system/metallb-operator-controller-manager-75c875dcc7-xxjwm" Feb 02 10:52:23 crc kubenswrapper[4782]: I0202 10:52:23.002911 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/46c800cc-f0c4-4bb1-9714-0f9e5f904bc9-apiservice-cert\") pod \"metallb-operator-controller-manager-75c875dcc7-xxjwm\" (UID: \"46c800cc-f0c4-4bb1-9714-0f9e5f904bc9\") " pod="metallb-system/metallb-operator-controller-manager-75c875dcc7-xxjwm" Feb 02 10:52:23 crc kubenswrapper[4782]: I0202 10:52:23.015994 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/46c800cc-f0c4-4bb1-9714-0f9e5f904bc9-webhook-cert\") pod \"metallb-operator-controller-manager-75c875dcc7-xxjwm\" (UID: \"46c800cc-f0c4-4bb1-9714-0f9e5f904bc9\") " pod="metallb-system/metallb-operator-controller-manager-75c875dcc7-xxjwm" Feb 02 10:52:23 crc kubenswrapper[4782]: I0202 10:52:23.020854 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-758b4c4d7b-vvspt"] Feb 02 10:52:23 crc kubenswrapper[4782]: I0202 10:52:23.021608 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-758b4c4d7b-vvspt" Feb 02 10:52:23 crc kubenswrapper[4782]: I0202 10:52:23.022770 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bjnrs\" (UniqueName: \"kubernetes.io/projected/46c800cc-f0c4-4bb1-9714-0f9e5f904bc9-kube-api-access-bjnrs\") pod \"metallb-operator-controller-manager-75c875dcc7-xxjwm\" (UID: \"46c800cc-f0c4-4bb1-9714-0f9e5f904bc9\") " pod="metallb-system/metallb-operator-controller-manager-75c875dcc7-xxjwm" Feb 02 10:52:23 crc kubenswrapper[4782]: I0202 10:52:23.034180 4782 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Feb 02 10:52:23 crc kubenswrapper[4782]: I0202 10:52:23.034180 4782 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-qvmq5" Feb 02 10:52:23 crc kubenswrapper[4782]: I0202 10:52:23.034180 4782 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Feb 02 10:52:23 crc kubenswrapper[4782]: I0202 10:52:23.038119 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-75c875dcc7-xxjwm" Feb 02 10:52:23 crc kubenswrapper[4782]: I0202 10:52:23.044502 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-758b4c4d7b-vvspt"] Feb 02 10:52:23 crc kubenswrapper[4782]: I0202 10:52:23.197818 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/78f09d2d-237b-4474-b4b8-f59f49997e44-webhook-cert\") pod \"metallb-operator-webhook-server-758b4c4d7b-vvspt\" (UID: \"78f09d2d-237b-4474-b4b8-f59f49997e44\") " pod="metallb-system/metallb-operator-webhook-server-758b4c4d7b-vvspt" Feb 02 10:52:23 crc kubenswrapper[4782]: I0202 10:52:23.197936 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76dbw\" (UniqueName: \"kubernetes.io/projected/78f09d2d-237b-4474-b4b8-f59f49997e44-kube-api-access-76dbw\") pod \"metallb-operator-webhook-server-758b4c4d7b-vvspt\" (UID: \"78f09d2d-237b-4474-b4b8-f59f49997e44\") " pod="metallb-system/metallb-operator-webhook-server-758b4c4d7b-vvspt" Feb 02 10:52:23 crc kubenswrapper[4782]: I0202 10:52:23.197981 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/78f09d2d-237b-4474-b4b8-f59f49997e44-apiservice-cert\") pod \"metallb-operator-webhook-server-758b4c4d7b-vvspt\" (UID: \"78f09d2d-237b-4474-b4b8-f59f49997e44\") " pod="metallb-system/metallb-operator-webhook-server-758b4c4d7b-vvspt" Feb 02 10:52:23 crc kubenswrapper[4782]: I0202 10:52:23.299327 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-76dbw\" (UniqueName: \"kubernetes.io/projected/78f09d2d-237b-4474-b4b8-f59f49997e44-kube-api-access-76dbw\") pod \"metallb-operator-webhook-server-758b4c4d7b-vvspt\" (UID: \"78f09d2d-237b-4474-b4b8-f59f49997e44\") " pod="metallb-system/metallb-operator-webhook-server-758b4c4d7b-vvspt" Feb 02 10:52:23 crc kubenswrapper[4782]: I0202 10:52:23.299394 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/78f09d2d-237b-4474-b4b8-f59f49997e44-apiservice-cert\") pod \"metallb-operator-webhook-server-758b4c4d7b-vvspt\" (UID: \"78f09d2d-237b-4474-b4b8-f59f49997e44\") " pod="metallb-system/metallb-operator-webhook-server-758b4c4d7b-vvspt" Feb 02 10:52:23 crc kubenswrapper[4782]: I0202 10:52:23.299443 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/78f09d2d-237b-4474-b4b8-f59f49997e44-webhook-cert\") pod \"metallb-operator-webhook-server-758b4c4d7b-vvspt\" (UID: \"78f09d2d-237b-4474-b4b8-f59f49997e44\") " pod="metallb-system/metallb-operator-webhook-server-758b4c4d7b-vvspt" Feb 02 10:52:23 crc kubenswrapper[4782]: I0202 10:52:23.304728 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/78f09d2d-237b-4474-b4b8-f59f49997e44-apiservice-cert\") pod \"metallb-operator-webhook-server-758b4c4d7b-vvspt\" (UID: \"78f09d2d-237b-4474-b4b8-f59f49997e44\") " pod="metallb-system/metallb-operator-webhook-server-758b4c4d7b-vvspt" Feb 02 10:52:23 crc kubenswrapper[4782]: I0202 10:52:23.313489 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/78f09d2d-237b-4474-b4b8-f59f49997e44-webhook-cert\") pod \"metallb-operator-webhook-server-758b4c4d7b-vvspt\" (UID: \"78f09d2d-237b-4474-b4b8-f59f49997e44\") " pod="metallb-system/metallb-operator-webhook-server-758b4c4d7b-vvspt" Feb 02 10:52:23 crc kubenswrapper[4782]: I0202 10:52:23.321072 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-76dbw\" (UniqueName: \"kubernetes.io/projected/78f09d2d-237b-4474-b4b8-f59f49997e44-kube-api-access-76dbw\") pod \"metallb-operator-webhook-server-758b4c4d7b-vvspt\" (UID: \"78f09d2d-237b-4474-b4b8-f59f49997e44\") " pod="metallb-system/metallb-operator-webhook-server-758b4c4d7b-vvspt" Feb 02 10:52:23 crc kubenswrapper[4782]: I0202 10:52:23.326377 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-75c875dcc7-xxjwm"] Feb 02 10:52:23 crc kubenswrapper[4782]: I0202 10:52:23.430782 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-758b4c4d7b-vvspt" Feb 02 10:52:23 crc kubenswrapper[4782]: I0202 10:52:23.681577 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-758b4c4d7b-vvspt"] Feb 02 10:52:23 crc kubenswrapper[4782]: W0202 10:52:23.705154 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod78f09d2d_237b_4474_b4b8_f59f49997e44.slice/crio-99cbeba7932d76f57a85a80fd7807997333b4419069a544a6f35259a0f99181b WatchSource:0}: Error finding container 99cbeba7932d76f57a85a80fd7807997333b4419069a544a6f35259a0f99181b: Status 404 returned error can't find the container with id 99cbeba7932d76f57a85a80fd7807997333b4419069a544a6f35259a0f99181b Feb 02 10:52:24 crc kubenswrapper[4782]: I0202 10:52:24.262030 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-758b4c4d7b-vvspt" event={"ID":"78f09d2d-237b-4474-b4b8-f59f49997e44","Type":"ContainerStarted","Data":"99cbeba7932d76f57a85a80fd7807997333b4419069a544a6f35259a0f99181b"} Feb 02 10:52:24 crc kubenswrapper[4782]: I0202 10:52:24.263855 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-75c875dcc7-xxjwm" event={"ID":"46c800cc-f0c4-4bb1-9714-0f9e5f904bc9","Type":"ContainerStarted","Data":"d07cae36a89a00c97ff6399dcf9907b4d2c742ae250c0b8dee12a30c246a3562"} Feb 02 10:52:29 crc kubenswrapper[4782]: I0202 10:52:29.977847 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-xnj2n" Feb 02 10:52:30 crc kubenswrapper[4782]: I0202 10:52:30.037326 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-xnj2n" Feb 02 10:52:30 crc kubenswrapper[4782]: I0202 10:52:30.224551 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xnj2n"] Feb 02 10:52:30 crc kubenswrapper[4782]: I0202 10:52:30.310071 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-75c875dcc7-xxjwm" event={"ID":"46c800cc-f0c4-4bb1-9714-0f9e5f904bc9","Type":"ContainerStarted","Data":"09dfd459f1a5f0376ed1406f62a25382bfdf5c3f6aabcaca55c22c6b259e4990"} Feb 02 10:52:30 crc kubenswrapper[4782]: I0202 10:52:30.310169 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="metallb-system/metallb-operator-controller-manager-75c875dcc7-xxjwm" Feb 02 10:52:30 crc kubenswrapper[4782]: I0202 10:52:30.311498 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-758b4c4d7b-vvspt" event={"ID":"78f09d2d-237b-4474-b4b8-f59f49997e44","Type":"ContainerStarted","Data":"3d3636c0bba62836323e5623733d207bd7e79f36e27f04f7f728791486bc5539"} Feb 02 10:52:30 crc kubenswrapper[4782]: I0202 10:52:30.328202 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-75c875dcc7-xxjwm" podStartSLOduration=2.499391369 podStartE2EDuration="8.328184795s" podCreationTimestamp="2026-02-02 10:52:22 +0000 UTC" firstStartedPulling="2026-02-02 10:52:23.334014866 +0000 UTC m=+823.218207582" lastFinishedPulling="2026-02-02 10:52:29.162808292 +0000 UTC m=+829.047001008" observedRunningTime="2026-02-02 10:52:30.327473655 +0000 UTC m=+830.211666371" watchObservedRunningTime="2026-02-02 10:52:30.328184795 +0000 UTC m=+830.212377511" Feb 02 10:52:30 crc kubenswrapper[4782]: I0202 10:52:30.352773 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-758b4c4d7b-vvspt" podStartSLOduration=2.8791660869999998 podStartE2EDuration="8.352751138s" podCreationTimestamp="2026-02-02 10:52:22 +0000 UTC" firstStartedPulling="2026-02-02 10:52:23.71048293 +0000 UTC m=+823.594675646" lastFinishedPulling="2026-02-02 10:52:29.184067981 +0000 UTC m=+829.068260697" observedRunningTime="2026-02-02 10:52:30.350269497 +0000 UTC m=+830.234462223" watchObservedRunningTime="2026-02-02 10:52:30.352751138 +0000 UTC m=+830.236943864" Feb 02 10:52:31 crc kubenswrapper[4782]: I0202 10:52:31.317189 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-758b4c4d7b-vvspt" Feb 02 10:52:31 crc kubenswrapper[4782]: I0202 10:52:31.317375 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-xnj2n" podUID="5b8a020d-5ce8-4a2e-b4dc-9a1c77990019" containerName="registry-server" containerID="cri-o://b3ce92496912caa7212c166a38024b5de02bb8bb1fab216548ac07041e011691" gracePeriod=2 Feb 02 10:52:31 crc kubenswrapper[4782]: I0202 10:52:31.717680 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-xnj2n" Feb 02 10:52:31 crc kubenswrapper[4782]: I0202 10:52:31.847779 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b8a020d-5ce8-4a2e-b4dc-9a1c77990019-utilities\") pod \"5b8a020d-5ce8-4a2e-b4dc-9a1c77990019\" (UID: \"5b8a020d-5ce8-4a2e-b4dc-9a1c77990019\") " Feb 02 10:52:31 crc kubenswrapper[4782]: I0202 10:52:31.847889 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wfw4x\" (UniqueName: \"kubernetes.io/projected/5b8a020d-5ce8-4a2e-b4dc-9a1c77990019-kube-api-access-wfw4x\") pod \"5b8a020d-5ce8-4a2e-b4dc-9a1c77990019\" (UID: \"5b8a020d-5ce8-4a2e-b4dc-9a1c77990019\") " Feb 02 10:52:31 crc kubenswrapper[4782]: I0202 10:52:31.847944 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b8a020d-5ce8-4a2e-b4dc-9a1c77990019-catalog-content\") pod \"5b8a020d-5ce8-4a2e-b4dc-9a1c77990019\" (UID: \"5b8a020d-5ce8-4a2e-b4dc-9a1c77990019\") " Feb 02 10:52:31 crc kubenswrapper[4782]: I0202 10:52:31.849104 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5b8a020d-5ce8-4a2e-b4dc-9a1c77990019-utilities" (OuterVolumeSpecName: "utilities") pod "5b8a020d-5ce8-4a2e-b4dc-9a1c77990019" (UID: "5b8a020d-5ce8-4a2e-b4dc-9a1c77990019"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:52:31 crc kubenswrapper[4782]: I0202 10:52:31.864805 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b8a020d-5ce8-4a2e-b4dc-9a1c77990019-kube-api-access-wfw4x" (OuterVolumeSpecName: "kube-api-access-wfw4x") pod "5b8a020d-5ce8-4a2e-b4dc-9a1c77990019" (UID: "5b8a020d-5ce8-4a2e-b4dc-9a1c77990019"). InnerVolumeSpecName "kube-api-access-wfw4x". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:52:31 crc kubenswrapper[4782]: I0202 10:52:31.950341 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wfw4x\" (UniqueName: \"kubernetes.io/projected/5b8a020d-5ce8-4a2e-b4dc-9a1c77990019-kube-api-access-wfw4x\") on node \"crc\" DevicePath \"\"" Feb 02 10:52:31 crc kubenswrapper[4782]: I0202 10:52:31.950683 4782 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b8a020d-5ce8-4a2e-b4dc-9a1c77990019-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 10:52:31 crc kubenswrapper[4782]: I0202 10:52:31.968296 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5b8a020d-5ce8-4a2e-b4dc-9a1c77990019-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5b8a020d-5ce8-4a2e-b4dc-9a1c77990019" (UID: "5b8a020d-5ce8-4a2e-b4dc-9a1c77990019"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:52:32 crc kubenswrapper[4782]: I0202 10:52:32.051968 4782 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b8a020d-5ce8-4a2e-b4dc-9a1c77990019-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 10:52:32 crc kubenswrapper[4782]: I0202 10:52:32.324881 4782 generic.go:334] "Generic (PLEG): container finished" podID="5b8a020d-5ce8-4a2e-b4dc-9a1c77990019" containerID="b3ce92496912caa7212c166a38024b5de02bb8bb1fab216548ac07041e011691" exitCode=0 Feb 02 10:52:32 crc kubenswrapper[4782]: I0202 10:52:32.324978 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xnj2n" event={"ID":"5b8a020d-5ce8-4a2e-b4dc-9a1c77990019","Type":"ContainerDied","Data":"b3ce92496912caa7212c166a38024b5de02bb8bb1fab216548ac07041e011691"} Feb 02 10:52:32 crc kubenswrapper[4782]: I0202 10:52:32.325023 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xnj2n" event={"ID":"5b8a020d-5ce8-4a2e-b4dc-9a1c77990019","Type":"ContainerDied","Data":"76b887cf6cdadf5ce40b5ae85a2c309ce7496129ec1523601bbfaf633d9f7f8d"} Feb 02 10:52:32 crc kubenswrapper[4782]: I0202 10:52:32.325044 4782 scope.go:117] "RemoveContainer" containerID="b3ce92496912caa7212c166a38024b5de02bb8bb1fab216548ac07041e011691" Feb 02 10:52:32 crc kubenswrapper[4782]: I0202 10:52:32.325978 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xnj2n" Feb 02 10:52:32 crc kubenswrapper[4782]: I0202 10:52:32.352676 4782 scope.go:117] "RemoveContainer" containerID="a41fa6b53e6433abd8feb7c1fb83c9e5f2d2b6e7a6cb522205c9133d59fadd0c" Feb 02 10:52:32 crc kubenswrapper[4782]: I0202 10:52:32.361629 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xnj2n"] Feb 02 10:52:32 crc kubenswrapper[4782]: I0202 10:52:32.373489 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-xnj2n"] Feb 02 10:52:32 crc kubenswrapper[4782]: I0202 10:52:32.382961 4782 scope.go:117] "RemoveContainer" containerID="24ab8a82d0d49b356b82171d00afb071c4d564bc362b0bf0aecea02d6576fbc8" Feb 02 10:52:32 crc kubenswrapper[4782]: I0202 10:52:32.401951 4782 scope.go:117] "RemoveContainer" containerID="b3ce92496912caa7212c166a38024b5de02bb8bb1fab216548ac07041e011691" Feb 02 10:52:32 crc kubenswrapper[4782]: E0202 10:52:32.402890 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b3ce92496912caa7212c166a38024b5de02bb8bb1fab216548ac07041e011691\": container with ID starting with b3ce92496912caa7212c166a38024b5de02bb8bb1fab216548ac07041e011691 not found: ID does not exist" containerID="b3ce92496912caa7212c166a38024b5de02bb8bb1fab216548ac07041e011691" Feb 02 10:52:32 crc kubenswrapper[4782]: I0202 10:52:32.403018 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b3ce92496912caa7212c166a38024b5de02bb8bb1fab216548ac07041e011691"} err="failed to get container status \"b3ce92496912caa7212c166a38024b5de02bb8bb1fab216548ac07041e011691\": rpc error: code = NotFound desc = could not find container \"b3ce92496912caa7212c166a38024b5de02bb8bb1fab216548ac07041e011691\": container with ID starting with b3ce92496912caa7212c166a38024b5de02bb8bb1fab216548ac07041e011691 not found: ID does not exist" Feb 02 10:52:32 crc 
Feb 02 10:52:32 crc kubenswrapper[4782]: I0202 10:52:32.403124 4782 scope.go:117] "RemoveContainer" containerID="a41fa6b53e6433abd8feb7c1fb83c9e5f2d2b6e7a6cb522205c9133d59fadd0c"
Feb 02 10:52:32 crc kubenswrapper[4782]: E0202 10:52:32.404798 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a41fa6b53e6433abd8feb7c1fb83c9e5f2d2b6e7a6cb522205c9133d59fadd0c\": container with ID starting with a41fa6b53e6433abd8feb7c1fb83c9e5f2d2b6e7a6cb522205c9133d59fadd0c not found: ID does not exist" containerID="a41fa6b53e6433abd8feb7c1fb83c9e5f2d2b6e7a6cb522205c9133d59fadd0c"
Feb 02 10:52:32 crc kubenswrapper[4782]: I0202 10:52:32.404944 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a41fa6b53e6433abd8feb7c1fb83c9e5f2d2b6e7a6cb522205c9133d59fadd0c"} err="failed to get container status \"a41fa6b53e6433abd8feb7c1fb83c9e5f2d2b6e7a6cb522205c9133d59fadd0c\": rpc error: code = NotFound desc = could not find container \"a41fa6b53e6433abd8feb7c1fb83c9e5f2d2b6e7a6cb522205c9133d59fadd0c\": container with ID starting with a41fa6b53e6433abd8feb7c1fb83c9e5f2d2b6e7a6cb522205c9133d59fadd0c not found: ID does not exist"
Feb 02 10:52:32 crc kubenswrapper[4782]: I0202 10:52:32.405053 4782 scope.go:117] "RemoveContainer" containerID="24ab8a82d0d49b356b82171d00afb071c4d564bc362b0bf0aecea02d6576fbc8"
Feb 02 10:52:32 crc kubenswrapper[4782]: E0202 10:52:32.405419 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"24ab8a82d0d49b356b82171d00afb071c4d564bc362b0bf0aecea02d6576fbc8\": container with ID starting with 24ab8a82d0d49b356b82171d00afb071c4d564bc362b0bf0aecea02d6576fbc8 not found: ID does not exist" containerID="24ab8a82d0d49b356b82171d00afb071c4d564bc362b0bf0aecea02d6576fbc8"
Feb 02 10:52:32 crc kubenswrapper[4782]: I0202 10:52:32.405531 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"24ab8a82d0d49b356b82171d00afb071c4d564bc362b0bf0aecea02d6576fbc8"} err="failed to get container status \"24ab8a82d0d49b356b82171d00afb071c4d564bc362b0bf0aecea02d6576fbc8\": rpc error: code = NotFound desc = could not find container \"24ab8a82d0d49b356b82171d00afb071c4d564bc362b0bf0aecea02d6576fbc8\": container with ID starting with 24ab8a82d0d49b356b82171d00afb071c4d564bc362b0bf0aecea02d6576fbc8 not found: ID does not exist"
Feb 02 10:52:32 crc kubenswrapper[4782]: I0202 10:52:32.828800 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b8a020d-5ce8-4a2e-b4dc-9a1c77990019" path="/var/lib/kubelet/pods/5b8a020d-5ce8-4a2e-b4dc-9a1c77990019/volumes"
Feb 02 10:52:43 crc kubenswrapper[4782]: I0202 10:52:43.436651 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-758b4c4d7b-vvspt"
Feb 02 10:52:52 crc kubenswrapper[4782]: I0202 10:52:52.951560 4782 patch_prober.go:28] interesting pod/machine-config-daemon-bhdgk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 02 10:52:52 crc kubenswrapper[4782]: I0202 10:52:52.952203 4782 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 02 10:52:52 crc kubenswrapper[4782]: I0202 10:52:52.952277 4782 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk"
Feb 02 10:52:52 crc kubenswrapper[4782]: I0202 10:52:52.952968 4782 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2dc043efe5736739c3acc8fe9716ce3a52d3c218a415682bfde40984fdbbbf0c"} pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 02 10:52:52 crc kubenswrapper[4782]: I0202 10:52:52.953039 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" containerName="machine-config-daemon" containerID="cri-o://2dc043efe5736739c3acc8fe9716ce3a52d3c218a415682bfde40984fdbbbf0c" gracePeriod=600
Feb 02 10:52:53 crc kubenswrapper[4782]: I0202 10:52:53.454809 4782 generic.go:334] "Generic (PLEG): container finished" podID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" containerID="2dc043efe5736739c3acc8fe9716ce3a52d3c218a415682bfde40984fdbbbf0c" exitCode=0
Feb 02 10:52:53 crc kubenswrapper[4782]: I0202 10:52:53.454906 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" event={"ID":"7919e98f-cc47-4f3c-9c53-6313850ea7b8","Type":"ContainerDied","Data":"2dc043efe5736739c3acc8fe9716ce3a52d3c218a415682bfde40984fdbbbf0c"}
Feb 02 10:52:53 crc kubenswrapper[4782]: I0202 10:52:53.455158 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" event={"ID":"7919e98f-cc47-4f3c-9c53-6313850ea7b8","Type":"ContainerStarted","Data":"723d0d966296427a3d1b5e2811fbfcf2b8df7a346539a78c1cbaf730d23723a1"}
Feb 02 10:52:53 crc kubenswrapper[4782]: I0202 10:52:53.455191 4782 scope.go:117] "RemoveContainer" containerID="f6754467e1adce39d1aaf093b6b8963c3db110696bed13c171a5267d1c658dfc"
Feb 02 10:53:03 crc kubenswrapper[4782]: I0202 10:53:03.041710 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-75c875dcc7-xxjwm"
Feb 02 10:53:03 crc kubenswrapper[4782]: I0202 10:53:03.698797 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-8zl72"]
Feb 02 10:53:03 crc kubenswrapper[4782]: E0202 10:53:03.699226 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b8a020d-5ce8-4a2e-b4dc-9a1c77990019" containerName="extract-utilities"
Feb 02 10:53:03 crc kubenswrapper[4782]: I0202 10:53:03.699248 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b8a020d-5ce8-4a2e-b4dc-9a1c77990019" containerName="extract-utilities"
Feb 02 10:53:03 crc kubenswrapper[4782]: E0202 10:53:03.699263 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b8a020d-5ce8-4a2e-b4dc-9a1c77990019" containerName="registry-server"
Feb 02 10:53:03 crc kubenswrapper[4782]: I0202 10:53:03.699271 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b8a020d-5ce8-4a2e-b4dc-9a1c77990019" containerName="registry-server"
"RemoveStaleState: removing container" podUID="5b8a020d-5ce8-4a2e-b4dc-9a1c77990019" containerName="extract-content" Feb 02 10:53:03 crc kubenswrapper[4782]: I0202 10:53:03.699294 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b8a020d-5ce8-4a2e-b4dc-9a1c77990019" containerName="extract-content" Feb 02 10:53:03 crc kubenswrapper[4782]: I0202 10:53:03.699439 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b8a020d-5ce8-4a2e-b4dc-9a1c77990019" containerName="registry-server" Feb 02 10:53:03 crc kubenswrapper[4782]: I0202 10:53:03.700159 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-8zl72" Feb 02 10:53:03 crc kubenswrapper[4782]: I0202 10:53:03.703612 4782 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Feb 02 10:53:03 crc kubenswrapper[4782]: I0202 10:53:03.703756 4782 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-7njp5" Feb 02 10:53:03 crc kubenswrapper[4782]: I0202 10:53:03.710879 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-s297l"] Feb 02 10:53:03 crc kubenswrapper[4782]: I0202 10:53:03.714441 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-s297l" Feb 02 10:53:03 crc kubenswrapper[4782]: I0202 10:53:03.719957 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Feb 02 10:53:03 crc kubenswrapper[4782]: I0202 10:53:03.722854 4782 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Feb 02 10:53:03 crc kubenswrapper[4782]: I0202 10:53:03.714711 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-8zl72"] Feb 02 10:53:03 crc kubenswrapper[4782]: I0202 10:53:03.830092 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qr69b\" (UniqueName: \"kubernetes.io/projected/a3b12ebe-32d3-4d07-b723-64cd83951d38-kube-api-access-qr69b\") pod \"frr-k8s-webhook-server-7df86c4f6c-8zl72\" (UID: \"a3b12ebe-32d3-4d07-b723-64cd83951d38\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-8zl72" Feb 02 10:53:03 crc kubenswrapper[4782]: I0202 10:53:03.830488 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/ef8673fb-6fdf-4c32-a573-3583f4188d97-frr-startup\") pod \"frr-k8s-s297l\" (UID: \"ef8673fb-6fdf-4c32-a573-3583f4188d97\") " pod="metallb-system/frr-k8s-s297l" Feb 02 10:53:03 crc kubenswrapper[4782]: I0202 10:53:03.830506 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/ef8673fb-6fdf-4c32-a573-3583f4188d97-frr-conf\") pod \"frr-k8s-s297l\" (UID: \"ef8673fb-6fdf-4c32-a573-3583f4188d97\") " pod="metallb-system/frr-k8s-s297l" Feb 02 10:53:03 crc kubenswrapper[4782]: I0202 10:53:03.830583 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a3b12ebe-32d3-4d07-b723-64cd83951d38-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-8zl72\" (UID: \"a3b12ebe-32d3-4d07-b723-64cd83951d38\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-8zl72" Feb 02 10:53:03 crc 
kubenswrapper[4782]: I0202 10:53:03.830615 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/ef8673fb-6fdf-4c32-a573-3583f4188d97-frr-sockets\") pod \"frr-k8s-s297l\" (UID: \"ef8673fb-6fdf-4c32-a573-3583f4188d97\") " pod="metallb-system/frr-k8s-s297l" Feb 02 10:53:03 crc kubenswrapper[4782]: I0202 10:53:03.830657 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/ef8673fb-6fdf-4c32-a573-3583f4188d97-reloader\") pod \"frr-k8s-s297l\" (UID: \"ef8673fb-6fdf-4c32-a573-3583f4188d97\") " pod="metallb-system/frr-k8s-s297l" Feb 02 10:53:03 crc kubenswrapper[4782]: I0202 10:53:03.830686 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76jkp\" (UniqueName: \"kubernetes.io/projected/ef8673fb-6fdf-4c32-a573-3583f4188d97-kube-api-access-76jkp\") pod \"frr-k8s-s297l\" (UID: \"ef8673fb-6fdf-4c32-a573-3583f4188d97\") " pod="metallb-system/frr-k8s-s297l" Feb 02 10:53:03 crc kubenswrapper[4782]: I0202 10:53:03.830706 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/ef8673fb-6fdf-4c32-a573-3583f4188d97-metrics\") pod \"frr-k8s-s297l\" (UID: \"ef8673fb-6fdf-4c32-a573-3583f4188d97\") " pod="metallb-system/frr-k8s-s297l" Feb 02 10:53:03 crc kubenswrapper[4782]: I0202 10:53:03.830908 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ef8673fb-6fdf-4c32-a573-3583f4188d97-metrics-certs\") pod \"frr-k8s-s297l\" (UID: \"ef8673fb-6fdf-4c32-a573-3583f4188d97\") " pod="metallb-system/frr-k8s-s297l" Feb 02 10:53:03 crc kubenswrapper[4782]: I0202 10:53:03.847447 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-w7rg8"] Feb 02 10:53:03 crc kubenswrapper[4782]: I0202 10:53:03.848602 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-w7rg8" Feb 02 10:53:03 crc kubenswrapper[4782]: I0202 10:53:03.851741 4782 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Feb 02 10:53:03 crc kubenswrapper[4782]: I0202 10:53:03.852020 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Feb 02 10:53:03 crc kubenswrapper[4782]: I0202 10:53:03.852194 4782 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Feb 02 10:53:03 crc kubenswrapper[4782]: I0202 10:53:03.852338 4782 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-2dvzx" Feb 02 10:53:03 crc kubenswrapper[4782]: I0202 10:53:03.863916 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6968d8fdc4-wxfg2"] Feb 02 10:53:03 crc kubenswrapper[4782]: I0202 10:53:03.864998 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-wxfg2" Feb 02 10:53:03 crc kubenswrapper[4782]: I0202 10:53:03.866863 4782 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Feb 02 10:53:03 crc kubenswrapper[4782]: I0202 10:53:03.881709 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-wxfg2"] Feb 02 10:53:03 crc kubenswrapper[4782]: I0202 10:53:03.932422 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qr69b\" (UniqueName: \"kubernetes.io/projected/a3b12ebe-32d3-4d07-b723-64cd83951d38-kube-api-access-qr69b\") pod \"frr-k8s-webhook-server-7df86c4f6c-8zl72\" (UID: \"a3b12ebe-32d3-4d07-b723-64cd83951d38\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-8zl72" Feb 02 10:53:03 crc kubenswrapper[4782]: I0202 10:53:03.932501 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/ef8673fb-6fdf-4c32-a573-3583f4188d97-frr-conf\") pod \"frr-k8s-s297l\" (UID: \"ef8673fb-6fdf-4c32-a573-3583f4188d97\") " pod="metallb-system/frr-k8s-s297l" Feb 02 10:53:03 crc kubenswrapper[4782]: I0202 10:53:03.932524 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/ef8673fb-6fdf-4c32-a573-3583f4188d97-frr-startup\") pod \"frr-k8s-s297l\" (UID: \"ef8673fb-6fdf-4c32-a573-3583f4188d97\") " pod="metallb-system/frr-k8s-s297l" Feb 02 10:53:03 crc kubenswrapper[4782]: I0202 10:53:03.932542 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a3b12ebe-32d3-4d07-b723-64cd83951d38-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-8zl72\" (UID: \"a3b12ebe-32d3-4d07-b723-64cd83951d38\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-8zl72" Feb 02 10:53:03 crc kubenswrapper[4782]: I0202 10:53:03.932559 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/ef8673fb-6fdf-4c32-a573-3583f4188d97-frr-sockets\") pod \"frr-k8s-s297l\" (UID: \"ef8673fb-6fdf-4c32-a573-3583f4188d97\") " pod="metallb-system/frr-k8s-s297l" Feb 02 10:53:03 crc kubenswrapper[4782]: I0202 10:53:03.932584 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/ef8673fb-6fdf-4c32-a573-3583f4188d97-reloader\") pod \"frr-k8s-s297l\" (UID: \"ef8673fb-6fdf-4c32-a573-3583f4188d97\") " pod="metallb-system/frr-k8s-s297l" Feb 02 10:53:03 crc kubenswrapper[4782]: I0202 10:53:03.932615 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-76jkp\" (UniqueName: \"kubernetes.io/projected/ef8673fb-6fdf-4c32-a573-3583f4188d97-kube-api-access-76jkp\") pod \"frr-k8s-s297l\" (UID: \"ef8673fb-6fdf-4c32-a573-3583f4188d97\") " pod="metallb-system/frr-k8s-s297l" Feb 02 10:53:03 crc kubenswrapper[4782]: I0202 10:53:03.932635 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/ef8673fb-6fdf-4c32-a573-3583f4188d97-metrics\") pod \"frr-k8s-s297l\" (UID: \"ef8673fb-6fdf-4c32-a573-3583f4188d97\") " pod="metallb-system/frr-k8s-s297l" Feb 02 10:53:03 crc kubenswrapper[4782]: I0202 10:53:03.932742 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" 
(UniqueName: \"kubernetes.io/secret/ef8673fb-6fdf-4c32-a573-3583f4188d97-metrics-certs\") pod \"frr-k8s-s297l\" (UID: \"ef8673fb-6fdf-4c32-a573-3583f4188d97\") " pod="metallb-system/frr-k8s-s297l" Feb 02 10:53:03 crc kubenswrapper[4782]: I0202 10:53:03.933125 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/ef8673fb-6fdf-4c32-a573-3583f4188d97-reloader\") pod \"frr-k8s-s297l\" (UID: \"ef8673fb-6fdf-4c32-a573-3583f4188d97\") " pod="metallb-system/frr-k8s-s297l" Feb 02 10:53:03 crc kubenswrapper[4782]: E0202 10:53:03.933209 4782 secret.go:188] Couldn't get secret metallb-system/frr-k8s-webhook-server-cert: secret "frr-k8s-webhook-server-cert" not found Feb 02 10:53:03 crc kubenswrapper[4782]: E0202 10:53:03.933245 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a3b12ebe-32d3-4d07-b723-64cd83951d38-cert podName:a3b12ebe-32d3-4d07-b723-64cd83951d38 nodeName:}" failed. No retries permitted until 2026-02-02 10:53:04.433233971 +0000 UTC m=+864.317426687 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a3b12ebe-32d3-4d07-b723-64cd83951d38-cert") pod "frr-k8s-webhook-server-7df86c4f6c-8zl72" (UID: "a3b12ebe-32d3-4d07-b723-64cd83951d38") : secret "frr-k8s-webhook-server-cert" not found Feb 02 10:53:03 crc kubenswrapper[4782]: I0202 10:53:03.933516 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/ef8673fb-6fdf-4c32-a573-3583f4188d97-frr-conf\") pod \"frr-k8s-s297l\" (UID: \"ef8673fb-6fdf-4c32-a573-3583f4188d97\") " pod="metallb-system/frr-k8s-s297l" Feb 02 10:53:03 crc kubenswrapper[4782]: I0202 10:53:03.933794 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/ef8673fb-6fdf-4c32-a573-3583f4188d97-frr-startup\") pod \"frr-k8s-s297l\" (UID: \"ef8673fb-6fdf-4c32-a573-3583f4188d97\") " pod="metallb-system/frr-k8s-s297l" Feb 02 10:53:03 crc kubenswrapper[4782]: I0202 10:53:03.933964 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/ef8673fb-6fdf-4c32-a573-3583f4188d97-frr-sockets\") pod \"frr-k8s-s297l\" (UID: \"ef8673fb-6fdf-4c32-a573-3583f4188d97\") " pod="metallb-system/frr-k8s-s297l" Feb 02 10:53:03 crc kubenswrapper[4782]: I0202 10:53:03.933963 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/ef8673fb-6fdf-4c32-a573-3583f4188d97-metrics\") pod \"frr-k8s-s297l\" (UID: \"ef8673fb-6fdf-4c32-a573-3583f4188d97\") " pod="metallb-system/frr-k8s-s297l" Feb 02 10:53:03 crc kubenswrapper[4782]: I0202 10:53:03.939053 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ef8673fb-6fdf-4c32-a573-3583f4188d97-metrics-certs\") pod \"frr-k8s-s297l\" (UID: \"ef8673fb-6fdf-4c32-a573-3583f4188d97\") " pod="metallb-system/frr-k8s-s297l" Feb 02 10:53:03 crc kubenswrapper[4782]: I0202 10:53:03.960422 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qr69b\" (UniqueName: \"kubernetes.io/projected/a3b12ebe-32d3-4d07-b723-64cd83951d38-kube-api-access-qr69b\") pod \"frr-k8s-webhook-server-7df86c4f6c-8zl72\" (UID: \"a3b12ebe-32d3-4d07-b723-64cd83951d38\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-8zl72" Feb 02 10:53:03 crc 
Feb 02 10:53:03 crc kubenswrapper[4782]: I0202 10:53:03.960740 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-76jkp\" (UniqueName: \"kubernetes.io/projected/ef8673fb-6fdf-4c32-a573-3583f4188d97-kube-api-access-76jkp\") pod \"frr-k8s-s297l\" (UID: \"ef8673fb-6fdf-4c32-a573-3583f4188d97\") " pod="metallb-system/frr-k8s-s297l"
Feb 02 10:53:04 crc kubenswrapper[4782]: I0202 10:53:04.034417 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/7dcb22a8-d257-446a-8264-63b33c40e24a-memberlist\") pod \"speaker-w7rg8\" (UID: \"7dcb22a8-d257-446a-8264-63b33c40e24a\") " pod="metallb-system/speaker-w7rg8"
Feb 02 10:53:04 crc kubenswrapper[4782]: I0202 10:53:04.034512 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7dcb22a8-d257-446a-8264-63b33c40e24a-metrics-certs\") pod \"speaker-w7rg8\" (UID: \"7dcb22a8-d257-446a-8264-63b33c40e24a\") " pod="metallb-system/speaker-w7rg8"
Feb 02 10:53:04 crc kubenswrapper[4782]: I0202 10:53:04.034555 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1d7526eb-b4a4-4ba7-917c-cef512d2dc6a-cert\") pod \"controller-6968d8fdc4-wxfg2\" (UID: \"1d7526eb-b4a4-4ba7-917c-cef512d2dc6a\") " pod="metallb-system/controller-6968d8fdc4-wxfg2"
Feb 02 10:53:04 crc kubenswrapper[4782]: I0202 10:53:04.034681 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8hnl\" (UniqueName: \"kubernetes.io/projected/1d7526eb-b4a4-4ba7-917c-cef512d2dc6a-kube-api-access-x8hnl\") pod \"controller-6968d8fdc4-wxfg2\" (UID: \"1d7526eb-b4a4-4ba7-917c-cef512d2dc6a\") " pod="metallb-system/controller-6968d8fdc4-wxfg2"
Feb 02 10:53:04 crc kubenswrapper[4782]: I0202 10:53:04.034712 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vhtjt\" (UniqueName: \"kubernetes.io/projected/7dcb22a8-d257-446a-8264-63b33c40e24a-kube-api-access-vhtjt\") pod \"speaker-w7rg8\" (UID: \"7dcb22a8-d257-446a-8264-63b33c40e24a\") " pod="metallb-system/speaker-w7rg8"
Feb 02 10:53:04 crc kubenswrapper[4782]: I0202 10:53:04.034738 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1d7526eb-b4a4-4ba7-917c-cef512d2dc6a-metrics-certs\") pod \"controller-6968d8fdc4-wxfg2\" (UID: \"1d7526eb-b4a4-4ba7-917c-cef512d2dc6a\") " pod="metallb-system/controller-6968d8fdc4-wxfg2"
Feb 02 10:53:04 crc kubenswrapper[4782]: I0202 10:53:04.034793 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/7dcb22a8-d257-446a-8264-63b33c40e24a-metallb-excludel2\") pod \"speaker-w7rg8\" (UID: \"7dcb22a8-d257-446a-8264-63b33c40e24a\") " pod="metallb-system/speaker-w7rg8"
Feb 02 10:53:04 crc kubenswrapper[4782]: I0202 10:53:04.089106 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-s297l"
Feb 02 10:53:04 crc kubenswrapper[4782]: I0202 10:53:04.136135 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x8hnl\" (UniqueName: \"kubernetes.io/projected/1d7526eb-b4a4-4ba7-917c-cef512d2dc6a-kube-api-access-x8hnl\") pod \"controller-6968d8fdc4-wxfg2\" (UID: \"1d7526eb-b4a4-4ba7-917c-cef512d2dc6a\") " pod="metallb-system/controller-6968d8fdc4-wxfg2"
Feb 02 10:53:04 crc kubenswrapper[4782]: I0202 10:53:04.136206 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vhtjt\" (UniqueName: \"kubernetes.io/projected/7dcb22a8-d257-446a-8264-63b33c40e24a-kube-api-access-vhtjt\") pod \"speaker-w7rg8\" (UID: \"7dcb22a8-d257-446a-8264-63b33c40e24a\") " pod="metallb-system/speaker-w7rg8"
Feb 02 10:53:04 crc kubenswrapper[4782]: I0202 10:53:04.136235 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1d7526eb-b4a4-4ba7-917c-cef512d2dc6a-metrics-certs\") pod \"controller-6968d8fdc4-wxfg2\" (UID: \"1d7526eb-b4a4-4ba7-917c-cef512d2dc6a\") " pod="metallb-system/controller-6968d8fdc4-wxfg2"
Feb 02 10:53:04 crc kubenswrapper[4782]: I0202 10:53:04.136257 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/7dcb22a8-d257-446a-8264-63b33c40e24a-metallb-excludel2\") pod \"speaker-w7rg8\" (UID: \"7dcb22a8-d257-446a-8264-63b33c40e24a\") " pod="metallb-system/speaker-w7rg8"
Feb 02 10:53:04 crc kubenswrapper[4782]: I0202 10:53:04.136315 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/7dcb22a8-d257-446a-8264-63b33c40e24a-memberlist\") pod \"speaker-w7rg8\" (UID: \"7dcb22a8-d257-446a-8264-63b33c40e24a\") " pod="metallb-system/speaker-w7rg8"
Feb 02 10:53:04 crc kubenswrapper[4782]: I0202 10:53:04.136330 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1d7526eb-b4a4-4ba7-917c-cef512d2dc6a-cert\") pod \"controller-6968d8fdc4-wxfg2\" (UID: \"1d7526eb-b4a4-4ba7-917c-cef512d2dc6a\") " pod="metallb-system/controller-6968d8fdc4-wxfg2"
Feb 02 10:53:04 crc kubenswrapper[4782]: I0202 10:53:04.136345 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7dcb22a8-d257-446a-8264-63b33c40e24a-metrics-certs\") pod \"speaker-w7rg8\" (UID: \"7dcb22a8-d257-446a-8264-63b33c40e24a\") " pod="metallb-system/speaker-w7rg8"
Feb 02 10:53:04 crc kubenswrapper[4782]: E0202 10:53:04.136550 4782 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found
Feb 02 10:53:04 crc kubenswrapper[4782]: E0202 10:53:04.136707 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7dcb22a8-d257-446a-8264-63b33c40e24a-memberlist podName:7dcb22a8-d257-446a-8264-63b33c40e24a nodeName:}" failed. No retries permitted until 2026-02-02 10:53:04.6366256 +0000 UTC m=+864.520818396 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/7dcb22a8-d257-446a-8264-63b33c40e24a-memberlist") pod "speaker-w7rg8" (UID: "7dcb22a8-d257-446a-8264-63b33c40e24a") : secret "metallb-memberlist" not found
Feb 02 10:53:04 crc kubenswrapper[4782]: I0202 10:53:04.137510 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/7dcb22a8-d257-446a-8264-63b33c40e24a-metallb-excludel2\") pod \"speaker-w7rg8\" (UID: \"7dcb22a8-d257-446a-8264-63b33c40e24a\") " pod="metallb-system/speaker-w7rg8"
Feb 02 10:53:04 crc kubenswrapper[4782]: I0202 10:53:04.140762 4782 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert"
Feb 02 10:53:04 crc kubenswrapper[4782]: I0202 10:53:04.141811 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1d7526eb-b4a4-4ba7-917c-cef512d2dc6a-metrics-certs\") pod \"controller-6968d8fdc4-wxfg2\" (UID: \"1d7526eb-b4a4-4ba7-917c-cef512d2dc6a\") " pod="metallb-system/controller-6968d8fdc4-wxfg2"
Feb 02 10:53:04 crc kubenswrapper[4782]: I0202 10:53:04.142533 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7dcb22a8-d257-446a-8264-63b33c40e24a-metrics-certs\") pod \"speaker-w7rg8\" (UID: \"7dcb22a8-d257-446a-8264-63b33c40e24a\") " pod="metallb-system/speaker-w7rg8"
Feb 02 10:53:04 crc kubenswrapper[4782]: I0202 10:53:04.150593 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1d7526eb-b4a4-4ba7-917c-cef512d2dc6a-cert\") pod \"controller-6968d8fdc4-wxfg2\" (UID: \"1d7526eb-b4a4-4ba7-917c-cef512d2dc6a\") " pod="metallb-system/controller-6968d8fdc4-wxfg2"
Feb 02 10:53:04 crc kubenswrapper[4782]: I0202 10:53:04.158365 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x8hnl\" (UniqueName: \"kubernetes.io/projected/1d7526eb-b4a4-4ba7-917c-cef512d2dc6a-kube-api-access-x8hnl\") pod \"controller-6968d8fdc4-wxfg2\" (UID: \"1d7526eb-b4a4-4ba7-917c-cef512d2dc6a\") " pod="metallb-system/controller-6968d8fdc4-wxfg2"
Feb 02 10:53:04 crc kubenswrapper[4782]: I0202 10:53:04.164795 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vhtjt\" (UniqueName: \"kubernetes.io/projected/7dcb22a8-d257-446a-8264-63b33c40e24a-kube-api-access-vhtjt\") pod \"speaker-w7rg8\" (UID: \"7dcb22a8-d257-446a-8264-63b33c40e24a\") " pod="metallb-system/speaker-w7rg8"
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-wxfg2" Feb 02 10:53:04 crc kubenswrapper[4782]: I0202 10:53:04.441927 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a3b12ebe-32d3-4d07-b723-64cd83951d38-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-8zl72\" (UID: \"a3b12ebe-32d3-4d07-b723-64cd83951d38\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-8zl72" Feb 02 10:53:04 crc kubenswrapper[4782]: I0202 10:53:04.445489 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a3b12ebe-32d3-4d07-b723-64cd83951d38-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-8zl72\" (UID: \"a3b12ebe-32d3-4d07-b723-64cd83951d38\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-8zl72" Feb 02 10:53:04 crc kubenswrapper[4782]: I0202 10:53:04.480226 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-wxfg2"] Feb 02 10:53:04 crc kubenswrapper[4782]: I0202 10:53:04.511051 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-s297l" event={"ID":"ef8673fb-6fdf-4c32-a573-3583f4188d97","Type":"ContainerStarted","Data":"bd58aca5c60cbeb8f481ade1c282f404bf53e5c8cb2d15cf8058bef84f1330da"} Feb 02 10:53:04 crc kubenswrapper[4782]: I0202 10:53:04.512184 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-wxfg2" event={"ID":"1d7526eb-b4a4-4ba7-917c-cef512d2dc6a","Type":"ContainerStarted","Data":"fc0bcef6df83895a2f9a40e9bbe2eeab8ae8639023d878ee236eb87a57572d5f"} Feb 02 10:53:04 crc kubenswrapper[4782]: I0202 10:53:04.645304 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/7dcb22a8-d257-446a-8264-63b33c40e24a-memberlist\") pod \"speaker-w7rg8\" (UID: \"7dcb22a8-d257-446a-8264-63b33c40e24a\") " pod="metallb-system/speaker-w7rg8" Feb 02 10:53:04 crc kubenswrapper[4782]: E0202 10:53:04.645752 4782 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 02 10:53:04 crc kubenswrapper[4782]: E0202 10:53:04.645804 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7dcb22a8-d257-446a-8264-63b33c40e24a-memberlist podName:7dcb22a8-d257-446a-8264-63b33c40e24a nodeName:}" failed. No retries permitted until 2026-02-02 10:53:05.645788818 +0000 UTC m=+865.529981534 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/7dcb22a8-d257-446a-8264-63b33c40e24a-memberlist") pod "speaker-w7rg8" (UID: "7dcb22a8-d257-446a-8264-63b33c40e24a") : secret "metallb-memberlist" not found Feb 02 10:53:04 crc kubenswrapper[4782]: I0202 10:53:04.676216 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-8zl72" Feb 02 10:53:04 crc kubenswrapper[4782]: I0202 10:53:04.935515 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-8zl72"] Feb 02 10:53:05 crc kubenswrapper[4782]: I0202 10:53:05.526110 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-wxfg2" event={"ID":"1d7526eb-b4a4-4ba7-917c-cef512d2dc6a","Type":"ContainerStarted","Data":"9090f9af60ff0b73b1f47cea23c88281f6c81117878b3df1dd43d140724ae790"} Feb 02 10:53:05 crc kubenswrapper[4782]: I0202 10:53:05.526157 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-wxfg2" event={"ID":"1d7526eb-b4a4-4ba7-917c-cef512d2dc6a","Type":"ContainerStarted","Data":"4d3c622497c18e1835809575e4ff3791857ca302664eeb9ef1c7ab90c445e948"} Feb 02 10:53:05 crc kubenswrapper[4782]: I0202 10:53:05.526252 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-wxfg2" Feb 02 10:53:05 crc kubenswrapper[4782]: I0202 10:53:05.527311 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-8zl72" event={"ID":"a3b12ebe-32d3-4d07-b723-64cd83951d38","Type":"ContainerStarted","Data":"45386974435adf5dcf0051e07b1aee4b910803f51ecdcef62b9953933c9740bb"} Feb 02 10:53:05 crc kubenswrapper[4782]: I0202 10:53:05.549488 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6968d8fdc4-wxfg2" podStartSLOduration=2.549467935 podStartE2EDuration="2.549467935s" podCreationTimestamp="2026-02-02 10:53:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:53:05.545082379 +0000 UTC m=+865.429275105" watchObservedRunningTime="2026-02-02 10:53:05.549467935 +0000 UTC m=+865.433660661" Feb 02 10:53:05 crc kubenswrapper[4782]: I0202 10:53:05.661851 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/7dcb22a8-d257-446a-8264-63b33c40e24a-memberlist\") pod \"speaker-w7rg8\" (UID: \"7dcb22a8-d257-446a-8264-63b33c40e24a\") " pod="metallb-system/speaker-w7rg8" Feb 02 10:53:05 crc kubenswrapper[4782]: I0202 10:53:05.670229 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/7dcb22a8-d257-446a-8264-63b33c40e24a-memberlist\") pod \"speaker-w7rg8\" (UID: \"7dcb22a8-d257-446a-8264-63b33c40e24a\") " pod="metallb-system/speaker-w7rg8" Feb 02 10:53:05 crc kubenswrapper[4782]: I0202 10:53:05.683567 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-w7rg8" Feb 02 10:53:06 crc kubenswrapper[4782]: I0202 10:53:06.540103 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-w7rg8" event={"ID":"7dcb22a8-d257-446a-8264-63b33c40e24a","Type":"ContainerStarted","Data":"3882d369caecd00974a1599da76dd7132434be43ec20af6d88ae8db114757189"} Feb 02 10:53:06 crc kubenswrapper[4782]: I0202 10:53:06.540535 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-w7rg8" event={"ID":"7dcb22a8-d257-446a-8264-63b33c40e24a","Type":"ContainerStarted","Data":"67a31c2c3509d3a5e446d2298904f632314777d8dc00e80deb0249f741f7f1a6"} Feb 02 10:53:06 crc kubenswrapper[4782]: I0202 10:53:06.540555 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-w7rg8" event={"ID":"7dcb22a8-d257-446a-8264-63b33c40e24a","Type":"ContainerStarted","Data":"ddb46dc6995313facc0b90babb21760431d879ba08ce7922996f41def4a3af89"} Feb 02 10:53:06 crc kubenswrapper[4782]: I0202 10:53:06.540798 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-w7rg8" Feb 02 10:53:06 crc kubenswrapper[4782]: I0202 10:53:06.566584 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-w7rg8" podStartSLOduration=3.566568397 podStartE2EDuration="3.566568397s" podCreationTimestamp="2026-02-02 10:53:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:53:06.562869061 +0000 UTC m=+866.447061777" watchObservedRunningTime="2026-02-02 10:53:06.566568397 +0000 UTC m=+866.450761113" Feb 02 10:53:14 crc kubenswrapper[4782]: I0202 10:53:14.227693 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6968d8fdc4-wxfg2" Feb 02 10:53:14 crc kubenswrapper[4782]: I0202 10:53:14.597111 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-8zl72" event={"ID":"a3b12ebe-32d3-4d07-b723-64cd83951d38","Type":"ContainerStarted","Data":"0ca37a8595781ef4eb014ca286c2f73e69d30bb8ca6948d9e0090d418990bb2f"} Feb 02 10:53:14 crc kubenswrapper[4782]: I0202 10:53:14.597459 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-8zl72" Feb 02 10:53:14 crc kubenswrapper[4782]: I0202 10:53:14.598804 4782 generic.go:334] "Generic (PLEG): container finished" podID="ef8673fb-6fdf-4c32-a573-3583f4188d97" containerID="d1f9150e4bd83cb9a3718e0687d175a4d21e3aa52005eb523790d22630b5d499" exitCode=0 Feb 02 10:53:14 crc kubenswrapper[4782]: I0202 10:53:14.598834 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-s297l" event={"ID":"ef8673fb-6fdf-4c32-a573-3583f4188d97","Type":"ContainerDied","Data":"d1f9150e4bd83cb9a3718e0687d175a4d21e3aa52005eb523790d22630b5d499"} Feb 02 10:53:14 crc kubenswrapper[4782]: I0202 10:53:14.621208 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-8zl72" podStartSLOduration=3.180896781 podStartE2EDuration="11.621193149s" podCreationTimestamp="2026-02-02 10:53:03 +0000 UTC" firstStartedPulling="2026-02-02 10:53:04.944896186 +0000 UTC m=+864.829088902" lastFinishedPulling="2026-02-02 10:53:13.385192554 +0000 UTC m=+873.269385270" observedRunningTime="2026-02-02 10:53:14.619507251 +0000 UTC m=+874.503699967" 
watchObservedRunningTime="2026-02-02 10:53:14.621193149 +0000 UTC m=+874.505385865" Feb 02 10:53:15 crc kubenswrapper[4782]: I0202 10:53:15.609152 4782 generic.go:334] "Generic (PLEG): container finished" podID="ef8673fb-6fdf-4c32-a573-3583f4188d97" containerID="4015ecf5aa26633862cb50b9b7b5f3e73dc6ee9dccd604b439998091b5317ad7" exitCode=0 Feb 02 10:53:15 crc kubenswrapper[4782]: I0202 10:53:15.609264 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-s297l" event={"ID":"ef8673fb-6fdf-4c32-a573-3583f4188d97","Type":"ContainerDied","Data":"4015ecf5aa26633862cb50b9b7b5f3e73dc6ee9dccd604b439998091b5317ad7"} Feb 02 10:53:16 crc kubenswrapper[4782]: I0202 10:53:16.619172 4782 generic.go:334] "Generic (PLEG): container finished" podID="ef8673fb-6fdf-4c32-a573-3583f4188d97" containerID="2ce571ef64fef7e6faff138d09b8ddc6e3b7b4ee44f43656754ad95f6dfc069d" exitCode=0 Feb 02 10:53:16 crc kubenswrapper[4782]: I0202 10:53:16.619237 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-s297l" event={"ID":"ef8673fb-6fdf-4c32-a573-3583f4188d97","Type":"ContainerDied","Data":"2ce571ef64fef7e6faff138d09b8ddc6e3b7b4ee44f43656754ad95f6dfc069d"} Feb 02 10:53:17 crc kubenswrapper[4782]: I0202 10:53:17.632660 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-s297l" event={"ID":"ef8673fb-6fdf-4c32-a573-3583f4188d97","Type":"ContainerStarted","Data":"a3f361566a0cde9c3882d2ca7792f0f1f6dd6e4c6ca0e42f5bfa089290a31758"} Feb 02 10:53:17 crc kubenswrapper[4782]: I0202 10:53:17.633626 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-s297l" event={"ID":"ef8673fb-6fdf-4c32-a573-3583f4188d97","Type":"ContainerStarted","Data":"77bd5259dcef66dd87009eb43baac72998e6af4f1526263eb371738f4ba447cb"} Feb 02 10:53:17 crc kubenswrapper[4782]: I0202 10:53:17.633708 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-s297l" event={"ID":"ef8673fb-6fdf-4c32-a573-3583f4188d97","Type":"ContainerStarted","Data":"ba40206599728e80bb17f434c23a30c29fb8dbadd208629a8a4f8cb6f5e4a343"} Feb 02 10:53:17 crc kubenswrapper[4782]: I0202 10:53:17.633722 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-s297l" event={"ID":"ef8673fb-6fdf-4c32-a573-3583f4188d97","Type":"ContainerStarted","Data":"60bec9446b5c0e34da3c90f2e6b4d5f4998a35ffdc28186c4b7af951e888b3c9"} Feb 02 10:53:17 crc kubenswrapper[4782]: I0202 10:53:17.633732 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-s297l" event={"ID":"ef8673fb-6fdf-4c32-a573-3583f4188d97","Type":"ContainerStarted","Data":"35b165ebe6f4073c72257573d1248bf09d283efb9efd34809751b0a28bada6bd"} Feb 02 10:53:18 crc kubenswrapper[4782]: I0202 10:53:18.646973 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-s297l" event={"ID":"ef8673fb-6fdf-4c32-a573-3583f4188d97","Type":"ContainerStarted","Data":"dde157fea63d879fe418c40328ef4bf1203698dacef0bbf2345db5f1e746c48d"} Feb 02 10:53:18 crc kubenswrapper[4782]: I0202 10:53:18.647999 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-s297l" Feb 02 10:53:18 crc kubenswrapper[4782]: I0202 10:53:18.671210 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-s297l" podStartSLOduration=6.458440501 podStartE2EDuration="15.67119048s" podCreationTimestamp="2026-02-02 10:53:03 +0000 UTC" firstStartedPulling="2026-02-02 10:53:04.216831745 +0000 UTC 
m=+864.101024451" lastFinishedPulling="2026-02-02 10:53:13.429581714 +0000 UTC m=+873.313774430" observedRunningTime="2026-02-02 10:53:18.669152191 +0000 UTC m=+878.553344927" watchObservedRunningTime="2026-02-02 10:53:18.67119048 +0000 UTC m=+878.555383206" Feb 02 10:53:19 crc kubenswrapper[4782]: I0202 10:53:19.089540 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-s297l" Feb 02 10:53:19 crc kubenswrapper[4782]: I0202 10:53:19.133083 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-s297l" Feb 02 10:53:24 crc kubenswrapper[4782]: I0202 10:53:24.690042 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-8zl72" Feb 02 10:53:25 crc kubenswrapper[4782]: I0202 10:53:25.692541 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-w7rg8" Feb 02 10:53:31 crc kubenswrapper[4782]: I0202 10:53:31.836417 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-ml428"] Feb 02 10:53:31 crc kubenswrapper[4782]: I0202 10:53:31.839260 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-ml428" Feb 02 10:53:31 crc kubenswrapper[4782]: I0202 10:53:31.841324 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Feb 02 10:53:31 crc kubenswrapper[4782]: I0202 10:53:31.841590 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-fpgp6" Feb 02 10:53:31 crc kubenswrapper[4782]: I0202 10:53:31.843434 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Feb 02 10:53:31 crc kubenswrapper[4782]: I0202 10:53:31.858998 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-ml428"] Feb 02 10:53:31 crc kubenswrapper[4782]: I0202 10:53:31.972052 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4nzvv\" (UniqueName: \"kubernetes.io/projected/504a2863-da7c-4a03-b973-0f687ca20746-kube-api-access-4nzvv\") pod \"openstack-operator-index-ml428\" (UID: \"504a2863-da7c-4a03-b973-0f687ca20746\") " pod="openstack-operators/openstack-operator-index-ml428" Feb 02 10:53:32 crc kubenswrapper[4782]: I0202 10:53:32.073315 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4nzvv\" (UniqueName: \"kubernetes.io/projected/504a2863-da7c-4a03-b973-0f687ca20746-kube-api-access-4nzvv\") pod \"openstack-operator-index-ml428\" (UID: \"504a2863-da7c-4a03-b973-0f687ca20746\") " pod="openstack-operators/openstack-operator-index-ml428" Feb 02 10:53:32 crc kubenswrapper[4782]: I0202 10:53:32.096088 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4nzvv\" (UniqueName: \"kubernetes.io/projected/504a2863-da7c-4a03-b973-0f687ca20746-kube-api-access-4nzvv\") pod \"openstack-operator-index-ml428\" (UID: \"504a2863-da7c-4a03-b973-0f687ca20746\") " pod="openstack-operators/openstack-operator-index-ml428" Feb 02 10:53:32 crc kubenswrapper[4782]: I0202 10:53:32.185895 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-ml428" Feb 02 10:53:32 crc kubenswrapper[4782]: I0202 10:53:32.637056 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-ml428"] Feb 02 10:53:32 crc kubenswrapper[4782]: I0202 10:53:32.729294 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-ml428" event={"ID":"504a2863-da7c-4a03-b973-0f687ca20746","Type":"ContainerStarted","Data":"7a527c6ac7c0e6b201896da98324705c08623625c01be9905c43359c6018808a"} Feb 02 10:53:34 crc kubenswrapper[4782]: I0202 10:53:34.091948 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-s297l" Feb 02 10:53:35 crc kubenswrapper[4782]: I0202 10:53:35.748901 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-ml428" event={"ID":"504a2863-da7c-4a03-b973-0f687ca20746","Type":"ContainerStarted","Data":"ac975d33c75f19f9be9f4fd04ddbf772b3a6a045fe9eb2985335e19bc898cfa8"} Feb 02 10:53:35 crc kubenswrapper[4782]: I0202 10:53:35.763750 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-ml428" podStartSLOduration=2.197265886 podStartE2EDuration="4.763721738s" podCreationTimestamp="2026-02-02 10:53:31 +0000 UTC" firstStartedPulling="2026-02-02 10:53:32.645245891 +0000 UTC m=+892.529438627" lastFinishedPulling="2026-02-02 10:53:35.211701753 +0000 UTC m=+895.095894479" observedRunningTime="2026-02-02 10:53:35.76203418 +0000 UTC m=+895.646226906" watchObservedRunningTime="2026-02-02 10:53:35.763721738 +0000 UTC m=+895.647914474" Feb 02 10:53:42 crc kubenswrapper[4782]: I0202 10:53:42.186269 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-ml428" Feb 02 10:53:42 crc kubenswrapper[4782]: I0202 10:53:42.186687 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-ml428" Feb 02 10:53:42 crc kubenswrapper[4782]: I0202 10:53:42.214572 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-ml428" Feb 02 10:53:42 crc kubenswrapper[4782]: I0202 10:53:42.834129 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-ml428" Feb 02 10:53:44 crc kubenswrapper[4782]: I0202 10:53:44.283247 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/930f346981e03e3c24c980e9ea06410e1be418518114e573bd8f4b33ab2ckrh"] Feb 02 10:53:44 crc kubenswrapper[4782]: I0202 10:53:44.284659 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/930f346981e03e3c24c980e9ea06410e1be418518114e573bd8f4b33ab2ckrh" Feb 02 10:53:44 crc kubenswrapper[4782]: I0202 10:53:44.295474 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-xgqcq" Feb 02 10:53:44 crc kubenswrapper[4782]: I0202 10:53:44.298121 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/930f346981e03e3c24c980e9ea06410e1be418518114e573bd8f4b33ab2ckrh"] Feb 02 10:53:44 crc kubenswrapper[4782]: I0202 10:53:44.345856 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/120b307b-b163-4e00-be79-cacf3e7e84e1-util\") pod \"930f346981e03e3c24c980e9ea06410e1be418518114e573bd8f4b33ab2ckrh\" (UID: \"120b307b-b163-4e00-be79-cacf3e7e84e1\") " pod="openstack-operators/930f346981e03e3c24c980e9ea06410e1be418518114e573bd8f4b33ab2ckrh" Feb 02 10:53:44 crc kubenswrapper[4782]: I0202 10:53:44.346396 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/120b307b-b163-4e00-be79-cacf3e7e84e1-bundle\") pod \"930f346981e03e3c24c980e9ea06410e1be418518114e573bd8f4b33ab2ckrh\" (UID: \"120b307b-b163-4e00-be79-cacf3e7e84e1\") " pod="openstack-operators/930f346981e03e3c24c980e9ea06410e1be418518114e573bd8f4b33ab2ckrh" Feb 02 10:53:44 crc kubenswrapper[4782]: I0202 10:53:44.346433 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zz5lg\" (UniqueName: \"kubernetes.io/projected/120b307b-b163-4e00-be79-cacf3e7e84e1-kube-api-access-zz5lg\") pod \"930f346981e03e3c24c980e9ea06410e1be418518114e573bd8f4b33ab2ckrh\" (UID: \"120b307b-b163-4e00-be79-cacf3e7e84e1\") " pod="openstack-operators/930f346981e03e3c24c980e9ea06410e1be418518114e573bd8f4b33ab2ckrh" Feb 02 10:53:44 crc kubenswrapper[4782]: I0202 10:53:44.448326 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zz5lg\" (UniqueName: \"kubernetes.io/projected/120b307b-b163-4e00-be79-cacf3e7e84e1-kube-api-access-zz5lg\") pod \"930f346981e03e3c24c980e9ea06410e1be418518114e573bd8f4b33ab2ckrh\" (UID: \"120b307b-b163-4e00-be79-cacf3e7e84e1\") " pod="openstack-operators/930f346981e03e3c24c980e9ea06410e1be418518114e573bd8f4b33ab2ckrh" Feb 02 10:53:44 crc kubenswrapper[4782]: I0202 10:53:44.448502 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/120b307b-b163-4e00-be79-cacf3e7e84e1-util\") pod \"930f346981e03e3c24c980e9ea06410e1be418518114e573bd8f4b33ab2ckrh\" (UID: \"120b307b-b163-4e00-be79-cacf3e7e84e1\") " pod="openstack-operators/930f346981e03e3c24c980e9ea06410e1be418518114e573bd8f4b33ab2ckrh" Feb 02 10:53:44 crc kubenswrapper[4782]: I0202 10:53:44.448625 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/120b307b-b163-4e00-be79-cacf3e7e84e1-bundle\") pod \"930f346981e03e3c24c980e9ea06410e1be418518114e573bd8f4b33ab2ckrh\" (UID: \"120b307b-b163-4e00-be79-cacf3e7e84e1\") " pod="openstack-operators/930f346981e03e3c24c980e9ea06410e1be418518114e573bd8f4b33ab2ckrh" Feb 02 10:53:44 crc kubenswrapper[4782]: I0202 10:53:44.449041 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/120b307b-b163-4e00-be79-cacf3e7e84e1-util\") pod \"930f346981e03e3c24c980e9ea06410e1be418518114e573bd8f4b33ab2ckrh\" (UID: \"120b307b-b163-4e00-be79-cacf3e7e84e1\") " pod="openstack-operators/930f346981e03e3c24c980e9ea06410e1be418518114e573bd8f4b33ab2ckrh" Feb 02 10:53:44 crc kubenswrapper[4782]: I0202 10:53:44.449313 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/120b307b-b163-4e00-be79-cacf3e7e84e1-bundle\") pod \"930f346981e03e3c24c980e9ea06410e1be418518114e573bd8f4b33ab2ckrh\" (UID: \"120b307b-b163-4e00-be79-cacf3e7e84e1\") " pod="openstack-operators/930f346981e03e3c24c980e9ea06410e1be418518114e573bd8f4b33ab2ckrh" Feb 02 10:53:44 crc kubenswrapper[4782]: I0202 10:53:44.467716 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zz5lg\" (UniqueName: \"kubernetes.io/projected/120b307b-b163-4e00-be79-cacf3e7e84e1-kube-api-access-zz5lg\") pod \"930f346981e03e3c24c980e9ea06410e1be418518114e573bd8f4b33ab2ckrh\" (UID: \"120b307b-b163-4e00-be79-cacf3e7e84e1\") " pod="openstack-operators/930f346981e03e3c24c980e9ea06410e1be418518114e573bd8f4b33ab2ckrh" Feb 02 10:53:44 crc kubenswrapper[4782]: I0202 10:53:44.616929 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/930f346981e03e3c24c980e9ea06410e1be418518114e573bd8f4b33ab2ckrh" Feb 02 10:53:44 crc kubenswrapper[4782]: I0202 10:53:44.871146 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/930f346981e03e3c24c980e9ea06410e1be418518114e573bd8f4b33ab2ckrh"] Feb 02 10:53:45 crc kubenswrapper[4782]: I0202 10:53:45.816299 4782 generic.go:334] "Generic (PLEG): container finished" podID="120b307b-b163-4e00-be79-cacf3e7e84e1" containerID="0f4bd2528c24be8b50d21847eaec3477c04c302ad2bba04bb4b9c1eb7fa8ad6f" exitCode=0 Feb 02 10:53:45 crc kubenswrapper[4782]: I0202 10:53:45.816508 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/930f346981e03e3c24c980e9ea06410e1be418518114e573bd8f4b33ab2ckrh" event={"ID":"120b307b-b163-4e00-be79-cacf3e7e84e1","Type":"ContainerDied","Data":"0f4bd2528c24be8b50d21847eaec3477c04c302ad2bba04bb4b9c1eb7fa8ad6f"} Feb 02 10:53:45 crc kubenswrapper[4782]: I0202 10:53:45.817169 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/930f346981e03e3c24c980e9ea06410e1be418518114e573bd8f4b33ab2ckrh" event={"ID":"120b307b-b163-4e00-be79-cacf3e7e84e1","Type":"ContainerStarted","Data":"2c97412fb26337a0ccfa4e88ca937f911bb93e2ca588bdd44449119803a07801"} Feb 02 10:53:46 crc kubenswrapper[4782]: I0202 10:53:46.826850 4782 generic.go:334] "Generic (PLEG): container finished" podID="120b307b-b163-4e00-be79-cacf3e7e84e1" containerID="026e19d34ba2d799042f63db2424aea9e4f5f07a6a4103945ac8d0baa8e1ab2a" exitCode=0 Feb 02 10:53:46 crc kubenswrapper[4782]: I0202 10:53:46.830376 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/930f346981e03e3c24c980e9ea06410e1be418518114e573bd8f4b33ab2ckrh" event={"ID":"120b307b-b163-4e00-be79-cacf3e7e84e1","Type":"ContainerDied","Data":"026e19d34ba2d799042f63db2424aea9e4f5f07a6a4103945ac8d0baa8e1ab2a"} Feb 02 10:53:47 crc kubenswrapper[4782]: I0202 10:53:47.849279 4782 generic.go:334] "Generic (PLEG): container finished" podID="120b307b-b163-4e00-be79-cacf3e7e84e1" containerID="8dfbe940e50b058cf629bb46e1067b5e84ce8d5159ffe85d7b7c01b767a0aa84" exitCode=0 Feb 02 10:53:47 crc kubenswrapper[4782]: I0202 10:53:47.849358 4782 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/930f346981e03e3c24c980e9ea06410e1be418518114e573bd8f4b33ab2ckrh" event={"ID":"120b307b-b163-4e00-be79-cacf3e7e84e1","Type":"ContainerDied","Data":"8dfbe940e50b058cf629bb46e1067b5e84ce8d5159ffe85d7b7c01b767a0aa84"} Feb 02 10:53:49 crc kubenswrapper[4782]: I0202 10:53:49.088258 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/930f346981e03e3c24c980e9ea06410e1be418518114e573bd8f4b33ab2ckrh" Feb 02 10:53:49 crc kubenswrapper[4782]: I0202 10:53:49.215748 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zz5lg\" (UniqueName: \"kubernetes.io/projected/120b307b-b163-4e00-be79-cacf3e7e84e1-kube-api-access-zz5lg\") pod \"120b307b-b163-4e00-be79-cacf3e7e84e1\" (UID: \"120b307b-b163-4e00-be79-cacf3e7e84e1\") " Feb 02 10:53:49 crc kubenswrapper[4782]: I0202 10:53:49.215817 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/120b307b-b163-4e00-be79-cacf3e7e84e1-bundle\") pod \"120b307b-b163-4e00-be79-cacf3e7e84e1\" (UID: \"120b307b-b163-4e00-be79-cacf3e7e84e1\") " Feb 02 10:53:49 crc kubenswrapper[4782]: I0202 10:53:49.215866 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/120b307b-b163-4e00-be79-cacf3e7e84e1-util\") pod \"120b307b-b163-4e00-be79-cacf3e7e84e1\" (UID: \"120b307b-b163-4e00-be79-cacf3e7e84e1\") " Feb 02 10:53:49 crc kubenswrapper[4782]: I0202 10:53:49.216940 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/120b307b-b163-4e00-be79-cacf3e7e84e1-bundle" (OuterVolumeSpecName: "bundle") pod "120b307b-b163-4e00-be79-cacf3e7e84e1" (UID: "120b307b-b163-4e00-be79-cacf3e7e84e1"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:53:49 crc kubenswrapper[4782]: I0202 10:53:49.222085 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/120b307b-b163-4e00-be79-cacf3e7e84e1-kube-api-access-zz5lg" (OuterVolumeSpecName: "kube-api-access-zz5lg") pod "120b307b-b163-4e00-be79-cacf3e7e84e1" (UID: "120b307b-b163-4e00-be79-cacf3e7e84e1"). InnerVolumeSpecName "kube-api-access-zz5lg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:53:49 crc kubenswrapper[4782]: I0202 10:53:49.236798 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/120b307b-b163-4e00-be79-cacf3e7e84e1-util" (OuterVolumeSpecName: "util") pod "120b307b-b163-4e00-be79-cacf3e7e84e1" (UID: "120b307b-b163-4e00-be79-cacf3e7e84e1"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:53:49 crc kubenswrapper[4782]: I0202 10:53:49.317664 4782 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/120b307b-b163-4e00-be79-cacf3e7e84e1-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 10:53:49 crc kubenswrapper[4782]: I0202 10:53:49.317916 4782 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/120b307b-b163-4e00-be79-cacf3e7e84e1-util\") on node \"crc\" DevicePath \"\"" Feb 02 10:53:49 crc kubenswrapper[4782]: I0202 10:53:49.317985 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zz5lg\" (UniqueName: \"kubernetes.io/projected/120b307b-b163-4e00-be79-cacf3e7e84e1-kube-api-access-zz5lg\") on node \"crc\" DevicePath \"\"" Feb 02 10:53:49 crc kubenswrapper[4782]: I0202 10:53:49.862832 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/930f346981e03e3c24c980e9ea06410e1be418518114e573bd8f4b33ab2ckrh" event={"ID":"120b307b-b163-4e00-be79-cacf3e7e84e1","Type":"ContainerDied","Data":"2c97412fb26337a0ccfa4e88ca937f911bb93e2ca588bdd44449119803a07801"} Feb 02 10:53:49 crc kubenswrapper[4782]: I0202 10:53:49.862869 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2c97412fb26337a0ccfa4e88ca937f911bb93e2ca588bdd44449119803a07801" Feb 02 10:53:49 crc kubenswrapper[4782]: I0202 10:53:49.862979 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/930f346981e03e3c24c980e9ea06410e1be418518114e573bd8f4b33ab2ckrh" Feb 02 10:53:53 crc kubenswrapper[4782]: I0202 10:53:53.691607 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-68b945c8c7-jwf5m"] Feb 02 10:53:53 crc kubenswrapper[4782]: E0202 10:53:53.692209 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="120b307b-b163-4e00-be79-cacf3e7e84e1" containerName="pull" Feb 02 10:53:53 crc kubenswrapper[4782]: I0202 10:53:53.692224 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="120b307b-b163-4e00-be79-cacf3e7e84e1" containerName="pull" Feb 02 10:53:53 crc kubenswrapper[4782]: E0202 10:53:53.692245 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="120b307b-b163-4e00-be79-cacf3e7e84e1" containerName="extract" Feb 02 10:53:53 crc kubenswrapper[4782]: I0202 10:53:53.692252 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="120b307b-b163-4e00-be79-cacf3e7e84e1" containerName="extract" Feb 02 10:53:53 crc kubenswrapper[4782]: E0202 10:53:53.692267 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="120b307b-b163-4e00-be79-cacf3e7e84e1" containerName="util" Feb 02 10:53:53 crc kubenswrapper[4782]: I0202 10:53:53.692274 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="120b307b-b163-4e00-be79-cacf3e7e84e1" containerName="util" Feb 02 10:53:53 crc kubenswrapper[4782]: I0202 10:53:53.692428 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="120b307b-b163-4e00-be79-cacf3e7e84e1" containerName="extract" Feb 02 10:53:53 crc kubenswrapper[4782]: I0202 10:53:53.692895 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-68b945c8c7-jwf5m" Feb 02 10:53:53 crc kubenswrapper[4782]: I0202 10:53:53.696166 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-wnscv" Feb 02 10:53:53 crc kubenswrapper[4782]: I0202 10:53:53.776703 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5lzhz\" (UniqueName: \"kubernetes.io/projected/c12a72da-af7d-4f2e-b15d-bb90fa6bd818-kube-api-access-5lzhz\") pod \"openstack-operator-controller-init-68b945c8c7-jwf5m\" (UID: \"c12a72da-af7d-4f2e-b15d-bb90fa6bd818\") " pod="openstack-operators/openstack-operator-controller-init-68b945c8c7-jwf5m" Feb 02 10:53:53 crc kubenswrapper[4782]: I0202 10:53:53.800279 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-68b945c8c7-jwf5m"] Feb 02 10:53:53 crc kubenswrapper[4782]: I0202 10:53:53.878181 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5lzhz\" (UniqueName: \"kubernetes.io/projected/c12a72da-af7d-4f2e-b15d-bb90fa6bd818-kube-api-access-5lzhz\") pod \"openstack-operator-controller-init-68b945c8c7-jwf5m\" (UID: \"c12a72da-af7d-4f2e-b15d-bb90fa6bd818\") " pod="openstack-operators/openstack-operator-controller-init-68b945c8c7-jwf5m" Feb 02 10:53:53 crc kubenswrapper[4782]: I0202 10:53:53.918315 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5lzhz\" (UniqueName: \"kubernetes.io/projected/c12a72da-af7d-4f2e-b15d-bb90fa6bd818-kube-api-access-5lzhz\") pod \"openstack-operator-controller-init-68b945c8c7-jwf5m\" (UID: \"c12a72da-af7d-4f2e-b15d-bb90fa6bd818\") " pod="openstack-operators/openstack-operator-controller-init-68b945c8c7-jwf5m" Feb 02 10:53:54 crc kubenswrapper[4782]: I0202 10:53:54.009613 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-68b945c8c7-jwf5m" Feb 02 10:53:54 crc kubenswrapper[4782]: I0202 10:53:54.363656 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-68b945c8c7-jwf5m"] Feb 02 10:53:54 crc kubenswrapper[4782]: I0202 10:53:54.899118 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-68b945c8c7-jwf5m" event={"ID":"c12a72da-af7d-4f2e-b15d-bb90fa6bd818","Type":"ContainerStarted","Data":"bf806aa920ee8763ac21b423ddb02d614044fdabae3502ff8c129ee6a13bf39a"} Feb 02 10:54:00 crc kubenswrapper[4782]: I0202 10:54:00.955965 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-68b945c8c7-jwf5m" event={"ID":"c12a72da-af7d-4f2e-b15d-bb90fa6bd818","Type":"ContainerStarted","Data":"4f690700cb253cbec709c144d0fd093081f25eb893cb99a3f1a19c5f737ccbe8"} Feb 02 10:54:00 crc kubenswrapper[4782]: I0202 10:54:00.957392 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-68b945c8c7-jwf5m" Feb 02 10:54:00 crc kubenswrapper[4782]: I0202 10:54:00.988202 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-68b945c8c7-jwf5m" podStartSLOduration=2.430693278 podStartE2EDuration="7.988181711s" podCreationTimestamp="2026-02-02 10:53:53 +0000 UTC" firstStartedPulling="2026-02-02 10:53:54.37318804 +0000 UTC m=+914.257380756" lastFinishedPulling="2026-02-02 10:53:59.930676483 +0000 UTC m=+919.814869189" observedRunningTime="2026-02-02 10:54:00.981710096 +0000 UTC m=+920.865902822" watchObservedRunningTime="2026-02-02 10:54:00.988181711 +0000 UTC m=+920.872374427" Feb 02 10:54:14 crc kubenswrapper[4782]: I0202 10:54:14.012196 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-68b945c8c7-jwf5m" Feb 02 10:54:33 crc kubenswrapper[4782]: I0202 10:54:33.785084 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-5ngrn"] Feb 02 10:54:33 crc kubenswrapper[4782]: I0202 10:54:33.786606 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-5ngrn" Feb 02 10:54:33 crc kubenswrapper[4782]: I0202 10:54:33.788870 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-pz896" Feb 02 10:54:33 crc kubenswrapper[4782]: I0202 10:54:33.800692 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-8d874c8fc-vj4sh"] Feb 02 10:54:33 crc kubenswrapper[4782]: I0202 10:54:33.801441 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-vj4sh" Feb 02 10:54:33 crc kubenswrapper[4782]: I0202 10:54:33.804082 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-85x2p" Feb 02 10:54:33 crc kubenswrapper[4782]: I0202 10:54:33.807313 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-5ngrn"] Feb 02 10:54:33 crc kubenswrapper[4782]: I0202 10:54:33.815915 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfgvh\" (UniqueName: \"kubernetes.io/projected/0aa487d3-a703-4ed6-a44c-bc40eb8272ce-kube-api-access-tfgvh\") pod \"barbican-operator-controller-manager-7b6c4d8c5f-5ngrn\" (UID: \"0aa487d3-a703-4ed6-a44c-bc40eb8272ce\") " pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-5ngrn" Feb 02 10:54:33 crc kubenswrapper[4782]: I0202 10:54:33.819770 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d9697b7f4-5vj4j"] Feb 02 10:54:33 crc kubenswrapper[4782]: I0202 10:54:33.820900 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-5vj4j" Feb 02 10:54:33 crc kubenswrapper[4782]: I0202 10:54:33.828020 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-tmxnj" Feb 02 10:54:33 crc kubenswrapper[4782]: I0202 10:54:33.833871 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-8d874c8fc-vj4sh"] Feb 02 10:54:33 crc kubenswrapper[4782]: I0202 10:54:33.854001 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-8886f4c47-v7tzl"] Feb 02 10:54:33 crc kubenswrapper[4782]: I0202 10:54:33.854840 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-v7tzl" Feb 02 10:54:33 crc kubenswrapper[4782]: I0202 10:54:33.858043 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-sz9fm" Feb 02 10:54:33 crc kubenswrapper[4782]: I0202 10:54:33.864548 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d9697b7f4-5vj4j"] Feb 02 10:54:33 crc kubenswrapper[4782]: I0202 10:54:33.890462 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-69d6db494d-fkwh5"] Feb 02 10:54:33 crc kubenswrapper[4782]: I0202 10:54:33.891445 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-fkwh5" Feb 02 10:54:33 crc kubenswrapper[4782]: I0202 10:54:33.895827 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-2n6rr" Feb 02 10:54:33 crc kubenswrapper[4782]: I0202 10:54:33.903295 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-8886f4c47-v7tzl"] Feb 02 10:54:33 crc kubenswrapper[4782]: I0202 10:54:33.919410 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5sbb7\" (UniqueName: \"kubernetes.io/projected/9ba082c6-4f91-48d6-b5ec-198f46abc135-kube-api-access-5sbb7\") pod \"designate-operator-controller-manager-6d9697b7f4-5vj4j\" (UID: \"9ba082c6-4f91-48d6-b5ec-198f46abc135\") " pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-5vj4j" Feb 02 10:54:33 crc kubenswrapper[4782]: I0202 10:54:33.919502 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7298\" (UniqueName: \"kubernetes.io/projected/bfafd643-4798-4519-934d-8ec3e2e677d9-kube-api-access-g7298\") pod \"cinder-operator-controller-manager-8d874c8fc-vj4sh\" (UID: \"bfafd643-4798-4519-934d-8ec3e2e677d9\") " pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-vj4sh" Feb 02 10:54:33 crc kubenswrapper[4782]: I0202 10:54:33.919664 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kh95q\" (UniqueName: \"kubernetes.io/projected/7fa679ab-d8ad-4dae-9488-c9bbc93ae5d7-kube-api-access-kh95q\") pod \"heat-operator-controller-manager-69d6db494d-fkwh5\" (UID: \"7fa679ab-d8ad-4dae-9488-c9bbc93ae5d7\") " pod="openstack-operators/heat-operator-controller-manager-69d6db494d-fkwh5" Feb 02 10:54:33 crc kubenswrapper[4782]: I0202 10:54:33.919700 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cq2js\" (UniqueName: \"kubernetes.io/projected/b03fe987-deab-47e7-829a-b822ab061f20-kube-api-access-cq2js\") pod \"glance-operator-controller-manager-8886f4c47-v7tzl\" (UID: \"b03fe987-deab-47e7-829a-b822ab061f20\") " pod="openstack-operators/glance-operator-controller-manager-8886f4c47-v7tzl" Feb 02 10:54:33 crc kubenswrapper[4782]: I0202 10:54:33.919719 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tfgvh\" (UniqueName: \"kubernetes.io/projected/0aa487d3-a703-4ed6-a44c-bc40eb8272ce-kube-api-access-tfgvh\") pod \"barbican-operator-controller-manager-7b6c4d8c5f-5ngrn\" (UID: \"0aa487d3-a703-4ed6-a44c-bc40eb8272ce\") " pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-5ngrn" Feb 02 10:54:33 crc kubenswrapper[4782]: I0202 10:54:33.921790 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-7z5k7"] Feb 02 10:54:33 crc kubenswrapper[4782]: I0202 10:54:33.922492 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-7z5k7" Feb 02 10:54:33 crc kubenswrapper[4782]: I0202 10:54:33.932778 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-67xzx" Feb 02 10:54:33 crc kubenswrapper[4782]: I0202 10:54:33.939068 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69d6db494d-fkwh5"] Feb 02 10:54:33 crc kubenswrapper[4782]: I0202 10:54:33.952583 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-7z5k7"] Feb 02 10:54:33 crc kubenswrapper[4782]: I0202 10:54:33.972775 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tfgvh\" (UniqueName: \"kubernetes.io/projected/0aa487d3-a703-4ed6-a44c-bc40eb8272ce-kube-api-access-tfgvh\") pod \"barbican-operator-controller-manager-7b6c4d8c5f-5ngrn\" (UID: \"0aa487d3-a703-4ed6-a44c-bc40eb8272ce\") " pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-5ngrn" Feb 02 10:54:33 crc kubenswrapper[4782]: I0202 10:54:33.991548 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-v94dv"] Feb 02 10:54:33 crc kubenswrapper[4782]: I0202 10:54:33.992302 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-v94dv" Feb 02 10:54:33 crc kubenswrapper[4782]: I0202 10:54:33.996178 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-n6j25" Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.007442 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-nsx4j"] Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.008248 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79955696d6-nsx4j" Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.018866 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.019061 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-2qdwh" Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.020186 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kh95q\" (UniqueName: \"kubernetes.io/projected/7fa679ab-d8ad-4dae-9488-c9bbc93ae5d7-kube-api-access-kh95q\") pod \"heat-operator-controller-manager-69d6db494d-fkwh5\" (UID: \"7fa679ab-d8ad-4dae-9488-c9bbc93ae5d7\") " pod="openstack-operators/heat-operator-controller-manager-69d6db494d-fkwh5" Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.020227 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fm6b4\" (UniqueName: \"kubernetes.io/projected/224f30b2-1084-4934-8d06-67975a9776ad-kube-api-access-fm6b4\") pod \"horizon-operator-controller-manager-5fb775575f-7z5k7\" (UID: \"224f30b2-1084-4934-8d06-67975a9776ad\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-7z5k7" Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.020252 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cq2js\" (UniqueName: \"kubernetes.io/projected/b03fe987-deab-47e7-829a-b822ab061f20-kube-api-access-cq2js\") pod \"glance-operator-controller-manager-8886f4c47-v7tzl\" (UID: \"b03fe987-deab-47e7-829a-b822ab061f20\") " pod="openstack-operators/glance-operator-controller-manager-8886f4c47-v7tzl" Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.020295 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5sbb7\" (UniqueName: \"kubernetes.io/projected/9ba082c6-4f91-48d6-b5ec-198f46abc135-kube-api-access-5sbb7\") pod \"designate-operator-controller-manager-6d9697b7f4-5vj4j\" (UID: \"9ba082c6-4f91-48d6-b5ec-198f46abc135\") " pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-5vj4j" Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.020324 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g7298\" (UniqueName: \"kubernetes.io/projected/bfafd643-4798-4519-934d-8ec3e2e677d9-kube-api-access-g7298\") pod \"cinder-operator-controller-manager-8d874c8fc-vj4sh\" (UID: \"bfafd643-4798-4519-934d-8ec3e2e677d9\") " pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-vj4sh" Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.020359 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2z8x\" (UniqueName: \"kubernetes.io/projected/6a74bdcf-4aaf-4fd7-b24d-7cb1d47d1f27-kube-api-access-p2z8x\") pod \"ironic-operator-controller-manager-5f4b8bd54d-v94dv\" (UID: \"6a74bdcf-4aaf-4fd7-b24d-7cb1d47d1f27\") " pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-v94dv" Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.026922 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-84f48565d4-w7gld"] Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.027903 4782 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-w7gld" Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.044951 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-nsx4j"] Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.062155 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-qksl6" Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.066891 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5sbb7\" (UniqueName: \"kubernetes.io/projected/9ba082c6-4f91-48d6-b5ec-198f46abc135-kube-api-access-5sbb7\") pod \"designate-operator-controller-manager-6d9697b7f4-5vj4j\" (UID: \"9ba082c6-4f91-48d6-b5ec-198f46abc135\") " pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-5vj4j" Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.066970 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-84f48565d4-w7gld"] Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.096241 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kh95q\" (UniqueName: \"kubernetes.io/projected/7fa679ab-d8ad-4dae-9488-c9bbc93ae5d7-kube-api-access-kh95q\") pod \"heat-operator-controller-manager-69d6db494d-fkwh5\" (UID: \"7fa679ab-d8ad-4dae-9488-c9bbc93ae5d7\") " pod="openstack-operators/heat-operator-controller-manager-69d6db494d-fkwh5" Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.098013 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-n88d6"] Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.099306 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-n88d6" Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.105019 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cq2js\" (UniqueName: \"kubernetes.io/projected/b03fe987-deab-47e7-829a-b822ab061f20-kube-api-access-cq2js\") pod \"glance-operator-controller-manager-8886f4c47-v7tzl\" (UID: \"b03fe987-deab-47e7-829a-b822ab061f20\") " pod="openstack-operators/glance-operator-controller-manager-8886f4c47-v7tzl" Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.109311 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g7298\" (UniqueName: \"kubernetes.io/projected/bfafd643-4798-4519-934d-8ec3e2e677d9-kube-api-access-g7298\") pod \"cinder-operator-controller-manager-8d874c8fc-vj4sh\" (UID: \"bfafd643-4798-4519-934d-8ec3e2e677d9\") " pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-vj4sh" Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.109669 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-5tgth" Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.118357 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-5ngrn" Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.121166 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4j9k\" (UniqueName: \"kubernetes.io/projected/3624e93f-9208-4f82-9f55-12381a637262-kube-api-access-s4j9k\") pod \"mariadb-operator-controller-manager-67bf948998-n88d6\" (UID: \"3624e93f-9208-4f82-9f55-12381a637262\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-n88d6" Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.121248 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fm6b4\" (UniqueName: \"kubernetes.io/projected/224f30b2-1084-4934-8d06-67975a9776ad-kube-api-access-fm6b4\") pod \"horizon-operator-controller-manager-5fb775575f-7z5k7\" (UID: \"224f30b2-1084-4934-8d06-67975a9776ad\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-7z5k7" Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.121324 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/009bc68d-5c70-42ca-9008-152206fd954d-cert\") pod \"infra-operator-controller-manager-79955696d6-nsx4j\" (UID: \"009bc68d-5c70-42ca-9008-152206fd954d\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-nsx4j" Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.121422 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2j9r\" (UniqueName: \"kubernetes.io/projected/009bc68d-5c70-42ca-9008-152206fd954d-kube-api-access-n2j9r\") pod \"infra-operator-controller-manager-79955696d6-nsx4j\" (UID: \"009bc68d-5c70-42ca-9008-152206fd954d\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-nsx4j" Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.121570 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6dsh\" (UniqueName: \"kubernetes.io/projected/6b276ac2-533f-43c9-94a1-f0d0e4eb6993-kube-api-access-m6dsh\") pod \"keystone-operator-controller-manager-84f48565d4-w7gld\" (UID: \"6b276ac2-533f-43c9-94a1-f0d0e4eb6993\") " pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-w7gld" Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.121605 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p2z8x\" (UniqueName: \"kubernetes.io/projected/6a74bdcf-4aaf-4fd7-b24d-7cb1d47d1f27-kube-api-access-p2z8x\") pod \"ironic-operator-controller-manager-5f4b8bd54d-v94dv\" (UID: \"6a74bdcf-4aaf-4fd7-b24d-7cb1d47d1f27\") " pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-v94dv" Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.133183 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-vj4sh" Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.144881 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-5vj4j" Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.147800 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-n88d6"] Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.158994 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-7dd968899f-scr7v"] Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.159950 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-scr7v" Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.177629 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-v7tzl" Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.183706 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-7dd968899f-scr7v"] Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.207676 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-p6wdr" Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.222846 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p2z8x\" (UniqueName: \"kubernetes.io/projected/6a74bdcf-4aaf-4fd7-b24d-7cb1d47d1f27-kube-api-access-p2z8x\") pod \"ironic-operator-controller-manager-5f4b8bd54d-v94dv\" (UID: \"6a74bdcf-4aaf-4fd7-b24d-7cb1d47d1f27\") " pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-v94dv" Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.223373 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-fkwh5" Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.223726 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n2j9r\" (UniqueName: \"kubernetes.io/projected/009bc68d-5c70-42ca-9008-152206fd954d-kube-api-access-n2j9r\") pod \"infra-operator-controller-manager-79955696d6-nsx4j\" (UID: \"009bc68d-5c70-42ca-9008-152206fd954d\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-nsx4j" Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.223769 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m6dsh\" (UniqueName: \"kubernetes.io/projected/6b276ac2-533f-43c9-94a1-f0d0e4eb6993-kube-api-access-m6dsh\") pod \"keystone-operator-controller-manager-84f48565d4-w7gld\" (UID: \"6b276ac2-533f-43c9-94a1-f0d0e4eb6993\") " pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-w7gld" Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.223813 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4j9k\" (UniqueName: \"kubernetes.io/projected/3624e93f-9208-4f82-9f55-12381a637262-kube-api-access-s4j9k\") pod \"mariadb-operator-controller-manager-67bf948998-n88d6\" (UID: \"3624e93f-9208-4f82-9f55-12381a637262\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-n88d6" Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.223852 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/009bc68d-5c70-42ca-9008-152206fd954d-cert\") pod \"infra-operator-controller-manager-79955696d6-nsx4j\" (UID: \"009bc68d-5c70-42ca-9008-152206fd954d\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-nsx4j" Feb 02 10:54:34 crc kubenswrapper[4782]: E0202 10:54:34.224006 4782 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 02 10:54:34 crc kubenswrapper[4782]: E0202 10:54:34.224049 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/009bc68d-5c70-42ca-9008-152206fd954d-cert podName:009bc68d-5c70-42ca-9008-152206fd954d nodeName:}" failed. No retries permitted until 2026-02-02 10:54:34.724036339 +0000 UTC m=+954.608229055 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/009bc68d-5c70-42ca-9008-152206fd954d-cert") pod "infra-operator-controller-manager-79955696d6-nsx4j" (UID: "009bc68d-5c70-42ca-9008-152206fd954d") : secret "infra-operator-webhook-server-cert" not found Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.271489 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fm6b4\" (UniqueName: \"kubernetes.io/projected/224f30b2-1084-4934-8d06-67975a9776ad-kube-api-access-fm6b4\") pod \"horizon-operator-controller-manager-5fb775575f-7z5k7\" (UID: \"224f30b2-1084-4934-8d06-67975a9776ad\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-7z5k7" Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.273808 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-v94dv"] Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.313410 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n2j9r\" (UniqueName: \"kubernetes.io/projected/009bc68d-5c70-42ca-9008-152206fd954d-kube-api-access-n2j9r\") pod \"infra-operator-controller-manager-79955696d6-nsx4j\" (UID: \"009bc68d-5c70-42ca-9008-152206fd954d\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-nsx4j" Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.316477 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m6dsh\" (UniqueName: \"kubernetes.io/projected/6b276ac2-533f-43c9-94a1-f0d0e4eb6993-kube-api-access-m6dsh\") pod \"keystone-operator-controller-manager-84f48565d4-w7gld\" (UID: \"6b276ac2-533f-43c9-94a1-f0d0e4eb6993\") " pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-w7gld" Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.320282 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-v94dv" Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.326018 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dlm4l\" (UniqueName: \"kubernetes.io/projected/f44c1b55-d189-42dd-9187-90d9e0713790-kube-api-access-dlm4l\") pod \"manila-operator-controller-manager-7dd968899f-scr7v\" (UID: \"f44c1b55-d189-42dd-9187-90d9e0713790\") " pod="openstack-operators/manila-operator-controller-manager-7dd968899f-scr7v" Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.399317 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4j9k\" (UniqueName: \"kubernetes.io/projected/3624e93f-9208-4f82-9f55-12381a637262-kube-api-access-s4j9k\") pod \"mariadb-operator-controller-manager-67bf948998-n88d6\" (UID: \"3624e93f-9208-4f82-9f55-12381a637262\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-n88d6" Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.436710 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-585dbc889-l9q78"] Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.437496 4782 util.go:30] "No sandbox for pod can be found. 
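[annotation] The nestedpendingoperations errors above show the kubelet's per-volume retry backoff: the first failed SetUp of the infra-operator "cert" volume is blocked for 500ms, and (as the later entries in this excerpt show) the next failure for 1s, i.e. the delay doubles on each consecutive failure. A schematic Go sketch of that policy, not the kubelet's actual implementation; the cap value is an assumption for illustration:

package main

import (
	"fmt"
	"time"
)

// backoff doubles the delay after every consecutive failure, capped at
// max, mirroring the durationBeforeRetry progression (500ms, 1s, ...)
// visible in this log.
type backoff struct {
	initial, max, current time.Duration
}

func (b *backoff) next() time.Duration {
	if b.current == 0 {
		b.current = b.initial
	} else {
		b.current *= 2
		if b.current > b.max {
			b.current = b.max
		}
	}
	return b.current
}

func main() {
	// initial matches the log; the 2m cap is illustrative only.
	b := backoff{initial: 500 * time.Millisecond, max: 2 * time.Minute}
	for i := 1; i <= 4; i++ {
		fmt.Printf("failure %d: no retries permitted for %v\n", i, b.next())
	}
}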
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-l9q78" Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.447803 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-r8g5n" Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.447997 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-55bff696bd-v8zfh"] Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.448746 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-v8zfh" Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.459056 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-cksq6" Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.459279 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-585dbc889-l9q78"] Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.461430 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dlm4l\" (UniqueName: \"kubernetes.io/projected/f44c1b55-d189-42dd-9187-90d9e0713790-kube-api-access-dlm4l\") pod \"manila-operator-controller-manager-7dd968899f-scr7v\" (UID: \"f44c1b55-d189-42dd-9187-90d9e0713790\") " pod="openstack-operators/manila-operator-controller-manager-7dd968899f-scr7v" Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.469739 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-55bff696bd-v8zfh"] Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.490570 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dlm4l\" (UniqueName: \"kubernetes.io/projected/f44c1b55-d189-42dd-9187-90d9e0713790-kube-api-access-dlm4l\") pod \"manila-operator-controller-manager-7dd968899f-scr7v\" (UID: \"f44c1b55-d189-42dd-9187-90d9e0713790\") " pod="openstack-operators/manila-operator-controller-manager-7dd968899f-scr7v" Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.516384 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dtmpbf"] Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.519417 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dtmpbf" Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.536743 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-fnsz7" Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.538571 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.549803 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-7z5k7" Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.553689 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6687f8d877-r9dkb"] Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.554909 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-r9dkb" Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.560345 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-w7gld" Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.563007 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6pbb\" (UniqueName: \"kubernetes.io/projected/216a79cc-1b33-43f7-81ff-400a3b6f3d00-kube-api-access-p6pbb\") pod \"neutron-operator-controller-manager-585dbc889-l9q78\" (UID: \"216a79cc-1b33-43f7-81ff-400a3b6f3d00\") " pod="openstack-operators/neutron-operator-controller-manager-585dbc889-l9q78" Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.563501 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wg46x\" (UniqueName: \"kubernetes.io/projected/ab3a96ec-3e51-4147-9a58-6596f2c3ad5c-kube-api-access-wg46x\") pod \"nova-operator-controller-manager-55bff696bd-v8zfh\" (UID: \"ab3a96ec-3e51-4147-9a58-6596f2c3ad5c\") " pod="openstack-operators/nova-operator-controller-manager-55bff696bd-v8zfh" Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.577116 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dtmpbf"] Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.588955 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-5szrt" Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.597115 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6687f8d877-r9dkb"] Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.598463 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-n88d6" Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.623412 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-scr7v" Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.643893 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-9ls2x"] Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.644784 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-9ls2x" Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.649298 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-zj447" Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.664377 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89ctp\" (UniqueName: \"kubernetes.io/projected/6c7ac81b-49d3-493d-a794-1cffe78eba5e-kube-api-access-89ctp\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dtmpbf\" (UID: \"6c7ac81b-49d3-493d-a794-1cffe78eba5e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dtmpbf" Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.664450 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wg46x\" (UniqueName: \"kubernetes.io/projected/ab3a96ec-3e51-4147-9a58-6596f2c3ad5c-kube-api-access-wg46x\") pod \"nova-operator-controller-manager-55bff696bd-v8zfh\" (UID: \"ab3a96ec-3e51-4147-9a58-6596f2c3ad5c\") " pod="openstack-operators/nova-operator-controller-manager-55bff696bd-v8zfh" Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.664510 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p6pbb\" (UniqueName: \"kubernetes.io/projected/216a79cc-1b33-43f7-81ff-400a3b6f3d00-kube-api-access-p6pbb\") pod \"neutron-operator-controller-manager-585dbc889-l9q78\" (UID: \"216a79cc-1b33-43f7-81ff-400a3b6f3d00\") " pod="openstack-operators/neutron-operator-controller-manager-585dbc889-l9q78" Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.664528 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xr5s\" (UniqueName: \"kubernetes.io/projected/7e19a281-abaa-462e-abc7-add4acff7865-kube-api-access-6xr5s\") pod \"octavia-operator-controller-manager-6687f8d877-r9dkb\" (UID: \"7e19a281-abaa-462e-abc7-add4acff7865\") " pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-r9dkb" Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.664548 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6c7ac81b-49d3-493d-a794-1cffe78eba5e-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dtmpbf\" (UID: \"6c7ac81b-49d3-493d-a794-1cffe78eba5e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dtmpbf" Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.703083 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p6pbb\" (UniqueName: \"kubernetes.io/projected/216a79cc-1b33-43f7-81ff-400a3b6f3d00-kube-api-access-p6pbb\") pod \"neutron-operator-controller-manager-585dbc889-l9q78\" (UID: \"216a79cc-1b33-43f7-81ff-400a3b6f3d00\") " pod="openstack-operators/neutron-operator-controller-manager-585dbc889-l9q78" Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.723564 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wg46x\" (UniqueName: \"kubernetes.io/projected/ab3a96ec-3e51-4147-9a58-6596f2c3ad5c-kube-api-access-wg46x\") pod \"nova-operator-controller-manager-55bff696bd-v8zfh\" (UID: \"ab3a96ec-3e51-4147-9a58-6596f2c3ad5c\") " 
pod="openstack-operators/nova-operator-controller-manager-55bff696bd-v8zfh" Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.734901 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-9ls2x"] Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.753861 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-dmncd"] Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.754621 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-dmncd" Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.763110 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-68fc8c869-xnzl4"] Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.764101 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-xnzl4" Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.772285 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-r9w5d" Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.772821 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-9b45h" Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.773751 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6xr5s\" (UniqueName: \"kubernetes.io/projected/7e19a281-abaa-462e-abc7-add4acff7865-kube-api-access-6xr5s\") pod \"octavia-operator-controller-manager-6687f8d877-r9dkb\" (UID: \"7e19a281-abaa-462e-abc7-add4acff7865\") " pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-r9dkb" Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.773884 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6c7ac81b-49d3-493d-a794-1cffe78eba5e-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dtmpbf\" (UID: \"6c7ac81b-49d3-493d-a794-1cffe78eba5e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dtmpbf" Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.774021 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-89ctp\" (UniqueName: \"kubernetes.io/projected/6c7ac81b-49d3-493d-a794-1cffe78eba5e-kube-api-access-89ctp\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dtmpbf\" (UID: \"6c7ac81b-49d3-493d-a794-1cffe78eba5e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dtmpbf" Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.774193 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fcvgr\" (UniqueName: \"kubernetes.io/projected/2f8b3b48-0c03-4922-8966-a3aaca8ebce3-kube-api-access-fcvgr\") pod \"ovn-operator-controller-manager-788c46999f-9ls2x\" (UID: \"2f8b3b48-0c03-4922-8966-a3aaca8ebce3\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-9ls2x" Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.774337 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/009bc68d-5c70-42ca-9008-152206fd954d-cert\") pod \"infra-operator-controller-manager-79955696d6-nsx4j\" (UID: \"009bc68d-5c70-42ca-9008-152206fd954d\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-nsx4j" Feb 02 10:54:34 crc kubenswrapper[4782]: E0202 10:54:34.774567 4782 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 02 10:54:34 crc kubenswrapper[4782]: E0202 10:54:34.775570 4782 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 02 10:54:34 crc kubenswrapper[4782]: E0202 10:54:34.775659 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6c7ac81b-49d3-493d-a794-1cffe78eba5e-cert podName:6c7ac81b-49d3-493d-a794-1cffe78eba5e nodeName:}" failed. No retries permitted until 2026-02-02 10:54:35.275628381 +0000 UTC m=+955.159821097 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/6c7ac81b-49d3-493d-a794-1cffe78eba5e-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dtmpbf" (UID: "6c7ac81b-49d3-493d-a794-1cffe78eba5e") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 02 10:54:34 crc kubenswrapper[4782]: E0202 10:54:34.777975 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/009bc68d-5c70-42ca-9008-152206fd954d-cert podName:009bc68d-5c70-42ca-9008-152206fd954d nodeName:}" failed. No retries permitted until 2026-02-02 10:54:35.777936497 +0000 UTC m=+955.662129283 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/009bc68d-5c70-42ca-9008-152206fd954d-cert") pod "infra-operator-controller-manager-79955696d6-nsx4j" (UID: "009bc68d-5c70-42ca-9008-152206fd954d") : secret "infra-operator-webhook-server-cert" not found Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.780226 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68fc8c869-xnzl4"] Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.790701 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-dmncd"] Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.800072 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-l9q78" Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.806003 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-64b5b76f97-ckl5m"] Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.807634 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-ckl5m" Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.816551 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-89ctp\" (UniqueName: \"kubernetes.io/projected/6c7ac81b-49d3-493d-a794-1cffe78eba5e-kube-api-access-89ctp\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dtmpbf\" (UID: \"6c7ac81b-49d3-493d-a794-1cffe78eba5e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dtmpbf" Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.823181 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-z8frt" Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.827138 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-v8zfh" Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.864395 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-82nk8"] Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.865520 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-k7t28"] Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.869025 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-82nk8" Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.871762 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-564965969-k7t28" Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.877434 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4l7w\" (UniqueName: \"kubernetes.io/projected/1661d177-41b5-4df5-886f-f3cb7abd1047-kube-api-access-c4l7w\") pod \"swift-operator-controller-manager-68fc8c869-xnzl4\" (UID: \"1661d177-41b5-4df5-886f-f3cb7abd1047\") " pod="openstack-operators/swift-operator-controller-manager-68fc8c869-xnzl4" Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.877694 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zh8bj\" (UniqueName: \"kubernetes.io/projected/6ac6c6b4-9123-4c39-b26f-b07880c1a6c6-kube-api-access-zh8bj\") pod \"placement-operator-controller-manager-5b964cf4cd-dmncd\" (UID: \"6ac6c6b4-9123-4c39-b26f-b07880c1a6c6\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-dmncd" Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.877899 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fcvgr\" (UniqueName: \"kubernetes.io/projected/2f8b3b48-0c03-4922-8966-a3aaca8ebce3-kube-api-access-fcvgr\") pod \"ovn-operator-controller-manager-788c46999f-9ls2x\" (UID: \"2f8b3b48-0c03-4922-8966-a3aaca8ebce3\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-9ls2x" Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.878424 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6xr5s\" (UniqueName: \"kubernetes.io/projected/7e19a281-abaa-462e-abc7-add4acff7865-kube-api-access-6xr5s\") pod 
\"octavia-operator-controller-manager-6687f8d877-r9dkb\" (UID: \"7e19a281-abaa-462e-abc7-add4acff7865\") " pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-r9dkb" Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.885052 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-nsztx" Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.885327 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-5zkfp" Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.893721 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-82nk8"] Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.905292 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-r9dkb" Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.919177 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fcvgr\" (UniqueName: \"kubernetes.io/projected/2f8b3b48-0c03-4922-8966-a3aaca8ebce3-kube-api-access-fcvgr\") pod \"ovn-operator-controller-manager-788c46999f-9ls2x\" (UID: \"2f8b3b48-0c03-4922-8966-a3aaca8ebce3\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-9ls2x" Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.923058 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-64b5b76f97-ckl5m"] Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.987873 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhvk7\" (UniqueName: \"kubernetes.io/projected/0fd2f609-78f1-4f82-b405-35b5312baf0d-kube-api-access-bhvk7\") pod \"test-operator-controller-manager-56f8bfcd9f-82nk8\" (UID: \"0fd2f609-78f1-4f82-b405-35b5312baf0d\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-82nk8" Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.987929 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96hgx\" (UniqueName: \"kubernetes.io/projected/127c9a45-7187-4afb-bb45-c34a45e67e4e-kube-api-access-96hgx\") pod \"watcher-operator-controller-manager-564965969-k7t28\" (UID: \"127c9a45-7187-4afb-bb45-c34a45e67e4e\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-k7t28" Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.987978 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c4l7w\" (UniqueName: \"kubernetes.io/projected/1661d177-41b5-4df5-886f-f3cb7abd1047-kube-api-access-c4l7w\") pod \"swift-operator-controller-manager-68fc8c869-xnzl4\" (UID: \"1661d177-41b5-4df5-886f-f3cb7abd1047\") " pod="openstack-operators/swift-operator-controller-manager-68fc8c869-xnzl4" Feb 02 10:54:34 crc kubenswrapper[4782]: I0202 10:54:34.992792 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zh8bj\" (UniqueName: \"kubernetes.io/projected/6ac6c6b4-9123-4c39-b26f-b07880c1a6c6-kube-api-access-zh8bj\") pod \"placement-operator-controller-manager-5b964cf4cd-dmncd\" (UID: \"6ac6c6b4-9123-4c39-b26f-b07880c1a6c6\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-dmncd" Feb 02 10:54:34 crc 
kubenswrapper[4782]: I0202 10:54:34.992934 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6c24q\" (UniqueName: \"kubernetes.io/projected/c617a97c-fec4-418c-818a-250919ea6882-kube-api-access-6c24q\") pod \"telemetry-operator-controller-manager-64b5b76f97-ckl5m\" (UID: \"c617a97c-fec4-418c-818a-250919ea6882\") " pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-ckl5m" Feb 02 10:54:35 crc kubenswrapper[4782]: I0202 10:54:35.045058 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-k7t28"] Feb 02 10:54:35 crc kubenswrapper[4782]: I0202 10:54:35.050135 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c4l7w\" (UniqueName: \"kubernetes.io/projected/1661d177-41b5-4df5-886f-f3cb7abd1047-kube-api-access-c4l7w\") pod \"swift-operator-controller-manager-68fc8c869-xnzl4\" (UID: \"1661d177-41b5-4df5-886f-f3cb7abd1047\") " pod="openstack-operators/swift-operator-controller-manager-68fc8c869-xnzl4" Feb 02 10:54:35 crc kubenswrapper[4782]: I0202 10:54:35.085105 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zh8bj\" (UniqueName: \"kubernetes.io/projected/6ac6c6b4-9123-4c39-b26f-b07880c1a6c6-kube-api-access-zh8bj\") pod \"placement-operator-controller-manager-5b964cf4cd-dmncd\" (UID: \"6ac6c6b4-9123-4c39-b26f-b07880c1a6c6\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-dmncd" Feb 02 10:54:35 crc kubenswrapper[4782]: I0202 10:54:35.098436 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6c24q\" (UniqueName: \"kubernetes.io/projected/c617a97c-fec4-418c-818a-250919ea6882-kube-api-access-6c24q\") pod \"telemetry-operator-controller-manager-64b5b76f97-ckl5m\" (UID: \"c617a97c-fec4-418c-818a-250919ea6882\") " pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-ckl5m" Feb 02 10:54:35 crc kubenswrapper[4782]: I0202 10:54:35.098507 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bhvk7\" (UniqueName: \"kubernetes.io/projected/0fd2f609-78f1-4f82-b405-35b5312baf0d-kube-api-access-bhvk7\") pod \"test-operator-controller-manager-56f8bfcd9f-82nk8\" (UID: \"0fd2f609-78f1-4f82-b405-35b5312baf0d\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-82nk8" Feb 02 10:54:35 crc kubenswrapper[4782]: I0202 10:54:35.098529 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-96hgx\" (UniqueName: \"kubernetes.io/projected/127c9a45-7187-4afb-bb45-c34a45e67e4e-kube-api-access-96hgx\") pod \"watcher-operator-controller-manager-564965969-k7t28\" (UID: \"127c9a45-7187-4afb-bb45-c34a45e67e4e\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-k7t28" Feb 02 10:54:35 crc kubenswrapper[4782]: I0202 10:54:35.133163 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-dmncd" Feb 02 10:54:35 crc kubenswrapper[4782]: I0202 10:54:35.140333 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-96hgx\" (UniqueName: \"kubernetes.io/projected/127c9a45-7187-4afb-bb45-c34a45e67e4e-kube-api-access-96hgx\") pod \"watcher-operator-controller-manager-564965969-k7t28\" (UID: \"127c9a45-7187-4afb-bb45-c34a45e67e4e\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-k7t28" Feb 02 10:54:35 crc kubenswrapper[4782]: I0202 10:54:35.141245 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-9ls2x" Feb 02 10:54:35 crc kubenswrapper[4782]: I0202 10:54:35.146285 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6c24q\" (UniqueName: \"kubernetes.io/projected/c617a97c-fec4-418c-818a-250919ea6882-kube-api-access-6c24q\") pod \"telemetry-operator-controller-manager-64b5b76f97-ckl5m\" (UID: \"c617a97c-fec4-418c-818a-250919ea6882\") " pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-ckl5m" Feb 02 10:54:35 crc kubenswrapper[4782]: I0202 10:54:35.154095 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-xnzl4" Feb 02 10:54:35 crc kubenswrapper[4782]: I0202 10:54:35.172928 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-ckl5m" Feb 02 10:54:35 crc kubenswrapper[4782]: I0202 10:54:35.210684 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bhvk7\" (UniqueName: \"kubernetes.io/projected/0fd2f609-78f1-4f82-b405-35b5312baf0d-kube-api-access-bhvk7\") pod \"test-operator-controller-manager-56f8bfcd9f-82nk8\" (UID: \"0fd2f609-78f1-4f82-b405-35b5312baf0d\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-82nk8" Feb 02 10:54:35 crc kubenswrapper[4782]: I0202 10:54:35.214308 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-82nk8" Feb 02 10:54:35 crc kubenswrapper[4782]: I0202 10:54:35.241465 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-564965969-k7t28" Feb 02 10:54:35 crc kubenswrapper[4782]: I0202 10:54:35.344494 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6c7ac81b-49d3-493d-a794-1cffe78eba5e-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dtmpbf\" (UID: \"6c7ac81b-49d3-493d-a794-1cffe78eba5e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dtmpbf" Feb 02 10:54:35 crc kubenswrapper[4782]: E0202 10:54:35.344664 4782 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 02 10:54:35 crc kubenswrapper[4782]: E0202 10:54:35.344717 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6c7ac81b-49d3-493d-a794-1cffe78eba5e-cert podName:6c7ac81b-49d3-493d-a794-1cffe78eba5e nodeName:}" failed. 
No retries permitted until 2026-02-02 10:54:36.344693244 +0000 UTC m=+956.228885960 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/6c7ac81b-49d3-493d-a794-1cffe78eba5e-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dtmpbf" (UID: "6c7ac81b-49d3-493d-a794-1cffe78eba5e") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 02 10:54:35 crc kubenswrapper[4782]: I0202 10:54:35.375389 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-6b655fd757-r6hxp"] Feb 02 10:54:35 crc kubenswrapper[4782]: I0202 10:54:35.381269 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-6b655fd757-r6hxp" Feb 02 10:54:35 crc kubenswrapper[4782]: I0202 10:54:35.389059 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Feb 02 10:54:35 crc kubenswrapper[4782]: I0202 10:54:35.389274 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Feb 02 10:54:35 crc kubenswrapper[4782]: I0202 10:54:35.389587 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-mbtgq" Feb 02 10:54:35 crc kubenswrapper[4782]: I0202 10:54:35.416238 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-6b655fd757-r6hxp"] Feb 02 10:54:35 crc kubenswrapper[4782]: I0202 10:54:35.427193 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-jjztq"] Feb 02 10:54:35 crc kubenswrapper[4782]: I0202 10:54:35.428819 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-jjztq" Feb 02 10:54:35 crc kubenswrapper[4782]: I0202 10:54:35.437185 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-jjztq"] Feb 02 10:54:35 crc kubenswrapper[4782]: I0202 10:54:35.439385 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-7fj8h" Feb 02 10:54:35 crc kubenswrapper[4782]: I0202 10:54:35.457089 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-5ngrn"] Feb 02 10:54:35 crc kubenswrapper[4782]: I0202 10:54:35.548140 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5844bcff-6d6e-4cf4-89af-dfecfc748869-metrics-certs\") pod \"openstack-operator-controller-manager-6b655fd757-r6hxp\" (UID: \"5844bcff-6d6e-4cf4-89af-dfecfc748869\") " pod="openstack-operators/openstack-operator-controller-manager-6b655fd757-r6hxp" Feb 02 10:54:35 crc kubenswrapper[4782]: I0202 10:54:35.548497 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/5844bcff-6d6e-4cf4-89af-dfecfc748869-webhook-certs\") pod \"openstack-operator-controller-manager-6b655fd757-r6hxp\" (UID: \"5844bcff-6d6e-4cf4-89af-dfecfc748869\") " pod="openstack-operators/openstack-operator-controller-manager-6b655fd757-r6hxp" Feb 02 10:54:35 crc kubenswrapper[4782]: I0202 10:54:35.548655 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9nx7r\" (UniqueName: \"kubernetes.io/projected/5844bcff-6d6e-4cf4-89af-dfecfc748869-kube-api-access-9nx7r\") pod \"openstack-operator-controller-manager-6b655fd757-r6hxp\" (UID: \"5844bcff-6d6e-4cf4-89af-dfecfc748869\") " pod="openstack-operators/openstack-operator-controller-manager-6b655fd757-r6hxp" Feb 02 10:54:35 crc kubenswrapper[4782]: I0202 10:54:35.548690 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ccmjk\" (UniqueName: \"kubernetes.io/projected/83a0d24e-3e0c-4d9a-b735-77c74ceec664-kube-api-access-ccmjk\") pod \"rabbitmq-cluster-operator-manager-668c99d594-jjztq\" (UID: \"83a0d24e-3e0c-4d9a-b735-77c74ceec664\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-jjztq" Feb 02 10:54:35 crc kubenswrapper[4782]: I0202 10:54:35.649909 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/5844bcff-6d6e-4cf4-89af-dfecfc748869-webhook-certs\") pod \"openstack-operator-controller-manager-6b655fd757-r6hxp\" (UID: \"5844bcff-6d6e-4cf4-89af-dfecfc748869\") " pod="openstack-operators/openstack-operator-controller-manager-6b655fd757-r6hxp" Feb 02 10:54:35 crc kubenswrapper[4782]: I0202 10:54:35.649993 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9nx7r\" (UniqueName: \"kubernetes.io/projected/5844bcff-6d6e-4cf4-89af-dfecfc748869-kube-api-access-9nx7r\") pod \"openstack-operator-controller-manager-6b655fd757-r6hxp\" (UID: \"5844bcff-6d6e-4cf4-89af-dfecfc748869\") " pod="openstack-operators/openstack-operator-controller-manager-6b655fd757-r6hxp" Feb 02 10:54:35 crc kubenswrapper[4782]: 
I0202 10:54:35.650031 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ccmjk\" (UniqueName: \"kubernetes.io/projected/83a0d24e-3e0c-4d9a-b735-77c74ceec664-kube-api-access-ccmjk\") pod \"rabbitmq-cluster-operator-manager-668c99d594-jjztq\" (UID: \"83a0d24e-3e0c-4d9a-b735-77c74ceec664\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-jjztq" Feb 02 10:54:35 crc kubenswrapper[4782]: I0202 10:54:35.650112 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5844bcff-6d6e-4cf4-89af-dfecfc748869-metrics-certs\") pod \"openstack-operator-controller-manager-6b655fd757-r6hxp\" (UID: \"5844bcff-6d6e-4cf4-89af-dfecfc748869\") " pod="openstack-operators/openstack-operator-controller-manager-6b655fd757-r6hxp" Feb 02 10:54:35 crc kubenswrapper[4782]: E0202 10:54:35.650134 4782 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 02 10:54:35 crc kubenswrapper[4782]: E0202 10:54:35.650221 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5844bcff-6d6e-4cf4-89af-dfecfc748869-webhook-certs podName:5844bcff-6d6e-4cf4-89af-dfecfc748869 nodeName:}" failed. No retries permitted until 2026-02-02 10:54:36.150198415 +0000 UTC m=+956.034391161 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/5844bcff-6d6e-4cf4-89af-dfecfc748869-webhook-certs") pod "openstack-operator-controller-manager-6b655fd757-r6hxp" (UID: "5844bcff-6d6e-4cf4-89af-dfecfc748869") : secret "webhook-server-cert" not found Feb 02 10:54:35 crc kubenswrapper[4782]: E0202 10:54:35.650249 4782 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 02 10:54:35 crc kubenswrapper[4782]: E0202 10:54:35.650302 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5844bcff-6d6e-4cf4-89af-dfecfc748869-metrics-certs podName:5844bcff-6d6e-4cf4-89af-dfecfc748869 nodeName:}" failed. No retries permitted until 2026-02-02 10:54:36.150284658 +0000 UTC m=+956.034477454 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5844bcff-6d6e-4cf4-89af-dfecfc748869-metrics-certs") pod "openstack-operator-controller-manager-6b655fd757-r6hxp" (UID: "5844bcff-6d6e-4cf4-89af-dfecfc748869") : secret "metrics-server-cert" not found Feb 02 10:54:35 crc kubenswrapper[4782]: I0202 10:54:35.675267 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9nx7r\" (UniqueName: \"kubernetes.io/projected/5844bcff-6d6e-4cf4-89af-dfecfc748869-kube-api-access-9nx7r\") pod \"openstack-operator-controller-manager-6b655fd757-r6hxp\" (UID: \"5844bcff-6d6e-4cf4-89af-dfecfc748869\") " pod="openstack-operators/openstack-operator-controller-manager-6b655fd757-r6hxp" Feb 02 10:54:35 crc kubenswrapper[4782]: I0202 10:54:35.676193 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ccmjk\" (UniqueName: \"kubernetes.io/projected/83a0d24e-3e0c-4d9a-b735-77c74ceec664-kube-api-access-ccmjk\") pod \"rabbitmq-cluster-operator-manager-668c99d594-jjztq\" (UID: \"83a0d24e-3e0c-4d9a-b735-77c74ceec664\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-jjztq" Feb 02 10:54:35 crc kubenswrapper[4782]: I0202 10:54:35.688754 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-v94dv"] Feb 02 10:54:35 crc kubenswrapper[4782]: W0202 10:54:35.718242 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6a74bdcf_4aaf_4fd7_b24d_7cb1d47d1f27.slice/crio-9175a4c17dbbd4d7e70979978b3ad2c4bba6344db32e1a7c43dfc94182b5a7f8 WatchSource:0}: Error finding container 9175a4c17dbbd4d7e70979978b3ad2c4bba6344db32e1a7c43dfc94182b5a7f8: Status 404 returned error can't find the container with id 9175a4c17dbbd4d7e70979978b3ad2c4bba6344db32e1a7c43dfc94182b5a7f8 Feb 02 10:54:35 crc kubenswrapper[4782]: I0202 10:54:35.772000 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-jjztq" Feb 02 10:54:35 crc kubenswrapper[4782]: I0202 10:54:35.779080 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-8886f4c47-v7tzl"] Feb 02 10:54:35 crc kubenswrapper[4782]: W0202 10:54:35.795135 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb03fe987_deab_47e7_829a_b822ab061f20.slice/crio-3ce5d1b59e2a3b6599174b2a49a4d08835b7c45a25e9b91de63435097f06f65b WatchSource:0}: Error finding container 3ce5d1b59e2a3b6599174b2a49a4d08835b7c45a25e9b91de63435097f06f65b: Status 404 returned error can't find the container with id 3ce5d1b59e2a3b6599174b2a49a4d08835b7c45a25e9b91de63435097f06f65b Feb 02 10:54:35 crc kubenswrapper[4782]: I0202 10:54:35.851820 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/009bc68d-5c70-42ca-9008-152206fd954d-cert\") pod \"infra-operator-controller-manager-79955696d6-nsx4j\" (UID: \"009bc68d-5c70-42ca-9008-152206fd954d\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-nsx4j" Feb 02 10:54:35 crc kubenswrapper[4782]: E0202 10:54:35.852024 4782 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 02 10:54:35 crc kubenswrapper[4782]: E0202 10:54:35.852069 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/009bc68d-5c70-42ca-9008-152206fd954d-cert podName:009bc68d-5c70-42ca-9008-152206fd954d nodeName:}" failed. No retries permitted until 2026-02-02 10:54:37.852055861 +0000 UTC m=+957.736248577 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/009bc68d-5c70-42ca-9008-152206fd954d-cert") pod "infra-operator-controller-manager-79955696d6-nsx4j" (UID: "009bc68d-5c70-42ca-9008-152206fd954d") : secret "infra-operator-webhook-server-cert" not found Feb 02 10:54:36 crc kubenswrapper[4782]: I0202 10:54:36.040227 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-8d874c8fc-vj4sh"] Feb 02 10:54:36 crc kubenswrapper[4782]: I0202 10:54:36.051401 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69d6db494d-fkwh5"] Feb 02 10:54:36 crc kubenswrapper[4782]: I0202 10:54:36.160197 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5844bcff-6d6e-4cf4-89af-dfecfc748869-metrics-certs\") pod \"openstack-operator-controller-manager-6b655fd757-r6hxp\" (UID: \"5844bcff-6d6e-4cf4-89af-dfecfc748869\") " pod="openstack-operators/openstack-operator-controller-manager-6b655fd757-r6hxp" Feb 02 10:54:36 crc kubenswrapper[4782]: I0202 10:54:36.160420 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/5844bcff-6d6e-4cf4-89af-dfecfc748869-webhook-certs\") pod \"openstack-operator-controller-manager-6b655fd757-r6hxp\" (UID: \"5844bcff-6d6e-4cf4-89af-dfecfc748869\") " pod="openstack-operators/openstack-operator-controller-manager-6b655fd757-r6hxp" Feb 02 10:54:36 crc kubenswrapper[4782]: E0202 10:54:36.160361 4782 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 02 10:54:36 crc kubenswrapper[4782]: E0202 10:54:36.160580 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5844bcff-6d6e-4cf4-89af-dfecfc748869-metrics-certs podName:5844bcff-6d6e-4cf4-89af-dfecfc748869 nodeName:}" failed. No retries permitted until 2026-02-02 10:54:37.160562438 +0000 UTC m=+957.044755154 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5844bcff-6d6e-4cf4-89af-dfecfc748869-metrics-certs") pod "openstack-operator-controller-manager-6b655fd757-r6hxp" (UID: "5844bcff-6d6e-4cf4-89af-dfecfc748869") : secret "metrics-server-cert" not found Feb 02 10:54:36 crc kubenswrapper[4782]: E0202 10:54:36.160523 4782 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 02 10:54:36 crc kubenswrapper[4782]: E0202 10:54:36.161112 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5844bcff-6d6e-4cf4-89af-dfecfc748869-webhook-certs podName:5844bcff-6d6e-4cf4-89af-dfecfc748869 nodeName:}" failed. No retries permitted until 2026-02-02 10:54:37.161085053 +0000 UTC m=+957.045277769 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/5844bcff-6d6e-4cf4-89af-dfecfc748869-webhook-certs") pod "openstack-operator-controller-manager-6b655fd757-r6hxp" (UID: "5844bcff-6d6e-4cf4-89af-dfecfc748869") : secret "webhook-server-cert" not found Feb 02 10:54:36 crc kubenswrapper[4782]: I0202 10:54:36.263231 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-vj4sh" event={"ID":"bfafd643-4798-4519-934d-8ec3e2e677d9","Type":"ContainerStarted","Data":"cf6bb7ee3b8ed2e620ea9ba0767292c04d917653e5004d318c2f9dc5ee752f5c"} Feb 02 10:54:36 crc kubenswrapper[4782]: I0202 10:54:36.269214 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-v7tzl" event={"ID":"b03fe987-deab-47e7-829a-b822ab061f20","Type":"ContainerStarted","Data":"3ce5d1b59e2a3b6599174b2a49a4d08835b7c45a25e9b91de63435097f06f65b"} Feb 02 10:54:36 crc kubenswrapper[4782]: I0202 10:54:36.271434 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-5ngrn" event={"ID":"0aa487d3-a703-4ed6-a44c-bc40eb8272ce","Type":"ContainerStarted","Data":"34da07131024ed19f1828056678064437119bf41e29b997621e545c3ae57965f"} Feb 02 10:54:36 crc kubenswrapper[4782]: I0202 10:54:36.272561 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-v94dv" event={"ID":"6a74bdcf-4aaf-4fd7-b24d-7cb1d47d1f27","Type":"ContainerStarted","Data":"9175a4c17dbbd4d7e70979978b3ad2c4bba6344db32e1a7c43dfc94182b5a7f8"} Feb 02 10:54:36 crc kubenswrapper[4782]: I0202 10:54:36.276445 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-fkwh5" event={"ID":"7fa679ab-d8ad-4dae-9488-c9bbc93ae5d7","Type":"ContainerStarted","Data":"9bbedd0724574344844979ca0727761925d888b96504245184405f06deb399b1"} Feb 02 10:54:36 crc kubenswrapper[4782]: I0202 10:54:36.365944 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6c7ac81b-49d3-493d-a794-1cffe78eba5e-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dtmpbf\" (UID: \"6c7ac81b-49d3-493d-a794-1cffe78eba5e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dtmpbf" Feb 02 10:54:36 crc kubenswrapper[4782]: E0202 10:54:36.366182 4782 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 02 10:54:36 crc kubenswrapper[4782]: E0202 10:54:36.366237 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6c7ac81b-49d3-493d-a794-1cffe78eba5e-cert podName:6c7ac81b-49d3-493d-a794-1cffe78eba5e nodeName:}" failed. No retries permitted until 2026-02-02 10:54:38.366218782 +0000 UTC m=+958.250411498 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/6c7ac81b-49d3-493d-a794-1cffe78eba5e-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dtmpbf" (UID: "6c7ac81b-49d3-493d-a794-1cffe78eba5e") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 02 10:54:36 crc kubenswrapper[4782]: I0202 10:54:36.405597 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-84f48565d4-w7gld"] Feb 02 10:54:36 crc kubenswrapper[4782]: I0202 10:54:36.420565 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-55bff696bd-v8zfh"] Feb 02 10:54:36 crc kubenswrapper[4782]: I0202 10:54:36.424343 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-9ls2x"] Feb 02 10:54:36 crc kubenswrapper[4782]: I0202 10:54:36.444924 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-n88d6"] Feb 02 10:54:36 crc kubenswrapper[4782]: W0202 10:54:36.448734 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2f8b3b48_0c03_4922_8966_a3aaca8ebce3.slice/crio-1f7e908fef4cda953228525de1e8c66943d612ca7926988818315db5194a5720 WatchSource:0}: Error finding container 1f7e908fef4cda953228525de1e8c66943d612ca7926988818315db5194a5720: Status 404 returned error can't find the container with id 1f7e908fef4cda953228525de1e8c66943d612ca7926988818315db5194a5720 Feb 02 10:54:36 crc kubenswrapper[4782]: I0202 10:54:36.465367 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d9697b7f4-5vj4j"] Feb 02 10:54:36 crc kubenswrapper[4782]: I0202 10:54:36.471677 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6687f8d877-r9dkb"] Feb 02 10:54:36 crc kubenswrapper[4782]: I0202 10:54:36.480736 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-7z5k7"] Feb 02 10:54:36 crc kubenswrapper[4782]: W0202 10:54:36.514330 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod224f30b2_1084_4934_8d06_67975a9776ad.slice/crio-d49860db0a252b08815f475658b06d6075373dc3676a21a1f281a03dd6455070 WatchSource:0}: Error finding container d49860db0a252b08815f475658b06d6075373dc3676a21a1f281a03dd6455070: Status 404 returned error can't find the container with id d49860db0a252b08815f475658b06d6075373dc3676a21a1f281a03dd6455070 Feb 02 10:54:36 crc kubenswrapper[4782]: W0202 10:54:36.515873 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3624e93f_9208_4f82_9f55_12381a637262.slice/crio-ceaf04d9ecda10f1b2f84982afb5c97478378ae2d121d6dbb686d5306fcd7e45 WatchSource:0}: Error finding container ceaf04d9ecda10f1b2f84982afb5c97478378ae2d121d6dbb686d5306fcd7e45: Status 404 returned error can't find the container with id ceaf04d9ecda10f1b2f84982afb5c97478378ae2d121d6dbb686d5306fcd7e45 Feb 02 10:54:36 crc kubenswrapper[4782]: I0202 10:54:36.518748 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-7dd968899f-scr7v"] Feb 02 10:54:36 crc kubenswrapper[4782]: 
W0202 10:54:36.526218 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf44c1b55_d189_42dd_9187_90d9e0713790.slice/crio-72453069a538ff3d4ee103e73d35c5b5e229930d8ca261e33dbad0825cd0bbdd WatchSource:0}: Error finding container 72453069a538ff3d4ee103e73d35c5b5e229930d8ca261e33dbad0825cd0bbdd: Status 404 returned error can't find the container with id 72453069a538ff3d4ee103e73d35c5b5e229930d8ca261e33dbad0825cd0bbdd Feb 02 10:54:36 crc kubenswrapper[4782]: W0202 10:54:36.531231 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9ba082c6_4f91_48d6_b5ec_198f46abc135.slice/crio-d72e14f5f1b0c6db5b52155ca3ce938a154f057143662da5243cb46ac2c357bf WatchSource:0}: Error finding container d72e14f5f1b0c6db5b52155ca3ce938a154f057143662da5243cb46ac2c357bf: Status 404 returned error can't find the container with id d72e14f5f1b0c6db5b52155ca3ce938a154f057143662da5243cb46ac2c357bf Feb 02 10:54:36 crc kubenswrapper[4782]: W0202 10:54:36.533356 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podab3a96ec_3e51_4147_9a58_6596f2c3ad5c.slice/crio-e690268bcbbb0d4726f69b983c8e32177793cf5f57c6ea12b3626b9353f533a9 WatchSource:0}: Error finding container e690268bcbbb0d4726f69b983c8e32177793cf5f57c6ea12b3626b9353f533a9: Status 404 returned error can't find the container with id e690268bcbbb0d4726f69b983c8e32177793cf5f57c6ea12b3626b9353f533a9 Feb 02 10:54:36 crc kubenswrapper[4782]: I0202 10:54:36.775408 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-64b5b76f97-ckl5m"] Feb 02 10:54:36 crc kubenswrapper[4782]: W0202 10:54:36.799676 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc617a97c_fec4_418c_818a_250919ea6882.slice/crio-2e4e94145c393a8d054ec897f4a8def10443e698c703f6d23497348749c5d036 WatchSource:0}: Error finding container 2e4e94145c393a8d054ec897f4a8def10443e698c703f6d23497348749c5d036: Status 404 returned error can't find the container with id 2e4e94145c393a8d054ec897f4a8def10443e698c703f6d23497348749c5d036 Feb 02 10:54:36 crc kubenswrapper[4782]: I0202 10:54:36.801830 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68fc8c869-xnzl4"] Feb 02 10:54:36 crc kubenswrapper[4782]: W0202 10:54:36.809439 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod216a79cc_1b33_43f7_81ff_400a3b6f3d00.slice/crio-92e5b321a3523e8cd0cf49988d3019b96543c8b371500b937b2f3c3c45234634 WatchSource:0}: Error finding container 92e5b321a3523e8cd0cf49988d3019b96543c8b371500b937b2f3c3c45234634: Status 404 returned error can't find the container with id 92e5b321a3523e8cd0cf49988d3019b96543c8b371500b937b2f3c3c45234634 Feb 02 10:54:36 crc kubenswrapper[4782]: I0202 10:54:36.812254 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-k7t28"] Feb 02 10:54:36 crc kubenswrapper[4782]: W0202 10:54:36.815570 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1661d177_41b5_4df5_886f_f3cb7abd1047.slice/crio-931605dc0e7a67f33e8afecfc843689cd86f5f1ef384db4b2ffdddfe0a90b1d5 
WatchSource:0}: Error finding container 931605dc0e7a67f33e8afecfc843689cd86f5f1ef384db4b2ffdddfe0a90b1d5: Status 404 returned error can't find the container with id 931605dc0e7a67f33e8afecfc843689cd86f5f1ef384db4b2ffdddfe0a90b1d5 Feb 02 10:54:36 crc kubenswrapper[4782]: W0202 10:54:36.822804 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod127c9a45_7187_4afb_bb45_c34a45e67e4e.slice/crio-0e0ea35ba17901399d83ac3e18c6d6813fe58d9ff66e4f151e7c941555929c97 WatchSource:0}: Error finding container 0e0ea35ba17901399d83ac3e18c6d6813fe58d9ff66e4f151e7c941555929c97: Status 404 returned error can't find the container with id 0e0ea35ba17901399d83ac3e18c6d6813fe58d9ff66e4f151e7c941555929c97 Feb 02 10:54:36 crc kubenswrapper[4782]: W0202 10:54:36.849381 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod83a0d24e_3e0c_4d9a_b735_77c74ceec664.slice/crio-7c629075a3bf3234909da16bbe216484184ddb4350f6654129e6dbbcc9c84faa WatchSource:0}: Error finding container 7c629075a3bf3234909da16bbe216484184ddb4350f6654129e6dbbcc9c84faa: Status 404 returned error can't find the container with id 7c629075a3bf3234909da16bbe216484184ddb4350f6654129e6dbbcc9c84faa Feb 02 10:54:36 crc kubenswrapper[4782]: I0202 10:54:36.855422 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-585dbc889-l9q78"] Feb 02 10:54:36 crc kubenswrapper[4782]: I0202 10:54:36.855460 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-82nk8"] Feb 02 10:54:36 crc kubenswrapper[4782]: I0202 10:54:36.855472 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-dmncd"] Feb 02 10:54:36 crc kubenswrapper[4782]: E0202 10:54:36.855536 4782 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-96hgx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-564965969-k7t28_openstack-operators(127c9a45-7187-4afb-bb45-c34a45e67e4e): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 02 10:54:36 crc kubenswrapper[4782]: E0202 10:54:36.856764 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-k7t28" podUID="127c9a45-7187-4afb-bb45-c34a45e67e4e" Feb 02 10:54:36 crc kubenswrapper[4782]: I0202 10:54:36.859416 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-jjztq"] Feb 02 10:54:36 crc kubenswrapper[4782]: E0202 10:54:36.860370 4782 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:e0824d5d461ada59715eb3048ed9394c80abba09c45503f8f90ee3b34e525488,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zh8bj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-5b964cf4cd-dmncd_openstack-operators(6ac6c6b4-9123-4c39-b26f-b07880c1a6c6): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 02 10:54:36 crc kubenswrapper[4782]: W0202 10:54:36.860460 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0fd2f609_78f1_4f82_b405_35b5312baf0d.slice/crio-d562fc95b2597f379b066735180bdfd01511e335bb96bfaff8ac1aee04a5746a WatchSource:0}: Error finding container d562fc95b2597f379b066735180bdfd01511e335bb96bfaff8ac1aee04a5746a: Status 404 returned error can't find the container with id d562fc95b2597f379b066735180bdfd01511e335bb96bfaff8ac1aee04a5746a Feb 02 10:54:36 crc kubenswrapper[4782]: E0202 10:54:36.861566 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-dmncd" podUID="6ac6c6b4-9123-4c39-b26f-b07880c1a6c6" Feb 02 10:54:36 crc kubenswrapper[4782]: E0202 10:54:36.869161 4782 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ccmjk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-jjztq_openstack-operators(83a0d24e-3e0c-4d9a-b735-77c74ceec664): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 02 10:54:36 crc kubenswrapper[4782]: E0202 10:54:36.870899 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-jjztq" podUID="83a0d24e-3e0c-4d9a-b735-77c74ceec664" Feb 02 10:54:36 crc kubenswrapper[4782]: E0202 10:54:36.871303 4782 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bhvk7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-56f8bfcd9f-82nk8_openstack-operators(0fd2f609-78f1-4f82-b405-35b5312baf0d): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 02 10:54:36 crc kubenswrapper[4782]: E0202 10:54:36.872448 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-82nk8" podUID="0fd2f609-78f1-4f82-b405-35b5312baf0d" Feb 02 10:54:37 crc kubenswrapper[4782]: I0202 10:54:37.182330 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5844bcff-6d6e-4cf4-89af-dfecfc748869-metrics-certs\") pod \"openstack-operator-controller-manager-6b655fd757-r6hxp\" (UID: \"5844bcff-6d6e-4cf4-89af-dfecfc748869\") " pod="openstack-operators/openstack-operator-controller-manager-6b655fd757-r6hxp" Feb 02 10:54:37 crc kubenswrapper[4782]: I0202 10:54:37.182398 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/5844bcff-6d6e-4cf4-89af-dfecfc748869-webhook-certs\") pod \"openstack-operator-controller-manager-6b655fd757-r6hxp\" (UID: \"5844bcff-6d6e-4cf4-89af-dfecfc748869\") " pod="openstack-operators/openstack-operator-controller-manager-6b655fd757-r6hxp" Feb 02 10:54:37 crc kubenswrapper[4782]: E0202 10:54:37.182560 4782 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 02 10:54:37 crc kubenswrapper[4782]: E0202 10:54:37.182564 4782 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 02 10:54:37 crc kubenswrapper[4782]: E0202 10:54:37.182622 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5844bcff-6d6e-4cf4-89af-dfecfc748869-metrics-certs podName:5844bcff-6d6e-4cf4-89af-dfecfc748869 nodeName:}" failed. No retries permitted until 2026-02-02 10:54:39.182604242 +0000 UTC m=+959.066796958 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5844bcff-6d6e-4cf4-89af-dfecfc748869-metrics-certs") pod "openstack-operator-controller-manager-6b655fd757-r6hxp" (UID: "5844bcff-6d6e-4cf4-89af-dfecfc748869") : secret "metrics-server-cert" not found Feb 02 10:54:37 crc kubenswrapper[4782]: E0202 10:54:37.182658 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5844bcff-6d6e-4cf4-89af-dfecfc748869-webhook-certs podName:5844bcff-6d6e-4cf4-89af-dfecfc748869 nodeName:}" failed. 
No retries permitted until 2026-02-02 10:54:39.182632022 +0000 UTC m=+959.066824738 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/5844bcff-6d6e-4cf4-89af-dfecfc748869-webhook-certs") pod "openstack-operator-controller-manager-6b655fd757-r6hxp" (UID: "5844bcff-6d6e-4cf4-89af-dfecfc748869") : secret "webhook-server-cert" not found Feb 02 10:54:37 crc kubenswrapper[4782]: I0202 10:54:37.304816 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-9ls2x" event={"ID":"2f8b3b48-0c03-4922-8966-a3aaca8ebce3","Type":"ContainerStarted","Data":"1f7e908fef4cda953228525de1e8c66943d612ca7926988818315db5194a5720"} Feb 02 10:54:37 crc kubenswrapper[4782]: I0202 10:54:37.308265 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-5vj4j" event={"ID":"9ba082c6-4f91-48d6-b5ec-198f46abc135","Type":"ContainerStarted","Data":"d72e14f5f1b0c6db5b52155ca3ce938a154f057143662da5243cb46ac2c357bf"} Feb 02 10:54:37 crc kubenswrapper[4782]: I0202 10:54:37.312531 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-564965969-k7t28" event={"ID":"127c9a45-7187-4afb-bb45-c34a45e67e4e","Type":"ContainerStarted","Data":"0e0ea35ba17901399d83ac3e18c6d6813fe58d9ff66e4f151e7c941555929c97"} Feb 02 10:54:37 crc kubenswrapper[4782]: E0202 10:54:37.314517 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-k7t28" podUID="127c9a45-7187-4afb-bb45-c34a45e67e4e" Feb 02 10:54:37 crc kubenswrapper[4782]: I0202 10:54:37.332174 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-82nk8" event={"ID":"0fd2f609-78f1-4f82-b405-35b5312baf0d","Type":"ContainerStarted","Data":"d562fc95b2597f379b066735180bdfd01511e335bb96bfaff8ac1aee04a5746a"} Feb 02 10:54:37 crc kubenswrapper[4782]: E0202 10:54:37.334804 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241\\\"\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-82nk8" podUID="0fd2f609-78f1-4f82-b405-35b5312baf0d" Feb 02 10:54:37 crc kubenswrapper[4782]: I0202 10:54:37.355439 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-scr7v" event={"ID":"f44c1b55-d189-42dd-9187-90d9e0713790","Type":"ContainerStarted","Data":"72453069a538ff3d4ee103e73d35c5b5e229930d8ca261e33dbad0825cd0bbdd"} Feb 02 10:54:37 crc kubenswrapper[4782]: I0202 10:54:37.368688 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-l9q78" event={"ID":"216a79cc-1b33-43f7-81ff-400a3b6f3d00","Type":"ContainerStarted","Data":"92e5b321a3523e8cd0cf49988d3019b96543c8b371500b937b2f3c3c45234634"} Feb 02 10:54:37 crc kubenswrapper[4782]: I0202 10:54:37.370832 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-r9dkb" event={"ID":"7e19a281-abaa-462e-abc7-add4acff7865","Type":"ContainerStarted","Data":"6fda1fdfe0e000cf9c1116a1a708c51087f85bbe69ff03b13e4547aadbccf774"} Feb 02 10:54:37 crc kubenswrapper[4782]: I0202 10:54:37.377223 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-xnzl4" event={"ID":"1661d177-41b5-4df5-886f-f3cb7abd1047","Type":"ContainerStarted","Data":"931605dc0e7a67f33e8afecfc843689cd86f5f1ef384db4b2ffdddfe0a90b1d5"} Feb 02 10:54:37 crc kubenswrapper[4782]: I0202 10:54:37.378839 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-n88d6" event={"ID":"3624e93f-9208-4f82-9f55-12381a637262","Type":"ContainerStarted","Data":"ceaf04d9ecda10f1b2f84982afb5c97478378ae2d121d6dbb686d5306fcd7e45"} Feb 02 10:54:37 crc kubenswrapper[4782]: I0202 10:54:37.380346 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-v8zfh" event={"ID":"ab3a96ec-3e51-4147-9a58-6596f2c3ad5c","Type":"ContainerStarted","Data":"e690268bcbbb0d4726f69b983c8e32177793cf5f57c6ea12b3626b9353f533a9"} Feb 02 10:54:37 crc kubenswrapper[4782]: I0202 10:54:37.382823 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-7z5k7" event={"ID":"224f30b2-1084-4934-8d06-67975a9776ad","Type":"ContainerStarted","Data":"d49860db0a252b08815f475658b06d6075373dc3676a21a1f281a03dd6455070"} Feb 02 10:54:37 crc kubenswrapper[4782]: I0202 10:54:37.388840 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-dmncd" event={"ID":"6ac6c6b4-9123-4c39-b26f-b07880c1a6c6","Type":"ContainerStarted","Data":"4f74dc5ccf6504e63e573fdadf4da0d9f398ef65b2e2bf5b9ca76ff28893b469"} Feb 02 10:54:37 crc kubenswrapper[4782]: I0202 10:54:37.392595 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-ckl5m" event={"ID":"c617a97c-fec4-418c-818a-250919ea6882","Type":"ContainerStarted","Data":"2e4e94145c393a8d054ec897f4a8def10443e698c703f6d23497348749c5d036"} Feb 02 10:54:37 crc kubenswrapper[4782]: E0202 10:54:37.393509 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:e0824d5d461ada59715eb3048ed9394c80abba09c45503f8f90ee3b34e525488\\\"\"" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-dmncd" podUID="6ac6c6b4-9123-4c39-b26f-b07880c1a6c6" Feb 02 10:54:37 crc kubenswrapper[4782]: I0202 10:54:37.396333 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-jjztq" event={"ID":"83a0d24e-3e0c-4d9a-b735-77c74ceec664","Type":"ContainerStarted","Data":"7c629075a3bf3234909da16bbe216484184ddb4350f6654129e6dbbcc9c84faa"} Feb 02 10:54:37 crc kubenswrapper[4782]: E0202 10:54:37.400878 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" 
pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-jjztq" podUID="83a0d24e-3e0c-4d9a-b735-77c74ceec664" Feb 02 10:54:37 crc kubenswrapper[4782]: I0202 10:54:37.408785 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-w7gld" event={"ID":"6b276ac2-533f-43c9-94a1-f0d0e4eb6993","Type":"ContainerStarted","Data":"03350220f8352457df8eaf923250d9eaa5c2f783704256a1ce36ba6564c7d2ac"} Feb 02 10:54:37 crc kubenswrapper[4782]: I0202 10:54:37.940304 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/009bc68d-5c70-42ca-9008-152206fd954d-cert\") pod \"infra-operator-controller-manager-79955696d6-nsx4j\" (UID: \"009bc68d-5c70-42ca-9008-152206fd954d\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-nsx4j" Feb 02 10:54:37 crc kubenswrapper[4782]: E0202 10:54:37.940437 4782 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 02 10:54:37 crc kubenswrapper[4782]: E0202 10:54:37.940580 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/009bc68d-5c70-42ca-9008-152206fd954d-cert podName:009bc68d-5c70-42ca-9008-152206fd954d nodeName:}" failed. No retries permitted until 2026-02-02 10:54:41.940562508 +0000 UTC m=+961.824755224 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/009bc68d-5c70-42ca-9008-152206fd954d-cert") pod "infra-operator-controller-manager-79955696d6-nsx4j" (UID: "009bc68d-5c70-42ca-9008-152206fd954d") : secret "infra-operator-webhook-server-cert" not found Feb 02 10:54:38 crc kubenswrapper[4782]: I0202 10:54:38.448000 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6c7ac81b-49d3-493d-a794-1cffe78eba5e-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dtmpbf\" (UID: \"6c7ac81b-49d3-493d-a794-1cffe78eba5e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dtmpbf" Feb 02 10:54:38 crc kubenswrapper[4782]: E0202 10:54:38.448332 4782 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 02 10:54:38 crc kubenswrapper[4782]: E0202 10:54:38.448405 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6c7ac81b-49d3-493d-a794-1cffe78eba5e-cert podName:6c7ac81b-49d3-493d-a794-1cffe78eba5e nodeName:}" failed. No retries permitted until 2026-02-02 10:54:42.448388608 +0000 UTC m=+962.332581324 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/6c7ac81b-49d3-493d-a794-1cffe78eba5e-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dtmpbf" (UID: "6c7ac81b-49d3-493d-a794-1cffe78eba5e") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 02 10:54:38 crc kubenswrapper[4782]: E0202 10:54:38.453177 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241\\\"\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-82nk8" podUID="0fd2f609-78f1-4f82-b405-35b5312baf0d" Feb 02 10:54:38 crc kubenswrapper[4782]: E0202 10:54:38.453275 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:e0824d5d461ada59715eb3048ed9394c80abba09c45503f8f90ee3b34e525488\\\"\"" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-dmncd" podUID="6ac6c6b4-9123-4c39-b26f-b07880c1a6c6" Feb 02 10:54:38 crc kubenswrapper[4782]: E0202 10:54:38.453374 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-jjztq" podUID="83a0d24e-3e0c-4d9a-b735-77c74ceec664" Feb 02 10:54:38 crc kubenswrapper[4782]: E0202 10:54:38.453508 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-k7t28" podUID="127c9a45-7187-4afb-bb45-c34a45e67e4e" Feb 02 10:54:39 crc kubenswrapper[4782]: I0202 10:54:39.265358 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5844bcff-6d6e-4cf4-89af-dfecfc748869-metrics-certs\") pod \"openstack-operator-controller-manager-6b655fd757-r6hxp\" (UID: \"5844bcff-6d6e-4cf4-89af-dfecfc748869\") " pod="openstack-operators/openstack-operator-controller-manager-6b655fd757-r6hxp" Feb 02 10:54:39 crc kubenswrapper[4782]: I0202 10:54:39.265713 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/5844bcff-6d6e-4cf4-89af-dfecfc748869-webhook-certs\") pod \"openstack-operator-controller-manager-6b655fd757-r6hxp\" (UID: \"5844bcff-6d6e-4cf4-89af-dfecfc748869\") " pod="openstack-operators/openstack-operator-controller-manager-6b655fd757-r6hxp" Feb 02 10:54:39 crc kubenswrapper[4782]: E0202 10:54:39.265830 4782 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 02 10:54:39 crc kubenswrapper[4782]: E0202 10:54:39.265914 4782 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 02 10:54:39 crc kubenswrapper[4782]: E0202 10:54:39.265920 4782 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5844bcff-6d6e-4cf4-89af-dfecfc748869-metrics-certs podName:5844bcff-6d6e-4cf4-89af-dfecfc748869 nodeName:}" failed. No retries permitted until 2026-02-02 10:54:43.265895859 +0000 UTC m=+963.150088615 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5844bcff-6d6e-4cf4-89af-dfecfc748869-metrics-certs") pod "openstack-operator-controller-manager-6b655fd757-r6hxp" (UID: "5844bcff-6d6e-4cf4-89af-dfecfc748869") : secret "metrics-server-cert" not found Feb 02 10:54:39 crc kubenswrapper[4782]: E0202 10:54:39.266015 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5844bcff-6d6e-4cf4-89af-dfecfc748869-webhook-certs podName:5844bcff-6d6e-4cf4-89af-dfecfc748869 nodeName:}" failed. No retries permitted until 2026-02-02 10:54:43.265994712 +0000 UTC m=+963.150187478 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/5844bcff-6d6e-4cf4-89af-dfecfc748869-webhook-certs") pod "openstack-operator-controller-manager-6b655fd757-r6hxp" (UID: "5844bcff-6d6e-4cf4-89af-dfecfc748869") : secret "webhook-server-cert" not found Feb 02 10:54:42 crc kubenswrapper[4782]: I0202 10:54:42.008408 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/009bc68d-5c70-42ca-9008-152206fd954d-cert\") pod \"infra-operator-controller-manager-79955696d6-nsx4j\" (UID: \"009bc68d-5c70-42ca-9008-152206fd954d\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-nsx4j" Feb 02 10:54:42 crc kubenswrapper[4782]: E0202 10:54:42.008599 4782 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 02 10:54:42 crc kubenswrapper[4782]: E0202 10:54:42.009115 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/009bc68d-5c70-42ca-9008-152206fd954d-cert podName:009bc68d-5c70-42ca-9008-152206fd954d nodeName:}" failed. No retries permitted until 2026-02-02 10:54:50.009098878 +0000 UTC m=+969.893291594 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/009bc68d-5c70-42ca-9008-152206fd954d-cert") pod "infra-operator-controller-manager-79955696d6-nsx4j" (UID: "009bc68d-5c70-42ca-9008-152206fd954d") : secret "infra-operator-webhook-server-cert" not found Feb 02 10:54:42 crc kubenswrapper[4782]: I0202 10:54:42.515031 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6c7ac81b-49d3-493d-a794-1cffe78eba5e-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dtmpbf\" (UID: \"6c7ac81b-49d3-493d-a794-1cffe78eba5e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dtmpbf" Feb 02 10:54:42 crc kubenswrapper[4782]: E0202 10:54:42.515213 4782 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 02 10:54:42 crc kubenswrapper[4782]: E0202 10:54:42.515276 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6c7ac81b-49d3-493d-a794-1cffe78eba5e-cert podName:6c7ac81b-49d3-493d-a794-1cffe78eba5e nodeName:}" failed. 
No retries permitted until 2026-02-02 10:54:50.515260031 +0000 UTC m=+970.399452747 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/6c7ac81b-49d3-493d-a794-1cffe78eba5e-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dtmpbf" (UID: "6c7ac81b-49d3-493d-a794-1cffe78eba5e") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 02 10:54:43 crc kubenswrapper[4782]: I0202 10:54:43.326536 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5844bcff-6d6e-4cf4-89af-dfecfc748869-metrics-certs\") pod \"openstack-operator-controller-manager-6b655fd757-r6hxp\" (UID: \"5844bcff-6d6e-4cf4-89af-dfecfc748869\") " pod="openstack-operators/openstack-operator-controller-manager-6b655fd757-r6hxp" Feb 02 10:54:43 crc kubenswrapper[4782]: I0202 10:54:43.326983 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/5844bcff-6d6e-4cf4-89af-dfecfc748869-webhook-certs\") pod \"openstack-operator-controller-manager-6b655fd757-r6hxp\" (UID: \"5844bcff-6d6e-4cf4-89af-dfecfc748869\") " pod="openstack-operators/openstack-operator-controller-manager-6b655fd757-r6hxp" Feb 02 10:54:43 crc kubenswrapper[4782]: E0202 10:54:43.327137 4782 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 02 10:54:43 crc kubenswrapper[4782]: E0202 10:54:43.327196 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5844bcff-6d6e-4cf4-89af-dfecfc748869-webhook-certs podName:5844bcff-6d6e-4cf4-89af-dfecfc748869 nodeName:}" failed. No retries permitted until 2026-02-02 10:54:51.327179512 +0000 UTC m=+971.211372238 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/5844bcff-6d6e-4cf4-89af-dfecfc748869-webhook-certs") pod "openstack-operator-controller-manager-6b655fd757-r6hxp" (UID: "5844bcff-6d6e-4cf4-89af-dfecfc748869") : secret "webhook-server-cert" not found Feb 02 10:54:43 crc kubenswrapper[4782]: E0202 10:54:43.327541 4782 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 02 10:54:43 crc kubenswrapper[4782]: E0202 10:54:43.327566 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5844bcff-6d6e-4cf4-89af-dfecfc748869-metrics-certs podName:5844bcff-6d6e-4cf4-89af-dfecfc748869 nodeName:}" failed. No retries permitted until 2026-02-02 10:54:51.327558513 +0000 UTC m=+971.211751219 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5844bcff-6d6e-4cf4-89af-dfecfc748869-metrics-certs") pod "openstack-operator-controller-manager-6b655fd757-r6hxp" (UID: "5844bcff-6d6e-4cf4-89af-dfecfc748869") : secret "metrics-server-cert" not found Feb 02 10:54:47 crc kubenswrapper[4782]: I0202 10:54:47.831838 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-xbp4r"] Feb 02 10:54:47 crc kubenswrapper[4782]: I0202 10:54:47.833934 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-xbp4r" Feb 02 10:54:47 crc kubenswrapper[4782]: I0202 10:54:47.843211 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xbp4r"] Feb 02 10:54:47 crc kubenswrapper[4782]: I0202 10:54:47.895165 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8fb4828a-ffeb-41d4-8410-c4ea114e7e61-utilities\") pod \"community-operators-xbp4r\" (UID: \"8fb4828a-ffeb-41d4-8410-c4ea114e7e61\") " pod="openshift-marketplace/community-operators-xbp4r" Feb 02 10:54:47 crc kubenswrapper[4782]: I0202 10:54:47.895314 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gm657\" (UniqueName: \"kubernetes.io/projected/8fb4828a-ffeb-41d4-8410-c4ea114e7e61-kube-api-access-gm657\") pod \"community-operators-xbp4r\" (UID: \"8fb4828a-ffeb-41d4-8410-c4ea114e7e61\") " pod="openshift-marketplace/community-operators-xbp4r" Feb 02 10:54:47 crc kubenswrapper[4782]: I0202 10:54:47.895426 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8fb4828a-ffeb-41d4-8410-c4ea114e7e61-catalog-content\") pod \"community-operators-xbp4r\" (UID: \"8fb4828a-ffeb-41d4-8410-c4ea114e7e61\") " pod="openshift-marketplace/community-operators-xbp4r" Feb 02 10:54:47 crc kubenswrapper[4782]: I0202 10:54:47.996678 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8fb4828a-ffeb-41d4-8410-c4ea114e7e61-catalog-content\") pod \"community-operators-xbp4r\" (UID: \"8fb4828a-ffeb-41d4-8410-c4ea114e7e61\") " pod="openshift-marketplace/community-operators-xbp4r" Feb 02 10:54:47 crc kubenswrapper[4782]: I0202 10:54:47.996749 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8fb4828a-ffeb-41d4-8410-c4ea114e7e61-utilities\") pod \"community-operators-xbp4r\" (UID: \"8fb4828a-ffeb-41d4-8410-c4ea114e7e61\") " pod="openshift-marketplace/community-operators-xbp4r" Feb 02 10:54:47 crc kubenswrapper[4782]: I0202 10:54:47.996798 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gm657\" (UniqueName: \"kubernetes.io/projected/8fb4828a-ffeb-41d4-8410-c4ea114e7e61-kube-api-access-gm657\") pod \"community-operators-xbp4r\" (UID: \"8fb4828a-ffeb-41d4-8410-c4ea114e7e61\") " pod="openshift-marketplace/community-operators-xbp4r" Feb 02 10:54:47 crc kubenswrapper[4782]: I0202 10:54:47.997217 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8fb4828a-ffeb-41d4-8410-c4ea114e7e61-catalog-content\") pod \"community-operators-xbp4r\" (UID: \"8fb4828a-ffeb-41d4-8410-c4ea114e7e61\") " pod="openshift-marketplace/community-operators-xbp4r" Feb 02 10:54:47 crc kubenswrapper[4782]: I0202 10:54:47.997431 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8fb4828a-ffeb-41d4-8410-c4ea114e7e61-utilities\") pod \"community-operators-xbp4r\" (UID: \"8fb4828a-ffeb-41d4-8410-c4ea114e7e61\") " pod="openshift-marketplace/community-operators-xbp4r" Feb 02 10:54:48 crc kubenswrapper[4782]: I0202 10:54:48.022686 4782 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-gm657\" (UniqueName: \"kubernetes.io/projected/8fb4828a-ffeb-41d4-8410-c4ea114e7e61-kube-api-access-gm657\") pod \"community-operators-xbp4r\" (UID: \"8fb4828a-ffeb-41d4-8410-c4ea114e7e61\") " pod="openshift-marketplace/community-operators-xbp4r" Feb 02 10:54:48 crc kubenswrapper[4782]: I0202 10:54:48.166228 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xbp4r" Feb 02 10:54:50 crc kubenswrapper[4782]: I0202 10:54:50.028753 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/009bc68d-5c70-42ca-9008-152206fd954d-cert\") pod \"infra-operator-controller-manager-79955696d6-nsx4j\" (UID: \"009bc68d-5c70-42ca-9008-152206fd954d\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-nsx4j" Feb 02 10:54:50 crc kubenswrapper[4782]: I0202 10:54:50.034189 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/009bc68d-5c70-42ca-9008-152206fd954d-cert\") pod \"infra-operator-controller-manager-79955696d6-nsx4j\" (UID: \"009bc68d-5c70-42ca-9008-152206fd954d\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-nsx4j" Feb 02 10:54:50 crc kubenswrapper[4782]: I0202 10:54:50.125965 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-2qdwh" Feb 02 10:54:50 crc kubenswrapper[4782]: I0202 10:54:50.132846 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79955696d6-nsx4j" Feb 02 10:54:50 crc kubenswrapper[4782]: I0202 10:54:50.535554 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6c7ac81b-49d3-493d-a794-1cffe78eba5e-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dtmpbf\" (UID: \"6c7ac81b-49d3-493d-a794-1cffe78eba5e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dtmpbf" Feb 02 10:54:50 crc kubenswrapper[4782]: I0202 10:54:50.545557 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6c7ac81b-49d3-493d-a794-1cffe78eba5e-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dtmpbf\" (UID: \"6c7ac81b-49d3-493d-a794-1cffe78eba5e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dtmpbf" Feb 02 10:54:50 crc kubenswrapper[4782]: I0202 10:54:50.774841 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-fnsz7" Feb 02 10:54:50 crc kubenswrapper[4782]: I0202 10:54:50.783574 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dtmpbf" Feb 02 10:54:50 crc kubenswrapper[4782]: E0202 10:54:50.938565 4782 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/designate-operator@sha256:d9f6f8dc6a6dd9b0d7c96e4c89b3056291fd61f11126a1304256a4d6cacd0382" Feb 02 10:54:50 crc kubenswrapper[4782]: E0202 10:54:50.938810 4782 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/designate-operator@sha256:d9f6f8dc6a6dd9b0d7c96e4c89b3056291fd61f11126a1304256a4d6cacd0382,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5sbb7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod designate-operator-controller-manager-6d9697b7f4-5vj4j_openstack-operators(9ba082c6-4f91-48d6-b5ec-198f46abc135): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 02 10:54:50 crc kubenswrapper[4782]: E0202 10:54:50.940254 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-5vj4j" podUID="9ba082c6-4f91-48d6-b5ec-198f46abc135" Feb 02 10:54:51 crc kubenswrapper[4782]: I0202 10:54:51.361504 4782 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5844bcff-6d6e-4cf4-89af-dfecfc748869-metrics-certs\") pod \"openstack-operator-controller-manager-6b655fd757-r6hxp\" (UID: \"5844bcff-6d6e-4cf4-89af-dfecfc748869\") " pod="openstack-operators/openstack-operator-controller-manager-6b655fd757-r6hxp" Feb 02 10:54:51 crc kubenswrapper[4782]: I0202 10:54:51.361579 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/5844bcff-6d6e-4cf4-89af-dfecfc748869-webhook-certs\") pod \"openstack-operator-controller-manager-6b655fd757-r6hxp\" (UID: \"5844bcff-6d6e-4cf4-89af-dfecfc748869\") " pod="openstack-operators/openstack-operator-controller-manager-6b655fd757-r6hxp" Feb 02 10:54:51 crc kubenswrapper[4782]: E0202 10:54:51.361716 4782 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 02 10:54:51 crc kubenswrapper[4782]: E0202 10:54:51.361747 4782 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 02 10:54:51 crc kubenswrapper[4782]: E0202 10:54:51.361791 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5844bcff-6d6e-4cf4-89af-dfecfc748869-metrics-certs podName:5844bcff-6d6e-4cf4-89af-dfecfc748869 nodeName:}" failed. No retries permitted until 2026-02-02 10:55:07.361771202 +0000 UTC m=+987.245963978 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5844bcff-6d6e-4cf4-89af-dfecfc748869-metrics-certs") pod "openstack-operator-controller-manager-6b655fd757-r6hxp" (UID: "5844bcff-6d6e-4cf4-89af-dfecfc748869") : secret "metrics-server-cert" not found Feb 02 10:54:51 crc kubenswrapper[4782]: E0202 10:54:51.361824 4782 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5844bcff-6d6e-4cf4-89af-dfecfc748869-webhook-certs podName:5844bcff-6d6e-4cf4-89af-dfecfc748869 nodeName:}" failed. No retries permitted until 2026-02-02 10:55:07.361804463 +0000 UTC m=+987.245997179 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/5844bcff-6d6e-4cf4-89af-dfecfc748869-webhook-certs") pod "openstack-operator-controller-manager-6b655fd757-r6hxp" (UID: "5844bcff-6d6e-4cf4-89af-dfecfc748869") : secret "webhook-server-cert" not found Feb 02 10:54:51 crc kubenswrapper[4782]: E0202 10:54:51.537547 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/designate-operator@sha256:d9f6f8dc6a6dd9b0d7c96e4c89b3056291fd61f11126a1304256a4d6cacd0382\\\"\"" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-5vj4j" podUID="9ba082c6-4f91-48d6-b5ec-198f46abc135" Feb 02 10:54:51 crc kubenswrapper[4782]: E0202 10:54:51.620503 4782 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/glance-operator@sha256:1f593e8d49d02b6484c89632192ae54771675c54fbd8426e3675b8e20ecfd7c4" Feb 02 10:54:51 crc kubenswrapper[4782]: E0202 10:54:51.620724 4782 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/glance-operator@sha256:1f593e8d49d02b6484c89632192ae54771675c54fbd8426e3675b8e20ecfd7c4,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-cq2js,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
glance-operator-controller-manager-8886f4c47-v7tzl_openstack-operators(b03fe987-deab-47e7-829a-b822ab061f20): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 02 10:54:51 crc kubenswrapper[4782]: E0202 10:54:51.621984 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-v7tzl" podUID="b03fe987-deab-47e7-829a-b822ab061f20" Feb 02 10:54:52 crc kubenswrapper[4782]: E0202 10:54:52.224513 4782 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/manila-operator@sha256:cd911e8d7a7a1104d77691dbaaf54370015cbb82859337746db5a9186d5dc566" Feb 02 10:54:52 crc kubenswrapper[4782]: E0202 10:54:52.224979 4782 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:cd911e8d7a7a1104d77691dbaaf54370015cbb82859337746db5a9186d5dc566,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dlm4l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-7dd968899f-scr7v_openstack-operators(f44c1b55-d189-42dd-9187-90d9e0713790): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 02 10:54:52 crc kubenswrapper[4782]: 
E0202 10:54:52.226417 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-scr7v" podUID="f44c1b55-d189-42dd-9187-90d9e0713790" Feb 02 10:54:52 crc kubenswrapper[4782]: E0202 10:54:52.546041 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/glance-operator@sha256:1f593e8d49d02b6484c89632192ae54771675c54fbd8426e3675b8e20ecfd7c4\\\"\"" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-v7tzl" podUID="b03fe987-deab-47e7-829a-b822ab061f20" Feb 02 10:54:52 crc kubenswrapper[4782]: E0202 10:54:52.550187 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:cd911e8d7a7a1104d77691dbaaf54370015cbb82859337746db5a9186d5dc566\\\"\"" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-scr7v" podUID="f44c1b55-d189-42dd-9187-90d9e0713790" Feb 02 10:54:53 crc kubenswrapper[4782]: E0202 10:54:53.284670 4782 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/telemetry-operator@sha256:f9bf288cd0c13912404027a58ea3b90d4092b641e8265adc5c88644ea7fe901a" Feb 02 10:54:53 crc kubenswrapper[4782]: E0202 10:54:53.284855 4782 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:f9bf288cd0c13912404027a58ea3b90d4092b641e8265adc5c88644ea7fe901a,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6c24q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-64b5b76f97-ckl5m_openstack-operators(c617a97c-fec4-418c-818a-250919ea6882): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 02 10:54:53 crc kubenswrapper[4782]: E0202 10:54:53.286161 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-ckl5m" podUID="c617a97c-fec4-418c-818a-250919ea6882" Feb 02 10:54:53 crc kubenswrapper[4782]: E0202 10:54:53.550464 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:f9bf288cd0c13912404027a58ea3b90d4092b641e8265adc5c88644ea7fe901a\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-ckl5m" podUID="c617a97c-fec4-418c-818a-250919ea6882" Feb 02 10:54:53 crc kubenswrapper[4782]: E0202 10:54:53.952793 4782 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/horizon-operator@sha256:027cd7ab61ef5071d9ad6b729c95a98e51cd254642f01dc019d44cc98a9232f8" Feb 02 10:54:53 crc kubenswrapper[4782]: E0202 10:54:53.954064 4782 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/horizon-operator@sha256:027cd7ab61ef5071d9ad6b729c95a98e51cd254642f01dc019d44cc98a9232f8,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fm6b4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-operator-controller-manager-5fb775575f-7z5k7_openstack-operators(224f30b2-1084-4934-8d06-67975a9776ad): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 02 10:54:53 crc kubenswrapper[4782]: E0202 10:54:53.955592 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-7z5k7" podUID="224f30b2-1084-4934-8d06-67975a9776ad" Feb 02 10:54:54 crc kubenswrapper[4782]: E0202 10:54:54.564116 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/horizon-operator@sha256:027cd7ab61ef5071d9ad6b729c95a98e51cd254642f01dc019d44cc98a9232f8\\\"\"" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-7z5k7" podUID="224f30b2-1084-4934-8d06-67975a9776ad" Feb 02 10:54:54 crc kubenswrapper[4782]: E0202 10:54:54.659549 4782 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ovn-operator@sha256:ea7b72b648a5bde2eebd804c2a5c1608d448a4892176c1b8d000c1eef4bb92b4" Feb 02 10:54:54 crc kubenswrapper[4782]: E0202 10:54:54.659814 4782 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:ea7b72b648a5bde2eebd804c2a5c1608d448a4892176c1b8d000c1eef4bb92b4,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fcvgr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-788c46999f-9ls2x_openstack-operators(2f8b3b48-0c03-4922-8966-a3aaca8ebce3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 02 10:54:54 crc kubenswrapper[4782]: E0202 10:54:54.662331 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-9ls2x" podUID="2f8b3b48-0c03-4922-8966-a3aaca8ebce3" Feb 02 10:54:55 crc kubenswrapper[4782]: E0202 10:54:55.575122 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:ea7b72b648a5bde2eebd804c2a5c1608d448a4892176c1b8d000c1eef4bb92b4\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-9ls2x" podUID="2f8b3b48-0c03-4922-8966-a3aaca8ebce3" Feb 02 10:54:56 crc kubenswrapper[4782]: E0202 10:54:56.823850 4782 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/neutron-operator@sha256:bbb46b8b3b69fdfad7bafc10a7e88f6ea58bcdc3c91e30beb79e24417d52e0f6" Feb 02 10:54:56 crc kubenswrapper[4782]: E0202 10:54:56.824114 4782 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:bbb46b8b3b69fdfad7bafc10a7e88f6ea58bcdc3c91e30beb79e24417d52e0f6,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-p6pbb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-585dbc889-l9q78_openstack-operators(216a79cc-1b33-43f7-81ff-400a3b6f3d00): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 02 10:54:56 crc kubenswrapper[4782]: E0202 10:54:56.825316 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-l9q78" podUID="216a79cc-1b33-43f7-81ff-400a3b6f3d00" Feb 02 10:54:57 crc kubenswrapper[4782]: E0202 10:54:57.461468 4782 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/swift-operator@sha256:42ad717de1b82267d244b016e5491a5b66a5c3deb6b8c2906a379e1296a2c382" Feb 02 10:54:57 crc kubenswrapper[4782]: E0202 10:54:57.461680 4782 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:42ad717de1b82267d244b016e5491a5b66a5c3deb6b8c2906a379e1296a2c382,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-c4l7w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-68fc8c869-xnzl4_openstack-operators(1661d177-41b5-4df5-886f-f3cb7abd1047): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 02 10:54:57 crc kubenswrapper[4782]: E0202 10:54:57.463490 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-xnzl4" podUID="1661d177-41b5-4df5-886f-f3cb7abd1047" Feb 02 10:54:57 crc kubenswrapper[4782]: E0202 10:54:57.586013 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:42ad717de1b82267d244b016e5491a5b66a5c3deb6b8c2906a379e1296a2c382\\\"\"" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-xnzl4" podUID="1661d177-41b5-4df5-886f-f3cb7abd1047" Feb 02 10:54:57 crc kubenswrapper[4782]: E0202 10:54:57.586029 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:bbb46b8b3b69fdfad7bafc10a7e88f6ea58bcdc3c91e30beb79e24417d52e0f6\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-l9q78" 
podUID="216a79cc-1b33-43f7-81ff-400a3b6f3d00" Feb 02 10:55:03 crc kubenswrapper[4782]: E0202 10:55:03.717587 4782 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:5340b88039fac393da49ef4e181b2720c809c27a6bb30531a07a49342a1da45e" Feb 02 10:55:03 crc kubenswrapper[4782]: E0202 10:55:03.718359 4782 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:5340b88039fac393da49ef4e181b2720c809c27a6bb30531a07a49342a1da45e,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wg46x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-55bff696bd-v8zfh_openstack-operators(ab3a96ec-3e51-4147-9a58-6596f2c3ad5c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 02 10:55:03 crc kubenswrapper[4782]: E0202 10:55:03.719548 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-v8zfh" podUID="ab3a96ec-3e51-4147-9a58-6596f2c3ad5c" Feb 02 10:55:04 crc kubenswrapper[4782]: E0202 10:55:04.099845 4782 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/openstack-k8s-operators/keystone-operator@sha256:319c969e88f109b26487a9f5a67203682803d7386424703ab7ca0340be99ae17" Feb 02 10:55:04 crc kubenswrapper[4782]: E0202 10:55:04.100027 4782 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:319c969e88f109b26487a9f5a67203682803d7386424703ab7ca0340be99ae17,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-m6dsh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-84f48565d4-w7gld_openstack-operators(6b276ac2-533f-43c9-94a1-f0d0e4eb6993): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 02 10:55:04 crc kubenswrapper[4782]: E0202 10:55:04.101270 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-w7gld" podUID="6b276ac2-533f-43c9-94a1-f0d0e4eb6993" Feb 02 10:55:04 crc kubenswrapper[4782]: E0202 10:55:04.632814 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:5340b88039fac393da49ef4e181b2720c809c27a6bb30531a07a49342a1da45e\\\"\"" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-v8zfh" 
podUID="ab3a96ec-3e51-4147-9a58-6596f2c3ad5c" Feb 02 10:55:04 crc kubenswrapper[4782]: E0202 10:55:04.633120 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:319c969e88f109b26487a9f5a67203682803d7386424703ab7ca0340be99ae17\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-w7gld" podUID="6b276ac2-533f-43c9-94a1-f0d0e4eb6993" Feb 02 10:55:05 crc kubenswrapper[4782]: I0202 10:55:05.271489 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-nsx4j"] Feb 02 10:55:05 crc kubenswrapper[4782]: E0202 10:55:05.402491 4782 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Feb 02 10:55:05 crc kubenswrapper[4782]: E0202 10:55:05.402718 4782 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ccmjk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-jjztq_openstack-operators(83a0d24e-3e0c-4d9a-b735-77c74ceec664): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 02 10:55:05 crc kubenswrapper[4782]: E0202 10:55:05.404303 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-jjztq" podUID="83a0d24e-3e0c-4d9a-b735-77c74ceec664" Feb 02 10:55:05 crc kubenswrapper[4782]: I0202 10:55:05.637203 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79955696d6-nsx4j" event={"ID":"009bc68d-5c70-42ca-9008-152206fd954d","Type":"ContainerStarted","Data":"da4402e37ed82a7f4a067b442c749e7a17b2e65fd5f3d52f4b496f73e00bc8d9"} Feb 02 10:55:05 crc kubenswrapper[4782]: I0202 10:55:05.907959 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xbp4r"] Feb 02 10:55:05 crc kubenswrapper[4782]: I0202 10:55:05.984015 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dtmpbf"] Feb 02 10:55:06 crc kubenswrapper[4782]: I0202 10:55:06.653676 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-r9dkb" event={"ID":"7e19a281-abaa-462e-abc7-add4acff7865","Type":"ContainerStarted","Data":"8a7669b9ffd8842827dbd0ea76d4378ea45540c7f78eedb3314951a0c3431141"} Feb 02 10:55:06 crc kubenswrapper[4782]: I0202 10:55:06.654043 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-r9dkb" Feb 02 10:55:06 crc kubenswrapper[4782]: I0202 10:55:06.672412 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-fkwh5" event={"ID":"7fa679ab-d8ad-4dae-9488-c9bbc93ae5d7","Type":"ContainerStarted","Data":"2e4c00b2b01b9748426a51cf9f83b438e434cb1d716dd23b5d6e2a9bc73bfc74"} Feb 02 10:55:06 crc kubenswrapper[4782]: I0202 10:55:06.672616 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-fkwh5" Feb 02 10:55:06 crc kubenswrapper[4782]: I0202 10:55:06.683190 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-dmncd" event={"ID":"6ac6c6b4-9123-4c39-b26f-b07880c1a6c6","Type":"ContainerStarted","Data":"07b6dc4fad32236b864b817b9a59e1d6cc14873f25477de952a1137481d824ce"} Feb 02 10:55:06 crc kubenswrapper[4782]: I0202 10:55:06.683820 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-dmncd" Feb 02 10:55:06 crc kubenswrapper[4782]: I0202 10:55:06.688983 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-5vj4j" event={"ID":"9ba082c6-4f91-48d6-b5ec-198f46abc135","Type":"ContainerStarted","Data":"0d5fe81c8bd5c973948077090c1b65bc4df2d529f17695ec421625ed71168cb4"} Feb 02 10:55:06 crc kubenswrapper[4782]: I0202 10:55:06.689325 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-5vj4j" Feb 02 10:55:06 crc kubenswrapper[4782]: I0202 10:55:06.690364 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dtmpbf" event={"ID":"6c7ac81b-49d3-493d-a794-1cffe78eba5e","Type":"ContainerStarted","Data":"cbaef421110963b4f1260f599c00f469f812c9a532c8d7d7047d492ad6bc8e00"} Feb 02 10:55:06 crc kubenswrapper[4782]: I0202 10:55:06.702046 4782 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-5ngrn" event={"ID":"0aa487d3-a703-4ed6-a44c-bc40eb8272ce","Type":"ContainerStarted","Data":"a9a5b4f3c50e1a091bb1452666170603f6922e4075de68696799858a65323dee"} Feb 02 10:55:06 crc kubenswrapper[4782]: I0202 10:55:06.703004 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-5ngrn" Feb 02 10:55:06 crc kubenswrapper[4782]: I0202 10:55:06.712158 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-r9dkb" podStartSLOduration=10.451201834 podStartE2EDuration="32.71213669s" podCreationTimestamp="2026-02-02 10:54:34 +0000 UTC" firstStartedPulling="2026-02-02 10:54:36.463160536 +0000 UTC m=+956.347353252" lastFinishedPulling="2026-02-02 10:54:58.724095392 +0000 UTC m=+978.608288108" observedRunningTime="2026-02-02 10:55:06.703521673 +0000 UTC m=+986.587714399" watchObservedRunningTime="2026-02-02 10:55:06.71213669 +0000 UTC m=+986.596329406" Feb 02 10:55:06 crc kubenswrapper[4782]: I0202 10:55:06.722517 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-n88d6" event={"ID":"3624e93f-9208-4f82-9f55-12381a637262","Type":"ContainerStarted","Data":"ec0b389260c6f87eef2659ae958ae5c0c5e264acf7f052fe4108f6b9c26b04a3"} Feb 02 10:55:06 crc kubenswrapper[4782]: I0202 10:55:06.723331 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-n88d6" Feb 02 10:55:06 crc kubenswrapper[4782]: I0202 10:55:06.724994 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-82nk8" event={"ID":"0fd2f609-78f1-4f82-b405-35b5312baf0d","Type":"ContainerStarted","Data":"17f65d53be9c6b4ee41c6622bf4e3ddf8630d502bc376ae0c7e01407e6d57858"} Feb 02 10:55:06 crc kubenswrapper[4782]: I0202 10:55:06.725533 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-82nk8" Feb 02 10:55:06 crc kubenswrapper[4782]: I0202 10:55:06.730667 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-vj4sh" event={"ID":"bfafd643-4798-4519-934d-8ec3e2e677d9","Type":"ContainerStarted","Data":"51ab08bfb364b2577d0fd99d76329de4dc232409a7c9b636cdb2ba9b71c025e5"} Feb 02 10:55:06 crc kubenswrapper[4782]: I0202 10:55:06.731449 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-vj4sh" Feb 02 10:55:06 crc kubenswrapper[4782]: I0202 10:55:06.733375 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-564965969-k7t28" event={"ID":"127c9a45-7187-4afb-bb45-c34a45e67e4e","Type":"ContainerStarted","Data":"4040af90f8b61be86713c087bb897d6a2b26a0e4fc725dc9db0dc5df98d30869"} Feb 02 10:55:06 crc kubenswrapper[4782]: I0202 10:55:06.733976 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-564965969-k7t28" Feb 02 10:55:06 crc kubenswrapper[4782]: I0202 10:55:06.740063 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-v94dv" 
event={"ID":"6a74bdcf-4aaf-4fd7-b24d-7cb1d47d1f27","Type":"ContainerStarted","Data":"838bb51d90701c23340bc34b35aa3491f5dc59f16bb005b171eaa12389abe82a"} Feb 02 10:55:06 crc kubenswrapper[4782]: I0202 10:55:06.740471 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-v94dv" Feb 02 10:55:06 crc kubenswrapper[4782]: I0202 10:55:06.755943 4782 generic.go:334] "Generic (PLEG): container finished" podID="8fb4828a-ffeb-41d4-8410-c4ea114e7e61" containerID="88c58cdcd9357750b80f66879741cbc18ef54918dc5c6b2d82df9d16c8c3b444" exitCode=0 Feb 02 10:55:06 crc kubenswrapper[4782]: I0202 10:55:06.755989 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xbp4r" event={"ID":"8fb4828a-ffeb-41d4-8410-c4ea114e7e61","Type":"ContainerDied","Data":"88c58cdcd9357750b80f66879741cbc18ef54918dc5c6b2d82df9d16c8c3b444"} Feb 02 10:55:06 crc kubenswrapper[4782]: I0202 10:55:06.756013 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xbp4r" event={"ID":"8fb4828a-ffeb-41d4-8410-c4ea114e7e61","Type":"ContainerStarted","Data":"b35a4e0e6150963a20819d29e6e20270f93008d6cf7aa812ef8e1c21fd13b16f"} Feb 02 10:55:06 crc kubenswrapper[4782]: I0202 10:55:06.758927 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-dmncd" podStartSLOduration=4.133216635 podStartE2EDuration="32.758906138s" podCreationTimestamp="2026-02-02 10:54:34 +0000 UTC" firstStartedPulling="2026-02-02 10:54:36.859505947 +0000 UTC m=+956.743698663" lastFinishedPulling="2026-02-02 10:55:05.48519545 +0000 UTC m=+985.369388166" observedRunningTime="2026-02-02 10:55:06.753750221 +0000 UTC m=+986.637942957" watchObservedRunningTime="2026-02-02 10:55:06.758906138 +0000 UTC m=+986.643098854" Feb 02 10:55:06 crc kubenswrapper[4782]: I0202 10:55:06.812326 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-5vj4j" podStartSLOduration=4.833076669 podStartE2EDuration="33.812309607s" podCreationTimestamp="2026-02-02 10:54:33 +0000 UTC" firstStartedPulling="2026-02-02 10:54:36.53350974 +0000 UTC m=+956.417702456" lastFinishedPulling="2026-02-02 10:55:05.512742678 +0000 UTC m=+985.396935394" observedRunningTime="2026-02-02 10:55:06.807833958 +0000 UTC m=+986.692026674" watchObservedRunningTime="2026-02-02 10:55:06.812309607 +0000 UTC m=+986.696502323" Feb 02 10:55:06 crc kubenswrapper[4782]: I0202 10:55:06.866667 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-fkwh5" podStartSLOduration=11.233119495 podStartE2EDuration="33.866622481s" podCreationTimestamp="2026-02-02 10:54:33 +0000 UTC" firstStartedPulling="2026-02-02 10:54:36.088251439 +0000 UTC m=+955.972444155" lastFinishedPulling="2026-02-02 10:54:58.721754425 +0000 UTC m=+978.605947141" observedRunningTime="2026-02-02 10:55:06.861092263 +0000 UTC m=+986.745284979" watchObservedRunningTime="2026-02-02 10:55:06.866622481 +0000 UTC m=+986.750815197" Feb 02 10:55:06 crc kubenswrapper[4782]: I0202 10:55:06.941059 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-5ngrn" podStartSLOduration=9.762214005 podStartE2EDuration="33.94103777s" podCreationTimestamp="2026-02-02 
10:54:33 +0000 UTC" firstStartedPulling="2026-02-02 10:54:35.590369233 +0000 UTC m=+955.474561949" lastFinishedPulling="2026-02-02 10:54:59.769192998 +0000 UTC m=+979.653385714" observedRunningTime="2026-02-02 10:55:06.927840043 +0000 UTC m=+986.812032759" watchObservedRunningTime="2026-02-02 10:55:06.94103777 +0000 UTC m=+986.825230486" Feb 02 10:55:07 crc kubenswrapper[4782]: I0202 10:55:07.023448 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-564965969-k7t28" podStartSLOduration=4.399355132 podStartE2EDuration="33.023427208s" podCreationTimestamp="2026-02-02 10:54:34 +0000 UTC" firstStartedPulling="2026-02-02 10:54:36.849743108 +0000 UTC m=+956.733935824" lastFinishedPulling="2026-02-02 10:55:05.473815184 +0000 UTC m=+985.358007900" observedRunningTime="2026-02-02 10:55:07.021016339 +0000 UTC m=+986.905209075" watchObservedRunningTime="2026-02-02 10:55:07.023427208 +0000 UTC m=+986.907619924" Feb 02 10:55:07 crc kubenswrapper[4782]: I0202 10:55:07.053012 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-vj4sh" podStartSLOduration=12.044230127 podStartE2EDuration="34.052991034s" podCreationTimestamp="2026-02-02 10:54:33 +0000 UTC" firstStartedPulling="2026-02-02 10:54:36.061741841 +0000 UTC m=+955.945934567" lastFinishedPulling="2026-02-02 10:54:58.070502758 +0000 UTC m=+977.954695474" observedRunningTime="2026-02-02 10:55:07.046432246 +0000 UTC m=+986.930624962" watchObservedRunningTime="2026-02-02 10:55:07.052991034 +0000 UTC m=+986.937183760" Feb 02 10:55:07 crc kubenswrapper[4782]: I0202 10:55:07.151306 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-n88d6" podStartSLOduration=10.961613231 podStartE2EDuration="33.151288697s" podCreationTimestamp="2026-02-02 10:54:34 +0000 UTC" firstStartedPulling="2026-02-02 10:54:36.532228753 +0000 UTC m=+956.416421469" lastFinishedPulling="2026-02-02 10:54:58.721904219 +0000 UTC m=+978.606096935" observedRunningTime="2026-02-02 10:55:07.140843008 +0000 UTC m=+987.025035744" watchObservedRunningTime="2026-02-02 10:55:07.151288697 +0000 UTC m=+987.035481413" Feb 02 10:55:07 crc kubenswrapper[4782]: I0202 10:55:07.233903 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-v94dv" podStartSLOduration=12.488276756 podStartE2EDuration="34.233882901s" podCreationTimestamp="2026-02-02 10:54:33 +0000 UTC" firstStartedPulling="2026-02-02 10:54:35.732798249 +0000 UTC m=+955.616990965" lastFinishedPulling="2026-02-02 10:54:57.478404394 +0000 UTC m=+977.362597110" observedRunningTime="2026-02-02 10:55:07.210748638 +0000 UTC m=+987.094941354" watchObservedRunningTime="2026-02-02 10:55:07.233882901 +0000 UTC m=+987.118075627" Feb 02 10:55:07 crc kubenswrapper[4782]: I0202 10:55:07.250183 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-82nk8" podStartSLOduration=4.498170339 podStartE2EDuration="33.250164686s" podCreationTimestamp="2026-02-02 10:54:34 +0000 UTC" firstStartedPulling="2026-02-02 10:54:36.871000406 +0000 UTC m=+956.755193122" lastFinishedPulling="2026-02-02 10:55:05.622994753 +0000 UTC m=+985.507187469" observedRunningTime="2026-02-02 10:55:07.231340878 +0000 UTC m=+987.115533594" 
watchObservedRunningTime="2026-02-02 10:55:07.250164686 +0000 UTC m=+987.134357402" Feb 02 10:55:07 crc kubenswrapper[4782]: I0202 10:55:07.452051 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5844bcff-6d6e-4cf4-89af-dfecfc748869-metrics-certs\") pod \"openstack-operator-controller-manager-6b655fd757-r6hxp\" (UID: \"5844bcff-6d6e-4cf4-89af-dfecfc748869\") " pod="openstack-operators/openstack-operator-controller-manager-6b655fd757-r6hxp" Feb 02 10:55:07 crc kubenswrapper[4782]: I0202 10:55:07.452117 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/5844bcff-6d6e-4cf4-89af-dfecfc748869-webhook-certs\") pod \"openstack-operator-controller-manager-6b655fd757-r6hxp\" (UID: \"5844bcff-6d6e-4cf4-89af-dfecfc748869\") " pod="openstack-operators/openstack-operator-controller-manager-6b655fd757-r6hxp" Feb 02 10:55:07 crc kubenswrapper[4782]: I0202 10:55:07.461192 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/5844bcff-6d6e-4cf4-89af-dfecfc748869-webhook-certs\") pod \"openstack-operator-controller-manager-6b655fd757-r6hxp\" (UID: \"5844bcff-6d6e-4cf4-89af-dfecfc748869\") " pod="openstack-operators/openstack-operator-controller-manager-6b655fd757-r6hxp" Feb 02 10:55:07 crc kubenswrapper[4782]: I0202 10:55:07.461775 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5844bcff-6d6e-4cf4-89af-dfecfc748869-metrics-certs\") pod \"openstack-operator-controller-manager-6b655fd757-r6hxp\" (UID: \"5844bcff-6d6e-4cf4-89af-dfecfc748869\") " pod="openstack-operators/openstack-operator-controller-manager-6b655fd757-r6hxp" Feb 02 10:55:07 crc kubenswrapper[4782]: I0202 10:55:07.532262 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-mbtgq" Feb 02 10:55:07 crc kubenswrapper[4782]: I0202 10:55:07.539530 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-6b655fd757-r6hxp" Feb 02 10:55:08 crc kubenswrapper[4782]: I0202 10:55:08.050303 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-6b655fd757-r6hxp"] Feb 02 10:55:08 crc kubenswrapper[4782]: I0202 10:55:08.753961 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-6x8zf"] Feb 02 10:55:08 crc kubenswrapper[4782]: I0202 10:55:08.756034 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6x8zf" Feb 02 10:55:08 crc kubenswrapper[4782]: I0202 10:55:08.772760 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6x8zf"] Feb 02 10:55:08 crc kubenswrapper[4782]: I0202 10:55:08.809889 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-6b655fd757-r6hxp" event={"ID":"5844bcff-6d6e-4cf4-89af-dfecfc748869","Type":"ContainerStarted","Data":"a1a9b1299ba024edbd925b8c7a33b3032edca71552d93e831208ed8cf858eac0"} Feb 02 10:55:08 crc kubenswrapper[4782]: I0202 10:55:08.809959 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-6b655fd757-r6hxp" event={"ID":"5844bcff-6d6e-4cf4-89af-dfecfc748869","Type":"ContainerStarted","Data":"ea2c828cebf1dd80e8979e92dd2323e06bfe516f2e6a7a50b9b6a02581784071"} Feb 02 10:55:08 crc kubenswrapper[4782]: I0202 10:55:08.854769 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-6b655fd757-r6hxp" podStartSLOduration=33.854752474 podStartE2EDuration="33.854752474s" podCreationTimestamp="2026-02-02 10:54:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:55:08.847872787 +0000 UTC m=+988.732065503" watchObservedRunningTime="2026-02-02 10:55:08.854752474 +0000 UTC m=+988.738945190" Feb 02 10:55:08 crc kubenswrapper[4782]: I0202 10:55:08.900890 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdxhk\" (UniqueName: \"kubernetes.io/projected/5527d0d6-41e7-42f6-bcb8-65dccddacbd4-kube-api-access-xdxhk\") pod \"certified-operators-6x8zf\" (UID: \"5527d0d6-41e7-42f6-bcb8-65dccddacbd4\") " pod="openshift-marketplace/certified-operators-6x8zf" Feb 02 10:55:08 crc kubenswrapper[4782]: I0202 10:55:08.901206 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5527d0d6-41e7-42f6-bcb8-65dccddacbd4-utilities\") pod \"certified-operators-6x8zf\" (UID: \"5527d0d6-41e7-42f6-bcb8-65dccddacbd4\") " pod="openshift-marketplace/certified-operators-6x8zf" Feb 02 10:55:08 crc kubenswrapper[4782]: I0202 10:55:08.901331 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5527d0d6-41e7-42f6-bcb8-65dccddacbd4-catalog-content\") pod \"certified-operators-6x8zf\" (UID: \"5527d0d6-41e7-42f6-bcb8-65dccddacbd4\") " pod="openshift-marketplace/certified-operators-6x8zf" Feb 02 10:55:09 crc kubenswrapper[4782]: I0202 10:55:09.002994 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xdxhk\" (UniqueName: \"kubernetes.io/projected/5527d0d6-41e7-42f6-bcb8-65dccddacbd4-kube-api-access-xdxhk\") pod \"certified-operators-6x8zf\" (UID: \"5527d0d6-41e7-42f6-bcb8-65dccddacbd4\") " pod="openshift-marketplace/certified-operators-6x8zf" Feb 02 10:55:09 crc kubenswrapper[4782]: I0202 10:55:09.003080 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5527d0d6-41e7-42f6-bcb8-65dccddacbd4-utilities\") pod \"certified-operators-6x8zf\" (UID: \"5527d0d6-41e7-42f6-bcb8-65dccddacbd4\") " 
pod="openshift-marketplace/certified-operators-6x8zf" Feb 02 10:55:09 crc kubenswrapper[4782]: I0202 10:55:09.003105 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5527d0d6-41e7-42f6-bcb8-65dccddacbd4-catalog-content\") pod \"certified-operators-6x8zf\" (UID: \"5527d0d6-41e7-42f6-bcb8-65dccddacbd4\") " pod="openshift-marketplace/certified-operators-6x8zf" Feb 02 10:55:09 crc kubenswrapper[4782]: I0202 10:55:09.003696 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5527d0d6-41e7-42f6-bcb8-65dccddacbd4-catalog-content\") pod \"certified-operators-6x8zf\" (UID: \"5527d0d6-41e7-42f6-bcb8-65dccddacbd4\") " pod="openshift-marketplace/certified-operators-6x8zf" Feb 02 10:55:09 crc kubenswrapper[4782]: I0202 10:55:09.004295 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5527d0d6-41e7-42f6-bcb8-65dccddacbd4-utilities\") pod \"certified-operators-6x8zf\" (UID: \"5527d0d6-41e7-42f6-bcb8-65dccddacbd4\") " pod="openshift-marketplace/certified-operators-6x8zf" Feb 02 10:55:09 crc kubenswrapper[4782]: I0202 10:55:09.039005 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xdxhk\" (UniqueName: \"kubernetes.io/projected/5527d0d6-41e7-42f6-bcb8-65dccddacbd4-kube-api-access-xdxhk\") pod \"certified-operators-6x8zf\" (UID: \"5527d0d6-41e7-42f6-bcb8-65dccddacbd4\") " pod="openshift-marketplace/certified-operators-6x8zf" Feb 02 10:55:09 crc kubenswrapper[4782]: I0202 10:55:09.085117 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6x8zf" Feb 02 10:55:09 crc kubenswrapper[4782]: I0202 10:55:09.820977 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-6b655fd757-r6hxp" Feb 02 10:55:10 crc kubenswrapper[4782]: I0202 10:55:10.041403 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6x8zf"] Feb 02 10:55:10 crc kubenswrapper[4782]: I0202 10:55:10.832506 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6x8zf" event={"ID":"5527d0d6-41e7-42f6-bcb8-65dccddacbd4","Type":"ContainerStarted","Data":"5e81cbef9e833505fd0df87090b3b8a5e200225b581ca1d4a7afe27bab6d1427"} Feb 02 10:55:12 crc kubenswrapper[4782]: I0202 10:55:12.848216 4782 generic.go:334] "Generic (PLEG): container finished" podID="5527d0d6-41e7-42f6-bcb8-65dccddacbd4" containerID="02f0ae97c1b2468232eaf47303da42fd79b29e11acda84bdb635051c8381ae5f" exitCode=0 Feb 02 10:55:12 crc kubenswrapper[4782]: I0202 10:55:12.849761 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6x8zf" event={"ID":"5527d0d6-41e7-42f6-bcb8-65dccddacbd4","Type":"ContainerDied","Data":"02f0ae97c1b2468232eaf47303da42fd79b29e11acda84bdb635051c8381ae5f"} Feb 02 10:55:13 crc kubenswrapper[4782]: I0202 10:55:13.352049 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-zphk7"] Feb 02 10:55:13 crc kubenswrapper[4782]: I0202 10:55:13.354037 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zphk7" Feb 02 10:55:13 crc kubenswrapper[4782]: I0202 10:55:13.376077 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zphk7"] Feb 02 10:55:13 crc kubenswrapper[4782]: I0202 10:55:13.470609 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d43af81-0992-412f-8847-e3c97ab9c5ec-utilities\") pod \"redhat-marketplace-zphk7\" (UID: \"4d43af81-0992-412f-8847-e3c97ab9c5ec\") " pod="openshift-marketplace/redhat-marketplace-zphk7" Feb 02 10:55:13 crc kubenswrapper[4782]: I0202 10:55:13.470676 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d43af81-0992-412f-8847-e3c97ab9c5ec-catalog-content\") pod \"redhat-marketplace-zphk7\" (UID: \"4d43af81-0992-412f-8847-e3c97ab9c5ec\") " pod="openshift-marketplace/redhat-marketplace-zphk7" Feb 02 10:55:13 crc kubenswrapper[4782]: I0202 10:55:13.470717 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7694\" (UniqueName: \"kubernetes.io/projected/4d43af81-0992-412f-8847-e3c97ab9c5ec-kube-api-access-b7694\") pod \"redhat-marketplace-zphk7\" (UID: \"4d43af81-0992-412f-8847-e3c97ab9c5ec\") " pod="openshift-marketplace/redhat-marketplace-zphk7" Feb 02 10:55:13 crc kubenswrapper[4782]: I0202 10:55:13.575784 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b7694\" (UniqueName: \"kubernetes.io/projected/4d43af81-0992-412f-8847-e3c97ab9c5ec-kube-api-access-b7694\") pod \"redhat-marketplace-zphk7\" (UID: \"4d43af81-0992-412f-8847-e3c97ab9c5ec\") " pod="openshift-marketplace/redhat-marketplace-zphk7" Feb 02 10:55:13 crc kubenswrapper[4782]: I0202 10:55:13.575900 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d43af81-0992-412f-8847-e3c97ab9c5ec-utilities\") pod \"redhat-marketplace-zphk7\" (UID: \"4d43af81-0992-412f-8847-e3c97ab9c5ec\") " pod="openshift-marketplace/redhat-marketplace-zphk7" Feb 02 10:55:13 crc kubenswrapper[4782]: I0202 10:55:13.575919 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d43af81-0992-412f-8847-e3c97ab9c5ec-catalog-content\") pod \"redhat-marketplace-zphk7\" (UID: \"4d43af81-0992-412f-8847-e3c97ab9c5ec\") " pod="openshift-marketplace/redhat-marketplace-zphk7" Feb 02 10:55:13 crc kubenswrapper[4782]: I0202 10:55:13.576317 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d43af81-0992-412f-8847-e3c97ab9c5ec-catalog-content\") pod \"redhat-marketplace-zphk7\" (UID: \"4d43af81-0992-412f-8847-e3c97ab9c5ec\") " pod="openshift-marketplace/redhat-marketplace-zphk7" Feb 02 10:55:13 crc kubenswrapper[4782]: I0202 10:55:13.576530 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d43af81-0992-412f-8847-e3c97ab9c5ec-utilities\") pod \"redhat-marketplace-zphk7\" (UID: \"4d43af81-0992-412f-8847-e3c97ab9c5ec\") " pod="openshift-marketplace/redhat-marketplace-zphk7" Feb 02 10:55:13 crc kubenswrapper[4782]: I0202 10:55:13.609806 4782 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-b7694\" (UniqueName: \"kubernetes.io/projected/4d43af81-0992-412f-8847-e3c97ab9c5ec-kube-api-access-b7694\") pod \"redhat-marketplace-zphk7\" (UID: \"4d43af81-0992-412f-8847-e3c97ab9c5ec\") " pod="openshift-marketplace/redhat-marketplace-zphk7" Feb 02 10:55:13 crc kubenswrapper[4782]: I0202 10:55:13.672113 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zphk7" Feb 02 10:55:13 crc kubenswrapper[4782]: I0202 10:55:13.855727 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-9ls2x" event={"ID":"2f8b3b48-0c03-4922-8966-a3aaca8ebce3","Type":"ContainerStarted","Data":"aafeb6a1d514a10efaf35e2bfda7c93b6512861484627867db3b0474793e9aaf"} Feb 02 10:55:13 crc kubenswrapper[4782]: I0202 10:55:13.857129 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-9ls2x" Feb 02 10:55:13 crc kubenswrapper[4782]: I0202 10:55:13.858149 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-v7tzl" event={"ID":"b03fe987-deab-47e7-829a-b822ab061f20","Type":"ContainerStarted","Data":"d402c0fcdb5d4fa8e4dca5378f965f20361ae89497cd488fc548e2f108a3585a"} Feb 02 10:55:13 crc kubenswrapper[4782]: I0202 10:55:13.858584 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-v7tzl" Feb 02 10:55:13 crc kubenswrapper[4782]: I0202 10:55:13.860782 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xbp4r" event={"ID":"8fb4828a-ffeb-41d4-8410-c4ea114e7e61","Type":"ContainerStarted","Data":"40897fbdf21f9ff476596b752dc7b8cbaedabb3c901c63892f2339cf2669c9b7"} Feb 02 10:55:13 crc kubenswrapper[4782]: I0202 10:55:13.861967 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-ckl5m" event={"ID":"c617a97c-fec4-418c-818a-250919ea6882","Type":"ContainerStarted","Data":"cdfcc0e5dcfdf02da12d2b38d9f57eb986f12e226763a3c9099d4c4a2b414c96"} Feb 02 10:55:13 crc kubenswrapper[4782]: I0202 10:55:13.862327 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-ckl5m" Feb 02 10:55:13 crc kubenswrapper[4782]: I0202 10:55:13.863322 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dtmpbf" event={"ID":"6c7ac81b-49d3-493d-a794-1cffe78eba5e","Type":"ContainerStarted","Data":"0f95d86f74b98e4dbc2f711bf78734eb3290e39ed3c8dc88ebaa0bcbaf33fad6"} Feb 02 10:55:13 crc kubenswrapper[4782]: I0202 10:55:13.863921 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dtmpbf" Feb 02 10:55:13 crc kubenswrapper[4782]: I0202 10:55:13.866424 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-scr7v" event={"ID":"f44c1b55-d189-42dd-9187-90d9e0713790","Type":"ContainerStarted","Data":"246222310d651b966169e79e1962a2400c59d0f7238a0e1776396a2c08a48953"} Feb 02 10:55:13 crc kubenswrapper[4782]: I0202 10:55:13.867099 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/manila-operator-controller-manager-7dd968899f-scr7v" Feb 02 10:55:13 crc kubenswrapper[4782]: I0202 10:55:13.868398 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79955696d6-nsx4j" event={"ID":"009bc68d-5c70-42ca-9008-152206fd954d","Type":"ContainerStarted","Data":"a782ded10f4830efde9ad7a0b9c882ac4b232641312411ed441628ab387c01b0"} Feb 02 10:55:13 crc kubenswrapper[4782]: I0202 10:55:13.868620 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-79955696d6-nsx4j" Feb 02 10:55:13 crc kubenswrapper[4782]: I0202 10:55:13.869779 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-l9q78" event={"ID":"216a79cc-1b33-43f7-81ff-400a3b6f3d00","Type":"ContainerStarted","Data":"7316b36ec7b686d7c8bd21502a880eee1fb05f6cd26fb1f4483bd777a8ad092e"} Feb 02 10:55:13 crc kubenswrapper[4782]: I0202 10:55:13.870090 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-l9q78" Feb 02 10:55:13 crc kubenswrapper[4782]: I0202 10:55:13.870926 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-7z5k7" event={"ID":"224f30b2-1084-4934-8d06-67975a9776ad","Type":"ContainerStarted","Data":"6a7e603b5edb39378311215eadde71b35da21649e0a777a39515836c561a30ba"} Feb 02 10:55:13 crc kubenswrapper[4782]: I0202 10:55:13.871165 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-7z5k7" Feb 02 10:55:13 crc kubenswrapper[4782]: I0202 10:55:13.872213 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-xnzl4" event={"ID":"1661d177-41b5-4df5-886f-f3cb7abd1047","Type":"ContainerStarted","Data":"6d5e8ad129b2545b99fc7962dad50fbb4d1b509a7b94a776b6be4124cd84eafd"} Feb 02 10:55:13 crc kubenswrapper[4782]: I0202 10:55:13.872367 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-xnzl4" Feb 02 10:55:13 crc kubenswrapper[4782]: I0202 10:55:13.922517 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dtmpbf" podStartSLOduration=33.34846726 podStartE2EDuration="39.922498444s" podCreationTimestamp="2026-02-02 10:54:34 +0000 UTC" firstStartedPulling="2026-02-02 10:55:06.001254898 +0000 UTC m=+985.885447614" lastFinishedPulling="2026-02-02 10:55:12.575286082 +0000 UTC m=+992.459478798" observedRunningTime="2026-02-02 10:55:13.918058617 +0000 UTC m=+993.802251333" watchObservedRunningTime="2026-02-02 10:55:13.922498444 +0000 UTC m=+993.806691160" Feb 02 10:55:13 crc kubenswrapper[4782]: I0202 10:55:13.926477 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-9ls2x" podStartSLOduration=3.961789791 podStartE2EDuration="39.926455867s" podCreationTimestamp="2026-02-02 10:54:34 +0000 UTC" firstStartedPulling="2026-02-02 10:54:36.450466643 +0000 UTC m=+956.334659359" lastFinishedPulling="2026-02-02 10:55:12.415132719 +0000 UTC m=+992.299325435" observedRunningTime="2026-02-02 10:55:13.888006797 +0000 UTC m=+993.772199513" 
watchObservedRunningTime="2026-02-02 10:55:13.926455867 +0000 UTC m=+993.810648583" Feb 02 10:55:13 crc kubenswrapper[4782]: I0202 10:55:13.939126 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-ckl5m" podStartSLOduration=4.187073986 podStartE2EDuration="39.939105339s" podCreationTimestamp="2026-02-02 10:54:34 +0000 UTC" firstStartedPulling="2026-02-02 10:54:36.804447801 +0000 UTC m=+956.688640527" lastFinishedPulling="2026-02-02 10:55:12.556479164 +0000 UTC m=+992.440671880" observedRunningTime="2026-02-02 10:55:13.93773109 +0000 UTC m=+993.821923806" watchObservedRunningTime="2026-02-02 10:55:13.939105339 +0000 UTC m=+993.823298055" Feb 02 10:55:13 crc kubenswrapper[4782]: I0202 10:55:13.960023 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-79955696d6-nsx4j" podStartSLOduration=34.051516251 podStartE2EDuration="40.960005207s" podCreationTimestamp="2026-02-02 10:54:33 +0000 UTC" firstStartedPulling="2026-02-02 10:55:05.473877706 +0000 UTC m=+985.358070422" lastFinishedPulling="2026-02-02 10:55:12.382366662 +0000 UTC m=+992.266559378" observedRunningTime="2026-02-02 10:55:13.956129706 +0000 UTC m=+993.840322422" watchObservedRunningTime="2026-02-02 10:55:13.960005207 +0000 UTC m=+993.844197923" Feb 02 10:55:14 crc kubenswrapper[4782]: I0202 10:55:14.015882 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-scr7v" podStartSLOduration=3.9939968329999997 podStartE2EDuration="40.015864346s" podCreationTimestamp="2026-02-02 10:54:34 +0000 UTC" firstStartedPulling="2026-02-02 10:54:36.532500021 +0000 UTC m=+956.416692737" lastFinishedPulling="2026-02-02 10:55:12.554367534 +0000 UTC m=+992.438560250" observedRunningTime="2026-02-02 10:55:13.984071146 +0000 UTC m=+993.868263862" watchObservedRunningTime="2026-02-02 10:55:14.015864346 +0000 UTC m=+993.900057062" Feb 02 10:55:14 crc kubenswrapper[4782]: I0202 10:55:14.044933 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-xnzl4" podStartSLOduration=4.317380235 podStartE2EDuration="40.044916157s" podCreationTimestamp="2026-02-02 10:54:34 +0000 UTC" firstStartedPulling="2026-02-02 10:54:36.829069626 +0000 UTC m=+956.713262342" lastFinishedPulling="2026-02-02 10:55:12.556605548 +0000 UTC m=+992.440798264" observedRunningTime="2026-02-02 10:55:14.043397094 +0000 UTC m=+993.927589810" watchObservedRunningTime="2026-02-02 10:55:14.044916157 +0000 UTC m=+993.929108873" Feb 02 10:55:14 crc kubenswrapper[4782]: I0202 10:55:14.062386 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-7z5k7" podStartSLOduration=5.168211729 podStartE2EDuration="41.062367247s" podCreationTimestamp="2026-02-02 10:54:33 +0000 UTC" firstStartedPulling="2026-02-02 10:54:36.51988325 +0000 UTC m=+956.404075966" lastFinishedPulling="2026-02-02 10:55:12.414038768 +0000 UTC m=+992.298231484" observedRunningTime="2026-02-02 10:55:14.060019899 +0000 UTC m=+993.944212615" watchObservedRunningTime="2026-02-02 10:55:14.062367247 +0000 UTC m=+993.946559963" Feb 02 10:55:14 crc kubenswrapper[4782]: I0202 10:55:14.098495 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/neutron-operator-controller-manager-585dbc889-l9q78" podStartSLOduration=4.502553474 podStartE2EDuration="40.09847626s" podCreationTimestamp="2026-02-02 10:54:34 +0000 UTC" firstStartedPulling="2026-02-02 10:54:36.816664581 +0000 UTC m=+956.700857297" lastFinishedPulling="2026-02-02 10:55:12.412587367 +0000 UTC m=+992.296780083" observedRunningTime="2026-02-02 10:55:14.096213055 +0000 UTC m=+993.980405771" watchObservedRunningTime="2026-02-02 10:55:14.09847626 +0000 UTC m=+993.982668976" Feb 02 10:55:14 crc kubenswrapper[4782]: I0202 10:55:14.123146 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-v7tzl" podStartSLOduration=4.47980082 podStartE2EDuration="41.123132925s" podCreationTimestamp="2026-02-02 10:54:33 +0000 UTC" firstStartedPulling="2026-02-02 10:54:35.81812336 +0000 UTC m=+955.702316076" lastFinishedPulling="2026-02-02 10:55:12.461455465 +0000 UTC m=+992.345648181" observedRunningTime="2026-02-02 10:55:14.120258503 +0000 UTC m=+994.004451219" watchObservedRunningTime="2026-02-02 10:55:14.123132925 +0000 UTC m=+994.007325641" Feb 02 10:55:14 crc kubenswrapper[4782]: I0202 10:55:14.124838 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-5ngrn" Feb 02 10:55:14 crc kubenswrapper[4782]: I0202 10:55:14.136266 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-vj4sh" Feb 02 10:55:14 crc kubenswrapper[4782]: I0202 10:55:14.149287 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-5vj4j" Feb 02 10:55:14 crc kubenswrapper[4782]: I0202 10:55:14.235736 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-fkwh5" Feb 02 10:55:14 crc kubenswrapper[4782]: I0202 10:55:14.324715 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-v94dv" Feb 02 10:55:14 crc kubenswrapper[4782]: I0202 10:55:14.602148 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-n88d6" Feb 02 10:55:14 crc kubenswrapper[4782]: I0202 10:55:14.879044 4782 generic.go:334] "Generic (PLEG): container finished" podID="8fb4828a-ffeb-41d4-8410-c4ea114e7e61" containerID="40897fbdf21f9ff476596b752dc7b8cbaedabb3c901c63892f2339cf2669c9b7" exitCode=0 Feb 02 10:55:14 crc kubenswrapper[4782]: I0202 10:55:14.880196 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xbp4r" event={"ID":"8fb4828a-ffeb-41d4-8410-c4ea114e7e61","Type":"ContainerDied","Data":"40897fbdf21f9ff476596b752dc7b8cbaedabb3c901c63892f2339cf2669c9b7"} Feb 02 10:55:14 crc kubenswrapper[4782]: I0202 10:55:14.907370 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-r9dkb" Feb 02 10:55:15 crc kubenswrapper[4782]: I0202 10:55:15.137242 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-dmncd" Feb 02 10:55:15 crc kubenswrapper[4782]: I0202 10:55:15.221711 4782 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-82nk8" Feb 02 10:55:15 crc kubenswrapper[4782]: I0202 10:55:15.245480 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-564965969-k7t28" Feb 02 10:55:16 crc kubenswrapper[4782]: I0202 10:55:16.147665 4782 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 02 10:55:16 crc kubenswrapper[4782]: I0202 10:55:16.766223 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zphk7"] Feb 02 10:55:16 crc kubenswrapper[4782]: I0202 10:55:16.893484 4782 generic.go:334] "Generic (PLEG): container finished" podID="5527d0d6-41e7-42f6-bcb8-65dccddacbd4" containerID="101a8eaeb7491b5f075b87887c50680315890fb0aa3c779efbdb7be5a1ceaba6" exitCode=0 Feb 02 10:55:16 crc kubenswrapper[4782]: I0202 10:55:16.893569 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6x8zf" event={"ID":"5527d0d6-41e7-42f6-bcb8-65dccddacbd4","Type":"ContainerDied","Data":"101a8eaeb7491b5f075b87887c50680315890fb0aa3c779efbdb7be5a1ceaba6"} Feb 02 10:55:16 crc kubenswrapper[4782]: I0202 10:55:16.897256 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zphk7" event={"ID":"4d43af81-0992-412f-8847-e3c97ab9c5ec","Type":"ContainerStarted","Data":"291c7cc1034e1388ada25b16a9b90b3e30e39b678bc61c7573b4a92f1ad048e6"} Feb 02 10:55:17 crc kubenswrapper[4782]: I0202 10:55:17.550351 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-6b655fd757-r6hxp" Feb 02 10:55:17 crc kubenswrapper[4782]: I0202 10:55:17.904084 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-w7gld" event={"ID":"6b276ac2-533f-43c9-94a1-f0d0e4eb6993","Type":"ContainerStarted","Data":"30326654e96f0a152cccbe55069fde8e58735bab6ad9e408b52328334f00bcd1"} Feb 02 10:55:17 crc kubenswrapper[4782]: I0202 10:55:17.904706 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-w7gld" Feb 02 10:55:17 crc kubenswrapper[4782]: I0202 10:55:17.907016 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xbp4r" event={"ID":"8fb4828a-ffeb-41d4-8410-c4ea114e7e61","Type":"ContainerStarted","Data":"a71526f86a8d9cce79844973b9def9adbe148adf98421e66c40e6b499c2794df"} Feb 02 10:55:17 crc kubenswrapper[4782]: I0202 10:55:17.909974 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6x8zf" event={"ID":"5527d0d6-41e7-42f6-bcb8-65dccddacbd4","Type":"ContainerStarted","Data":"cee1a210ac10cecd6c16a7e568049d509625abbf88c135dbd01444476d55f754"} Feb 02 10:55:17 crc kubenswrapper[4782]: I0202 10:55:17.911565 4782 generic.go:334] "Generic (PLEG): container finished" podID="4d43af81-0992-412f-8847-e3c97ab9c5ec" containerID="c8ef06d3578d8c9cc177b7a28ca0456b84ac84309f2614e477c981f214c01a71" exitCode=0 Feb 02 10:55:17 crc kubenswrapper[4782]: I0202 10:55:17.911595 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zphk7" 
event={"ID":"4d43af81-0992-412f-8847-e3c97ab9c5ec","Type":"ContainerDied","Data":"c8ef06d3578d8c9cc177b7a28ca0456b84ac84309f2614e477c981f214c01a71"} Feb 02 10:55:17 crc kubenswrapper[4782]: I0202 10:55:17.923783 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-w7gld" podStartSLOduration=4.028499805 podStartE2EDuration="44.923767526s" podCreationTimestamp="2026-02-02 10:54:33 +0000 UTC" firstStartedPulling="2026-02-02 10:54:36.509834842 +0000 UTC m=+956.394027558" lastFinishedPulling="2026-02-02 10:55:17.405102563 +0000 UTC m=+997.289295279" observedRunningTime="2026-02-02 10:55:17.921306715 +0000 UTC m=+997.805499431" watchObservedRunningTime="2026-02-02 10:55:17.923767526 +0000 UTC m=+997.807960242" Feb 02 10:55:17 crc kubenswrapper[4782]: I0202 10:55:17.972424 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-6x8zf" podStartSLOduration=5.41910157 podStartE2EDuration="9.972402468s" podCreationTimestamp="2026-02-02 10:55:08 +0000 UTC" firstStartedPulling="2026-02-02 10:55:12.85196022 +0000 UTC m=+992.736152936" lastFinishedPulling="2026-02-02 10:55:17.405261118 +0000 UTC m=+997.289453834" observedRunningTime="2026-02-02 10:55:17.964870322 +0000 UTC m=+997.849063058" watchObservedRunningTime="2026-02-02 10:55:17.972402468 +0000 UTC m=+997.856595184" Feb 02 10:55:17 crc kubenswrapper[4782]: I0202 10:55:17.986194 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-xbp4r" podStartSLOduration=20.545424826 podStartE2EDuration="30.986175582s" podCreationTimestamp="2026-02-02 10:54:47 +0000 UTC" firstStartedPulling="2026-02-02 10:55:06.761557774 +0000 UTC m=+986.645750490" lastFinishedPulling="2026-02-02 10:55:17.20230853 +0000 UTC m=+997.086501246" observedRunningTime="2026-02-02 10:55:17.984602427 +0000 UTC m=+997.868795153" watchObservedRunningTime="2026-02-02 10:55:17.986175582 +0000 UTC m=+997.870368298" Feb 02 10:55:18 crc kubenswrapper[4782]: I0202 10:55:18.167875 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-xbp4r" Feb 02 10:55:18 crc kubenswrapper[4782]: I0202 10:55:18.168133 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-xbp4r" Feb 02 10:55:18 crc kubenswrapper[4782]: E0202 10:55:18.827161 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-jjztq" podUID="83a0d24e-3e0c-4d9a-b735-77c74ceec664" Feb 02 10:55:18 crc kubenswrapper[4782]: I0202 10:55:18.928775 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-v8zfh" event={"ID":"ab3a96ec-3e51-4147-9a58-6596f2c3ad5c","Type":"ContainerStarted","Data":"da1767a8c52f1fefb3e588727a746e8770186442bc68a394e48ca0c99c63fc1a"} Feb 02 10:55:18 crc kubenswrapper[4782]: I0202 10:55:18.929758 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-v8zfh" Feb 02 10:55:18 crc kubenswrapper[4782]: I0202 10:55:18.933885 4782 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zphk7" event={"ID":"4d43af81-0992-412f-8847-e3c97ab9c5ec","Type":"ContainerStarted","Data":"49f2097f3ed7955bf7c802a923649f541dd903cf1e008ace430616281654bb34"} Feb 02 10:55:18 crc kubenswrapper[4782]: I0202 10:55:18.954920 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-v8zfh" podStartSLOduration=2.889574839 podStartE2EDuration="44.954893673s" podCreationTimestamp="2026-02-02 10:54:34 +0000 UTC" firstStartedPulling="2026-02-02 10:54:36.548157029 +0000 UTC m=+956.432349755" lastFinishedPulling="2026-02-02 10:55:18.613475873 +0000 UTC m=+998.497668589" observedRunningTime="2026-02-02 10:55:18.947743138 +0000 UTC m=+998.831935864" watchObservedRunningTime="2026-02-02 10:55:18.954893673 +0000 UTC m=+998.839086389" Feb 02 10:55:19 crc kubenswrapper[4782]: I0202 10:55:19.086537 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-6x8zf" Feb 02 10:55:19 crc kubenswrapper[4782]: I0202 10:55:19.086586 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-6x8zf" Feb 02 10:55:19 crc kubenswrapper[4782]: I0202 10:55:19.213983 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-xbp4r" podUID="8fb4828a-ffeb-41d4-8410-c4ea114e7e61" containerName="registry-server" probeResult="failure" output=< Feb 02 10:55:19 crc kubenswrapper[4782]: timeout: failed to connect service ":50051" within 1s Feb 02 10:55:19 crc kubenswrapper[4782]: > Feb 02 10:55:19 crc kubenswrapper[4782]: I0202 10:55:19.941663 4782 generic.go:334] "Generic (PLEG): container finished" podID="4d43af81-0992-412f-8847-e3c97ab9c5ec" containerID="49f2097f3ed7955bf7c802a923649f541dd903cf1e008ace430616281654bb34" exitCode=0 Feb 02 10:55:19 crc kubenswrapper[4782]: I0202 10:55:19.941717 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zphk7" event={"ID":"4d43af81-0992-412f-8847-e3c97ab9c5ec","Type":"ContainerDied","Data":"49f2097f3ed7955bf7c802a923649f541dd903cf1e008ace430616281654bb34"} Feb 02 10:55:20 crc kubenswrapper[4782]: I0202 10:55:20.131908 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-6x8zf" podUID="5527d0d6-41e7-42f6-bcb8-65dccddacbd4" containerName="registry-server" probeResult="failure" output=< Feb 02 10:55:20 crc kubenswrapper[4782]: timeout: failed to connect service ":50051" within 1s Feb 02 10:55:20 crc kubenswrapper[4782]: > Feb 02 10:55:20 crc kubenswrapper[4782]: I0202 10:55:20.140656 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-79955696d6-nsx4j" Feb 02 10:55:20 crc kubenswrapper[4782]: I0202 10:55:20.790004 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dtmpbf" Feb 02 10:55:20 crc kubenswrapper[4782]: I0202 10:55:20.950020 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zphk7" event={"ID":"4d43af81-0992-412f-8847-e3c97ab9c5ec","Type":"ContainerStarted","Data":"778b9fe717f6eb167695992a16b39c0449d91e39e380b1920e771a56a3649524"} Feb 02 10:55:22 crc kubenswrapper[4782]: I0202 10:55:22.951056 4782 patch_prober.go:28] 
interesting pod/machine-config-daemon-bhdgk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 10:55:22 crc kubenswrapper[4782]: I0202 10:55:22.951496 4782 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 10:55:23 crc kubenswrapper[4782]: I0202 10:55:23.673024 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-zphk7" Feb 02 10:55:23 crc kubenswrapper[4782]: I0202 10:55:23.673077 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-zphk7" Feb 02 10:55:24 crc kubenswrapper[4782]: I0202 10:55:24.184129 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-v7tzl" Feb 02 10:55:24 crc kubenswrapper[4782]: I0202 10:55:24.209195 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-zphk7" podStartSLOduration=8.651528151 podStartE2EDuration="11.20915532s" podCreationTimestamp="2026-02-02 10:55:13 +0000 UTC" firstStartedPulling="2026-02-02 10:55:17.912934336 +0000 UTC m=+997.797127052" lastFinishedPulling="2026-02-02 10:55:20.470561505 +0000 UTC m=+1000.354754221" observedRunningTime="2026-02-02 10:55:20.976439542 +0000 UTC m=+1000.860632258" watchObservedRunningTime="2026-02-02 10:55:24.20915532 +0000 UTC m=+1004.093348036" Feb 02 10:55:24 crc kubenswrapper[4782]: I0202 10:55:24.554751 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-7z5k7" Feb 02 10:55:24 crc kubenswrapper[4782]: I0202 10:55:24.564780 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-w7gld" Feb 02 10:55:24 crc kubenswrapper[4782]: I0202 10:55:24.633583 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-scr7v" Feb 02 10:55:24 crc kubenswrapper[4782]: I0202 10:55:24.717887 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-zphk7" podUID="4d43af81-0992-412f-8847-e3c97ab9c5ec" containerName="registry-server" probeResult="failure" output=< Feb 02 10:55:24 crc kubenswrapper[4782]: timeout: failed to connect service ":50051" within 1s Feb 02 10:55:24 crc kubenswrapper[4782]: > Feb 02 10:55:24 crc kubenswrapper[4782]: I0202 10:55:24.803736 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-l9q78" Feb 02 10:55:24 crc kubenswrapper[4782]: I0202 10:55:24.832199 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-v8zfh" Feb 02 10:55:25 crc kubenswrapper[4782]: I0202 10:55:25.145057 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/ovn-operator-controller-manager-788c46999f-9ls2x" Feb 02 10:55:25 crc kubenswrapper[4782]: I0202 10:55:25.157354 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-xnzl4" Feb 02 10:55:25 crc kubenswrapper[4782]: I0202 10:55:25.186139 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-ckl5m" Feb 02 10:55:28 crc kubenswrapper[4782]: I0202 10:55:28.210067 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-xbp4r" Feb 02 10:55:28 crc kubenswrapper[4782]: I0202 10:55:28.267580 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-xbp4r" Feb 02 10:55:28 crc kubenswrapper[4782]: I0202 10:55:28.450555 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xbp4r"] Feb 02 10:55:29 crc kubenswrapper[4782]: I0202 10:55:29.128268 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-6x8zf" Feb 02 10:55:29 crc kubenswrapper[4782]: I0202 10:55:29.189348 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-6x8zf" Feb 02 10:55:30 crc kubenswrapper[4782]: I0202 10:55:30.005179 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-xbp4r" podUID="8fb4828a-ffeb-41d4-8410-c4ea114e7e61" containerName="registry-server" containerID="cri-o://a71526f86a8d9cce79844973b9def9adbe148adf98421e66c40e6b499c2794df" gracePeriod=2 Feb 02 10:55:30 crc kubenswrapper[4782]: I0202 10:55:30.412845 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xbp4r" Feb 02 10:55:30 crc kubenswrapper[4782]: I0202 10:55:30.615762 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8fb4828a-ffeb-41d4-8410-c4ea114e7e61-utilities\") pod \"8fb4828a-ffeb-41d4-8410-c4ea114e7e61\" (UID: \"8fb4828a-ffeb-41d4-8410-c4ea114e7e61\") " Feb 02 10:55:30 crc kubenswrapper[4782]: I0202 10:55:30.615933 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8fb4828a-ffeb-41d4-8410-c4ea114e7e61-catalog-content\") pod \"8fb4828a-ffeb-41d4-8410-c4ea114e7e61\" (UID: \"8fb4828a-ffeb-41d4-8410-c4ea114e7e61\") " Feb 02 10:55:30 crc kubenswrapper[4782]: I0202 10:55:30.616034 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gm657\" (UniqueName: \"kubernetes.io/projected/8fb4828a-ffeb-41d4-8410-c4ea114e7e61-kube-api-access-gm657\") pod \"8fb4828a-ffeb-41d4-8410-c4ea114e7e61\" (UID: \"8fb4828a-ffeb-41d4-8410-c4ea114e7e61\") " Feb 02 10:55:30 crc kubenswrapper[4782]: I0202 10:55:30.616607 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8fb4828a-ffeb-41d4-8410-c4ea114e7e61-utilities" (OuterVolumeSpecName: "utilities") pod "8fb4828a-ffeb-41d4-8410-c4ea114e7e61" (UID: "8fb4828a-ffeb-41d4-8410-c4ea114e7e61"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:55:30 crc kubenswrapper[4782]: I0202 10:55:30.627085 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8fb4828a-ffeb-41d4-8410-c4ea114e7e61-kube-api-access-gm657" (OuterVolumeSpecName: "kube-api-access-gm657") pod "8fb4828a-ffeb-41d4-8410-c4ea114e7e61" (UID: "8fb4828a-ffeb-41d4-8410-c4ea114e7e61"). InnerVolumeSpecName "kube-api-access-gm657". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:55:30 crc kubenswrapper[4782]: I0202 10:55:30.674901 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8fb4828a-ffeb-41d4-8410-c4ea114e7e61-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8fb4828a-ffeb-41d4-8410-c4ea114e7e61" (UID: "8fb4828a-ffeb-41d4-8410-c4ea114e7e61"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:55:30 crc kubenswrapper[4782]: I0202 10:55:30.717369 4782 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8fb4828a-ffeb-41d4-8410-c4ea114e7e61-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 10:55:30 crc kubenswrapper[4782]: I0202 10:55:30.717406 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gm657\" (UniqueName: \"kubernetes.io/projected/8fb4828a-ffeb-41d4-8410-c4ea114e7e61-kube-api-access-gm657\") on node \"crc\" DevicePath \"\"" Feb 02 10:55:30 crc kubenswrapper[4782]: I0202 10:55:30.717420 4782 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8fb4828a-ffeb-41d4-8410-c4ea114e7e61-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 10:55:31 crc kubenswrapper[4782]: I0202 10:55:31.014189 4782 generic.go:334] "Generic (PLEG): container finished" podID="8fb4828a-ffeb-41d4-8410-c4ea114e7e61" containerID="a71526f86a8d9cce79844973b9def9adbe148adf98421e66c40e6b499c2794df" exitCode=0 Feb 02 10:55:31 crc kubenswrapper[4782]: I0202 10:55:31.014245 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xbp4r" event={"ID":"8fb4828a-ffeb-41d4-8410-c4ea114e7e61","Type":"ContainerDied","Data":"a71526f86a8d9cce79844973b9def9adbe148adf98421e66c40e6b499c2794df"} Feb 02 10:55:31 crc kubenswrapper[4782]: I0202 10:55:31.015577 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xbp4r" event={"ID":"8fb4828a-ffeb-41d4-8410-c4ea114e7e61","Type":"ContainerDied","Data":"b35a4e0e6150963a20819d29e6e20270f93008d6cf7aa812ef8e1c21fd13b16f"} Feb 02 10:55:31 crc kubenswrapper[4782]: I0202 10:55:31.015671 4782 scope.go:117] "RemoveContainer" containerID="a71526f86a8d9cce79844973b9def9adbe148adf98421e66c40e6b499c2794df" Feb 02 10:55:31 crc kubenswrapper[4782]: I0202 10:55:31.014273 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-xbp4r" Feb 02 10:55:31 crc kubenswrapper[4782]: I0202 10:55:31.020817 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-jjztq" event={"ID":"83a0d24e-3e0c-4d9a-b735-77c74ceec664","Type":"ContainerStarted","Data":"cdd1d0f2dafd6328a0c712865c17dfde7ee7c7b0efc30bff0931cf120f095178"} Feb 02 10:55:31 crc kubenswrapper[4782]: I0202 10:55:31.038031 4782 scope.go:117] "RemoveContainer" containerID="40897fbdf21f9ff476596b752dc7b8cbaedabb3c901c63892f2339cf2669c9b7" Feb 02 10:55:31 crc kubenswrapper[4782]: I0202 10:55:31.042819 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xbp4r"] Feb 02 10:55:31 crc kubenswrapper[4782]: I0202 10:55:31.048279 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-xbp4r"] Feb 02 10:55:31 crc kubenswrapper[4782]: I0202 10:55:31.056801 4782 scope.go:117] "RemoveContainer" containerID="88c58cdcd9357750b80f66879741cbc18ef54918dc5c6b2d82df9d16c8c3b444" Feb 02 10:55:31 crc kubenswrapper[4782]: I0202 10:55:31.072905 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-jjztq" podStartSLOduration=2.635877235 podStartE2EDuration="56.072872086s" podCreationTimestamp="2026-02-02 10:54:35 +0000 UTC" firstStartedPulling="2026-02-02 10:54:36.86868993 +0000 UTC m=+956.752882646" lastFinishedPulling="2026-02-02 10:55:30.305684771 +0000 UTC m=+1010.189877497" observedRunningTime="2026-02-02 10:55:31.068356466 +0000 UTC m=+1010.952549182" watchObservedRunningTime="2026-02-02 10:55:31.072872086 +0000 UTC m=+1010.957064802" Feb 02 10:55:31 crc kubenswrapper[4782]: I0202 10:55:31.096590 4782 scope.go:117] "RemoveContainer" containerID="a71526f86a8d9cce79844973b9def9adbe148adf98421e66c40e6b499c2794df" Feb 02 10:55:31 crc kubenswrapper[4782]: E0202 10:55:31.097540 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a71526f86a8d9cce79844973b9def9adbe148adf98421e66c40e6b499c2794df\": container with ID starting with a71526f86a8d9cce79844973b9def9adbe148adf98421e66c40e6b499c2794df not found: ID does not exist" containerID="a71526f86a8d9cce79844973b9def9adbe148adf98421e66c40e6b499c2794df" Feb 02 10:55:31 crc kubenswrapper[4782]: I0202 10:55:31.097601 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a71526f86a8d9cce79844973b9def9adbe148adf98421e66c40e6b499c2794df"} err="failed to get container status \"a71526f86a8d9cce79844973b9def9adbe148adf98421e66c40e6b499c2794df\": rpc error: code = NotFound desc = could not find container \"a71526f86a8d9cce79844973b9def9adbe148adf98421e66c40e6b499c2794df\": container with ID starting with a71526f86a8d9cce79844973b9def9adbe148adf98421e66c40e6b499c2794df not found: ID does not exist" Feb 02 10:55:31 crc kubenswrapper[4782]: I0202 10:55:31.097656 4782 scope.go:117] "RemoveContainer" containerID="40897fbdf21f9ff476596b752dc7b8cbaedabb3c901c63892f2339cf2669c9b7" Feb 02 10:55:31 crc kubenswrapper[4782]: E0202 10:55:31.098134 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"40897fbdf21f9ff476596b752dc7b8cbaedabb3c901c63892f2339cf2669c9b7\": container with ID starting with 
40897fbdf21f9ff476596b752dc7b8cbaedabb3c901c63892f2339cf2669c9b7 not found: ID does not exist" containerID="40897fbdf21f9ff476596b752dc7b8cbaedabb3c901c63892f2339cf2669c9b7" Feb 02 10:55:31 crc kubenswrapper[4782]: I0202 10:55:31.098170 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40897fbdf21f9ff476596b752dc7b8cbaedabb3c901c63892f2339cf2669c9b7"} err="failed to get container status \"40897fbdf21f9ff476596b752dc7b8cbaedabb3c901c63892f2339cf2669c9b7\": rpc error: code = NotFound desc = could not find container \"40897fbdf21f9ff476596b752dc7b8cbaedabb3c901c63892f2339cf2669c9b7\": container with ID starting with 40897fbdf21f9ff476596b752dc7b8cbaedabb3c901c63892f2339cf2669c9b7 not found: ID does not exist" Feb 02 10:55:31 crc kubenswrapper[4782]: I0202 10:55:31.098192 4782 scope.go:117] "RemoveContainer" containerID="88c58cdcd9357750b80f66879741cbc18ef54918dc5c6b2d82df9d16c8c3b444" Feb 02 10:55:31 crc kubenswrapper[4782]: E0202 10:55:31.098447 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"88c58cdcd9357750b80f66879741cbc18ef54918dc5c6b2d82df9d16c8c3b444\": container with ID starting with 88c58cdcd9357750b80f66879741cbc18ef54918dc5c6b2d82df9d16c8c3b444 not found: ID does not exist" containerID="88c58cdcd9357750b80f66879741cbc18ef54918dc5c6b2d82df9d16c8c3b444" Feb 02 10:55:31 crc kubenswrapper[4782]: I0202 10:55:31.098479 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88c58cdcd9357750b80f66879741cbc18ef54918dc5c6b2d82df9d16c8c3b444"} err="failed to get container status \"88c58cdcd9357750b80f66879741cbc18ef54918dc5c6b2d82df9d16c8c3b444\": rpc error: code = NotFound desc = could not find container \"88c58cdcd9357750b80f66879741cbc18ef54918dc5c6b2d82df9d16c8c3b444\": container with ID starting with 88c58cdcd9357750b80f66879741cbc18ef54918dc5c6b2d82df9d16c8c3b444 not found: ID does not exist" Feb 02 10:55:31 crc kubenswrapper[4782]: I0202 10:55:31.249170 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6x8zf"] Feb 02 10:55:31 crc kubenswrapper[4782]: I0202 10:55:31.249390 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-6x8zf" podUID="5527d0d6-41e7-42f6-bcb8-65dccddacbd4" containerName="registry-server" containerID="cri-o://cee1a210ac10cecd6c16a7e568049d509625abbf88c135dbd01444476d55f754" gracePeriod=2 Feb 02 10:55:31 crc kubenswrapper[4782]: I0202 10:55:31.651796 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6x8zf" Feb 02 10:55:31 crc kubenswrapper[4782]: I0202 10:55:31.830082 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xdxhk\" (UniqueName: \"kubernetes.io/projected/5527d0d6-41e7-42f6-bcb8-65dccddacbd4-kube-api-access-xdxhk\") pod \"5527d0d6-41e7-42f6-bcb8-65dccddacbd4\" (UID: \"5527d0d6-41e7-42f6-bcb8-65dccddacbd4\") " Feb 02 10:55:31 crc kubenswrapper[4782]: I0202 10:55:31.830226 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5527d0d6-41e7-42f6-bcb8-65dccddacbd4-catalog-content\") pod \"5527d0d6-41e7-42f6-bcb8-65dccddacbd4\" (UID: \"5527d0d6-41e7-42f6-bcb8-65dccddacbd4\") " Feb 02 10:55:31 crc kubenswrapper[4782]: I0202 10:55:31.830259 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5527d0d6-41e7-42f6-bcb8-65dccddacbd4-utilities\") pod \"5527d0d6-41e7-42f6-bcb8-65dccddacbd4\" (UID: \"5527d0d6-41e7-42f6-bcb8-65dccddacbd4\") " Feb 02 10:55:31 crc kubenswrapper[4782]: I0202 10:55:31.830956 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5527d0d6-41e7-42f6-bcb8-65dccddacbd4-utilities" (OuterVolumeSpecName: "utilities") pod "5527d0d6-41e7-42f6-bcb8-65dccddacbd4" (UID: "5527d0d6-41e7-42f6-bcb8-65dccddacbd4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:55:31 crc kubenswrapper[4782]: I0202 10:55:31.838922 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5527d0d6-41e7-42f6-bcb8-65dccddacbd4-kube-api-access-xdxhk" (OuterVolumeSpecName: "kube-api-access-xdxhk") pod "5527d0d6-41e7-42f6-bcb8-65dccddacbd4" (UID: "5527d0d6-41e7-42f6-bcb8-65dccddacbd4"). InnerVolumeSpecName "kube-api-access-xdxhk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:55:31 crc kubenswrapper[4782]: I0202 10:55:31.877084 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5527d0d6-41e7-42f6-bcb8-65dccddacbd4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5527d0d6-41e7-42f6-bcb8-65dccddacbd4" (UID: "5527d0d6-41e7-42f6-bcb8-65dccddacbd4"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:55:31 crc kubenswrapper[4782]: I0202 10:55:31.932998 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xdxhk\" (UniqueName: \"kubernetes.io/projected/5527d0d6-41e7-42f6-bcb8-65dccddacbd4-kube-api-access-xdxhk\") on node \"crc\" DevicePath \"\"" Feb 02 10:55:31 crc kubenswrapper[4782]: I0202 10:55:31.933049 4782 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5527d0d6-41e7-42f6-bcb8-65dccddacbd4-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 10:55:31 crc kubenswrapper[4782]: I0202 10:55:31.933065 4782 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5527d0d6-41e7-42f6-bcb8-65dccddacbd4-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 10:55:32 crc kubenswrapper[4782]: I0202 10:55:32.029225 4782 generic.go:334] "Generic (PLEG): container finished" podID="5527d0d6-41e7-42f6-bcb8-65dccddacbd4" containerID="cee1a210ac10cecd6c16a7e568049d509625abbf88c135dbd01444476d55f754" exitCode=0 Feb 02 10:55:32 crc kubenswrapper[4782]: I0202 10:55:32.029270 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6x8zf" event={"ID":"5527d0d6-41e7-42f6-bcb8-65dccddacbd4","Type":"ContainerDied","Data":"cee1a210ac10cecd6c16a7e568049d509625abbf88c135dbd01444476d55f754"} Feb 02 10:55:32 crc kubenswrapper[4782]: I0202 10:55:32.029279 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6x8zf" Feb 02 10:55:32 crc kubenswrapper[4782]: I0202 10:55:32.029294 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6x8zf" event={"ID":"5527d0d6-41e7-42f6-bcb8-65dccddacbd4","Type":"ContainerDied","Data":"5e81cbef9e833505fd0df87090b3b8a5e200225b581ca1d4a7afe27bab6d1427"} Feb 02 10:55:32 crc kubenswrapper[4782]: I0202 10:55:32.029311 4782 scope.go:117] "RemoveContainer" containerID="cee1a210ac10cecd6c16a7e568049d509625abbf88c135dbd01444476d55f754" Feb 02 10:55:32 crc kubenswrapper[4782]: I0202 10:55:32.049819 4782 scope.go:117] "RemoveContainer" containerID="101a8eaeb7491b5f075b87887c50680315890fb0aa3c779efbdb7be5a1ceaba6" Feb 02 10:55:32 crc kubenswrapper[4782]: I0202 10:55:32.059594 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6x8zf"] Feb 02 10:55:32 crc kubenswrapper[4782]: I0202 10:55:32.066450 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-6x8zf"] Feb 02 10:55:32 crc kubenswrapper[4782]: I0202 10:55:32.072420 4782 scope.go:117] "RemoveContainer" containerID="02f0ae97c1b2468232eaf47303da42fd79b29e11acda84bdb635051c8381ae5f" Feb 02 10:55:32 crc kubenswrapper[4782]: I0202 10:55:32.092356 4782 scope.go:117] "RemoveContainer" containerID="cee1a210ac10cecd6c16a7e568049d509625abbf88c135dbd01444476d55f754" Feb 02 10:55:32 crc kubenswrapper[4782]: E0202 10:55:32.093136 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cee1a210ac10cecd6c16a7e568049d509625abbf88c135dbd01444476d55f754\": container with ID starting with cee1a210ac10cecd6c16a7e568049d509625abbf88c135dbd01444476d55f754 not found: ID does not exist" containerID="cee1a210ac10cecd6c16a7e568049d509625abbf88c135dbd01444476d55f754" Feb 02 10:55:32 crc kubenswrapper[4782]: I0202 10:55:32.093173 
4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cee1a210ac10cecd6c16a7e568049d509625abbf88c135dbd01444476d55f754"} err="failed to get container status \"cee1a210ac10cecd6c16a7e568049d509625abbf88c135dbd01444476d55f754\": rpc error: code = NotFound desc = could not find container \"cee1a210ac10cecd6c16a7e568049d509625abbf88c135dbd01444476d55f754\": container with ID starting with cee1a210ac10cecd6c16a7e568049d509625abbf88c135dbd01444476d55f754 not found: ID does not exist" Feb 02 10:55:32 crc kubenswrapper[4782]: I0202 10:55:32.093202 4782 scope.go:117] "RemoveContainer" containerID="101a8eaeb7491b5f075b87887c50680315890fb0aa3c779efbdb7be5a1ceaba6" Feb 02 10:55:32 crc kubenswrapper[4782]: E0202 10:55:32.093674 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"101a8eaeb7491b5f075b87887c50680315890fb0aa3c779efbdb7be5a1ceaba6\": container with ID starting with 101a8eaeb7491b5f075b87887c50680315890fb0aa3c779efbdb7be5a1ceaba6 not found: ID does not exist" containerID="101a8eaeb7491b5f075b87887c50680315890fb0aa3c779efbdb7be5a1ceaba6" Feb 02 10:55:32 crc kubenswrapper[4782]: I0202 10:55:32.093707 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"101a8eaeb7491b5f075b87887c50680315890fb0aa3c779efbdb7be5a1ceaba6"} err="failed to get container status \"101a8eaeb7491b5f075b87887c50680315890fb0aa3c779efbdb7be5a1ceaba6\": rpc error: code = NotFound desc = could not find container \"101a8eaeb7491b5f075b87887c50680315890fb0aa3c779efbdb7be5a1ceaba6\": container with ID starting with 101a8eaeb7491b5f075b87887c50680315890fb0aa3c779efbdb7be5a1ceaba6 not found: ID does not exist" Feb 02 10:55:32 crc kubenswrapper[4782]: I0202 10:55:32.093725 4782 scope.go:117] "RemoveContainer" containerID="02f0ae97c1b2468232eaf47303da42fd79b29e11acda84bdb635051c8381ae5f" Feb 02 10:55:32 crc kubenswrapper[4782]: E0202 10:55:32.095810 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"02f0ae97c1b2468232eaf47303da42fd79b29e11acda84bdb635051c8381ae5f\": container with ID starting with 02f0ae97c1b2468232eaf47303da42fd79b29e11acda84bdb635051c8381ae5f not found: ID does not exist" containerID="02f0ae97c1b2468232eaf47303da42fd79b29e11acda84bdb635051c8381ae5f" Feb 02 10:55:32 crc kubenswrapper[4782]: I0202 10:55:32.095868 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02f0ae97c1b2468232eaf47303da42fd79b29e11acda84bdb635051c8381ae5f"} err="failed to get container status \"02f0ae97c1b2468232eaf47303da42fd79b29e11acda84bdb635051c8381ae5f\": rpc error: code = NotFound desc = could not find container \"02f0ae97c1b2468232eaf47303da42fd79b29e11acda84bdb635051c8381ae5f\": container with ID starting with 02f0ae97c1b2468232eaf47303da42fd79b29e11acda84bdb635051c8381ae5f not found: ID does not exist" Feb 02 10:55:32 crc kubenswrapper[4782]: I0202 10:55:32.843073 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5527d0d6-41e7-42f6-bcb8-65dccddacbd4" path="/var/lib/kubelet/pods/5527d0d6-41e7-42f6-bcb8-65dccddacbd4/volumes" Feb 02 10:55:32 crc kubenswrapper[4782]: I0202 10:55:32.844999 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8fb4828a-ffeb-41d4-8410-c4ea114e7e61" path="/var/lib/kubelet/pods/8fb4828a-ffeb-41d4-8410-c4ea114e7e61/volumes" Feb 02 10:55:33 crc kubenswrapper[4782]: I0202 
10:55:33.719173 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-zphk7" Feb 02 10:55:33 crc kubenswrapper[4782]: I0202 10:55:33.764815 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-zphk7" Feb 02 10:55:36 crc kubenswrapper[4782]: I0202 10:55:36.051785 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zphk7"] Feb 02 10:55:36 crc kubenswrapper[4782]: I0202 10:55:36.052048 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-zphk7" podUID="4d43af81-0992-412f-8847-e3c97ab9c5ec" containerName="registry-server" containerID="cri-o://778b9fe717f6eb167695992a16b39c0449d91e39e380b1920e771a56a3649524" gracePeriod=2 Feb 02 10:55:36 crc kubenswrapper[4782]: I0202 10:55:36.432818 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zphk7" Feb 02 10:55:36 crc kubenswrapper[4782]: I0202 10:55:36.594293 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d43af81-0992-412f-8847-e3c97ab9c5ec-utilities\") pod \"4d43af81-0992-412f-8847-e3c97ab9c5ec\" (UID: \"4d43af81-0992-412f-8847-e3c97ab9c5ec\") " Feb 02 10:55:36 crc kubenswrapper[4782]: I0202 10:55:36.594349 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d43af81-0992-412f-8847-e3c97ab9c5ec-catalog-content\") pod \"4d43af81-0992-412f-8847-e3c97ab9c5ec\" (UID: \"4d43af81-0992-412f-8847-e3c97ab9c5ec\") " Feb 02 10:55:36 crc kubenswrapper[4782]: I0202 10:55:36.594392 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b7694\" (UniqueName: \"kubernetes.io/projected/4d43af81-0992-412f-8847-e3c97ab9c5ec-kube-api-access-b7694\") pod \"4d43af81-0992-412f-8847-e3c97ab9c5ec\" (UID: \"4d43af81-0992-412f-8847-e3c97ab9c5ec\") " Feb 02 10:55:36 crc kubenswrapper[4782]: I0202 10:55:36.595213 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4d43af81-0992-412f-8847-e3c97ab9c5ec-utilities" (OuterVolumeSpecName: "utilities") pod "4d43af81-0992-412f-8847-e3c97ab9c5ec" (UID: "4d43af81-0992-412f-8847-e3c97ab9c5ec"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:55:36 crc kubenswrapper[4782]: I0202 10:55:36.604905 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d43af81-0992-412f-8847-e3c97ab9c5ec-kube-api-access-b7694" (OuterVolumeSpecName: "kube-api-access-b7694") pod "4d43af81-0992-412f-8847-e3c97ab9c5ec" (UID: "4d43af81-0992-412f-8847-e3c97ab9c5ec"). InnerVolumeSpecName "kube-api-access-b7694". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:55:36 crc kubenswrapper[4782]: I0202 10:55:36.624687 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4d43af81-0992-412f-8847-e3c97ab9c5ec-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4d43af81-0992-412f-8847-e3c97ab9c5ec" (UID: "4d43af81-0992-412f-8847-e3c97ab9c5ec"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:55:36 crc kubenswrapper[4782]: I0202 10:55:36.695627 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b7694\" (UniqueName: \"kubernetes.io/projected/4d43af81-0992-412f-8847-e3c97ab9c5ec-kube-api-access-b7694\") on node \"crc\" DevicePath \"\"" Feb 02 10:55:36 crc kubenswrapper[4782]: I0202 10:55:36.695693 4782 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d43af81-0992-412f-8847-e3c97ab9c5ec-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 10:55:36 crc kubenswrapper[4782]: I0202 10:55:36.695707 4782 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d43af81-0992-412f-8847-e3c97ab9c5ec-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 10:55:37 crc kubenswrapper[4782]: I0202 10:55:37.066317 4782 generic.go:334] "Generic (PLEG): container finished" podID="4d43af81-0992-412f-8847-e3c97ab9c5ec" containerID="778b9fe717f6eb167695992a16b39c0449d91e39e380b1920e771a56a3649524" exitCode=0 Feb 02 10:55:37 crc kubenswrapper[4782]: I0202 10:55:37.066413 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zphk7" Feb 02 10:55:37 crc kubenswrapper[4782]: I0202 10:55:37.066426 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zphk7" event={"ID":"4d43af81-0992-412f-8847-e3c97ab9c5ec","Type":"ContainerDied","Data":"778b9fe717f6eb167695992a16b39c0449d91e39e380b1920e771a56a3649524"} Feb 02 10:55:37 crc kubenswrapper[4782]: I0202 10:55:37.066880 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zphk7" event={"ID":"4d43af81-0992-412f-8847-e3c97ab9c5ec","Type":"ContainerDied","Data":"291c7cc1034e1388ada25b16a9b90b3e30e39b678bc61c7573b4a92f1ad048e6"} Feb 02 10:55:37 crc kubenswrapper[4782]: I0202 10:55:37.066900 4782 scope.go:117] "RemoveContainer" containerID="778b9fe717f6eb167695992a16b39c0449d91e39e380b1920e771a56a3649524" Feb 02 10:55:37 crc kubenswrapper[4782]: I0202 10:55:37.088167 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zphk7"] Feb 02 10:55:37 crc kubenswrapper[4782]: I0202 10:55:37.098191 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-zphk7"] Feb 02 10:55:37 crc kubenswrapper[4782]: I0202 10:55:37.104471 4782 scope.go:117] "RemoveContainer" containerID="49f2097f3ed7955bf7c802a923649f541dd903cf1e008ace430616281654bb34" Feb 02 10:55:37 crc kubenswrapper[4782]: I0202 10:55:37.120206 4782 scope.go:117] "RemoveContainer" containerID="c8ef06d3578d8c9cc177b7a28ca0456b84ac84309f2614e477c981f214c01a71" Feb 02 10:55:37 crc kubenswrapper[4782]: I0202 10:55:37.152356 4782 scope.go:117] "RemoveContainer" containerID="778b9fe717f6eb167695992a16b39c0449d91e39e380b1920e771a56a3649524" Feb 02 10:55:37 crc kubenswrapper[4782]: E0202 10:55:37.152873 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"778b9fe717f6eb167695992a16b39c0449d91e39e380b1920e771a56a3649524\": container with ID starting with 778b9fe717f6eb167695992a16b39c0449d91e39e380b1920e771a56a3649524 not found: ID does not exist" containerID="778b9fe717f6eb167695992a16b39c0449d91e39e380b1920e771a56a3649524" Feb 02 10:55:37 crc kubenswrapper[4782]: I0202 10:55:37.152911 4782 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"778b9fe717f6eb167695992a16b39c0449d91e39e380b1920e771a56a3649524"} err="failed to get container status \"778b9fe717f6eb167695992a16b39c0449d91e39e380b1920e771a56a3649524\": rpc error: code = NotFound desc = could not find container \"778b9fe717f6eb167695992a16b39c0449d91e39e380b1920e771a56a3649524\": container with ID starting with 778b9fe717f6eb167695992a16b39c0449d91e39e380b1920e771a56a3649524 not found: ID does not exist" Feb 02 10:55:37 crc kubenswrapper[4782]: I0202 10:55:37.152940 4782 scope.go:117] "RemoveContainer" containerID="49f2097f3ed7955bf7c802a923649f541dd903cf1e008ace430616281654bb34" Feb 02 10:55:37 crc kubenswrapper[4782]: E0202 10:55:37.153179 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"49f2097f3ed7955bf7c802a923649f541dd903cf1e008ace430616281654bb34\": container with ID starting with 49f2097f3ed7955bf7c802a923649f541dd903cf1e008ace430616281654bb34 not found: ID does not exist" containerID="49f2097f3ed7955bf7c802a923649f541dd903cf1e008ace430616281654bb34" Feb 02 10:55:37 crc kubenswrapper[4782]: I0202 10:55:37.153219 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"49f2097f3ed7955bf7c802a923649f541dd903cf1e008ace430616281654bb34"} err="failed to get container status \"49f2097f3ed7955bf7c802a923649f541dd903cf1e008ace430616281654bb34\": rpc error: code = NotFound desc = could not find container \"49f2097f3ed7955bf7c802a923649f541dd903cf1e008ace430616281654bb34\": container with ID starting with 49f2097f3ed7955bf7c802a923649f541dd903cf1e008ace430616281654bb34 not found: ID does not exist" Feb 02 10:55:37 crc kubenswrapper[4782]: I0202 10:55:37.153232 4782 scope.go:117] "RemoveContainer" containerID="c8ef06d3578d8c9cc177b7a28ca0456b84ac84309f2614e477c981f214c01a71" Feb 02 10:55:37 crc kubenswrapper[4782]: E0202 10:55:37.153616 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c8ef06d3578d8c9cc177b7a28ca0456b84ac84309f2614e477c981f214c01a71\": container with ID starting with c8ef06d3578d8c9cc177b7a28ca0456b84ac84309f2614e477c981f214c01a71 not found: ID does not exist" containerID="c8ef06d3578d8c9cc177b7a28ca0456b84ac84309f2614e477c981f214c01a71" Feb 02 10:55:37 crc kubenswrapper[4782]: I0202 10:55:37.153695 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8ef06d3578d8c9cc177b7a28ca0456b84ac84309f2614e477c981f214c01a71"} err="failed to get container status \"c8ef06d3578d8c9cc177b7a28ca0456b84ac84309f2614e477c981f214c01a71\": rpc error: code = NotFound desc = could not find container \"c8ef06d3578d8c9cc177b7a28ca0456b84ac84309f2614e477c981f214c01a71\": container with ID starting with c8ef06d3578d8c9cc177b7a28ca0456b84ac84309f2614e477c981f214c01a71 not found: ID does not exist" Feb 02 10:55:38 crc kubenswrapper[4782]: I0202 10:55:38.828351 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d43af81-0992-412f-8847-e3c97ab9c5ec" path="/var/lib/kubelet/pods/4d43af81-0992-412f-8847-e3c97ab9c5ec/volumes" Feb 02 10:55:46 crc kubenswrapper[4782]: I0202 10:55:46.299940 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-76smw"] Feb 02 10:55:46 crc kubenswrapper[4782]: E0202 10:55:46.300618 4782 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="4d43af81-0992-412f-8847-e3c97ab9c5ec" containerName="extract-content" Feb 02 10:55:46 crc kubenswrapper[4782]: I0202 10:55:46.300630 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d43af81-0992-412f-8847-e3c97ab9c5ec" containerName="extract-content" Feb 02 10:55:46 crc kubenswrapper[4782]: E0202 10:55:46.300658 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5527d0d6-41e7-42f6-bcb8-65dccddacbd4" containerName="registry-server" Feb 02 10:55:46 crc kubenswrapper[4782]: I0202 10:55:46.300683 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="5527d0d6-41e7-42f6-bcb8-65dccddacbd4" containerName="registry-server" Feb 02 10:55:46 crc kubenswrapper[4782]: E0202 10:55:46.300695 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5527d0d6-41e7-42f6-bcb8-65dccddacbd4" containerName="extract-content" Feb 02 10:55:46 crc kubenswrapper[4782]: I0202 10:55:46.300701 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="5527d0d6-41e7-42f6-bcb8-65dccddacbd4" containerName="extract-content" Feb 02 10:55:46 crc kubenswrapper[4782]: E0202 10:55:46.300710 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d43af81-0992-412f-8847-e3c97ab9c5ec" containerName="registry-server" Feb 02 10:55:46 crc kubenswrapper[4782]: I0202 10:55:46.300717 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d43af81-0992-412f-8847-e3c97ab9c5ec" containerName="registry-server" Feb 02 10:55:46 crc kubenswrapper[4782]: E0202 10:55:46.300730 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d43af81-0992-412f-8847-e3c97ab9c5ec" containerName="extract-utilities" Feb 02 10:55:46 crc kubenswrapper[4782]: I0202 10:55:46.300736 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d43af81-0992-412f-8847-e3c97ab9c5ec" containerName="extract-utilities" Feb 02 10:55:46 crc kubenswrapper[4782]: E0202 10:55:46.300743 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8fb4828a-ffeb-41d4-8410-c4ea114e7e61" containerName="extract-content" Feb 02 10:55:46 crc kubenswrapper[4782]: I0202 10:55:46.300749 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fb4828a-ffeb-41d4-8410-c4ea114e7e61" containerName="extract-content" Feb 02 10:55:46 crc kubenswrapper[4782]: E0202 10:55:46.300759 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8fb4828a-ffeb-41d4-8410-c4ea114e7e61" containerName="registry-server" Feb 02 10:55:46 crc kubenswrapper[4782]: I0202 10:55:46.300764 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fb4828a-ffeb-41d4-8410-c4ea114e7e61" containerName="registry-server" Feb 02 10:55:46 crc kubenswrapper[4782]: E0202 10:55:46.300773 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8fb4828a-ffeb-41d4-8410-c4ea114e7e61" containerName="extract-utilities" Feb 02 10:55:46 crc kubenswrapper[4782]: I0202 10:55:46.300779 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fb4828a-ffeb-41d4-8410-c4ea114e7e61" containerName="extract-utilities" Feb 02 10:55:46 crc kubenswrapper[4782]: E0202 10:55:46.300787 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5527d0d6-41e7-42f6-bcb8-65dccddacbd4" containerName="extract-utilities" Feb 02 10:55:46 crc kubenswrapper[4782]: I0202 10:55:46.300793 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="5527d0d6-41e7-42f6-bcb8-65dccddacbd4" containerName="extract-utilities" Feb 02 10:55:46 crc kubenswrapper[4782]: I0202 10:55:46.300922 4782 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="5527d0d6-41e7-42f6-bcb8-65dccddacbd4" containerName="registry-server" Feb 02 10:55:46 crc kubenswrapper[4782]: I0202 10:55:46.300933 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d43af81-0992-412f-8847-e3c97ab9c5ec" containerName="registry-server" Feb 02 10:55:46 crc kubenswrapper[4782]: I0202 10:55:46.300941 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="8fb4828a-ffeb-41d4-8410-c4ea114e7e61" containerName="registry-server" Feb 02 10:55:46 crc kubenswrapper[4782]: I0202 10:55:46.301586 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-76smw" Feb 02 10:55:46 crc kubenswrapper[4782]: I0202 10:55:46.307418 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Feb 02 10:55:46 crc kubenswrapper[4782]: I0202 10:55:46.307657 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Feb 02 10:55:46 crc kubenswrapper[4782]: I0202 10:55:46.307432 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Feb 02 10:55:46 crc kubenswrapper[4782]: I0202 10:55:46.307860 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-lpsfl" Feb 02 10:55:46 crc kubenswrapper[4782]: I0202 10:55:46.331555 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-76smw"] Feb 02 10:55:46 crc kubenswrapper[4782]: I0202 10:55:46.411742 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-v2zgx"] Feb 02 10:55:46 crc kubenswrapper[4782]: I0202 10:55:46.415982 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-v2zgx" Feb 02 10:55:46 crc kubenswrapper[4782]: I0202 10:55:46.421354 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Feb 02 10:55:46 crc kubenswrapper[4782]: I0202 10:55:46.424778 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/651c76ea-95cf-4ed1-80da-6731a9bcb98a-config\") pod \"dnsmasq-dns-675f4bcbfc-76smw\" (UID: \"651c76ea-95cf-4ed1-80da-6731a9bcb98a\") " pod="openstack/dnsmasq-dns-675f4bcbfc-76smw" Feb 02 10:55:46 crc kubenswrapper[4782]: I0202 10:55:46.424860 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2nwmt\" (UniqueName: \"kubernetes.io/projected/651c76ea-95cf-4ed1-80da-6731a9bcb98a-kube-api-access-2nwmt\") pod \"dnsmasq-dns-675f4bcbfc-76smw\" (UID: \"651c76ea-95cf-4ed1-80da-6731a9bcb98a\") " pod="openstack/dnsmasq-dns-675f4bcbfc-76smw" Feb 02 10:55:46 crc kubenswrapper[4782]: I0202 10:55:46.473184 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-v2zgx"] Feb 02 10:55:46 crc kubenswrapper[4782]: I0202 10:55:46.526025 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d246705-dc07-488a-9288-59e2a16174fe-config\") pod \"dnsmasq-dns-78dd6ddcc-v2zgx\" (UID: \"0d246705-dc07-488a-9288-59e2a16174fe\") " pod="openstack/dnsmasq-dns-78dd6ddcc-v2zgx" Feb 02 10:55:46 crc kubenswrapper[4782]: I0202 10:55:46.526072 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbbjk\" (UniqueName: \"kubernetes.io/projected/0d246705-dc07-488a-9288-59e2a16174fe-kube-api-access-sbbjk\") pod \"dnsmasq-dns-78dd6ddcc-v2zgx\" (UID: \"0d246705-dc07-488a-9288-59e2a16174fe\") " pod="openstack/dnsmasq-dns-78dd6ddcc-v2zgx" Feb 02 10:55:46 crc kubenswrapper[4782]: I0202 10:55:46.526137 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0d246705-dc07-488a-9288-59e2a16174fe-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-v2zgx\" (UID: \"0d246705-dc07-488a-9288-59e2a16174fe\") " pod="openstack/dnsmasq-dns-78dd6ddcc-v2zgx" Feb 02 10:55:46 crc kubenswrapper[4782]: I0202 10:55:46.526161 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/651c76ea-95cf-4ed1-80da-6731a9bcb98a-config\") pod \"dnsmasq-dns-675f4bcbfc-76smw\" (UID: \"651c76ea-95cf-4ed1-80da-6731a9bcb98a\") " pod="openstack/dnsmasq-dns-675f4bcbfc-76smw" Feb 02 10:55:46 crc kubenswrapper[4782]: I0202 10:55:46.526186 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2nwmt\" (UniqueName: \"kubernetes.io/projected/651c76ea-95cf-4ed1-80da-6731a9bcb98a-kube-api-access-2nwmt\") pod \"dnsmasq-dns-675f4bcbfc-76smw\" (UID: \"651c76ea-95cf-4ed1-80da-6731a9bcb98a\") " pod="openstack/dnsmasq-dns-675f4bcbfc-76smw" Feb 02 10:55:46 crc kubenswrapper[4782]: I0202 10:55:46.527725 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/651c76ea-95cf-4ed1-80da-6731a9bcb98a-config\") pod \"dnsmasq-dns-675f4bcbfc-76smw\" (UID: \"651c76ea-95cf-4ed1-80da-6731a9bcb98a\") " pod="openstack/dnsmasq-dns-675f4bcbfc-76smw" Feb 02 
10:55:46 crc kubenswrapper[4782]: I0202 10:55:46.556363 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2nwmt\" (UniqueName: \"kubernetes.io/projected/651c76ea-95cf-4ed1-80da-6731a9bcb98a-kube-api-access-2nwmt\") pod \"dnsmasq-dns-675f4bcbfc-76smw\" (UID: \"651c76ea-95cf-4ed1-80da-6731a9bcb98a\") " pod="openstack/dnsmasq-dns-675f4bcbfc-76smw" Feb 02 10:55:46 crc kubenswrapper[4782]: I0202 10:55:46.623813 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-76smw" Feb 02 10:55:46 crc kubenswrapper[4782]: I0202 10:55:46.627332 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0d246705-dc07-488a-9288-59e2a16174fe-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-v2zgx\" (UID: \"0d246705-dc07-488a-9288-59e2a16174fe\") " pod="openstack/dnsmasq-dns-78dd6ddcc-v2zgx" Feb 02 10:55:46 crc kubenswrapper[4782]: I0202 10:55:46.627415 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d246705-dc07-488a-9288-59e2a16174fe-config\") pod \"dnsmasq-dns-78dd6ddcc-v2zgx\" (UID: \"0d246705-dc07-488a-9288-59e2a16174fe\") " pod="openstack/dnsmasq-dns-78dd6ddcc-v2zgx" Feb 02 10:55:46 crc kubenswrapper[4782]: I0202 10:55:46.627443 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sbbjk\" (UniqueName: \"kubernetes.io/projected/0d246705-dc07-488a-9288-59e2a16174fe-kube-api-access-sbbjk\") pod \"dnsmasq-dns-78dd6ddcc-v2zgx\" (UID: \"0d246705-dc07-488a-9288-59e2a16174fe\") " pod="openstack/dnsmasq-dns-78dd6ddcc-v2zgx" Feb 02 10:55:46 crc kubenswrapper[4782]: I0202 10:55:46.628819 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0d246705-dc07-488a-9288-59e2a16174fe-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-v2zgx\" (UID: \"0d246705-dc07-488a-9288-59e2a16174fe\") " pod="openstack/dnsmasq-dns-78dd6ddcc-v2zgx" Feb 02 10:55:46 crc kubenswrapper[4782]: I0202 10:55:46.629484 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d246705-dc07-488a-9288-59e2a16174fe-config\") pod \"dnsmasq-dns-78dd6ddcc-v2zgx\" (UID: \"0d246705-dc07-488a-9288-59e2a16174fe\") " pod="openstack/dnsmasq-dns-78dd6ddcc-v2zgx" Feb 02 10:55:46 crc kubenswrapper[4782]: I0202 10:55:46.660931 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sbbjk\" (UniqueName: \"kubernetes.io/projected/0d246705-dc07-488a-9288-59e2a16174fe-kube-api-access-sbbjk\") pod \"dnsmasq-dns-78dd6ddcc-v2zgx\" (UID: \"0d246705-dc07-488a-9288-59e2a16174fe\") " pod="openstack/dnsmasq-dns-78dd6ddcc-v2zgx" Feb 02 10:55:46 crc kubenswrapper[4782]: I0202 10:55:46.755211 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-v2zgx" Feb 02 10:55:47 crc kubenswrapper[4782]: I0202 10:55:47.309079 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-v2zgx"] Feb 02 10:55:47 crc kubenswrapper[4782]: W0202 10:55:47.314335 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0d246705_dc07_488a_9288_59e2a16174fe.slice/crio-412d8f851c0082d1b10d1b94b76e682711a37dfb601edf6f9abba0cae9aa133a WatchSource:0}: Error finding container 412d8f851c0082d1b10d1b94b76e682711a37dfb601edf6f9abba0cae9aa133a: Status 404 returned error can't find the container with id 412d8f851c0082d1b10d1b94b76e682711a37dfb601edf6f9abba0cae9aa133a Feb 02 10:55:47 crc kubenswrapper[4782]: W0202 10:55:47.372207 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod651c76ea_95cf_4ed1_80da_6731a9bcb98a.slice/crio-2af263505f6a8540ba9b3c383f8f251e1fadb60252e16c35129f3e7f4ddc31b2 WatchSource:0}: Error finding container 2af263505f6a8540ba9b3c383f8f251e1fadb60252e16c35129f3e7f4ddc31b2: Status 404 returned error can't find the container with id 2af263505f6a8540ba9b3c383f8f251e1fadb60252e16c35129f3e7f4ddc31b2 Feb 02 10:55:47 crc kubenswrapper[4782]: I0202 10:55:47.372694 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-76smw"] Feb 02 10:55:48 crc kubenswrapper[4782]: I0202 10:55:48.144758 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-v2zgx" event={"ID":"0d246705-dc07-488a-9288-59e2a16174fe","Type":"ContainerStarted","Data":"412d8f851c0082d1b10d1b94b76e682711a37dfb601edf6f9abba0cae9aa133a"} Feb 02 10:55:48 crc kubenswrapper[4782]: I0202 10:55:48.150733 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-76smw" event={"ID":"651c76ea-95cf-4ed1-80da-6731a9bcb98a","Type":"ContainerStarted","Data":"2af263505f6a8540ba9b3c383f8f251e1fadb60252e16c35129f3e7f4ddc31b2"} Feb 02 10:55:49 crc kubenswrapper[4782]: I0202 10:55:49.141515 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-76smw"] Feb 02 10:55:49 crc kubenswrapper[4782]: I0202 10:55:49.174660 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-s8sfp"] Feb 02 10:55:49 crc kubenswrapper[4782]: I0202 10:55:49.175996 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-s8sfp" Feb 02 10:55:49 crc kubenswrapper[4782]: I0202 10:55:49.197635 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-s8sfp"] Feb 02 10:55:49 crc kubenswrapper[4782]: I0202 10:55:49.376920 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w26jb\" (UniqueName: \"kubernetes.io/projected/9f871e0b-e0d8-43a7-a251-9601cfcfd87a-kube-api-access-w26jb\") pod \"dnsmasq-dns-666b6646f7-s8sfp\" (UID: \"9f871e0b-e0d8-43a7-a251-9601cfcfd87a\") " pod="openstack/dnsmasq-dns-666b6646f7-s8sfp" Feb 02 10:55:49 crc kubenswrapper[4782]: I0202 10:55:49.379271 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9f871e0b-e0d8-43a7-a251-9601cfcfd87a-dns-svc\") pod \"dnsmasq-dns-666b6646f7-s8sfp\" (UID: \"9f871e0b-e0d8-43a7-a251-9601cfcfd87a\") " pod="openstack/dnsmasq-dns-666b6646f7-s8sfp" Feb 02 10:55:49 crc kubenswrapper[4782]: I0202 10:55:49.379372 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f871e0b-e0d8-43a7-a251-9601cfcfd87a-config\") pod \"dnsmasq-dns-666b6646f7-s8sfp\" (UID: \"9f871e0b-e0d8-43a7-a251-9601cfcfd87a\") " pod="openstack/dnsmasq-dns-666b6646f7-s8sfp" Feb 02 10:55:49 crc kubenswrapper[4782]: I0202 10:55:49.480263 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f871e0b-e0d8-43a7-a251-9601cfcfd87a-config\") pod \"dnsmasq-dns-666b6646f7-s8sfp\" (UID: \"9f871e0b-e0d8-43a7-a251-9601cfcfd87a\") " pod="openstack/dnsmasq-dns-666b6646f7-s8sfp" Feb 02 10:55:49 crc kubenswrapper[4782]: I0202 10:55:49.480410 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w26jb\" (UniqueName: \"kubernetes.io/projected/9f871e0b-e0d8-43a7-a251-9601cfcfd87a-kube-api-access-w26jb\") pod \"dnsmasq-dns-666b6646f7-s8sfp\" (UID: \"9f871e0b-e0d8-43a7-a251-9601cfcfd87a\") " pod="openstack/dnsmasq-dns-666b6646f7-s8sfp" Feb 02 10:55:49 crc kubenswrapper[4782]: I0202 10:55:49.480430 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9f871e0b-e0d8-43a7-a251-9601cfcfd87a-dns-svc\") pod \"dnsmasq-dns-666b6646f7-s8sfp\" (UID: \"9f871e0b-e0d8-43a7-a251-9601cfcfd87a\") " pod="openstack/dnsmasq-dns-666b6646f7-s8sfp" Feb 02 10:55:49 crc kubenswrapper[4782]: I0202 10:55:49.481198 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9f871e0b-e0d8-43a7-a251-9601cfcfd87a-dns-svc\") pod \"dnsmasq-dns-666b6646f7-s8sfp\" (UID: \"9f871e0b-e0d8-43a7-a251-9601cfcfd87a\") " pod="openstack/dnsmasq-dns-666b6646f7-s8sfp" Feb 02 10:55:49 crc kubenswrapper[4782]: I0202 10:55:49.482020 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f871e0b-e0d8-43a7-a251-9601cfcfd87a-config\") pod \"dnsmasq-dns-666b6646f7-s8sfp\" (UID: \"9f871e0b-e0d8-43a7-a251-9601cfcfd87a\") " pod="openstack/dnsmasq-dns-666b6646f7-s8sfp" Feb 02 10:55:49 crc kubenswrapper[4782]: I0202 10:55:49.523720 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w26jb\" (UniqueName: 
\"kubernetes.io/projected/9f871e0b-e0d8-43a7-a251-9601cfcfd87a-kube-api-access-w26jb\") pod \"dnsmasq-dns-666b6646f7-s8sfp\" (UID: \"9f871e0b-e0d8-43a7-a251-9601cfcfd87a\") " pod="openstack/dnsmasq-dns-666b6646f7-s8sfp" Feb 02 10:55:49 crc kubenswrapper[4782]: I0202 10:55:49.571284 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-v2zgx"] Feb 02 10:55:49 crc kubenswrapper[4782]: I0202 10:55:49.625630 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-7jkmx"] Feb 02 10:55:49 crc kubenswrapper[4782]: I0202 10:55:49.629828 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-7jkmx" Feb 02 10:55:49 crc kubenswrapper[4782]: I0202 10:55:49.654543 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-7jkmx"] Feb 02 10:55:49 crc kubenswrapper[4782]: I0202 10:55:49.805650 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-s8sfp" Feb 02 10:55:49 crc kubenswrapper[4782]: I0202 10:55:49.806007 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76e79a91-7593-4b7a-bb1a-6396209cc424-config\") pod \"dnsmasq-dns-57d769cc4f-7jkmx\" (UID: \"76e79a91-7593-4b7a-bb1a-6396209cc424\") " pod="openstack/dnsmasq-dns-57d769cc4f-7jkmx" Feb 02 10:55:49 crc kubenswrapper[4782]: I0202 10:55:49.806095 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/76e79a91-7593-4b7a-bb1a-6396209cc424-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-7jkmx\" (UID: \"76e79a91-7593-4b7a-bb1a-6396209cc424\") " pod="openstack/dnsmasq-dns-57d769cc4f-7jkmx" Feb 02 10:55:49 crc kubenswrapper[4782]: I0202 10:55:49.806125 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cc844\" (UniqueName: \"kubernetes.io/projected/76e79a91-7593-4b7a-bb1a-6396209cc424-kube-api-access-cc844\") pod \"dnsmasq-dns-57d769cc4f-7jkmx\" (UID: \"76e79a91-7593-4b7a-bb1a-6396209cc424\") " pod="openstack/dnsmasq-dns-57d769cc4f-7jkmx" Feb 02 10:55:49 crc kubenswrapper[4782]: I0202 10:55:49.907916 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76e79a91-7593-4b7a-bb1a-6396209cc424-config\") pod \"dnsmasq-dns-57d769cc4f-7jkmx\" (UID: \"76e79a91-7593-4b7a-bb1a-6396209cc424\") " pod="openstack/dnsmasq-dns-57d769cc4f-7jkmx" Feb 02 10:55:49 crc kubenswrapper[4782]: I0202 10:55:49.907995 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/76e79a91-7593-4b7a-bb1a-6396209cc424-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-7jkmx\" (UID: \"76e79a91-7593-4b7a-bb1a-6396209cc424\") " pod="openstack/dnsmasq-dns-57d769cc4f-7jkmx" Feb 02 10:55:49 crc kubenswrapper[4782]: I0202 10:55:49.908026 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cc844\" (UniqueName: \"kubernetes.io/projected/76e79a91-7593-4b7a-bb1a-6396209cc424-kube-api-access-cc844\") pod \"dnsmasq-dns-57d769cc4f-7jkmx\" (UID: \"76e79a91-7593-4b7a-bb1a-6396209cc424\") " pod="openstack/dnsmasq-dns-57d769cc4f-7jkmx" Feb 02 10:55:49 crc kubenswrapper[4782]: I0202 10:55:49.909801 4782 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76e79a91-7593-4b7a-bb1a-6396209cc424-config\") pod \"dnsmasq-dns-57d769cc4f-7jkmx\" (UID: \"76e79a91-7593-4b7a-bb1a-6396209cc424\") " pod="openstack/dnsmasq-dns-57d769cc4f-7jkmx" Feb 02 10:55:49 crc kubenswrapper[4782]: I0202 10:55:49.910604 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/76e79a91-7593-4b7a-bb1a-6396209cc424-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-7jkmx\" (UID: \"76e79a91-7593-4b7a-bb1a-6396209cc424\") " pod="openstack/dnsmasq-dns-57d769cc4f-7jkmx" Feb 02 10:55:49 crc kubenswrapper[4782]: I0202 10:55:49.945188 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cc844\" (UniqueName: \"kubernetes.io/projected/76e79a91-7593-4b7a-bb1a-6396209cc424-kube-api-access-cc844\") pod \"dnsmasq-dns-57d769cc4f-7jkmx\" (UID: \"76e79a91-7593-4b7a-bb1a-6396209cc424\") " pod="openstack/dnsmasq-dns-57d769cc4f-7jkmx" Feb 02 10:55:49 crc kubenswrapper[4782]: I0202 10:55:49.955014 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-7jkmx" Feb 02 10:55:50 crc kubenswrapper[4782]: I0202 10:55:50.350672 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Feb 02 10:55:50 crc kubenswrapper[4782]: I0202 10:55:50.352170 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 02 10:55:50 crc kubenswrapper[4782]: I0202 10:55:50.361801 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Feb 02 10:55:50 crc kubenswrapper[4782]: I0202 10:55:50.362021 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 02 10:55:50 crc kubenswrapper[4782]: I0202 10:55:50.362046 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Feb 02 10:55:50 crc kubenswrapper[4782]: I0202 10:55:50.362120 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-l8s6k" Feb 02 10:55:50 crc kubenswrapper[4782]: I0202 10:55:50.362279 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Feb 02 10:55:50 crc kubenswrapper[4782]: I0202 10:55:50.362515 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Feb 02 10:55:50 crc kubenswrapper[4782]: I0202 10:55:50.362682 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Feb 02 10:55:50 crc kubenswrapper[4782]: I0202 10:55:50.362817 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Feb 02 10:55:50 crc kubenswrapper[4782]: I0202 10:55:50.506219 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-s8sfp"] Feb 02 10:55:50 crc kubenswrapper[4782]: I0202 10:55:50.519409 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9\") " pod="openstack/rabbitmq-server-0" Feb 02 10:55:50 crc kubenswrapper[4782]: I0202 10:55:50.519458 4782 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9\") " pod="openstack/rabbitmq-server-0" Feb 02 10:55:50 crc kubenswrapper[4782]: I0202 10:55:50.519488 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9-config-data\") pod \"rabbitmq-server-0\" (UID: \"e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9\") " pod="openstack/rabbitmq-server-0" Feb 02 10:55:50 crc kubenswrapper[4782]: I0202 10:55:50.519514 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9\") " pod="openstack/rabbitmq-server-0" Feb 02 10:55:50 crc kubenswrapper[4782]: I0202 10:55:50.519540 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9\") " pod="openstack/rabbitmq-server-0" Feb 02 10:55:50 crc kubenswrapper[4782]: I0202 10:55:50.519587 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9-pod-info\") pod \"rabbitmq-server-0\" (UID: \"e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9\") " pod="openstack/rabbitmq-server-0" Feb 02 10:55:50 crc kubenswrapper[4782]: I0202 10:55:50.519615 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-server-0\" (UID: \"e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9\") " pod="openstack/rabbitmq-server-0" Feb 02 10:55:50 crc kubenswrapper[4782]: I0202 10:55:50.519655 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9-server-conf\") pod \"rabbitmq-server-0\" (UID: \"e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9\") " pod="openstack/rabbitmq-server-0" Feb 02 10:55:50 crc kubenswrapper[4782]: I0202 10:55:50.520394 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8b77\" (UniqueName: \"kubernetes.io/projected/e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9-kube-api-access-j8b77\") pod \"rabbitmq-server-0\" (UID: \"e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9\") " pod="openstack/rabbitmq-server-0" Feb 02 10:55:50 crc kubenswrapper[4782]: I0202 10:55:50.520432 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9\") " pod="openstack/rabbitmq-server-0" Feb 02 10:55:50 crc kubenswrapper[4782]: I0202 10:55:50.520498 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9\") " pod="openstack/rabbitmq-server-0" Feb 02 10:55:50 crc kubenswrapper[4782]: I0202 10:55:50.622114 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9\") " pod="openstack/rabbitmq-server-0" Feb 02 10:55:50 crc kubenswrapper[4782]: I0202 10:55:50.622739 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9\") " pod="openstack/rabbitmq-server-0" Feb 02 10:55:50 crc kubenswrapper[4782]: I0202 10:55:50.622786 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9-config-data\") pod \"rabbitmq-server-0\" (UID: \"e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9\") " pod="openstack/rabbitmq-server-0" Feb 02 10:55:50 crc kubenswrapper[4782]: I0202 10:55:50.622819 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9\") " pod="openstack/rabbitmq-server-0" Feb 02 10:55:50 crc kubenswrapper[4782]: I0202 10:55:50.622848 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9\") " pod="openstack/rabbitmq-server-0" Feb 02 10:55:50 crc kubenswrapper[4782]: I0202 10:55:50.622908 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9-pod-info\") pod \"rabbitmq-server-0\" (UID: \"e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9\") " pod="openstack/rabbitmq-server-0" Feb 02 10:55:50 crc kubenswrapper[4782]: I0202 10:55:50.622936 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-server-0\" (UID: \"e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9\") " pod="openstack/rabbitmq-server-0" Feb 02 10:55:50 crc kubenswrapper[4782]: I0202 10:55:50.622957 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9-server-conf\") pod \"rabbitmq-server-0\" (UID: \"e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9\") " pod="openstack/rabbitmq-server-0" Feb 02 10:55:50 crc kubenswrapper[4782]: I0202 10:55:50.622995 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9\") " pod="openstack/rabbitmq-server-0" Feb 02 10:55:50 crc kubenswrapper[4782]: I0202 10:55:50.623018 4782 
Feb 02 10:55:50 crc kubenswrapper[4782]: I0202 10:55:50.623018 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j8b77\" (UniqueName: \"kubernetes.io/projected/e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9-kube-api-access-j8b77\") pod \"rabbitmq-server-0\" (UID: \"e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9\") " pod="openstack/rabbitmq-server-0"
Feb 02 10:55:50 crc kubenswrapper[4782]: I0202 10:55:50.623040 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9\") " pod="openstack/rabbitmq-server-0"
Feb 02 10:55:50 crc kubenswrapper[4782]: I0202 10:55:50.623390 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9\") " pod="openstack/rabbitmq-server-0"
Feb 02 10:55:50 crc kubenswrapper[4782]: I0202 10:55:50.622699 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9\") " pod="openstack/rabbitmq-server-0"
Feb 02 10:55:50 crc kubenswrapper[4782]: I0202 10:55:50.624420 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9-config-data\") pod \"rabbitmq-server-0\" (UID: \"e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9\") " pod="openstack/rabbitmq-server-0"
Feb 02 10:55:50 crc kubenswrapper[4782]: I0202 10:55:50.624853 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9-server-conf\") pod \"rabbitmq-server-0\" (UID: \"e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9\") " pod="openstack/rabbitmq-server-0"
Feb 02 10:55:50 crc kubenswrapper[4782]: I0202 10:55:50.625963 4782 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-server-0\" (UID: \"e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/rabbitmq-server-0"
Feb 02 10:55:50 crc kubenswrapper[4782]: I0202 10:55:50.628317 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9\") " pod="openstack/rabbitmq-server-0"
Feb 02 10:55:50 crc kubenswrapper[4782]: I0202 10:55:50.630793 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9\") " pod="openstack/rabbitmq-server-0"
Feb 02 10:55:50 crc kubenswrapper[4782]: I0202 10:55:50.631319 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9-pod-info\") pod \"rabbitmq-server-0\" (UID: \"e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9\") " pod="openstack/rabbitmq-server-0"
Feb 02 10:55:50 crc kubenswrapper[4782]: I0202 10:55:50.631955 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9\") " pod="openstack/rabbitmq-server-0"
Feb 02 10:55:50 crc kubenswrapper[4782]: I0202 10:55:50.641371 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9\") " pod="openstack/rabbitmq-server-0"
Feb 02 10:55:50 crc kubenswrapper[4782]: I0202 10:55:50.646071 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j8b77\" (UniqueName: \"kubernetes.io/projected/e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9-kube-api-access-j8b77\") pod \"rabbitmq-server-0\" (UID: \"e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9\") " pod="openstack/rabbitmq-server-0"
Feb 02 10:55:50 crc kubenswrapper[4782]: I0202 10:55:50.667968 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-server-0\" (UID: \"e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9\") " pod="openstack/rabbitmq-server-0"
Feb 02 10:55:50 crc kubenswrapper[4782]: I0202 10:55:50.694595 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-7jkmx"]
Feb 02 10:55:50 crc kubenswrapper[4782]: I0202 10:55:50.700558 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Feb 02 10:55:50 crc kubenswrapper[4782]: I0202 10:55:50.770821 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Feb 02 10:55:50 crc kubenswrapper[4782]: I0202 10:55:50.776034 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Feb 02 10:55:50 crc kubenswrapper[4782]: I0202 10:55:50.780798 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf"
Feb 02 10:55:50 crc kubenswrapper[4782]: I0202 10:55:50.781050 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf"
Feb 02 10:55:50 crc kubenswrapper[4782]: I0202 10:55:50.781117 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user"
Feb 02 10:55:50 crc kubenswrapper[4782]: I0202 10:55:50.781197 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data"
Feb 02 10:55:50 crc kubenswrapper[4782]: I0202 10:55:50.781237 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie"
Feb 02 10:55:50 crc kubenswrapper[4782]: I0202 10:55:50.781470 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc"
Feb 02 10:55:50 crc kubenswrapper[4782]: I0202 10:55:50.781528 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-fsk8v"
Feb 02 10:55:50 crc kubenswrapper[4782]: I0202 10:55:50.821682 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Feb 02 10:55:50 crc kubenswrapper[4782]: I0202 10:55:50.929512 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"02fc338c-2f8c-4e17-8d5f-7a919f4237a2\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 02 10:55:50 crc kubenswrapper[4782]: I0202 10:55:50.929580 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/02fc338c-2f8c-4e17-8d5f-7a919f4237a2-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"02fc338c-2f8c-4e17-8d5f-7a919f4237a2\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 02 10:55:50 crc kubenswrapper[4782]: I0202 10:55:50.929624 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/02fc338c-2f8c-4e17-8d5f-7a919f4237a2-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"02fc338c-2f8c-4e17-8d5f-7a919f4237a2\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 02 10:55:50 crc kubenswrapper[4782]: I0202 10:55:50.929681 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/02fc338c-2f8c-4e17-8d5f-7a919f4237a2-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"02fc338c-2f8c-4e17-8d5f-7a919f4237a2\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 02 10:55:50 crc kubenswrapper[4782]: I0202 10:55:50.929717 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/02fc338c-2f8c-4e17-8d5f-7a919f4237a2-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"02fc338c-2f8c-4e17-8d5f-7a919f4237a2\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 02 10:55:50 crc kubenswrapper[4782]: I0202 10:55:50.929736 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/02fc338c-2f8c-4e17-8d5f-7a919f4237a2-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"02fc338c-2f8c-4e17-8d5f-7a919f4237a2\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 02 10:55:50 crc kubenswrapper[4782]: I0202 10:55:50.929757 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vp4jx\" (UniqueName: \"kubernetes.io/projected/02fc338c-2f8c-4e17-8d5f-7a919f4237a2-kube-api-access-vp4jx\") pod \"rabbitmq-cell1-server-0\" (UID: \"02fc338c-2f8c-4e17-8d5f-7a919f4237a2\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 02 10:55:50 crc kubenswrapper[4782]: I0202 10:55:50.929781 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/02fc338c-2f8c-4e17-8d5f-7a919f4237a2-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"02fc338c-2f8c-4e17-8d5f-7a919f4237a2\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 02 10:55:50 crc kubenswrapper[4782]: I0202 10:55:50.929807 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/02fc338c-2f8c-4e17-8d5f-7a919f4237a2-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"02fc338c-2f8c-4e17-8d5f-7a919f4237a2\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 02 10:55:50 crc kubenswrapper[4782]: I0202 10:55:50.929833 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/02fc338c-2f8c-4e17-8d5f-7a919f4237a2-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"02fc338c-2f8c-4e17-8d5f-7a919f4237a2\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 02 10:55:50 crc kubenswrapper[4782]: I0202 10:55:50.929853 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/02fc338c-2f8c-4e17-8d5f-7a919f4237a2-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"02fc338c-2f8c-4e17-8d5f-7a919f4237a2\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 02 10:55:51 crc kubenswrapper[4782]: I0202 10:55:51.031907 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/02fc338c-2f8c-4e17-8d5f-7a919f4237a2-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"02fc338c-2f8c-4e17-8d5f-7a919f4237a2\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 02 10:55:51 crc kubenswrapper[4782]: I0202 10:55:51.032142 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/02fc338c-2f8c-4e17-8d5f-7a919f4237a2-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"02fc338c-2f8c-4e17-8d5f-7a919f4237a2\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 02 10:55:51 crc kubenswrapper[4782]: I0202 10:55:51.032189 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vp4jx\" (UniqueName: \"kubernetes.io/projected/02fc338c-2f8c-4e17-8d5f-7a919f4237a2-kube-api-access-vp4jx\") pod \"rabbitmq-cell1-server-0\" (UID: \"02fc338c-2f8c-4e17-8d5f-7a919f4237a2\") " pod="openstack/rabbitmq-cell1-server-0"
\"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/02fc338c-2f8c-4e17-8d5f-7a919f4237a2-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"02fc338c-2f8c-4e17-8d5f-7a919f4237a2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 10:55:51 crc kubenswrapper[4782]: I0202 10:55:51.032273 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/02fc338c-2f8c-4e17-8d5f-7a919f4237a2-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"02fc338c-2f8c-4e17-8d5f-7a919f4237a2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 10:55:51 crc kubenswrapper[4782]: I0202 10:55:51.032306 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/02fc338c-2f8c-4e17-8d5f-7a919f4237a2-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"02fc338c-2f8c-4e17-8d5f-7a919f4237a2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 10:55:51 crc kubenswrapper[4782]: I0202 10:55:51.032328 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/02fc338c-2f8c-4e17-8d5f-7a919f4237a2-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"02fc338c-2f8c-4e17-8d5f-7a919f4237a2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 10:55:51 crc kubenswrapper[4782]: I0202 10:55:51.032365 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"02fc338c-2f8c-4e17-8d5f-7a919f4237a2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 10:55:51 crc kubenswrapper[4782]: I0202 10:55:51.032399 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/02fc338c-2f8c-4e17-8d5f-7a919f4237a2-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"02fc338c-2f8c-4e17-8d5f-7a919f4237a2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 10:55:51 crc kubenswrapper[4782]: I0202 10:55:51.032444 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/02fc338c-2f8c-4e17-8d5f-7a919f4237a2-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"02fc338c-2f8c-4e17-8d5f-7a919f4237a2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 10:55:51 crc kubenswrapper[4782]: I0202 10:55:51.032489 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/02fc338c-2f8c-4e17-8d5f-7a919f4237a2-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"02fc338c-2f8c-4e17-8d5f-7a919f4237a2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 10:55:51 crc kubenswrapper[4782]: I0202 10:55:51.033278 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/02fc338c-2f8c-4e17-8d5f-7a919f4237a2-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"02fc338c-2f8c-4e17-8d5f-7a919f4237a2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 10:55:51 crc kubenswrapper[4782]: I0202 10:55:51.033616 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/02fc338c-2f8c-4e17-8d5f-7a919f4237a2-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"02fc338c-2f8c-4e17-8d5f-7a919f4237a2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 10:55:51 crc kubenswrapper[4782]: I0202 10:55:51.033838 4782 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"02fc338c-2f8c-4e17-8d5f-7a919f4237a2\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/rabbitmq-cell1-server-0" Feb 02 10:55:51 crc kubenswrapper[4782]: I0202 10:55:51.034496 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/02fc338c-2f8c-4e17-8d5f-7a919f4237a2-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"02fc338c-2f8c-4e17-8d5f-7a919f4237a2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 10:55:51 crc kubenswrapper[4782]: I0202 10:55:51.037139 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/02fc338c-2f8c-4e17-8d5f-7a919f4237a2-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"02fc338c-2f8c-4e17-8d5f-7a919f4237a2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 10:55:51 crc kubenswrapper[4782]: I0202 10:55:51.037431 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/02fc338c-2f8c-4e17-8d5f-7a919f4237a2-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"02fc338c-2f8c-4e17-8d5f-7a919f4237a2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 10:55:51 crc kubenswrapper[4782]: I0202 10:55:51.042446 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/02fc338c-2f8c-4e17-8d5f-7a919f4237a2-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"02fc338c-2f8c-4e17-8d5f-7a919f4237a2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 10:55:51 crc kubenswrapper[4782]: I0202 10:55:51.047568 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/02fc338c-2f8c-4e17-8d5f-7a919f4237a2-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"02fc338c-2f8c-4e17-8d5f-7a919f4237a2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 10:55:51 crc kubenswrapper[4782]: I0202 10:55:51.051224 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/02fc338c-2f8c-4e17-8d5f-7a919f4237a2-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"02fc338c-2f8c-4e17-8d5f-7a919f4237a2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 10:55:51 crc kubenswrapper[4782]: I0202 10:55:51.059173 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/02fc338c-2f8c-4e17-8d5f-7a919f4237a2-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"02fc338c-2f8c-4e17-8d5f-7a919f4237a2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 10:55:51 crc kubenswrapper[4782]: I0202 10:55:51.063068 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vp4jx\" (UniqueName: \"kubernetes.io/projected/02fc338c-2f8c-4e17-8d5f-7a919f4237a2-kube-api-access-vp4jx\") pod \"rabbitmq-cell1-server-0\" (UID: \"02fc338c-2f8c-4e17-8d5f-7a919f4237a2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 10:55:51 crc kubenswrapper[4782]: I0202 10:55:51.089217 4782 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"02fc338c-2f8c-4e17-8d5f-7a919f4237a2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 10:55:51 crc kubenswrapper[4782]: I0202 10:55:51.104715 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 02 10:55:51 crc kubenswrapper[4782]: I0202 10:55:51.187086 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-7jkmx" event={"ID":"76e79a91-7593-4b7a-bb1a-6396209cc424","Type":"ContainerStarted","Data":"bf6952e060684b89e023f4574112773aed8ccdabbd164bcea1f68ba05b888020"} Feb 02 10:55:51 crc kubenswrapper[4782]: I0202 10:55:51.188076 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-s8sfp" event={"ID":"9f871e0b-e0d8-43a7-a251-9601cfcfd87a","Type":"ContainerStarted","Data":"b423e53936fb052435d6af130cac71bf078bb138f7f10831bf50eccca562a831"} Feb 02 10:55:51 crc kubenswrapper[4782]: I0202 10:55:51.322430 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 02 10:55:51 crc kubenswrapper[4782]: W0202 10:55:51.703057 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode326d5b8_cced_4bdd_858a_3d5b7f8dd2d9.slice/crio-f5e9c30a1317f44fa0c6c47fbf157d56fc7c3351c9ee3750a55dc1c7bb0afa43 WatchSource:0}: Error finding container f5e9c30a1317f44fa0c6c47fbf157d56fc7c3351c9ee3750a55dc1c7bb0afa43: Status 404 returned error can't find the container with id f5e9c30a1317f44fa0c6c47fbf157d56fc7c3351c9ee3750a55dc1c7bb0afa43 Feb 02 10:55:52 crc kubenswrapper[4782]: I0202 10:55:52.078072 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Feb 02 10:55:52 crc kubenswrapper[4782]: I0202 10:55:52.080151 4782 util.go:30] "No sandbox for pod can be found. 
Feb 02 10:55:52 crc kubenswrapper[4782]: I0202 10:55:52.080151 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0"
Feb 02 10:55:52 crc kubenswrapper[4782]: I0202 10:55:52.083928 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc"
Feb 02 10:55:52 crc kubenswrapper[4782]: I0202 10:55:52.085443 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data"
Feb 02 10:55:52 crc kubenswrapper[4782]: I0202 10:55:52.090929 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-wbrrs"
Feb 02 10:55:52 crc kubenswrapper[4782]: I0202 10:55:52.091992 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts"
Feb 02 10:55:52 crc kubenswrapper[4782]: I0202 10:55:52.095421 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle"
Feb 02 10:55:52 crc kubenswrapper[4782]: I0202 10:55:52.097884 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"]
Feb 02 10:55:52 crc kubenswrapper[4782]: I0202 10:55:52.203237 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Feb 02 10:55:52 crc kubenswrapper[4782]: I0202 10:55:52.244319 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9","Type":"ContainerStarted","Data":"f5e9c30a1317f44fa0c6c47fbf157d56fc7c3351c9ee3750a55dc1c7bb0afa43"}
Feb 02 10:55:52 crc kubenswrapper[4782]: W0202 10:55:52.254522 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod02fc338c_2f8c_4e17_8d5f_7a919f4237a2.slice/crio-541f3f929dc2bd6facceff225b79637fd9688a5a59dfd43d26c677249a42c37f WatchSource:0}: Error finding container 541f3f929dc2bd6facceff225b79637fd9688a5a59dfd43d26c677249a42c37f: Status 404 returned error can't find the container with id 541f3f929dc2bd6facceff225b79637fd9688a5a59dfd43d26c677249a42c37f
Feb 02 10:55:52 crc kubenswrapper[4782]: I0202 10:55:52.267812 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phf5c\" (UniqueName: \"kubernetes.io/projected/827c472d-1762-4e1c-a096-2d48ca9af689-kube-api-access-phf5c\") pod \"openstack-galera-0\" (UID: \"827c472d-1762-4e1c-a096-2d48ca9af689\") " pod="openstack/openstack-galera-0"
Feb 02 10:55:52 crc kubenswrapper[4782]: I0202 10:55:52.267876 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/827c472d-1762-4e1c-a096-2d48ca9af689-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"827c472d-1762-4e1c-a096-2d48ca9af689\") " pod="openstack/openstack-galera-0"
Feb 02 10:55:52 crc kubenswrapper[4782]: I0202 10:55:52.267914 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/827c472d-1762-4e1c-a096-2d48ca9af689-operator-scripts\") pod \"openstack-galera-0\" (UID: \"827c472d-1762-4e1c-a096-2d48ca9af689\") " pod="openstack/openstack-galera-0"
\"openstack-galera-0\" (UID: \"827c472d-1762-4e1c-a096-2d48ca9af689\") " pod="openstack/openstack-galera-0" Feb 02 10:55:52 crc kubenswrapper[4782]: I0202 10:55:52.273258 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/827c472d-1762-4e1c-a096-2d48ca9af689-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"827c472d-1762-4e1c-a096-2d48ca9af689\") " pod="openstack/openstack-galera-0" Feb 02 10:55:52 crc kubenswrapper[4782]: I0202 10:55:52.273388 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-galera-0\" (UID: \"827c472d-1762-4e1c-a096-2d48ca9af689\") " pod="openstack/openstack-galera-0" Feb 02 10:55:52 crc kubenswrapper[4782]: I0202 10:55:52.273430 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/827c472d-1762-4e1c-a096-2d48ca9af689-kolla-config\") pod \"openstack-galera-0\" (UID: \"827c472d-1762-4e1c-a096-2d48ca9af689\") " pod="openstack/openstack-galera-0" Feb 02 10:55:52 crc kubenswrapper[4782]: I0202 10:55:52.273471 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/827c472d-1762-4e1c-a096-2d48ca9af689-config-data-generated\") pod \"openstack-galera-0\" (UID: \"827c472d-1762-4e1c-a096-2d48ca9af689\") " pod="openstack/openstack-galera-0" Feb 02 10:55:52 crc kubenswrapper[4782]: I0202 10:55:52.376884 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-galera-0\" (UID: \"827c472d-1762-4e1c-a096-2d48ca9af689\") " pod="openstack/openstack-galera-0" Feb 02 10:55:52 crc kubenswrapper[4782]: I0202 10:55:52.377242 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/827c472d-1762-4e1c-a096-2d48ca9af689-kolla-config\") pod \"openstack-galera-0\" (UID: \"827c472d-1762-4e1c-a096-2d48ca9af689\") " pod="openstack/openstack-galera-0" Feb 02 10:55:52 crc kubenswrapper[4782]: I0202 10:55:52.377284 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/827c472d-1762-4e1c-a096-2d48ca9af689-config-data-generated\") pod \"openstack-galera-0\" (UID: \"827c472d-1762-4e1c-a096-2d48ca9af689\") " pod="openstack/openstack-galera-0" Feb 02 10:55:52 crc kubenswrapper[4782]: I0202 10:55:52.377692 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-phf5c\" (UniqueName: \"kubernetes.io/projected/827c472d-1762-4e1c-a096-2d48ca9af689-kube-api-access-phf5c\") pod \"openstack-galera-0\" (UID: \"827c472d-1762-4e1c-a096-2d48ca9af689\") " pod="openstack/openstack-galera-0" Feb 02 10:55:52 crc kubenswrapper[4782]: I0202 10:55:52.377732 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/827c472d-1762-4e1c-a096-2d48ca9af689-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"827c472d-1762-4e1c-a096-2d48ca9af689\") " pod="openstack/openstack-galera-0" Feb 02 10:55:52 crc kubenswrapper[4782]: I0202 
Feb 02 10:55:52 crc kubenswrapper[4782]: I0202 10:55:52.377770 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/827c472d-1762-4e1c-a096-2d48ca9af689-operator-scripts\") pod \"openstack-galera-0\" (UID: \"827c472d-1762-4e1c-a096-2d48ca9af689\") " pod="openstack/openstack-galera-0"
Feb 02 10:55:52 crc kubenswrapper[4782]: I0202 10:55:52.377797 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/827c472d-1762-4e1c-a096-2d48ca9af689-config-data-default\") pod \"openstack-galera-0\" (UID: \"827c472d-1762-4e1c-a096-2d48ca9af689\") " pod="openstack/openstack-galera-0"
Feb 02 10:55:52 crc kubenswrapper[4782]: I0202 10:55:52.377844 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/827c472d-1762-4e1c-a096-2d48ca9af689-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"827c472d-1762-4e1c-a096-2d48ca9af689\") " pod="openstack/openstack-galera-0"
Feb 02 10:55:52 crc kubenswrapper[4782]: I0202 10:55:52.384513 4782 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-galera-0\" (UID: \"827c472d-1762-4e1c-a096-2d48ca9af689\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/openstack-galera-0"
Feb 02 10:55:52 crc kubenswrapper[4782]: I0202 10:55:52.392302 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/827c472d-1762-4e1c-a096-2d48ca9af689-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"827c472d-1762-4e1c-a096-2d48ca9af689\") " pod="openstack/openstack-galera-0"
Feb 02 10:55:52 crc kubenswrapper[4782]: I0202 10:55:52.394401 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/827c472d-1762-4e1c-a096-2d48ca9af689-operator-scripts\") pod \"openstack-galera-0\" (UID: \"827c472d-1762-4e1c-a096-2d48ca9af689\") " pod="openstack/openstack-galera-0"
Feb 02 10:55:52 crc kubenswrapper[4782]: I0202 10:55:52.395167 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/827c472d-1762-4e1c-a096-2d48ca9af689-config-data-default\") pod \"openstack-galera-0\" (UID: \"827c472d-1762-4e1c-a096-2d48ca9af689\") " pod="openstack/openstack-galera-0"
Feb 02 10:55:52 crc kubenswrapper[4782]: I0202 10:55:52.395416 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/827c472d-1762-4e1c-a096-2d48ca9af689-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"827c472d-1762-4e1c-a096-2d48ca9af689\") " pod="openstack/openstack-galera-0"
Feb 02 10:55:52 crc kubenswrapper[4782]: I0202 10:55:52.395789 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/827c472d-1762-4e1c-a096-2d48ca9af689-kolla-config\") pod \"openstack-galera-0\" (UID: \"827c472d-1762-4e1c-a096-2d48ca9af689\") " pod="openstack/openstack-galera-0"
\"openstack-galera-0\" (UID: \"827c472d-1762-4e1c-a096-2d48ca9af689\") " pod="openstack/openstack-galera-0" Feb 02 10:55:52 crc kubenswrapper[4782]: I0202 10:55:52.412492 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-phf5c\" (UniqueName: \"kubernetes.io/projected/827c472d-1762-4e1c-a096-2d48ca9af689-kube-api-access-phf5c\") pod \"openstack-galera-0\" (UID: \"827c472d-1762-4e1c-a096-2d48ca9af689\") " pod="openstack/openstack-galera-0" Feb 02 10:55:52 crc kubenswrapper[4782]: I0202 10:55:52.465788 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-galera-0\" (UID: \"827c472d-1762-4e1c-a096-2d48ca9af689\") " pod="openstack/openstack-galera-0" Feb 02 10:55:52 crc kubenswrapper[4782]: I0202 10:55:52.726711 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Feb 02 10:55:52 crc kubenswrapper[4782]: I0202 10:55:52.951214 4782 patch_prober.go:28] interesting pod/machine-config-daemon-bhdgk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 10:55:52 crc kubenswrapper[4782]: I0202 10:55:52.951635 4782 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 10:55:53 crc kubenswrapper[4782]: I0202 10:55:53.264918 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"02fc338c-2f8c-4e17-8d5f-7a919f4237a2","Type":"ContainerStarted","Data":"541f3f929dc2bd6facceff225b79637fd9688a5a59dfd43d26c677249a42c37f"} Feb 02 10:55:53 crc kubenswrapper[4782]: I0202 10:55:53.521324 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 02 10:55:53 crc kubenswrapper[4782]: I0202 10:55:53.522988 4782 util.go:30] "No sandbox for pod can be found. 
Feb 02 10:55:53 crc kubenswrapper[4782]: I0202 10:55:53.522988 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0"
Feb 02 10:55:53 crc kubenswrapper[4782]: I0202 10:55:53.528443 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc"
Feb 02 10:55:53 crc kubenswrapper[4782]: I0202 10:55:53.528766 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts"
Feb 02 10:55:53 crc kubenswrapper[4782]: I0202 10:55:53.528805 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-xn4sf"
Feb 02 10:55:53 crc kubenswrapper[4782]: I0202 10:55:53.528812 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data"
Feb 02 10:55:53 crc kubenswrapper[4782]: I0202 10:55:53.558600 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"]
Feb 02 10:55:53 crc kubenswrapper[4782]: I0202 10:55:53.700811 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/8c2fe596-a023-4206-979f-7f2e7bc81d0e-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"8c2fe596-a023-4206-979f-7f2e7bc81d0e\") " pod="openstack/openstack-cell1-galera-0"
Feb 02 10:55:53 crc kubenswrapper[4782]: I0202 10:55:53.700866 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c2fe596-a023-4206-979f-7f2e7bc81d0e-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"8c2fe596-a023-4206-979f-7f2e7bc81d0e\") " pod="openstack/openstack-cell1-galera-0"
Feb 02 10:55:53 crc kubenswrapper[4782]: I0202 10:55:53.700943 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fmwtw\" (UniqueName: \"kubernetes.io/projected/8c2fe596-a023-4206-979f-7f2e7bc81d0e-kube-api-access-fmwtw\") pod \"openstack-cell1-galera-0\" (UID: \"8c2fe596-a023-4206-979f-7f2e7bc81d0e\") " pod="openstack/openstack-cell1-galera-0"
Feb 02 10:55:53 crc kubenswrapper[4782]: I0202 10:55:53.700974 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-cell1-galera-0\" (UID: \"8c2fe596-a023-4206-979f-7f2e7bc81d0e\") " pod="openstack/openstack-cell1-galera-0"
Feb 02 10:55:53 crc kubenswrapper[4782]: I0202 10:55:53.701014 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/8c2fe596-a023-4206-979f-7f2e7bc81d0e-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"8c2fe596-a023-4206-979f-7f2e7bc81d0e\") " pod="openstack/openstack-cell1-galera-0"
Feb 02 10:55:53 crc kubenswrapper[4782]: I0202 10:55:53.701036 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/8c2fe596-a023-4206-979f-7f2e7bc81d0e-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"8c2fe596-a023-4206-979f-7f2e7bc81d0e\") " pod="openstack/openstack-cell1-galera-0"
\"kubernetes.io/secret/8c2fe596-a023-4206-979f-7f2e7bc81d0e-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"8c2fe596-a023-4206-979f-7f2e7bc81d0e\") " pod="openstack/openstack-cell1-galera-0" Feb 02 10:55:53 crc kubenswrapper[4782]: I0202 10:55:53.701074 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c2fe596-a023-4206-979f-7f2e7bc81d0e-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"8c2fe596-a023-4206-979f-7f2e7bc81d0e\") " pod="openstack/openstack-cell1-galera-0" Feb 02 10:55:53 crc kubenswrapper[4782]: I0202 10:55:53.734717 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Feb 02 10:55:53 crc kubenswrapper[4782]: I0202 10:55:53.735679 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Feb 02 10:55:53 crc kubenswrapper[4782]: I0202 10:55:53.741849 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Feb 02 10:55:53 crc kubenswrapper[4782]: I0202 10:55:53.742067 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Feb 02 10:55:53 crc kubenswrapper[4782]: I0202 10:55:53.742173 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-rlffs" Feb 02 10:55:53 crc kubenswrapper[4782]: I0202 10:55:53.762459 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 02 10:55:53 crc kubenswrapper[4782]: I0202 10:55:53.803450 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fmwtw\" (UniqueName: \"kubernetes.io/projected/8c2fe596-a023-4206-979f-7f2e7bc81d0e-kube-api-access-fmwtw\") pod \"openstack-cell1-galera-0\" (UID: \"8c2fe596-a023-4206-979f-7f2e7bc81d0e\") " pod="openstack/openstack-cell1-galera-0" Feb 02 10:55:53 crc kubenswrapper[4782]: I0202 10:55:53.803494 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-cell1-galera-0\" (UID: \"8c2fe596-a023-4206-979f-7f2e7bc81d0e\") " pod="openstack/openstack-cell1-galera-0" Feb 02 10:55:53 crc kubenswrapper[4782]: I0202 10:55:53.803534 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/8c2fe596-a023-4206-979f-7f2e7bc81d0e-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"8c2fe596-a023-4206-979f-7f2e7bc81d0e\") " pod="openstack/openstack-cell1-galera-0" Feb 02 10:55:53 crc kubenswrapper[4782]: I0202 10:55:53.803551 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/8c2fe596-a023-4206-979f-7f2e7bc81d0e-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"8c2fe596-a023-4206-979f-7f2e7bc81d0e\") " pod="openstack/openstack-cell1-galera-0" Feb 02 10:55:53 crc kubenswrapper[4782]: I0202 10:55:53.803569 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c2fe596-a023-4206-979f-7f2e7bc81d0e-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"8c2fe596-a023-4206-979f-7f2e7bc81d0e\") " pod="openstack/openstack-cell1-galera-0" Feb 02 10:55:53 crc kubenswrapper[4782]: I0202 
Feb 02 10:55:53 crc kubenswrapper[4782]: I0202 10:55:53.803590 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c2fe596-a023-4206-979f-7f2e7bc81d0e-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"8c2fe596-a023-4206-979f-7f2e7bc81d0e\") " pod="openstack/openstack-cell1-galera-0"
Feb 02 10:55:53 crc kubenswrapper[4782]: I0202 10:55:53.803623 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/8c2fe596-a023-4206-979f-7f2e7bc81d0e-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"8c2fe596-a023-4206-979f-7f2e7bc81d0e\") " pod="openstack/openstack-cell1-galera-0"
Feb 02 10:55:53 crc kubenswrapper[4782]: I0202 10:55:53.803677 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c2fe596-a023-4206-979f-7f2e7bc81d0e-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"8c2fe596-a023-4206-979f-7f2e7bc81d0e\") " pod="openstack/openstack-cell1-galera-0"
Feb 02 10:55:53 crc kubenswrapper[4782]: I0202 10:55:53.805744 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/8c2fe596-a023-4206-979f-7f2e7bc81d0e-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"8c2fe596-a023-4206-979f-7f2e7bc81d0e\") " pod="openstack/openstack-cell1-galera-0"
Feb 02 10:55:53 crc kubenswrapper[4782]: I0202 10:55:53.806152 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c2fe596-a023-4206-979f-7f2e7bc81d0e-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"8c2fe596-a023-4206-979f-7f2e7bc81d0e\") " pod="openstack/openstack-cell1-galera-0"
Feb 02 10:55:53 crc kubenswrapper[4782]: I0202 10:55:53.806535 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/8c2fe596-a023-4206-979f-7f2e7bc81d0e-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"8c2fe596-a023-4206-979f-7f2e7bc81d0e\") " pod="openstack/openstack-cell1-galera-0"
Feb 02 10:55:53 crc kubenswrapper[4782]: I0202 10:55:53.808336 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/8c2fe596-a023-4206-979f-7f2e7bc81d0e-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"8c2fe596-a023-4206-979f-7f2e7bc81d0e\") " pod="openstack/openstack-cell1-galera-0"
Feb 02 10:55:53 crc kubenswrapper[4782]: I0202 10:55:53.810164 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c2fe596-a023-4206-979f-7f2e7bc81d0e-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"8c2fe596-a023-4206-979f-7f2e7bc81d0e\") " pod="openstack/openstack-cell1-galera-0"
Feb 02 10:55:53 crc kubenswrapper[4782]: I0202 10:55:53.812467 4782 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-cell1-galera-0\" (UID: \"8c2fe596-a023-4206-979f-7f2e7bc81d0e\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/openstack-cell1-galera-0"
(UniqueName: \"kubernetes.io/secret/8c2fe596-a023-4206-979f-7f2e7bc81d0e-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"8c2fe596-a023-4206-979f-7f2e7bc81d0e\") " pod="openstack/openstack-cell1-galera-0" Feb 02 10:55:53 crc kubenswrapper[4782]: I0202 10:55:53.866075 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-cell1-galera-0\" (UID: \"8c2fe596-a023-4206-979f-7f2e7bc81d0e\") " pod="openstack/openstack-cell1-galera-0" Feb 02 10:55:53 crc kubenswrapper[4782]: I0202 10:55:53.868962 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fmwtw\" (UniqueName: \"kubernetes.io/projected/8c2fe596-a023-4206-979f-7f2e7bc81d0e-kube-api-access-fmwtw\") pod \"openstack-cell1-galera-0\" (UID: \"8c2fe596-a023-4206-979f-7f2e7bc81d0e\") " pod="openstack/openstack-cell1-galera-0" Feb 02 10:55:53 crc kubenswrapper[4782]: I0202 10:55:53.904564 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/17f9dd31-25b9-4b3f-82a6-12096f36308a-config-data\") pod \"memcached-0\" (UID: \"17f9dd31-25b9-4b3f-82a6-12096f36308a\") " pod="openstack/memcached-0" Feb 02 10:55:53 crc kubenswrapper[4782]: I0202 10:55:53.904619 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/17f9dd31-25b9-4b3f-82a6-12096f36308a-memcached-tls-certs\") pod \"memcached-0\" (UID: \"17f9dd31-25b9-4b3f-82a6-12096f36308a\") " pod="openstack/memcached-0" Feb 02 10:55:53 crc kubenswrapper[4782]: I0202 10:55:53.904723 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17f9dd31-25b9-4b3f-82a6-12096f36308a-combined-ca-bundle\") pod \"memcached-0\" (UID: \"17f9dd31-25b9-4b3f-82a6-12096f36308a\") " pod="openstack/memcached-0" Feb 02 10:55:53 crc kubenswrapper[4782]: I0202 10:55:53.904800 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/17f9dd31-25b9-4b3f-82a6-12096f36308a-kolla-config\") pod \"memcached-0\" (UID: \"17f9dd31-25b9-4b3f-82a6-12096f36308a\") " pod="openstack/memcached-0" Feb 02 10:55:53 crc kubenswrapper[4782]: I0202 10:55:53.904845 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6tc8d\" (UniqueName: \"kubernetes.io/projected/17f9dd31-25b9-4b3f-82a6-12096f36308a-kube-api-access-6tc8d\") pod \"memcached-0\" (UID: \"17f9dd31-25b9-4b3f-82a6-12096f36308a\") " pod="openstack/memcached-0" Feb 02 10:55:54 crc kubenswrapper[4782]: I0202 10:55:54.033835 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/17f9dd31-25b9-4b3f-82a6-12096f36308a-config-data\") pod \"memcached-0\" (UID: \"17f9dd31-25b9-4b3f-82a6-12096f36308a\") " pod="openstack/memcached-0" Feb 02 10:55:54 crc kubenswrapper[4782]: I0202 10:55:54.034784 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/17f9dd31-25b9-4b3f-82a6-12096f36308a-memcached-tls-certs\") pod \"memcached-0\" (UID: \"17f9dd31-25b9-4b3f-82a6-12096f36308a\") " pod="openstack/memcached-0" 
Feb 02 10:55:54 crc kubenswrapper[4782]: I0202 10:55:54.034898 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17f9dd31-25b9-4b3f-82a6-12096f36308a-combined-ca-bundle\") pod \"memcached-0\" (UID: \"17f9dd31-25b9-4b3f-82a6-12096f36308a\") " pod="openstack/memcached-0"
Feb 02 10:55:54 crc kubenswrapper[4782]: I0202 10:55:54.034958 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/17f9dd31-25b9-4b3f-82a6-12096f36308a-kolla-config\") pod \"memcached-0\" (UID: \"17f9dd31-25b9-4b3f-82a6-12096f36308a\") " pod="openstack/memcached-0"
Feb 02 10:55:54 crc kubenswrapper[4782]: I0202 10:55:54.034985 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6tc8d\" (UniqueName: \"kubernetes.io/projected/17f9dd31-25b9-4b3f-82a6-12096f36308a-kube-api-access-6tc8d\") pod \"memcached-0\" (UID: \"17f9dd31-25b9-4b3f-82a6-12096f36308a\") " pod="openstack/memcached-0"
Feb 02 10:55:54 crc kubenswrapper[4782]: I0202 10:55:54.037960 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/17f9dd31-25b9-4b3f-82a6-12096f36308a-config-data\") pod \"memcached-0\" (UID: \"17f9dd31-25b9-4b3f-82a6-12096f36308a\") " pod="openstack/memcached-0"
Feb 02 10:55:54 crc kubenswrapper[4782]: I0202 10:55:54.038452 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/17f9dd31-25b9-4b3f-82a6-12096f36308a-kolla-config\") pod \"memcached-0\" (UID: \"17f9dd31-25b9-4b3f-82a6-12096f36308a\") " pod="openstack/memcached-0"
Feb 02 10:55:54 crc kubenswrapper[4782]: I0202 10:55:54.058529 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6tc8d\" (UniqueName: \"kubernetes.io/projected/17f9dd31-25b9-4b3f-82a6-12096f36308a-kube-api-access-6tc8d\") pod \"memcached-0\" (UID: \"17f9dd31-25b9-4b3f-82a6-12096f36308a\") " pod="openstack/memcached-0"
Feb 02 10:55:54 crc kubenswrapper[4782]: I0202 10:55:54.067030 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/17f9dd31-25b9-4b3f-82a6-12096f36308a-memcached-tls-certs\") pod \"memcached-0\" (UID: \"17f9dd31-25b9-4b3f-82a6-12096f36308a\") " pod="openstack/memcached-0"
Feb 02 10:55:54 crc kubenswrapper[4782]: I0202 10:55:54.075415 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17f9dd31-25b9-4b3f-82a6-12096f36308a-combined-ca-bundle\") pod \"memcached-0\" (UID: \"17f9dd31-25b9-4b3f-82a6-12096f36308a\") " pod="openstack/memcached-0"
Feb 02 10:55:54 crc kubenswrapper[4782]: I0202 10:55:54.075985 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0"
Feb 02 10:55:54 crc kubenswrapper[4782]: I0202 10:55:54.154414 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0"
Feb 02 10:55:55 crc kubenswrapper[4782]: I0202 10:55:55.486290 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"]
Feb 02 10:55:55 crc kubenswrapper[4782]: I0202 10:55:55.487415 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Feb 02 10:55:55 crc kubenswrapper[4782]: I0202 10:55:55.491678 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-krnfc"
Feb 02 10:55:55 crc kubenswrapper[4782]: I0202 10:55:55.500282 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Feb 02 10:55:55 crc kubenswrapper[4782]: I0202 10:55:55.679342 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9vk7\" (UniqueName: \"kubernetes.io/projected/a1ccfccc-4ba0-4523-97ca-1d5b54034fd1-kube-api-access-b9vk7\") pod \"kube-state-metrics-0\" (UID: \"a1ccfccc-4ba0-4523-97ca-1d5b54034fd1\") " pod="openstack/kube-state-metrics-0"
Feb 02 10:55:55 crc kubenswrapper[4782]: I0202 10:55:55.782419 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b9vk7\" (UniqueName: \"kubernetes.io/projected/a1ccfccc-4ba0-4523-97ca-1d5b54034fd1-kube-api-access-b9vk7\") pod \"kube-state-metrics-0\" (UID: \"a1ccfccc-4ba0-4523-97ca-1d5b54034fd1\") " pod="openstack/kube-state-metrics-0"
Feb 02 10:55:55 crc kubenswrapper[4782]: I0202 10:55:55.829008 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b9vk7\" (UniqueName: \"kubernetes.io/projected/a1ccfccc-4ba0-4523-97ca-1d5b54034fd1-kube-api-access-b9vk7\") pod \"kube-state-metrics-0\" (UID: \"a1ccfccc-4ba0-4523-97ca-1d5b54034fd1\") " pod="openstack/kube-state-metrics-0"
Feb 02 10:55:56 crc kubenswrapper[4782]: I0202 10:55:56.118416 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Feb 02 10:55:59 crc kubenswrapper[4782]: I0202 10:55:59.791785 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"]
Feb 02 10:55:59 crc kubenswrapper[4782]: I0202 10:55:59.794087 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0"
Feb 02 10:55:59 crc kubenswrapper[4782]: I0202 10:55:59.797485 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-2pvnb"
Feb 02 10:55:59 crc kubenswrapper[4782]: I0202 10:55:59.797745 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics"
Feb 02 10:55:59 crc kubenswrapper[4782]: I0202 10:55:59.799303 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs"
Feb 02 10:55:59 crc kubenswrapper[4782]: I0202 10:55:59.799778 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config"
Feb 02 10:55:59 crc kubenswrapper[4782]: I0202 10:55:59.800027 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts"
Feb 02 10:55:59 crc kubenswrapper[4782]: I0202 10:55:59.810192 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"]
Feb 02 10:55:59 crc kubenswrapper[4782]: I0202 10:55:59.955552 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/572fc7c8-9560-43d0-ba3e-d3f098494878-config\") pod \"ovsdbserver-sb-0\" (UID: \"572fc7c8-9560-43d0-ba3e-d3f098494878\") " pod="openstack/ovsdbserver-sb-0"
Feb 02 10:55:59 crc kubenswrapper[4782]: I0202 10:55:59.955622 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/572fc7c8-9560-43d0-ba3e-d3f098494878-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"572fc7c8-9560-43d0-ba3e-d3f098494878\") " pod="openstack/ovsdbserver-sb-0"
Feb 02 10:55:59 crc kubenswrapper[4782]: I0202 10:55:59.955678 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/572fc7c8-9560-43d0-ba3e-d3f098494878-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"572fc7c8-9560-43d0-ba3e-d3f098494878\") " pod="openstack/ovsdbserver-sb-0"
Feb 02 10:55:59 crc kubenswrapper[4782]: I0202 10:55:59.955708 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/572fc7c8-9560-43d0-ba3e-d3f098494878-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"572fc7c8-9560-43d0-ba3e-d3f098494878\") " pod="openstack/ovsdbserver-sb-0"
Feb 02 10:55:59 crc kubenswrapper[4782]: I0202 10:55:59.955736 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-sb-0\" (UID: \"572fc7c8-9560-43d0-ba3e-d3f098494878\") " pod="openstack/ovsdbserver-sb-0"
Feb 02 10:55:59 crc kubenswrapper[4782]: I0202 10:55:59.955773 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/572fc7c8-9560-43d0-ba3e-d3f098494878-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"572fc7c8-9560-43d0-ba3e-d3f098494878\") " pod="openstack/ovsdbserver-sb-0"
(UniqueName: \"kubernetes.io/configmap/572fc7c8-9560-43d0-ba3e-d3f098494878-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"572fc7c8-9560-43d0-ba3e-d3f098494878\") " pod="openstack/ovsdbserver-sb-0" Feb 02 10:55:59 crc kubenswrapper[4782]: I0202 10:55:59.955836 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxgv9\" (UniqueName: \"kubernetes.io/projected/572fc7c8-9560-43d0-ba3e-d3f098494878-kube-api-access-wxgv9\") pod \"ovsdbserver-sb-0\" (UID: \"572fc7c8-9560-43d0-ba3e-d3f098494878\") " pod="openstack/ovsdbserver-sb-0" Feb 02 10:56:00 crc kubenswrapper[4782]: I0202 10:56:00.056705 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/572fc7c8-9560-43d0-ba3e-d3f098494878-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"572fc7c8-9560-43d0-ba3e-d3f098494878\") " pod="openstack/ovsdbserver-sb-0" Feb 02 10:56:00 crc kubenswrapper[4782]: I0202 10:56:00.056764 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/572fc7c8-9560-43d0-ba3e-d3f098494878-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"572fc7c8-9560-43d0-ba3e-d3f098494878\") " pod="openstack/ovsdbserver-sb-0" Feb 02 10:56:00 crc kubenswrapper[4782]: I0202 10:56:00.056786 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wxgv9\" (UniqueName: \"kubernetes.io/projected/572fc7c8-9560-43d0-ba3e-d3f098494878-kube-api-access-wxgv9\") pod \"ovsdbserver-sb-0\" (UID: \"572fc7c8-9560-43d0-ba3e-d3f098494878\") " pod="openstack/ovsdbserver-sb-0" Feb 02 10:56:00 crc kubenswrapper[4782]: I0202 10:56:00.056850 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/572fc7c8-9560-43d0-ba3e-d3f098494878-config\") pod \"ovsdbserver-sb-0\" (UID: \"572fc7c8-9560-43d0-ba3e-d3f098494878\") " pod="openstack/ovsdbserver-sb-0" Feb 02 10:56:00 crc kubenswrapper[4782]: I0202 10:56:00.056879 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/572fc7c8-9560-43d0-ba3e-d3f098494878-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"572fc7c8-9560-43d0-ba3e-d3f098494878\") " pod="openstack/ovsdbserver-sb-0" Feb 02 10:56:00 crc kubenswrapper[4782]: I0202 10:56:00.056902 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/572fc7c8-9560-43d0-ba3e-d3f098494878-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"572fc7c8-9560-43d0-ba3e-d3f098494878\") " pod="openstack/ovsdbserver-sb-0" Feb 02 10:56:00 crc kubenswrapper[4782]: I0202 10:56:00.056924 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/572fc7c8-9560-43d0-ba3e-d3f098494878-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"572fc7c8-9560-43d0-ba3e-d3f098494878\") " pod="openstack/ovsdbserver-sb-0" Feb 02 10:56:00 crc kubenswrapper[4782]: I0202 10:56:00.056946 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-sb-0\" (UID: \"572fc7c8-9560-43d0-ba3e-d3f098494878\") " pod="openstack/ovsdbserver-sb-0" Feb 02 10:56:00 crc kubenswrapper[4782]: I0202 
10:56:00.057313 4782 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-sb-0\" (UID: \"572fc7c8-9560-43d0-ba3e-d3f098494878\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/ovsdbserver-sb-0" Feb 02 10:56:00 crc kubenswrapper[4782]: I0202 10:56:00.058092 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/572fc7c8-9560-43d0-ba3e-d3f098494878-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"572fc7c8-9560-43d0-ba3e-d3f098494878\") " pod="openstack/ovsdbserver-sb-0" Feb 02 10:56:00 crc kubenswrapper[4782]: I0202 10:56:00.058538 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/572fc7c8-9560-43d0-ba3e-d3f098494878-config\") pod \"ovsdbserver-sb-0\" (UID: \"572fc7c8-9560-43d0-ba3e-d3f098494878\") " pod="openstack/ovsdbserver-sb-0" Feb 02 10:56:00 crc kubenswrapper[4782]: I0202 10:56:00.058694 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/572fc7c8-9560-43d0-ba3e-d3f098494878-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"572fc7c8-9560-43d0-ba3e-d3f098494878\") " pod="openstack/ovsdbserver-sb-0" Feb 02 10:56:00 crc kubenswrapper[4782]: I0202 10:56:00.065518 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/572fc7c8-9560-43d0-ba3e-d3f098494878-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"572fc7c8-9560-43d0-ba3e-d3f098494878\") " pod="openstack/ovsdbserver-sb-0" Feb 02 10:56:00 crc kubenswrapper[4782]: I0202 10:56:00.066354 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/572fc7c8-9560-43d0-ba3e-d3f098494878-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"572fc7c8-9560-43d0-ba3e-d3f098494878\") " pod="openstack/ovsdbserver-sb-0" Feb 02 10:56:00 crc kubenswrapper[4782]: I0202 10:56:00.079007 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/572fc7c8-9560-43d0-ba3e-d3f098494878-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"572fc7c8-9560-43d0-ba3e-d3f098494878\") " pod="openstack/ovsdbserver-sb-0" Feb 02 10:56:00 crc kubenswrapper[4782]: I0202 10:56:00.079563 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wxgv9\" (UniqueName: \"kubernetes.io/projected/572fc7c8-9560-43d0-ba3e-d3f098494878-kube-api-access-wxgv9\") pod \"ovsdbserver-sb-0\" (UID: \"572fc7c8-9560-43d0-ba3e-d3f098494878\") " pod="openstack/ovsdbserver-sb-0" Feb 02 10:56:00 crc kubenswrapper[4782]: I0202 10:56:00.084743 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-sb-0\" (UID: \"572fc7c8-9560-43d0-ba3e-d3f098494878\") " pod="openstack/ovsdbserver-sb-0" Feb 02 10:56:00 crc kubenswrapper[4782]: I0202 10:56:00.122851 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 02 10:56:00 crc kubenswrapper[4782]: I0202 10:56:00.624014 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-sv8l5"] Feb 02 10:56:00 crc kubenswrapper[4782]: I0202 10:56:00.625136 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-sv8l5" Feb 02 10:56:00 crc kubenswrapper[4782]: I0202 10:56:00.629242 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-rjr46" Feb 02 10:56:00 crc kubenswrapper[4782]: I0202 10:56:00.629486 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Feb 02 10:56:00 crc kubenswrapper[4782]: I0202 10:56:00.629666 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Feb 02 10:56:00 crc kubenswrapper[4782]: I0202 10:56:00.645578 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-zs65k"] Feb 02 10:56:00 crc kubenswrapper[4782]: I0202 10:56:00.659069 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-sv8l5"] Feb 02 10:56:00 crc kubenswrapper[4782]: I0202 10:56:00.659124 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-zs65k" Feb 02 10:56:00 crc kubenswrapper[4782]: I0202 10:56:00.672470 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-zs65k"] Feb 02 10:56:00 crc kubenswrapper[4782]: I0202 10:56:00.765004 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbh5r\" (UniqueName: \"kubernetes.io/projected/e91c0f3d-db81-453d-ad0e-30aeadb66206-kube-api-access-cbh5r\") pod \"ovn-controller-ovs-zs65k\" (UID: \"e91c0f3d-db81-453d-ad0e-30aeadb66206\") " pod="openstack/ovn-controller-ovs-zs65k" Feb 02 10:56:00 crc kubenswrapper[4782]: I0202 10:56:00.765059 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/e91c0f3d-db81-453d-ad0e-30aeadb66206-var-log\") pod \"ovn-controller-ovs-zs65k\" (UID: \"e91c0f3d-db81-453d-ad0e-30aeadb66206\") " pod="openstack/ovn-controller-ovs-zs65k" Feb 02 10:56:00 crc kubenswrapper[4782]: I0202 10:56:00.765091 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b009ca1c-fc93-4724-9275-c44039256469-combined-ca-bundle\") pod \"ovn-controller-sv8l5\" (UID: \"b009ca1c-fc93-4724-9275-c44039256469\") " pod="openstack/ovn-controller-sv8l5" Feb 02 10:56:00 crc kubenswrapper[4782]: I0202 10:56:00.765144 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/e91c0f3d-db81-453d-ad0e-30aeadb66206-etc-ovs\") pod \"ovn-controller-ovs-zs65k\" (UID: \"e91c0f3d-db81-453d-ad0e-30aeadb66206\") " pod="openstack/ovn-controller-ovs-zs65k" Feb 02 10:56:00 crc kubenswrapper[4782]: I0202 10:56:00.765175 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b009ca1c-fc93-4724-9275-c44039256469-var-run\") pod \"ovn-controller-sv8l5\" (UID: \"b009ca1c-fc93-4724-9275-c44039256469\") " pod="openstack/ovn-controller-sv8l5" Feb 02 10:56:00 crc 
kubenswrapper[4782]: I0202 10:56:00.765202 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e91c0f3d-db81-453d-ad0e-30aeadb66206-var-run\") pod \"ovn-controller-ovs-zs65k\" (UID: \"e91c0f3d-db81-453d-ad0e-30aeadb66206\") " pod="openstack/ovn-controller-ovs-zs65k" Feb 02 10:56:00 crc kubenswrapper[4782]: I0202 10:56:00.765220 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/b009ca1c-fc93-4724-9275-c44039256469-ovn-controller-tls-certs\") pod \"ovn-controller-sv8l5\" (UID: \"b009ca1c-fc93-4724-9275-c44039256469\") " pod="openstack/ovn-controller-sv8l5" Feb 02 10:56:00 crc kubenswrapper[4782]: I0202 10:56:00.765236 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e91c0f3d-db81-453d-ad0e-30aeadb66206-scripts\") pod \"ovn-controller-ovs-zs65k\" (UID: \"e91c0f3d-db81-453d-ad0e-30aeadb66206\") " pod="openstack/ovn-controller-ovs-zs65k" Feb 02 10:56:00 crc kubenswrapper[4782]: I0202 10:56:00.765255 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rpr5j\" (UniqueName: \"kubernetes.io/projected/b009ca1c-fc93-4724-9275-c44039256469-kube-api-access-rpr5j\") pod \"ovn-controller-sv8l5\" (UID: \"b009ca1c-fc93-4724-9275-c44039256469\") " pod="openstack/ovn-controller-sv8l5" Feb 02 10:56:00 crc kubenswrapper[4782]: I0202 10:56:00.765276 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/b009ca1c-fc93-4724-9275-c44039256469-var-run-ovn\") pod \"ovn-controller-sv8l5\" (UID: \"b009ca1c-fc93-4724-9275-c44039256469\") " pod="openstack/ovn-controller-sv8l5" Feb 02 10:56:00 crc kubenswrapper[4782]: I0202 10:56:00.765292 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b009ca1c-fc93-4724-9275-c44039256469-scripts\") pod \"ovn-controller-sv8l5\" (UID: \"b009ca1c-fc93-4724-9275-c44039256469\") " pod="openstack/ovn-controller-sv8l5" Feb 02 10:56:00 crc kubenswrapper[4782]: I0202 10:56:00.765327 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/b009ca1c-fc93-4724-9275-c44039256469-var-log-ovn\") pod \"ovn-controller-sv8l5\" (UID: \"b009ca1c-fc93-4724-9275-c44039256469\") " pod="openstack/ovn-controller-sv8l5" Feb 02 10:56:00 crc kubenswrapper[4782]: I0202 10:56:00.765341 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/e91c0f3d-db81-453d-ad0e-30aeadb66206-var-lib\") pod \"ovn-controller-ovs-zs65k\" (UID: \"e91c0f3d-db81-453d-ad0e-30aeadb66206\") " pod="openstack/ovn-controller-ovs-zs65k" Feb 02 10:56:00 crc kubenswrapper[4782]: I0202 10:56:00.866618 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/b009ca1c-fc93-4724-9275-c44039256469-var-log-ovn\") pod \"ovn-controller-sv8l5\" (UID: \"b009ca1c-fc93-4724-9275-c44039256469\") " pod="openstack/ovn-controller-sv8l5" Feb 02 10:56:00 crc kubenswrapper[4782]: I0202 10:56:00.867982 4782 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/e91c0f3d-db81-453d-ad0e-30aeadb66206-var-lib\") pod \"ovn-controller-ovs-zs65k\" (UID: \"e91c0f3d-db81-453d-ad0e-30aeadb66206\") " pod="openstack/ovn-controller-ovs-zs65k" Feb 02 10:56:00 crc kubenswrapper[4782]: I0202 10:56:00.868206 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbh5r\" (UniqueName: \"kubernetes.io/projected/e91c0f3d-db81-453d-ad0e-30aeadb66206-kube-api-access-cbh5r\") pod \"ovn-controller-ovs-zs65k\" (UID: \"e91c0f3d-db81-453d-ad0e-30aeadb66206\") " pod="openstack/ovn-controller-ovs-zs65k" Feb 02 10:56:00 crc kubenswrapper[4782]: I0202 10:56:00.868356 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/e91c0f3d-db81-453d-ad0e-30aeadb66206-var-log\") pod \"ovn-controller-ovs-zs65k\" (UID: \"e91c0f3d-db81-453d-ad0e-30aeadb66206\") " pod="openstack/ovn-controller-ovs-zs65k" Feb 02 10:56:00 crc kubenswrapper[4782]: I0202 10:56:00.872417 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b009ca1c-fc93-4724-9275-c44039256469-combined-ca-bundle\") pod \"ovn-controller-sv8l5\" (UID: \"b009ca1c-fc93-4724-9275-c44039256469\") " pod="openstack/ovn-controller-sv8l5" Feb 02 10:56:00 crc kubenswrapper[4782]: I0202 10:56:00.872657 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/e91c0f3d-db81-453d-ad0e-30aeadb66206-etc-ovs\") pod \"ovn-controller-ovs-zs65k\" (UID: \"e91c0f3d-db81-453d-ad0e-30aeadb66206\") " pod="openstack/ovn-controller-ovs-zs65k" Feb 02 10:56:00 crc kubenswrapper[4782]: I0202 10:56:00.872809 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b009ca1c-fc93-4724-9275-c44039256469-var-run\") pod \"ovn-controller-sv8l5\" (UID: \"b009ca1c-fc93-4724-9275-c44039256469\") " pod="openstack/ovn-controller-sv8l5" Feb 02 10:56:00 crc kubenswrapper[4782]: I0202 10:56:00.872943 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e91c0f3d-db81-453d-ad0e-30aeadb66206-var-run\") pod \"ovn-controller-ovs-zs65k\" (UID: \"e91c0f3d-db81-453d-ad0e-30aeadb66206\") " pod="openstack/ovn-controller-ovs-zs65k" Feb 02 10:56:00 crc kubenswrapper[4782]: I0202 10:56:00.873086 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/b009ca1c-fc93-4724-9275-c44039256469-ovn-controller-tls-certs\") pod \"ovn-controller-sv8l5\" (UID: \"b009ca1c-fc93-4724-9275-c44039256469\") " pod="openstack/ovn-controller-sv8l5" Feb 02 10:56:00 crc kubenswrapper[4782]: I0202 10:56:00.873210 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e91c0f3d-db81-453d-ad0e-30aeadb66206-scripts\") pod \"ovn-controller-ovs-zs65k\" (UID: \"e91c0f3d-db81-453d-ad0e-30aeadb66206\") " pod="openstack/ovn-controller-ovs-zs65k" Feb 02 10:56:00 crc kubenswrapper[4782]: I0202 10:56:00.873348 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rpr5j\" (UniqueName: 
\"kubernetes.io/projected/b009ca1c-fc93-4724-9275-c44039256469-kube-api-access-rpr5j\") pod \"ovn-controller-sv8l5\" (UID: \"b009ca1c-fc93-4724-9275-c44039256469\") " pod="openstack/ovn-controller-sv8l5" Feb 02 10:56:00 crc kubenswrapper[4782]: I0202 10:56:00.873468 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/b009ca1c-fc93-4724-9275-c44039256469-var-run-ovn\") pod \"ovn-controller-sv8l5\" (UID: \"b009ca1c-fc93-4724-9275-c44039256469\") " pod="openstack/ovn-controller-sv8l5" Feb 02 10:56:00 crc kubenswrapper[4782]: I0202 10:56:00.873590 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b009ca1c-fc93-4724-9275-c44039256469-scripts\") pod \"ovn-controller-sv8l5\" (UID: \"b009ca1c-fc93-4724-9275-c44039256469\") " pod="openstack/ovn-controller-sv8l5" Feb 02 10:56:00 crc kubenswrapper[4782]: I0202 10:56:00.874751 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b009ca1c-fc93-4724-9275-c44039256469-var-run\") pod \"ovn-controller-sv8l5\" (UID: \"b009ca1c-fc93-4724-9275-c44039256469\") " pod="openstack/ovn-controller-sv8l5" Feb 02 10:56:00 crc kubenswrapper[4782]: I0202 10:56:00.868448 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/e91c0f3d-db81-453d-ad0e-30aeadb66206-var-lib\") pod \"ovn-controller-ovs-zs65k\" (UID: \"e91c0f3d-db81-453d-ad0e-30aeadb66206\") " pod="openstack/ovn-controller-ovs-zs65k" Feb 02 10:56:00 crc kubenswrapper[4782]: I0202 10:56:00.876660 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/e91c0f3d-db81-453d-ad0e-30aeadb66206-etc-ovs\") pod \"ovn-controller-ovs-zs65k\" (UID: \"e91c0f3d-db81-453d-ad0e-30aeadb66206\") " pod="openstack/ovn-controller-ovs-zs65k" Feb 02 10:56:00 crc kubenswrapper[4782]: I0202 10:56:00.868524 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/e91c0f3d-db81-453d-ad0e-30aeadb66206-var-log\") pod \"ovn-controller-ovs-zs65k\" (UID: \"e91c0f3d-db81-453d-ad0e-30aeadb66206\") " pod="openstack/ovn-controller-ovs-zs65k" Feb 02 10:56:00 crc kubenswrapper[4782]: I0202 10:56:00.876775 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e91c0f3d-db81-453d-ad0e-30aeadb66206-var-run\") pod \"ovn-controller-ovs-zs65k\" (UID: \"e91c0f3d-db81-453d-ad0e-30aeadb66206\") " pod="openstack/ovn-controller-ovs-zs65k" Feb 02 10:56:00 crc kubenswrapper[4782]: I0202 10:56:00.877547 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/b009ca1c-fc93-4724-9275-c44039256469-var-run-ovn\") pod \"ovn-controller-sv8l5\" (UID: \"b009ca1c-fc93-4724-9275-c44039256469\") " pod="openstack/ovn-controller-sv8l5" Feb 02 10:56:00 crc kubenswrapper[4782]: I0202 10:56:00.878836 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e91c0f3d-db81-453d-ad0e-30aeadb66206-scripts\") pod \"ovn-controller-ovs-zs65k\" (UID: \"e91c0f3d-db81-453d-ad0e-30aeadb66206\") " pod="openstack/ovn-controller-ovs-zs65k" Feb 02 10:56:00 crc kubenswrapper[4782]: I0202 10:56:00.868579 4782 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/b009ca1c-fc93-4724-9275-c44039256469-var-log-ovn\") pod \"ovn-controller-sv8l5\" (UID: \"b009ca1c-fc93-4724-9275-c44039256469\") " pod="openstack/ovn-controller-sv8l5" Feb 02 10:56:00 crc kubenswrapper[4782]: I0202 10:56:00.881883 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b009ca1c-fc93-4724-9275-c44039256469-scripts\") pod \"ovn-controller-sv8l5\" (UID: \"b009ca1c-fc93-4724-9275-c44039256469\") " pod="openstack/ovn-controller-sv8l5" Feb 02 10:56:00 crc kubenswrapper[4782]: I0202 10:56:00.890407 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b009ca1c-fc93-4724-9275-c44039256469-combined-ca-bundle\") pod \"ovn-controller-sv8l5\" (UID: \"b009ca1c-fc93-4724-9275-c44039256469\") " pod="openstack/ovn-controller-sv8l5" Feb 02 10:56:00 crc kubenswrapper[4782]: I0202 10:56:00.898546 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cbh5r\" (UniqueName: \"kubernetes.io/projected/e91c0f3d-db81-453d-ad0e-30aeadb66206-kube-api-access-cbh5r\") pod \"ovn-controller-ovs-zs65k\" (UID: \"e91c0f3d-db81-453d-ad0e-30aeadb66206\") " pod="openstack/ovn-controller-ovs-zs65k" Feb 02 10:56:00 crc kubenswrapper[4782]: I0202 10:56:00.901251 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/b009ca1c-fc93-4724-9275-c44039256469-ovn-controller-tls-certs\") pod \"ovn-controller-sv8l5\" (UID: \"b009ca1c-fc93-4724-9275-c44039256469\") " pod="openstack/ovn-controller-sv8l5" Feb 02 10:56:00 crc kubenswrapper[4782]: I0202 10:56:00.920781 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rpr5j\" (UniqueName: \"kubernetes.io/projected/b009ca1c-fc93-4724-9275-c44039256469-kube-api-access-rpr5j\") pod \"ovn-controller-sv8l5\" (UID: \"b009ca1c-fc93-4724-9275-c44039256469\") " pod="openstack/ovn-controller-sv8l5" Feb 02 10:56:00 crc kubenswrapper[4782]: I0202 10:56:00.958480 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-sv8l5" Feb 02 10:56:01 crc kubenswrapper[4782]: I0202 10:56:01.004821 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-zs65k" Feb 02 10:56:02 crc kubenswrapper[4782]: I0202 10:56:02.406856 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 02 10:56:02 crc kubenswrapper[4782]: I0202 10:56:02.409098 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 02 10:56:02 crc kubenswrapper[4782]: I0202 10:56:02.411102 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Feb 02 10:56:02 crc kubenswrapper[4782]: I0202 10:56:02.411274 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Feb 02 10:56:02 crc kubenswrapper[4782]: I0202 10:56:02.411555 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-cxk52" Feb 02 10:56:02 crc kubenswrapper[4782]: I0202 10:56:02.411776 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Feb 02 10:56:02 crc kubenswrapper[4782]: I0202 10:56:02.422944 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 02 10:56:02 crc kubenswrapper[4782]: I0202 10:56:02.502659 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8169f65-2d63-4127-8d23-ba6d56af1156-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"d8169f65-2d63-4127-8d23-ba6d56af1156\") " pod="openstack/ovsdbserver-nb-0" Feb 02 10:56:02 crc kubenswrapper[4782]: I0202 10:56:02.502735 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8169f65-2d63-4127-8d23-ba6d56af1156-config\") pod \"ovsdbserver-nb-0\" (UID: \"d8169f65-2d63-4127-8d23-ba6d56af1156\") " pod="openstack/ovsdbserver-nb-0" Feb 02 10:56:02 crc kubenswrapper[4782]: I0202 10:56:02.502778 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"ovsdbserver-nb-0\" (UID: \"d8169f65-2d63-4127-8d23-ba6d56af1156\") " pod="openstack/ovsdbserver-nb-0" Feb 02 10:56:02 crc kubenswrapper[4782]: I0202 10:56:02.502849 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8169f65-2d63-4127-8d23-ba6d56af1156-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"d8169f65-2d63-4127-8d23-ba6d56af1156\") " pod="openstack/ovsdbserver-nb-0" Feb 02 10:56:02 crc kubenswrapper[4782]: I0202 10:56:02.502876 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8169f65-2d63-4127-8d23-ba6d56af1156-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"d8169f65-2d63-4127-8d23-ba6d56af1156\") " pod="openstack/ovsdbserver-nb-0" Feb 02 10:56:02 crc kubenswrapper[4782]: I0202 10:56:02.502901 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/d8169f65-2d63-4127-8d23-ba6d56af1156-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"d8169f65-2d63-4127-8d23-ba6d56af1156\") " pod="openstack/ovsdbserver-nb-0" Feb 02 10:56:02 crc kubenswrapper[4782]: I0202 10:56:02.502927 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d8169f65-2d63-4127-8d23-ba6d56af1156-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"d8169f65-2d63-4127-8d23-ba6d56af1156\") " 
pod="openstack/ovsdbserver-nb-0" Feb 02 10:56:02 crc kubenswrapper[4782]: I0202 10:56:02.502948 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5m45\" (UniqueName: \"kubernetes.io/projected/d8169f65-2d63-4127-8d23-ba6d56af1156-kube-api-access-j5m45\") pod \"ovsdbserver-nb-0\" (UID: \"d8169f65-2d63-4127-8d23-ba6d56af1156\") " pod="openstack/ovsdbserver-nb-0" Feb 02 10:56:02 crc kubenswrapper[4782]: I0202 10:56:02.604124 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j5m45\" (UniqueName: \"kubernetes.io/projected/d8169f65-2d63-4127-8d23-ba6d56af1156-kube-api-access-j5m45\") pod \"ovsdbserver-nb-0\" (UID: \"d8169f65-2d63-4127-8d23-ba6d56af1156\") " pod="openstack/ovsdbserver-nb-0" Feb 02 10:56:02 crc kubenswrapper[4782]: I0202 10:56:02.604201 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8169f65-2d63-4127-8d23-ba6d56af1156-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"d8169f65-2d63-4127-8d23-ba6d56af1156\") " pod="openstack/ovsdbserver-nb-0" Feb 02 10:56:02 crc kubenswrapper[4782]: I0202 10:56:02.604239 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8169f65-2d63-4127-8d23-ba6d56af1156-config\") pod \"ovsdbserver-nb-0\" (UID: \"d8169f65-2d63-4127-8d23-ba6d56af1156\") " pod="openstack/ovsdbserver-nb-0" Feb 02 10:56:02 crc kubenswrapper[4782]: I0202 10:56:02.604275 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"ovsdbserver-nb-0\" (UID: \"d8169f65-2d63-4127-8d23-ba6d56af1156\") " pod="openstack/ovsdbserver-nb-0" Feb 02 10:56:02 crc kubenswrapper[4782]: I0202 10:56:02.604338 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8169f65-2d63-4127-8d23-ba6d56af1156-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"d8169f65-2d63-4127-8d23-ba6d56af1156\") " pod="openstack/ovsdbserver-nb-0" Feb 02 10:56:02 crc kubenswrapper[4782]: I0202 10:56:02.604361 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8169f65-2d63-4127-8d23-ba6d56af1156-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"d8169f65-2d63-4127-8d23-ba6d56af1156\") " pod="openstack/ovsdbserver-nb-0" Feb 02 10:56:02 crc kubenswrapper[4782]: I0202 10:56:02.604384 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/d8169f65-2d63-4127-8d23-ba6d56af1156-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"d8169f65-2d63-4127-8d23-ba6d56af1156\") " pod="openstack/ovsdbserver-nb-0" Feb 02 10:56:02 crc kubenswrapper[4782]: I0202 10:56:02.604406 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d8169f65-2d63-4127-8d23-ba6d56af1156-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"d8169f65-2d63-4127-8d23-ba6d56af1156\") " pod="openstack/ovsdbserver-nb-0" Feb 02 10:56:02 crc kubenswrapper[4782]: I0202 10:56:02.604847 4782 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage11-crc\") pod \"ovsdbserver-nb-0\" (UID: \"d8169f65-2d63-4127-8d23-ba6d56af1156\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/ovsdbserver-nb-0" Feb 02 10:56:02 crc kubenswrapper[4782]: I0202 10:56:02.605680 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d8169f65-2d63-4127-8d23-ba6d56af1156-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"d8169f65-2d63-4127-8d23-ba6d56af1156\") " pod="openstack/ovsdbserver-nb-0" Feb 02 10:56:02 crc kubenswrapper[4782]: I0202 10:56:02.606555 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8169f65-2d63-4127-8d23-ba6d56af1156-config\") pod \"ovsdbserver-nb-0\" (UID: \"d8169f65-2d63-4127-8d23-ba6d56af1156\") " pod="openstack/ovsdbserver-nb-0" Feb 02 10:56:02 crc kubenswrapper[4782]: I0202 10:56:02.607053 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/d8169f65-2d63-4127-8d23-ba6d56af1156-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"d8169f65-2d63-4127-8d23-ba6d56af1156\") " pod="openstack/ovsdbserver-nb-0" Feb 02 10:56:02 crc kubenswrapper[4782]: I0202 10:56:02.611520 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8169f65-2d63-4127-8d23-ba6d56af1156-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"d8169f65-2d63-4127-8d23-ba6d56af1156\") " pod="openstack/ovsdbserver-nb-0" Feb 02 10:56:02 crc kubenswrapper[4782]: I0202 10:56:02.626443 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8169f65-2d63-4127-8d23-ba6d56af1156-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"d8169f65-2d63-4127-8d23-ba6d56af1156\") " pod="openstack/ovsdbserver-nb-0" Feb 02 10:56:02 crc kubenswrapper[4782]: I0202 10:56:02.626985 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8169f65-2d63-4127-8d23-ba6d56af1156-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"d8169f65-2d63-4127-8d23-ba6d56af1156\") " pod="openstack/ovsdbserver-nb-0" Feb 02 10:56:02 crc kubenswrapper[4782]: I0202 10:56:02.634194 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j5m45\" (UniqueName: \"kubernetes.io/projected/d8169f65-2d63-4127-8d23-ba6d56af1156-kube-api-access-j5m45\") pod \"ovsdbserver-nb-0\" (UID: \"d8169f65-2d63-4127-8d23-ba6d56af1156\") " pod="openstack/ovsdbserver-nb-0" Feb 02 10:56:02 crc kubenswrapper[4782]: I0202 10:56:02.638935 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"ovsdbserver-nb-0\" (UID: \"d8169f65-2d63-4127-8d23-ba6d56af1156\") " pod="openstack/ovsdbserver-nb-0" Feb 02 10:56:02 crc kubenswrapper[4782]: I0202 10:56:02.742272 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 02 10:56:10 crc kubenswrapper[4782]: E0202 10:56:10.906036 4782 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 02 10:56:10 crc kubenswrapper[4782]: E0202 10:56:10.906924 4782 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cc844,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-57d769cc4f-7jkmx_openstack(76e79a91-7593-4b7a-bb1a-6396209cc424): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 02 10:56:10 crc kubenswrapper[4782]: E0202 10:56:10.908762 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-57d769cc4f-7jkmx" podUID="76e79a91-7593-4b7a-bb1a-6396209cc424" Feb 02 10:56:10 crc kubenswrapper[4782]: E0202 10:56:10.975870 4782 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 02 10:56:10 crc kubenswrapper[4782]: E0202 10:56:10.976327 4782 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w26jb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-666b6646f7-s8sfp_openstack(9f871e0b-e0d8-43a7-a251-9601cfcfd87a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 02 10:56:10 crc kubenswrapper[4782]: E0202 10:56:10.977707 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-666b6646f7-s8sfp" podUID="9f871e0b-e0d8-43a7-a251-9601cfcfd87a" Feb 02 10:56:11 crc kubenswrapper[4782]: E0202 10:56:11.024248 4782 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 02 10:56:11 crc kubenswrapper[4782]: E0202 10:56:11.024402 4782 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sbbjk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-v2zgx_openstack(0d246705-dc07-488a-9288-59e2a16174fe): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 02 10:56:11 crc kubenswrapper[4782]: E0202 10:56:11.025302 4782 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 02 10:56:11 crc kubenswrapper[4782]: E0202 10:56:11.025382 4782 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2nwmt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-76smw_openstack(651c76ea-95cf-4ed1-80da-6731a9bcb98a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 02 10:56:11 crc kubenswrapper[4782]: E0202 10:56:11.025465 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-v2zgx" podUID="0d246705-dc07-488a-9288-59e2a16174fe" Feb 02 10:56:11 crc kubenswrapper[4782]: E0202 10:56:11.028181 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-675f4bcbfc-76smw" podUID="651c76ea-95cf-4ed1-80da-6731a9bcb98a" Feb 02 10:56:11 crc kubenswrapper[4782]: I0202 10:56:11.254538 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 02 10:56:11 crc kubenswrapper[4782]: I0202 10:56:11.465174 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"17f9dd31-25b9-4b3f-82a6-12096f36308a","Type":"ContainerStarted","Data":"d3d3a91444d7c138c8b22bef2c48c12dddd3e442ddd566d18bc75262fad5ffc2"} Feb 02 10:56:11 crc kubenswrapper[4782]: E0202 10:56:11.469278 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-57d769cc4f-7jkmx" podUID="76e79a91-7593-4b7a-bb1a-6396209cc424" Feb 02 10:56:11 crc kubenswrapper[4782]: E0202 10:56:11.469356 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-666b6646f7-s8sfp" podUID="9f871e0b-e0d8-43a7-a251-9601cfcfd87a" Feb 02 10:56:11 crc kubenswrapper[4782]: I0202 10:56:11.550431 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Feb 02 10:56:11 crc kubenswrapper[4782]: I0202 10:56:11.659024 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 02 10:56:11 crc kubenswrapper[4782]: I0202 10:56:11.674405 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 02 10:56:11 crc kubenswrapper[4782]: W0202 10:56:11.677213 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod827c472d_1762_4e1c_a096_2d48ca9af689.slice/crio-2e344634a1adfbc43816bcc49f6a623f50665c6dbb847a401edf83e7a0e92de5 WatchSource:0}: Error finding container 2e344634a1adfbc43816bcc49f6a623f50665c6dbb847a401edf83e7a0e92de5: Status 404 returned error can't find the container with id 2e344634a1adfbc43816bcc49f6a623f50665c6dbb847a401edf83e7a0e92de5 Feb 02 10:56:11 crc kubenswrapper[4782]: I0202 10:56:11.718337 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-sv8l5"] Feb 02 10:56:11 crc kubenswrapper[4782]: W0202 10:56:11.889257 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb009ca1c_fc93_4724_9275_c44039256469.slice/crio-489524d2da6ddc04d5040d0036d4361995b9c1252ce0982bb9dd661ee2309c55 WatchSource:0}: Error finding container 489524d2da6ddc04d5040d0036d4361995b9c1252ce0982bb9dd661ee2309c55: Status 404 returned error can't find the container with id 489524d2da6ddc04d5040d0036d4361995b9c1252ce0982bb9dd661ee2309c55 Feb 02 10:56:12 crc kubenswrapper[4782]: I0202 10:56:12.224322 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-zs65k"] Feb 02 10:56:12 crc kubenswrapper[4782]: I0202 10:56:12.306446 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-76smw" Feb 02 10:56:12 crc kubenswrapper[4782]: I0202 10:56:12.315964 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-v2zgx" Feb 02 10:56:12 crc kubenswrapper[4782]: I0202 10:56:12.382953 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sbbjk\" (UniqueName: \"kubernetes.io/projected/0d246705-dc07-488a-9288-59e2a16174fe-kube-api-access-sbbjk\") pod \"0d246705-dc07-488a-9288-59e2a16174fe\" (UID: \"0d246705-dc07-488a-9288-59e2a16174fe\") " Feb 02 10:56:12 crc kubenswrapper[4782]: I0202 10:56:12.383086 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0d246705-dc07-488a-9288-59e2a16174fe-dns-svc\") pod \"0d246705-dc07-488a-9288-59e2a16174fe\" (UID: \"0d246705-dc07-488a-9288-59e2a16174fe\") " Feb 02 10:56:12 crc kubenswrapper[4782]: I0202 10:56:12.383135 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/651c76ea-95cf-4ed1-80da-6731a9bcb98a-config\") pod \"651c76ea-95cf-4ed1-80da-6731a9bcb98a\" (UID: \"651c76ea-95cf-4ed1-80da-6731a9bcb98a\") " Feb 02 10:56:12 crc kubenswrapper[4782]: I0202 10:56:12.383248 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d246705-dc07-488a-9288-59e2a16174fe-config\") pod \"0d246705-dc07-488a-9288-59e2a16174fe\" (UID: \"0d246705-dc07-488a-9288-59e2a16174fe\") " Feb 02 10:56:12 crc kubenswrapper[4782]: I0202 10:56:12.383285 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2nwmt\" (UniqueName: \"kubernetes.io/projected/651c76ea-95cf-4ed1-80da-6731a9bcb98a-kube-api-access-2nwmt\") pod \"651c76ea-95cf-4ed1-80da-6731a9bcb98a\" (UID: \"651c76ea-95cf-4ed1-80da-6731a9bcb98a\") " Feb 02 10:56:12 crc kubenswrapper[4782]: I0202 10:56:12.383834 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/651c76ea-95cf-4ed1-80da-6731a9bcb98a-config" (OuterVolumeSpecName: "config") pod "651c76ea-95cf-4ed1-80da-6731a9bcb98a" (UID: "651c76ea-95cf-4ed1-80da-6731a9bcb98a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:56:12 crc kubenswrapper[4782]: I0202 10:56:12.383953 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d246705-dc07-488a-9288-59e2a16174fe-config" (OuterVolumeSpecName: "config") pod "0d246705-dc07-488a-9288-59e2a16174fe" (UID: "0d246705-dc07-488a-9288-59e2a16174fe"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:56:12 crc kubenswrapper[4782]: I0202 10:56:12.384966 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d246705-dc07-488a-9288-59e2a16174fe-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0d246705-dc07-488a-9288-59e2a16174fe" (UID: "0d246705-dc07-488a-9288-59e2a16174fe"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:56:12 crc kubenswrapper[4782]: I0202 10:56:12.392497 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d246705-dc07-488a-9288-59e2a16174fe-kube-api-access-sbbjk" (OuterVolumeSpecName: "kube-api-access-sbbjk") pod "0d246705-dc07-488a-9288-59e2a16174fe" (UID: "0d246705-dc07-488a-9288-59e2a16174fe"). InnerVolumeSpecName "kube-api-access-sbbjk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:56:12 crc kubenswrapper[4782]: I0202 10:56:12.392782 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/651c76ea-95cf-4ed1-80da-6731a9bcb98a-kube-api-access-2nwmt" (OuterVolumeSpecName: "kube-api-access-2nwmt") pod "651c76ea-95cf-4ed1-80da-6731a9bcb98a" (UID: "651c76ea-95cf-4ed1-80da-6731a9bcb98a"). InnerVolumeSpecName "kube-api-access-2nwmt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:56:12 crc kubenswrapper[4782]: I0202 10:56:12.489166 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2nwmt\" (UniqueName: \"kubernetes.io/projected/651c76ea-95cf-4ed1-80da-6731a9bcb98a-kube-api-access-2nwmt\") on node \"crc\" DevicePath \"\"" Feb 02 10:56:12 crc kubenswrapper[4782]: I0202 10:56:12.489217 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sbbjk\" (UniqueName: \"kubernetes.io/projected/0d246705-dc07-488a-9288-59e2a16174fe-kube-api-access-sbbjk\") on node \"crc\" DevicePath \"\"" Feb 02 10:56:12 crc kubenswrapper[4782]: I0202 10:56:12.489233 4782 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0d246705-dc07-488a-9288-59e2a16174fe-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 02 10:56:12 crc kubenswrapper[4782]: I0202 10:56:12.489255 4782 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/651c76ea-95cf-4ed1-80da-6731a9bcb98a-config\") on node \"crc\" DevicePath \"\"" Feb 02 10:56:12 crc kubenswrapper[4782]: I0202 10:56:12.489270 4782 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d246705-dc07-488a-9288-59e2a16174fe-config\") on node \"crc\" DevicePath \"\"" Feb 02 10:56:12 crc kubenswrapper[4782]: I0202 10:56:12.507740 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"8c2fe596-a023-4206-979f-7f2e7bc81d0e","Type":"ContainerStarted","Data":"3992ab93c4c5f2ed35c729b5437440045e9c79bb0c0183ea1413c7dbb7ffcc51"} Feb 02 10:56:12 crc kubenswrapper[4782]: I0202 10:56:12.511297 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-76smw" event={"ID":"651c76ea-95cf-4ed1-80da-6731a9bcb98a","Type":"ContainerDied","Data":"2af263505f6a8540ba9b3c383f8f251e1fadb60252e16c35129f3e7f4ddc31b2"} Feb 02 10:56:12 crc kubenswrapper[4782]: I0202 10:56:12.511389 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-76smw" Feb 02 10:56:12 crc kubenswrapper[4782]: I0202 10:56:12.513398 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"a1ccfccc-4ba0-4523-97ca-1d5b54034fd1","Type":"ContainerStarted","Data":"b5731da46b9909f62f299535fa86ed29a8dd25ea43d89f4988425d732dfa7580"} Feb 02 10:56:12 crc kubenswrapper[4782]: I0202 10:56:12.515933 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-v2zgx" Feb 02 10:56:12 crc kubenswrapper[4782]: I0202 10:56:12.515940 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-v2zgx" event={"ID":"0d246705-dc07-488a-9288-59e2a16174fe","Type":"ContainerDied","Data":"412d8f851c0082d1b10d1b94b76e682711a37dfb601edf6f9abba0cae9aa133a"} Feb 02 10:56:12 crc kubenswrapper[4782]: I0202 10:56:12.517562 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"827c472d-1762-4e1c-a096-2d48ca9af689","Type":"ContainerStarted","Data":"2e344634a1adfbc43816bcc49f6a623f50665c6dbb847a401edf83e7a0e92de5"} Feb 02 10:56:12 crc kubenswrapper[4782]: I0202 10:56:12.520961 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-zs65k" event={"ID":"e91c0f3d-db81-453d-ad0e-30aeadb66206","Type":"ContainerStarted","Data":"9290020d34f1ea847ba07bed6da2f15ea2cbcad2ac62e51ff19f357df22d87c2"} Feb 02 10:56:12 crc kubenswrapper[4782]: I0202 10:56:12.523670 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-sv8l5" event={"ID":"b009ca1c-fc93-4724-9275-c44039256469","Type":"ContainerStarted","Data":"489524d2da6ddc04d5040d0036d4361995b9c1252ce0982bb9dd661ee2309c55"} Feb 02 10:56:12 crc kubenswrapper[4782]: I0202 10:56:12.527062 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9","Type":"ContainerStarted","Data":"4b22530b4335201f0edeaaeb102aa0e0c1fe781965be9a91cd8a38308cd04cdb"} Feb 02 10:56:12 crc kubenswrapper[4782]: I0202 10:56:12.538690 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"02fc338c-2f8c-4e17-8d5f-7a919f4237a2","Type":"ContainerStarted","Data":"391b7b9a21ec8dc296a8482b1cb6f12d695c7d443dbc6915daefa5e4abc60ecb"} Feb 02 10:56:12 crc kubenswrapper[4782]: I0202 10:56:12.636369 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-v2zgx"] Feb 02 10:56:12 crc kubenswrapper[4782]: I0202 10:56:12.644957 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-v2zgx"] Feb 02 10:56:12 crc kubenswrapper[4782]: I0202 10:56:12.678794 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-76smw"] Feb 02 10:56:12 crc kubenswrapper[4782]: I0202 10:56:12.688512 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-76smw"] Feb 02 10:56:12 crc kubenswrapper[4782]: I0202 10:56:12.795475 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 02 10:56:12 crc kubenswrapper[4782]: I0202 10:56:12.838238 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d246705-dc07-488a-9288-59e2a16174fe" path="/var/lib/kubelet/pods/0d246705-dc07-488a-9288-59e2a16174fe/volumes" Feb 02 10:56:12 crc kubenswrapper[4782]: I0202 10:56:12.838748 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="651c76ea-95cf-4ed1-80da-6731a9bcb98a" path="/var/lib/kubelet/pods/651c76ea-95cf-4ed1-80da-6731a9bcb98a/volumes" Feb 02 10:56:12 crc kubenswrapper[4782]: I0202 10:56:12.939937 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 02 10:56:13 crc kubenswrapper[4782]: I0202 10:56:13.548807 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" 
event={"ID":"d8169f65-2d63-4127-8d23-ba6d56af1156","Type":"ContainerStarted","Data":"bab9c7bfb4ad70cda552743322558a16070e39f3e35163793fe9f227b8b8e3a4"} Feb 02 10:56:14 crc kubenswrapper[4782]: I0202 10:56:14.560084 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"572fc7c8-9560-43d0-ba3e-d3f098494878","Type":"ContainerStarted","Data":"696cfa0f58b5075b291ae45620150f0197e5ce439872a66a1cfa55f9e45f7d13"} Feb 02 10:56:17 crc kubenswrapper[4782]: I0202 10:56:17.391927 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-kv4h8"] Feb 02 10:56:17 crc kubenswrapper[4782]: I0202 10:56:17.393285 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-kv4h8" Feb 02 10:56:17 crc kubenswrapper[4782]: I0202 10:56:17.396561 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Feb 02 10:56:17 crc kubenswrapper[4782]: I0202 10:56:17.408054 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-kv4h8"] Feb 02 10:56:17 crc kubenswrapper[4782]: I0202 10:56:17.484820 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9cb1af6-ff01-4474-ad02-56938ef7e5a1-config\") pod \"ovn-controller-metrics-kv4h8\" (UID: \"c9cb1af6-ff01-4474-ad02-56938ef7e5a1\") " pod="openstack/ovn-controller-metrics-kv4h8" Feb 02 10:56:17 crc kubenswrapper[4782]: I0202 10:56:17.484872 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/c9cb1af6-ff01-4474-ad02-56938ef7e5a1-ovn-rundir\") pod \"ovn-controller-metrics-kv4h8\" (UID: \"c9cb1af6-ff01-4474-ad02-56938ef7e5a1\") " pod="openstack/ovn-controller-metrics-kv4h8" Feb 02 10:56:17 crc kubenswrapper[4782]: I0202 10:56:17.484944 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vlndm\" (UniqueName: \"kubernetes.io/projected/c9cb1af6-ff01-4474-ad02-56938ef7e5a1-kube-api-access-vlndm\") pod \"ovn-controller-metrics-kv4h8\" (UID: \"c9cb1af6-ff01-4474-ad02-56938ef7e5a1\") " pod="openstack/ovn-controller-metrics-kv4h8" Feb 02 10:56:17 crc kubenswrapper[4782]: I0202 10:56:17.485111 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c9cb1af6-ff01-4474-ad02-56938ef7e5a1-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-kv4h8\" (UID: \"c9cb1af6-ff01-4474-ad02-56938ef7e5a1\") " pod="openstack/ovn-controller-metrics-kv4h8" Feb 02 10:56:17 crc kubenswrapper[4782]: I0202 10:56:17.485155 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9cb1af6-ff01-4474-ad02-56938ef7e5a1-combined-ca-bundle\") pod \"ovn-controller-metrics-kv4h8\" (UID: \"c9cb1af6-ff01-4474-ad02-56938ef7e5a1\") " pod="openstack/ovn-controller-metrics-kv4h8" Feb 02 10:56:17 crc kubenswrapper[4782]: I0202 10:56:17.485241 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/c9cb1af6-ff01-4474-ad02-56938ef7e5a1-ovs-rundir\") pod \"ovn-controller-metrics-kv4h8\" (UID: \"c9cb1af6-ff01-4474-ad02-56938ef7e5a1\") " 
pod="openstack/ovn-controller-metrics-kv4h8" Feb 02 10:56:17 crc kubenswrapper[4782]: I0202 10:56:17.590333 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vlndm\" (UniqueName: \"kubernetes.io/projected/c9cb1af6-ff01-4474-ad02-56938ef7e5a1-kube-api-access-vlndm\") pod \"ovn-controller-metrics-kv4h8\" (UID: \"c9cb1af6-ff01-4474-ad02-56938ef7e5a1\") " pod="openstack/ovn-controller-metrics-kv4h8" Feb 02 10:56:17 crc kubenswrapper[4782]: I0202 10:56:17.590739 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c9cb1af6-ff01-4474-ad02-56938ef7e5a1-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-kv4h8\" (UID: \"c9cb1af6-ff01-4474-ad02-56938ef7e5a1\") " pod="openstack/ovn-controller-metrics-kv4h8" Feb 02 10:56:17 crc kubenswrapper[4782]: I0202 10:56:17.590769 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9cb1af6-ff01-4474-ad02-56938ef7e5a1-combined-ca-bundle\") pod \"ovn-controller-metrics-kv4h8\" (UID: \"c9cb1af6-ff01-4474-ad02-56938ef7e5a1\") " pod="openstack/ovn-controller-metrics-kv4h8" Feb 02 10:56:17 crc kubenswrapper[4782]: I0202 10:56:17.590808 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/c9cb1af6-ff01-4474-ad02-56938ef7e5a1-ovs-rundir\") pod \"ovn-controller-metrics-kv4h8\" (UID: \"c9cb1af6-ff01-4474-ad02-56938ef7e5a1\") " pod="openstack/ovn-controller-metrics-kv4h8" Feb 02 10:56:17 crc kubenswrapper[4782]: I0202 10:56:17.590920 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9cb1af6-ff01-4474-ad02-56938ef7e5a1-config\") pod \"ovn-controller-metrics-kv4h8\" (UID: \"c9cb1af6-ff01-4474-ad02-56938ef7e5a1\") " pod="openstack/ovn-controller-metrics-kv4h8" Feb 02 10:56:17 crc kubenswrapper[4782]: I0202 10:56:17.590946 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/c9cb1af6-ff01-4474-ad02-56938ef7e5a1-ovn-rundir\") pod \"ovn-controller-metrics-kv4h8\" (UID: \"c9cb1af6-ff01-4474-ad02-56938ef7e5a1\") " pod="openstack/ovn-controller-metrics-kv4h8" Feb 02 10:56:17 crc kubenswrapper[4782]: I0202 10:56:17.591243 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/c9cb1af6-ff01-4474-ad02-56938ef7e5a1-ovn-rundir\") pod \"ovn-controller-metrics-kv4h8\" (UID: \"c9cb1af6-ff01-4474-ad02-56938ef7e5a1\") " pod="openstack/ovn-controller-metrics-kv4h8" Feb 02 10:56:17 crc kubenswrapper[4782]: I0202 10:56:17.591249 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/c9cb1af6-ff01-4474-ad02-56938ef7e5a1-ovs-rundir\") pod \"ovn-controller-metrics-kv4h8\" (UID: \"c9cb1af6-ff01-4474-ad02-56938ef7e5a1\") " pod="openstack/ovn-controller-metrics-kv4h8" Feb 02 10:56:17 crc kubenswrapper[4782]: I0202 10:56:17.592059 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9cb1af6-ff01-4474-ad02-56938ef7e5a1-config\") pod \"ovn-controller-metrics-kv4h8\" (UID: \"c9cb1af6-ff01-4474-ad02-56938ef7e5a1\") " pod="openstack/ovn-controller-metrics-kv4h8" Feb 02 10:56:17 crc kubenswrapper[4782]: I0202 10:56:17.600231 
4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c9cb1af6-ff01-4474-ad02-56938ef7e5a1-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-kv4h8\" (UID: \"c9cb1af6-ff01-4474-ad02-56938ef7e5a1\") " pod="openstack/ovn-controller-metrics-kv4h8" Feb 02 10:56:17 crc kubenswrapper[4782]: I0202 10:56:17.613411 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vlndm\" (UniqueName: \"kubernetes.io/projected/c9cb1af6-ff01-4474-ad02-56938ef7e5a1-kube-api-access-vlndm\") pod \"ovn-controller-metrics-kv4h8\" (UID: \"c9cb1af6-ff01-4474-ad02-56938ef7e5a1\") " pod="openstack/ovn-controller-metrics-kv4h8" Feb 02 10:56:17 crc kubenswrapper[4782]: I0202 10:56:17.624788 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9cb1af6-ff01-4474-ad02-56938ef7e5a1-combined-ca-bundle\") pod \"ovn-controller-metrics-kv4h8\" (UID: \"c9cb1af6-ff01-4474-ad02-56938ef7e5a1\") " pod="openstack/ovn-controller-metrics-kv4h8" Feb 02 10:56:17 crc kubenswrapper[4782]: I0202 10:56:17.687631 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-7jkmx"] Feb 02 10:56:17 crc kubenswrapper[4782]: I0202 10:56:17.718732 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-kv4h8" Feb 02 10:56:17 crc kubenswrapper[4782]: I0202 10:56:17.735242 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-nmlpl"] Feb 02 10:56:17 crc kubenswrapper[4782]: I0202 10:56:17.742959 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f896c8c65-nmlpl" Feb 02 10:56:17 crc kubenswrapper[4782]: I0202 10:56:17.746580 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-nmlpl"] Feb 02 10:56:17 crc kubenswrapper[4782]: I0202 10:56:17.748609 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Feb 02 10:56:17 crc kubenswrapper[4782]: I0202 10:56:17.895317 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce7bfaff-9623-45e1-a146-6ea2e85691b8-config\") pod \"dnsmasq-dns-7f896c8c65-nmlpl\" (UID: \"ce7bfaff-9623-45e1-a146-6ea2e85691b8\") " pod="openstack/dnsmasq-dns-7f896c8c65-nmlpl" Feb 02 10:56:17 crc kubenswrapper[4782]: I0202 10:56:17.895440 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8x95d\" (UniqueName: \"kubernetes.io/projected/ce7bfaff-9623-45e1-a146-6ea2e85691b8-kube-api-access-8x95d\") pod \"dnsmasq-dns-7f896c8c65-nmlpl\" (UID: \"ce7bfaff-9623-45e1-a146-6ea2e85691b8\") " pod="openstack/dnsmasq-dns-7f896c8c65-nmlpl" Feb 02 10:56:17 crc kubenswrapper[4782]: I0202 10:56:17.896086 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ce7bfaff-9623-45e1-a146-6ea2e85691b8-ovsdbserver-sb\") pod \"dnsmasq-dns-7f896c8c65-nmlpl\" (UID: \"ce7bfaff-9623-45e1-a146-6ea2e85691b8\") " pod="openstack/dnsmasq-dns-7f896c8c65-nmlpl" Feb 02 10:56:17 crc kubenswrapper[4782]: I0202 10:56:17.896237 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/ce7bfaff-9623-45e1-a146-6ea2e85691b8-dns-svc\") pod \"dnsmasq-dns-7f896c8c65-nmlpl\" (UID: \"ce7bfaff-9623-45e1-a146-6ea2e85691b8\") " pod="openstack/dnsmasq-dns-7f896c8c65-nmlpl" Feb 02 10:56:17 crc kubenswrapper[4782]: I0202 10:56:17.986767 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-s8sfp"] Feb 02 10:56:17 crc kubenswrapper[4782]: I0202 10:56:17.998171 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce7bfaff-9623-45e1-a146-6ea2e85691b8-config\") pod \"dnsmasq-dns-7f896c8c65-nmlpl\" (UID: \"ce7bfaff-9623-45e1-a146-6ea2e85691b8\") " pod="openstack/dnsmasq-dns-7f896c8c65-nmlpl" Feb 02 10:56:18 crc kubenswrapper[4782]: I0202 10:56:18.003902 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8x95d\" (UniqueName: \"kubernetes.io/projected/ce7bfaff-9623-45e1-a146-6ea2e85691b8-kube-api-access-8x95d\") pod \"dnsmasq-dns-7f896c8c65-nmlpl\" (UID: \"ce7bfaff-9623-45e1-a146-6ea2e85691b8\") " pod="openstack/dnsmasq-dns-7f896c8c65-nmlpl" Feb 02 10:56:18 crc kubenswrapper[4782]: I0202 10:56:18.004040 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ce7bfaff-9623-45e1-a146-6ea2e85691b8-ovsdbserver-sb\") pod \"dnsmasq-dns-7f896c8c65-nmlpl\" (UID: \"ce7bfaff-9623-45e1-a146-6ea2e85691b8\") " pod="openstack/dnsmasq-dns-7f896c8c65-nmlpl" Feb 02 10:56:18 crc kubenswrapper[4782]: I0202 10:56:18.004147 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ce7bfaff-9623-45e1-a146-6ea2e85691b8-dns-svc\") pod \"dnsmasq-dns-7f896c8c65-nmlpl\" (UID: \"ce7bfaff-9623-45e1-a146-6ea2e85691b8\") " pod="openstack/dnsmasq-dns-7f896c8c65-nmlpl" Feb 02 10:56:18 crc kubenswrapper[4782]: I0202 10:56:18.006948 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ce7bfaff-9623-45e1-a146-6ea2e85691b8-dns-svc\") pod \"dnsmasq-dns-7f896c8c65-nmlpl\" (UID: \"ce7bfaff-9623-45e1-a146-6ea2e85691b8\") " pod="openstack/dnsmasq-dns-7f896c8c65-nmlpl" Feb 02 10:56:18 crc kubenswrapper[4782]: I0202 10:56:18.007698 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce7bfaff-9623-45e1-a146-6ea2e85691b8-config\") pod \"dnsmasq-dns-7f896c8c65-nmlpl\" (UID: \"ce7bfaff-9623-45e1-a146-6ea2e85691b8\") " pod="openstack/dnsmasq-dns-7f896c8c65-nmlpl" Feb 02 10:56:18 crc kubenswrapper[4782]: I0202 10:56:18.021688 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-tfpvt"] Feb 02 10:56:18 crc kubenswrapper[4782]: I0202 10:56:18.022817 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-tfpvt" Feb 02 10:56:18 crc kubenswrapper[4782]: I0202 10:56:18.033666 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8x95d\" (UniqueName: \"kubernetes.io/projected/ce7bfaff-9623-45e1-a146-6ea2e85691b8-kube-api-access-8x95d\") pod \"dnsmasq-dns-7f896c8c65-nmlpl\" (UID: \"ce7bfaff-9623-45e1-a146-6ea2e85691b8\") " pod="openstack/dnsmasq-dns-7f896c8c65-nmlpl" Feb 02 10:56:18 crc kubenswrapper[4782]: I0202 10:56:18.034880 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Feb 02 10:56:18 crc kubenswrapper[4782]: I0202 10:56:18.046677 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ce7bfaff-9623-45e1-a146-6ea2e85691b8-ovsdbserver-sb\") pod \"dnsmasq-dns-7f896c8c65-nmlpl\" (UID: \"ce7bfaff-9623-45e1-a146-6ea2e85691b8\") " pod="openstack/dnsmasq-dns-7f896c8c65-nmlpl" Feb 02 10:56:18 crc kubenswrapper[4782]: I0202 10:56:18.111848 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f896c8c65-nmlpl" Feb 02 10:56:18 crc kubenswrapper[4782]: I0202 10:56:18.112273 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ede109fe-b194-4a02-992d-f1132849fc0d-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-tfpvt\" (UID: \"ede109fe-b194-4a02-992d-f1132849fc0d\") " pod="openstack/dnsmasq-dns-86db49b7ff-tfpvt" Feb 02 10:56:18 crc kubenswrapper[4782]: I0202 10:56:18.126220 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ede109fe-b194-4a02-992d-f1132849fc0d-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-tfpvt\" (UID: \"ede109fe-b194-4a02-992d-f1132849fc0d\") " pod="openstack/dnsmasq-dns-86db49b7ff-tfpvt" Feb 02 10:56:18 crc kubenswrapper[4782]: I0202 10:56:18.126318 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wkjmn\" (UniqueName: \"kubernetes.io/projected/ede109fe-b194-4a02-992d-f1132849fc0d-kube-api-access-wkjmn\") pod \"dnsmasq-dns-86db49b7ff-tfpvt\" (UID: \"ede109fe-b194-4a02-992d-f1132849fc0d\") " pod="openstack/dnsmasq-dns-86db49b7ff-tfpvt" Feb 02 10:56:18 crc kubenswrapper[4782]: I0202 10:56:18.126373 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ede109fe-b194-4a02-992d-f1132849fc0d-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-tfpvt\" (UID: \"ede109fe-b194-4a02-992d-f1132849fc0d\") " pod="openstack/dnsmasq-dns-86db49b7ff-tfpvt" Feb 02 10:56:18 crc kubenswrapper[4782]: I0202 10:56:18.126432 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ede109fe-b194-4a02-992d-f1132849fc0d-config\") pod \"dnsmasq-dns-86db49b7ff-tfpvt\" (UID: \"ede109fe-b194-4a02-992d-f1132849fc0d\") " pod="openstack/dnsmasq-dns-86db49b7ff-tfpvt" Feb 02 10:56:18 crc kubenswrapper[4782]: I0202 10:56:18.136571 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-tfpvt"] Feb 02 10:56:18 crc kubenswrapper[4782]: I0202 10:56:18.228054 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" 
(UniqueName: \"kubernetes.io/configmap/ede109fe-b194-4a02-992d-f1132849fc0d-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-tfpvt\" (UID: \"ede109fe-b194-4a02-992d-f1132849fc0d\") " pod="openstack/dnsmasq-dns-86db49b7ff-tfpvt" Feb 02 10:56:18 crc kubenswrapper[4782]: I0202 10:56:18.228111 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ede109fe-b194-4a02-992d-f1132849fc0d-config\") pod \"dnsmasq-dns-86db49b7ff-tfpvt\" (UID: \"ede109fe-b194-4a02-992d-f1132849fc0d\") " pod="openstack/dnsmasq-dns-86db49b7ff-tfpvt" Feb 02 10:56:18 crc kubenswrapper[4782]: I0202 10:56:18.228154 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ede109fe-b194-4a02-992d-f1132849fc0d-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-tfpvt\" (UID: \"ede109fe-b194-4a02-992d-f1132849fc0d\") " pod="openstack/dnsmasq-dns-86db49b7ff-tfpvt" Feb 02 10:56:18 crc kubenswrapper[4782]: I0202 10:56:18.228216 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ede109fe-b194-4a02-992d-f1132849fc0d-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-tfpvt\" (UID: \"ede109fe-b194-4a02-992d-f1132849fc0d\") " pod="openstack/dnsmasq-dns-86db49b7ff-tfpvt" Feb 02 10:56:18 crc kubenswrapper[4782]: I0202 10:56:18.228260 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wkjmn\" (UniqueName: \"kubernetes.io/projected/ede109fe-b194-4a02-992d-f1132849fc0d-kube-api-access-wkjmn\") pod \"dnsmasq-dns-86db49b7ff-tfpvt\" (UID: \"ede109fe-b194-4a02-992d-f1132849fc0d\") " pod="openstack/dnsmasq-dns-86db49b7ff-tfpvt" Feb 02 10:56:18 crc kubenswrapper[4782]: I0202 10:56:18.229167 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ede109fe-b194-4a02-992d-f1132849fc0d-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-tfpvt\" (UID: \"ede109fe-b194-4a02-992d-f1132849fc0d\") " pod="openstack/dnsmasq-dns-86db49b7ff-tfpvt" Feb 02 10:56:18 crc kubenswrapper[4782]: I0202 10:56:18.229785 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ede109fe-b194-4a02-992d-f1132849fc0d-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-tfpvt\" (UID: \"ede109fe-b194-4a02-992d-f1132849fc0d\") " pod="openstack/dnsmasq-dns-86db49b7ff-tfpvt" Feb 02 10:56:18 crc kubenswrapper[4782]: I0202 10:56:18.233929 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ede109fe-b194-4a02-992d-f1132849fc0d-config\") pod \"dnsmasq-dns-86db49b7ff-tfpvt\" (UID: \"ede109fe-b194-4a02-992d-f1132849fc0d\") " pod="openstack/dnsmasq-dns-86db49b7ff-tfpvt" Feb 02 10:56:18 crc kubenswrapper[4782]: I0202 10:56:18.234525 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ede109fe-b194-4a02-992d-f1132849fc0d-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-tfpvt\" (UID: \"ede109fe-b194-4a02-992d-f1132849fc0d\") " pod="openstack/dnsmasq-dns-86db49b7ff-tfpvt" Feb 02 10:56:18 crc kubenswrapper[4782]: I0202 10:56:18.263264 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wkjmn\" (UniqueName: \"kubernetes.io/projected/ede109fe-b194-4a02-992d-f1132849fc0d-kube-api-access-wkjmn\") pod 
\"dnsmasq-dns-86db49b7ff-tfpvt\" (UID: \"ede109fe-b194-4a02-992d-f1132849fc0d\") " pod="openstack/dnsmasq-dns-86db49b7ff-tfpvt" Feb 02 10:56:18 crc kubenswrapper[4782]: I0202 10:56:18.469077 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-tfpvt" Feb 02 10:56:19 crc kubenswrapper[4782]: I0202 10:56:19.125231 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-7jkmx" Feb 02 10:56:19 crc kubenswrapper[4782]: I0202 10:56:19.140056 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-s8sfp" Feb 02 10:56:19 crc kubenswrapper[4782]: I0202 10:56:19.260672 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w26jb\" (UniqueName: \"kubernetes.io/projected/9f871e0b-e0d8-43a7-a251-9601cfcfd87a-kube-api-access-w26jb\") pod \"9f871e0b-e0d8-43a7-a251-9601cfcfd87a\" (UID: \"9f871e0b-e0d8-43a7-a251-9601cfcfd87a\") " Feb 02 10:56:19 crc kubenswrapper[4782]: I0202 10:56:19.260755 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/76e79a91-7593-4b7a-bb1a-6396209cc424-dns-svc\") pod \"76e79a91-7593-4b7a-bb1a-6396209cc424\" (UID: \"76e79a91-7593-4b7a-bb1a-6396209cc424\") " Feb 02 10:56:19 crc kubenswrapper[4782]: I0202 10:56:19.260835 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f871e0b-e0d8-43a7-a251-9601cfcfd87a-config\") pod \"9f871e0b-e0d8-43a7-a251-9601cfcfd87a\" (UID: \"9f871e0b-e0d8-43a7-a251-9601cfcfd87a\") " Feb 02 10:56:19 crc kubenswrapper[4782]: I0202 10:56:19.260860 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9f871e0b-e0d8-43a7-a251-9601cfcfd87a-dns-svc\") pod \"9f871e0b-e0d8-43a7-a251-9601cfcfd87a\" (UID: \"9f871e0b-e0d8-43a7-a251-9601cfcfd87a\") " Feb 02 10:56:19 crc kubenswrapper[4782]: I0202 10:56:19.260995 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cc844\" (UniqueName: \"kubernetes.io/projected/76e79a91-7593-4b7a-bb1a-6396209cc424-kube-api-access-cc844\") pod \"76e79a91-7593-4b7a-bb1a-6396209cc424\" (UID: \"76e79a91-7593-4b7a-bb1a-6396209cc424\") " Feb 02 10:56:19 crc kubenswrapper[4782]: I0202 10:56:19.261087 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76e79a91-7593-4b7a-bb1a-6396209cc424-config\") pod \"76e79a91-7593-4b7a-bb1a-6396209cc424\" (UID: \"76e79a91-7593-4b7a-bb1a-6396209cc424\") " Feb 02 10:56:19 crc kubenswrapper[4782]: I0202 10:56:19.261580 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f871e0b-e0d8-43a7-a251-9601cfcfd87a-config" (OuterVolumeSpecName: "config") pod "9f871e0b-e0d8-43a7-a251-9601cfcfd87a" (UID: "9f871e0b-e0d8-43a7-a251-9601cfcfd87a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:56:19 crc kubenswrapper[4782]: I0202 10:56:19.261955 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/76e79a91-7593-4b7a-bb1a-6396209cc424-config" (OuterVolumeSpecName: "config") pod "76e79a91-7593-4b7a-bb1a-6396209cc424" (UID: "76e79a91-7593-4b7a-bb1a-6396209cc424"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:56:19 crc kubenswrapper[4782]: I0202 10:56:19.262524 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/76e79a91-7593-4b7a-bb1a-6396209cc424-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "76e79a91-7593-4b7a-bb1a-6396209cc424" (UID: "76e79a91-7593-4b7a-bb1a-6396209cc424"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:56:19 crc kubenswrapper[4782]: I0202 10:56:19.262536 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f871e0b-e0d8-43a7-a251-9601cfcfd87a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9f871e0b-e0d8-43a7-a251-9601cfcfd87a" (UID: "9f871e0b-e0d8-43a7-a251-9601cfcfd87a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:56:19 crc kubenswrapper[4782]: I0202 10:56:19.264596 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f871e0b-e0d8-43a7-a251-9601cfcfd87a-kube-api-access-w26jb" (OuterVolumeSpecName: "kube-api-access-w26jb") pod "9f871e0b-e0d8-43a7-a251-9601cfcfd87a" (UID: "9f871e0b-e0d8-43a7-a251-9601cfcfd87a"). InnerVolumeSpecName "kube-api-access-w26jb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:56:19 crc kubenswrapper[4782]: I0202 10:56:19.270127 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76e79a91-7593-4b7a-bb1a-6396209cc424-kube-api-access-cc844" (OuterVolumeSpecName: "kube-api-access-cc844") pod "76e79a91-7593-4b7a-bb1a-6396209cc424" (UID: "76e79a91-7593-4b7a-bb1a-6396209cc424"). InnerVolumeSpecName "kube-api-access-cc844". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:56:19 crc kubenswrapper[4782]: I0202 10:56:19.364191 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cc844\" (UniqueName: \"kubernetes.io/projected/76e79a91-7593-4b7a-bb1a-6396209cc424-kube-api-access-cc844\") on node \"crc\" DevicePath \"\"" Feb 02 10:56:19 crc kubenswrapper[4782]: I0202 10:56:19.364240 4782 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76e79a91-7593-4b7a-bb1a-6396209cc424-config\") on node \"crc\" DevicePath \"\"" Feb 02 10:56:19 crc kubenswrapper[4782]: I0202 10:56:19.364255 4782 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/76e79a91-7593-4b7a-bb1a-6396209cc424-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 02 10:56:19 crc kubenswrapper[4782]: I0202 10:56:19.364267 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w26jb\" (UniqueName: \"kubernetes.io/projected/9f871e0b-e0d8-43a7-a251-9601cfcfd87a-kube-api-access-w26jb\") on node \"crc\" DevicePath \"\"" Feb 02 10:56:19 crc kubenswrapper[4782]: I0202 10:56:19.364278 4782 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f871e0b-e0d8-43a7-a251-9601cfcfd87a-config\") on node \"crc\" DevicePath \"\"" Feb 02 10:56:19 crc kubenswrapper[4782]: I0202 10:56:19.364288 4782 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9f871e0b-e0d8-43a7-a251-9601cfcfd87a-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 02 10:56:19 crc kubenswrapper[4782]: I0202 10:56:19.616297 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-7jkmx" event={"ID":"76e79a91-7593-4b7a-bb1a-6396209cc424","Type":"ContainerDied","Data":"bf6952e060684b89e023f4574112773aed8ccdabbd164bcea1f68ba05b888020"} Feb 02 10:56:19 crc kubenswrapper[4782]: I0202 10:56:19.616313 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-7jkmx" Feb 02 10:56:19 crc kubenswrapper[4782]: I0202 10:56:19.617706 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-s8sfp" event={"ID":"9f871e0b-e0d8-43a7-a251-9601cfcfd87a","Type":"ContainerDied","Data":"b423e53936fb052435d6af130cac71bf078bb138f7f10831bf50eccca562a831"} Feb 02 10:56:19 crc kubenswrapper[4782]: I0202 10:56:19.617762 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-s8sfp" Feb 02 10:56:19 crc kubenswrapper[4782]: I0202 10:56:19.673207 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-7jkmx"] Feb 02 10:56:19 crc kubenswrapper[4782]: I0202 10:56:19.681847 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-7jkmx"] Feb 02 10:56:19 crc kubenswrapper[4782]: I0202 10:56:19.725475 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-s8sfp"] Feb 02 10:56:19 crc kubenswrapper[4782]: I0202 10:56:19.734034 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-s8sfp"] Feb 02 10:56:20 crc kubenswrapper[4782]: I0202 10:56:20.456024 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-kv4h8"] Feb 02 10:56:20 crc kubenswrapper[4782]: I0202 10:56:20.636420 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-kv4h8" event={"ID":"c9cb1af6-ff01-4474-ad02-56938ef7e5a1","Type":"ContainerStarted","Data":"036122d3db469764f7b8ba7026aed4fe368a51bf34e6548f62c28d3e215e45c0"} Feb 02 10:56:20 crc kubenswrapper[4782]: I0202 10:56:20.780304 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-tfpvt"] Feb 02 10:56:20 crc kubenswrapper[4782]: I0202 10:56:20.841819 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="76e79a91-7593-4b7a-bb1a-6396209cc424" path="/var/lib/kubelet/pods/76e79a91-7593-4b7a-bb1a-6396209cc424/volumes" Feb 02 10:56:20 crc kubenswrapper[4782]: I0202 10:56:20.842261 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f871e0b-e0d8-43a7-a251-9601cfcfd87a" path="/var/lib/kubelet/pods/9f871e0b-e0d8-43a7-a251-9601cfcfd87a/volumes" Feb 02 10:56:20 crc kubenswrapper[4782]: I0202 10:56:20.923199 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-nmlpl"] Feb 02 10:56:21 crc kubenswrapper[4782]: I0202 10:56:21.667110 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"827c472d-1762-4e1c-a096-2d48ca9af689","Type":"ContainerStarted","Data":"9396400999cf874396d9e7d1690f8cf7ae7be01918ca48a4abec931d8d010c1c"} Feb 02 10:56:21 crc kubenswrapper[4782]: I0202 10:56:21.683611 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"8c2fe596-a023-4206-979f-7f2e7bc81d0e","Type":"ContainerStarted","Data":"8ea80d7a6697910033180c13651a4f1a0db2ca17330d6dc67d29ea3d1d0e4318"} Feb 02 10:56:21 crc kubenswrapper[4782]: I0202 10:56:21.703986 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-zs65k" event={"ID":"e91c0f3d-db81-453d-ad0e-30aeadb66206","Type":"ContainerStarted","Data":"1064d4e5f9b67709bda7004f004b01a82a6da13999d6afdf338b696d7bf74ef8"} Feb 02 10:56:21 crc kubenswrapper[4782]: I0202 10:56:21.710345 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f896c8c65-nmlpl" event={"ID":"ce7bfaff-9623-45e1-a146-6ea2e85691b8","Type":"ContainerStarted","Data":"cb78cf6d16099541f3ebf963695779d122ec9f7aca28fae1d462165f83b98ae3"} Feb 02 10:56:21 crc kubenswrapper[4782]: I0202 10:56:21.717705 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-tfpvt" 
event={"ID":"ede109fe-b194-4a02-992d-f1132849fc0d","Type":"ContainerStarted","Data":"70e38c18e7eeadf50540fc012954a93846a0fd6be83565a64c6ee300da9f11db"} Feb 02 10:56:21 crc kubenswrapper[4782]: I0202 10:56:21.719106 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"17f9dd31-25b9-4b3f-82a6-12096f36308a","Type":"ContainerStarted","Data":"9df6fc7083c621ea0ea773e2ff44d99fc6615fb81d0f78c013aa7fc82dfa4a55"} Feb 02 10:56:21 crc kubenswrapper[4782]: I0202 10:56:21.719678 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Feb 02 10:56:21 crc kubenswrapper[4782]: I0202 10:56:21.779408 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=20.586281823 podStartE2EDuration="28.77938771s" podCreationTimestamp="2026-02-02 10:55:53 +0000 UTC" firstStartedPulling="2026-02-02 10:56:11.263790542 +0000 UTC m=+1051.147983258" lastFinishedPulling="2026-02-02 10:56:19.456896429 +0000 UTC m=+1059.341089145" observedRunningTime="2026-02-02 10:56:21.773491421 +0000 UTC m=+1061.657684137" watchObservedRunningTime="2026-02-02 10:56:21.77938771 +0000 UTC m=+1061.663580426" Feb 02 10:56:22 crc kubenswrapper[4782]: I0202 10:56:22.731844 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-sv8l5" event={"ID":"b009ca1c-fc93-4724-9275-c44039256469","Type":"ContainerStarted","Data":"7d7132de1e2ab4b5ab034ed6f9ed17821d273b1a3b5cfaeb17d0ac4ade0c26ba"} Feb 02 10:56:22 crc kubenswrapper[4782]: I0202 10:56:22.733178 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-sv8l5" Feb 02 10:56:22 crc kubenswrapper[4782]: I0202 10:56:22.735507 4782 generic.go:334] "Generic (PLEG): container finished" podID="e91c0f3d-db81-453d-ad0e-30aeadb66206" containerID="1064d4e5f9b67709bda7004f004b01a82a6da13999d6afdf338b696d7bf74ef8" exitCode=0 Feb 02 10:56:22 crc kubenswrapper[4782]: I0202 10:56:22.735591 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-zs65k" event={"ID":"e91c0f3d-db81-453d-ad0e-30aeadb66206","Type":"ContainerDied","Data":"1064d4e5f9b67709bda7004f004b01a82a6da13999d6afdf338b696d7bf74ef8"} Feb 02 10:56:22 crc kubenswrapper[4782]: I0202 10:56:22.750900 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"d8169f65-2d63-4127-8d23-ba6d56af1156","Type":"ContainerStarted","Data":"7eaffd15efb32899894661b48c4a227c87b82d3b2a195818ee62a977ae85d95a"} Feb 02 10:56:22 crc kubenswrapper[4782]: I0202 10:56:22.768146 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-sv8l5" podStartSLOduration=14.106884231 podStartE2EDuration="22.768119054s" podCreationTimestamp="2026-02-02 10:56:00 +0000 UTC" firstStartedPulling="2026-02-02 10:56:11.891617698 +0000 UTC m=+1051.775810414" lastFinishedPulling="2026-02-02 10:56:20.552852531 +0000 UTC m=+1060.437045237" observedRunningTime="2026-02-02 10:56:22.754405502 +0000 UTC m=+1062.638598218" watchObservedRunningTime="2026-02-02 10:56:22.768119054 +0000 UTC m=+1062.652311770" Feb 02 10:56:22 crc kubenswrapper[4782]: I0202 10:56:22.951786 4782 patch_prober.go:28] interesting pod/machine-config-daemon-bhdgk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 10:56:22 
Feb 02 10:56:22 crc kubenswrapper[4782]: I0202 10:56:22.952162 4782 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk"
Feb 02 10:56:23 crc kubenswrapper[4782]: I0202 10:56:23.758230 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"572fc7c8-9560-43d0-ba3e-d3f098494878","Type":"ContainerStarted","Data":"d48966c114b9884cfd8feccb0d76128e705dc668d4df7b9b7bc5eca7672d33b8"}
Feb 02 10:56:23 crc kubenswrapper[4782]: I0202 10:56:23.758842 4782 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"723d0d966296427a3d1b5e2811fbfcf2b8df7a346539a78c1cbaf730d23723a1"} pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 02 10:56:23 crc kubenswrapper[4782]: I0202 10:56:23.758904 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" containerName="machine-config-daemon" containerID="cri-o://723d0d966296427a3d1b5e2811fbfcf2b8df7a346539a78c1cbaf730d23723a1" gracePeriod=600
Feb 02 10:56:24 crc kubenswrapper[4782]: I0202 10:56:24.767088 4782 generic.go:334] "Generic (PLEG): container finished" podID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" containerID="723d0d966296427a3d1b5e2811fbfcf2b8df7a346539a78c1cbaf730d23723a1" exitCode=0
Feb 02 10:56:24 crc kubenswrapper[4782]: I0202 10:56:24.767146 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" event={"ID":"7919e98f-cc47-4f3c-9c53-6313850ea7b8","Type":"ContainerDied","Data":"723d0d966296427a3d1b5e2811fbfcf2b8df7a346539a78c1cbaf730d23723a1"}
Feb 02 10:56:24 crc kubenswrapper[4782]: I0202 10:56:24.767423 4782 scope.go:117] "RemoveContainer" containerID="2dc043efe5736739c3acc8fe9716ce3a52d3c218a415682bfde40984fdbbbf0c"
Feb 02 10:56:27 crc kubenswrapper[4782]: I0202 10:56:27.791013 4782 generic.go:334] "Generic (PLEG): container finished" podID="8c2fe596-a023-4206-979f-7f2e7bc81d0e" containerID="8ea80d7a6697910033180c13651a4f1a0db2ca17330d6dc67d29ea3d1d0e4318" exitCode=0
Feb 02 10:56:27 crc kubenswrapper[4782]: I0202 10:56:27.792209 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"8c2fe596-a023-4206-979f-7f2e7bc81d0e","Type":"ContainerDied","Data":"8ea80d7a6697910033180c13651a4f1a0db2ca17330d6dc67d29ea3d1d0e4318"}
Feb 02 10:56:28 crc kubenswrapper[4782]: I0202 10:56:28.810910 4782 generic.go:334] "Generic (PLEG): container finished" podID="827c472d-1762-4e1c-a096-2d48ca9af689" containerID="9396400999cf874396d9e7d1690f8cf7ae7be01918ca48a4abec931d8d010c1c" exitCode=0
Feb 02 10:56:28 crc kubenswrapper[4782]: I0202 10:56:28.810994 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"827c472d-1762-4e1c-a096-2d48ca9af689","Type":"ContainerDied","Data":"9396400999cf874396d9e7d1690f8cf7ae7be01918ca48a4abec931d8d010c1c"}
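
The run above is the standard liveness-failure path in one place: the prober reports the failed HTTP GET, the sync loop flags the probe unhealthy, the runtime manager notes the container "failed liveness probe, will be restarted" and kills it with the pod's termination grace period (gracePeriod=600 here), and the next PLEG events show ContainerDied followed by the previous instance being handed to "RemoveContainer". A probe of the shape implied by the failure output (HTTP GET against 127.0.0.1:8798/health), expressed with the k8s.io/api/core/v1 types; the period and threshold values are illustrative assumptions, not read from the machine-config-daemon manifest:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	probe := corev1.Probe{
		// ProbeHandler was named Handler before Kubernetes 1.23.
		ProbeHandler: corev1.ProbeHandler{
			HTTPGet: &corev1.HTTPGetAction{
				Host: "127.0.0.1",
				Path: "/health",
				Port: intstr.FromInt(8798),
			},
		},
		PeriodSeconds:    10, // assumed values; not visible in the log
		FailureThreshold: 3,
	}
	fmt.Printf("%+v\n", probe)
}
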
event={"ID":"827c472d-1762-4e1c-a096-2d48ca9af689","Type":"ContainerDied","Data":"9396400999cf874396d9e7d1690f8cf7ae7be01918ca48a4abec931d8d010c1c"} Feb 02 10:56:29 crc kubenswrapper[4782]: I0202 10:56:29.079882 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Feb 02 10:56:29 crc kubenswrapper[4782]: I0202 10:56:29.820509 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-kv4h8" event={"ID":"c9cb1af6-ff01-4474-ad02-56938ef7e5a1","Type":"ContainerStarted","Data":"f9bb541b87910adc8791c6046b89a907b564a12ce96dc3136d94ec69254fdedb"} Feb 02 10:56:29 crc kubenswrapper[4782]: I0202 10:56:29.823538 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"572fc7c8-9560-43d0-ba3e-d3f098494878","Type":"ContainerStarted","Data":"c57bb4ab461e6a4f9be2fd4f5dff275a79d222ed1d1f9ff74b11d79362a597db"} Feb 02 10:56:29 crc kubenswrapper[4782]: I0202 10:56:29.826320 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"a1ccfccc-4ba0-4523-97ca-1d5b54034fd1","Type":"ContainerStarted","Data":"74780bfda6cf8379cc80c9697e593a95529dcae7915afebf7ba3cff8c139be7d"} Feb 02 10:56:29 crc kubenswrapper[4782]: I0202 10:56:29.826521 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Feb 02 10:56:29 crc kubenswrapper[4782]: I0202 10:56:29.829670 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"8c2fe596-a023-4206-979f-7f2e7bc81d0e","Type":"ContainerStarted","Data":"ba406c9b1ae0b8574bd18cda4c3b424f6233c669e45d79dc02c388bf5c2b83f4"} Feb 02 10:56:29 crc kubenswrapper[4782]: I0202 10:56:29.832529 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-zs65k" event={"ID":"e91c0f3d-db81-453d-ad0e-30aeadb66206","Type":"ContainerStarted","Data":"156a0ea548ccaf29a093e66575d64025d2c15fe46cf273442ca339bb25a93f67"} Feb 02 10:56:29 crc kubenswrapper[4782]: I0202 10:56:29.832566 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-zs65k" event={"ID":"e91c0f3d-db81-453d-ad0e-30aeadb66206","Type":"ContainerStarted","Data":"bb6745b2ce15f1adee676d611a3588d4088747b1ea6177cfe6e1b1770e3c84e9"} Feb 02 10:56:29 crc kubenswrapper[4782]: I0202 10:56:29.832603 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-zs65k" Feb 02 10:56:29 crc kubenswrapper[4782]: I0202 10:56:29.832894 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-zs65k" Feb 02 10:56:29 crc kubenswrapper[4782]: I0202 10:56:29.835738 4782 generic.go:334] "Generic (PLEG): container finished" podID="ce7bfaff-9623-45e1-a146-6ea2e85691b8" containerID="4d742499f4f00577342e456e85946aa923f91edb7a9d4ea740da3a31b21f81d5" exitCode=0 Feb 02 10:56:29 crc kubenswrapper[4782]: I0202 10:56:29.835804 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f896c8c65-nmlpl" event={"ID":"ce7bfaff-9623-45e1-a146-6ea2e85691b8","Type":"ContainerDied","Data":"4d742499f4f00577342e456e85946aa923f91edb7a9d4ea740da3a31b21f81d5"} Feb 02 10:56:29 crc kubenswrapper[4782]: I0202 10:56:29.851398 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" 
event={"ID":"7919e98f-cc47-4f3c-9c53-6313850ea7b8","Type":"ContainerStarted","Data":"cc93bfcd857ff139ba103c2136bd4c7838f73ea68a2b8fc097a6c493cab92dd0"} Feb 02 10:56:29 crc kubenswrapper[4782]: I0202 10:56:29.854237 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-kv4h8" podStartSLOduration=4.811381827 podStartE2EDuration="12.854215102s" podCreationTimestamp="2026-02-02 10:56:17 +0000 UTC" firstStartedPulling="2026-02-02 10:56:20.576336854 +0000 UTC m=+1060.460529570" lastFinishedPulling="2026-02-02 10:56:28.619170109 +0000 UTC m=+1068.503362845" observedRunningTime="2026-02-02 10:56:29.840798558 +0000 UTC m=+1069.724991274" watchObservedRunningTime="2026-02-02 10:56:29.854215102 +0000 UTC m=+1069.738407818" Feb 02 10:56:29 crc kubenswrapper[4782]: I0202 10:56:29.857802 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"827c472d-1762-4e1c-a096-2d48ca9af689","Type":"ContainerStarted","Data":"60a5c5ce3a9d7c529f9f2e9eb361ccf1b38f85509256a325a169fac131c09375"} Feb 02 10:56:29 crc kubenswrapper[4782]: I0202 10:56:29.862415 4782 generic.go:334] "Generic (PLEG): container finished" podID="ede109fe-b194-4a02-992d-f1132849fc0d" containerID="ab258d10ba96d70b4cfb3ed122e2e498a9dd3427d5d89bbbbf656b5efa7359c1" exitCode=0 Feb 02 10:56:29 crc kubenswrapper[4782]: I0202 10:56:29.862721 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-tfpvt" event={"ID":"ede109fe-b194-4a02-992d-f1132849fc0d","Type":"ContainerDied","Data":"ab258d10ba96d70b4cfb3ed122e2e498a9dd3427d5d89bbbbf656b5efa7359c1"} Feb 02 10:56:29 crc kubenswrapper[4782]: I0202 10:56:29.877412 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-zs65k" podStartSLOduration=22.237304784 podStartE2EDuration="29.877391395s" podCreationTimestamp="2026-02-02 10:56:00 +0000 UTC" firstStartedPulling="2026-02-02 10:56:12.248309956 +0000 UTC m=+1052.132502672" lastFinishedPulling="2026-02-02 10:56:19.888396567 +0000 UTC m=+1059.772589283" observedRunningTime="2026-02-02 10:56:29.871817576 +0000 UTC m=+1069.756010292" watchObservedRunningTime="2026-02-02 10:56:29.877391395 +0000 UTC m=+1069.761584111" Feb 02 10:56:29 crc kubenswrapper[4782]: I0202 10:56:29.885029 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"d8169f65-2d63-4127-8d23-ba6d56af1156","Type":"ContainerStarted","Data":"824c87b9036397c463fb697081b4e9a2bf392cdba163c52e8dfd267e828e7f40"} Feb 02 10:56:29 crc kubenswrapper[4782]: I0202 10:56:29.911906 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=30.138690052 podStartE2EDuration="37.911889813s" podCreationTimestamp="2026-02-02 10:55:52 +0000 UTC" firstStartedPulling="2026-02-02 10:56:11.706545022 +0000 UTC m=+1051.590737738" lastFinishedPulling="2026-02-02 10:56:19.479744783 +0000 UTC m=+1059.363937499" observedRunningTime="2026-02-02 10:56:29.899753495 +0000 UTC m=+1069.783946211" watchObservedRunningTime="2026-02-02 10:56:29.911889813 +0000 UTC m=+1069.796082529" Feb 02 10:56:30 crc kubenswrapper[4782]: I0202 10:56:30.013324 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=17.450336777 podStartE2EDuration="32.013308855s" podCreationTimestamp="2026-02-02 10:55:58 +0000 UTC" firstStartedPulling="2026-02-02 10:56:14.063742087 +0000 UTC 
m=+1053.947934803" lastFinishedPulling="2026-02-02 10:56:28.626714165 +0000 UTC m=+1068.510906881" observedRunningTime="2026-02-02 10:56:30.010902206 +0000 UTC m=+1069.895094922" watchObservedRunningTime="2026-02-02 10:56:30.013308855 +0000 UTC m=+1069.897501571" Feb 02 10:56:30 crc kubenswrapper[4782]: I0202 10:56:30.067320 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=18.163357071 podStartE2EDuration="35.06730217s" podCreationTimestamp="2026-02-02 10:55:55 +0000 UTC" firstStartedPulling="2026-02-02 10:56:11.697887594 +0000 UTC m=+1051.582080310" lastFinishedPulling="2026-02-02 10:56:28.601832693 +0000 UTC m=+1068.486025409" observedRunningTime="2026-02-02 10:56:30.055929484 +0000 UTC m=+1069.940122200" watchObservedRunningTime="2026-02-02 10:56:30.06730217 +0000 UTC m=+1069.951494886" Feb 02 10:56:30 crc kubenswrapper[4782]: I0202 10:56:30.124899 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Feb 02 10:56:30 crc kubenswrapper[4782]: I0202 10:56:30.124940 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Feb 02 10:56:30 crc kubenswrapper[4782]: I0202 10:56:30.178012 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=30.295693368 podStartE2EDuration="39.177988927s" podCreationTimestamp="2026-02-02 10:55:51 +0000 UTC" firstStartedPulling="2026-02-02 10:56:11.697559725 +0000 UTC m=+1051.581752441" lastFinishedPulling="2026-02-02 10:56:20.579855264 +0000 UTC m=+1060.464048000" observedRunningTime="2026-02-02 10:56:30.120983716 +0000 UTC m=+1070.005176432" watchObservedRunningTime="2026-02-02 10:56:30.177988927 +0000 UTC m=+1070.062181643" Feb 02 10:56:30 crc kubenswrapper[4782]: I0202 10:56:30.183688 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=13.363411903 podStartE2EDuration="29.18366544s" podCreationTimestamp="2026-02-02 10:56:01 +0000 UTC" firstStartedPulling="2026-02-02 10:56:12.829902659 +0000 UTC m=+1052.714095375" lastFinishedPulling="2026-02-02 10:56:28.650156196 +0000 UTC m=+1068.534348912" observedRunningTime="2026-02-02 10:56:30.176901576 +0000 UTC m=+1070.061094282" watchObservedRunningTime="2026-02-02 10:56:30.18366544 +0000 UTC m=+1070.067858166" Feb 02 10:56:30 crc kubenswrapper[4782]: I0202 10:56:30.470081 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Feb 02 10:56:30 crc kubenswrapper[4782]: I0202 10:56:30.892819 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f896c8c65-nmlpl" event={"ID":"ce7bfaff-9623-45e1-a146-6ea2e85691b8","Type":"ContainerStarted","Data":"31576d51c9273329fb90622f340af26e410e6884093bd7d0731eb58f619ee308"} Feb 02 10:56:30 crc kubenswrapper[4782]: I0202 10:56:30.893751 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7f896c8c65-nmlpl" Feb 02 10:56:30 crc kubenswrapper[4782]: I0202 10:56:30.896278 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-tfpvt" event={"ID":"ede109fe-b194-4a02-992d-f1132849fc0d","Type":"ContainerStarted","Data":"79490beed063daacfc93ca748659e8cb59165e9834ed5634b2a43d1cdfb9b23a"} Feb 02 10:56:30 crc kubenswrapper[4782]: I0202 10:56:30.896310 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openstack/dnsmasq-dns-86db49b7ff-tfpvt" Feb 02 10:56:30 crc kubenswrapper[4782]: I0202 10:56:30.920375 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7f896c8c65-nmlpl" podStartSLOduration=7.777998548 podStartE2EDuration="13.920357661s" podCreationTimestamp="2026-02-02 10:56:17 +0000 UTC" firstStartedPulling="2026-02-02 10:56:21.575808454 +0000 UTC m=+1061.460001180" lastFinishedPulling="2026-02-02 10:56:27.718167577 +0000 UTC m=+1067.602360293" observedRunningTime="2026-02-02 10:56:30.910835789 +0000 UTC m=+1070.795028505" watchObservedRunningTime="2026-02-02 10:56:30.920357661 +0000 UTC m=+1070.804550387" Feb 02 10:56:30 crc kubenswrapper[4782]: I0202 10:56:30.932759 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-86db49b7ff-tfpvt" podStartSLOduration=12.209303698 podStartE2EDuration="13.932742116s" podCreationTimestamp="2026-02-02 10:56:17 +0000 UTC" firstStartedPulling="2026-02-02 10:56:20.823653021 +0000 UTC m=+1060.707845737" lastFinishedPulling="2026-02-02 10:56:22.547091439 +0000 UTC m=+1062.431284155" observedRunningTime="2026-02-02 10:56:30.928945047 +0000 UTC m=+1070.813137763" watchObservedRunningTime="2026-02-02 10:56:30.932742116 +0000 UTC m=+1070.816934832" Feb 02 10:56:30 crc kubenswrapper[4782]: I0202 10:56:30.944667 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Feb 02 10:56:32 crc kubenswrapper[4782]: E0202 10:56:32.401906 4782 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.147:35332->38.102.83.147:40373: write tcp 38.102.83.147:35332->38.102.83.147:40373: write: broken pipe Feb 02 10:56:32 crc kubenswrapper[4782]: I0202 10:56:32.727965 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Feb 02 10:56:32 crc kubenswrapper[4782]: I0202 10:56:32.728043 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Feb 02 10:56:32 crc kubenswrapper[4782]: I0202 10:56:32.742745 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Feb 02 10:56:32 crc kubenswrapper[4782]: I0202 10:56:32.742787 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Feb 02 10:56:32 crc kubenswrapper[4782]: I0202 10:56:32.786626 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Feb 02 10:56:32 crc kubenswrapper[4782]: I0202 10:56:32.967894 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Feb 02 10:56:33 crc kubenswrapper[4782]: I0202 10:56:33.134165 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Feb 02 10:56:33 crc kubenswrapper[4782]: I0202 10:56:33.135592 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Feb 02 10:56:33 crc kubenswrapper[4782]: I0202 10:56:33.139776 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Feb 02 10:56:33 crc kubenswrapper[4782]: I0202 10:56:33.139969 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Feb 02 10:56:33 crc kubenswrapper[4782]: I0202 10:56:33.142970 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-mvq24" Feb 02 10:56:33 crc kubenswrapper[4782]: I0202 10:56:33.183811 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Feb 02 10:56:33 crc kubenswrapper[4782]: I0202 10:56:33.191869 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 02 10:56:33 crc kubenswrapper[4782]: I0202 10:56:33.214487 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a65af67-822b-44b8-a2be-a132de866a2e-config\") pod \"ovn-northd-0\" (UID: \"7a65af67-822b-44b8-a2be-a132de866a2e\") " pod="openstack/ovn-northd-0" Feb 02 10:56:33 crc kubenswrapper[4782]: I0202 10:56:33.214553 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a65af67-822b-44b8-a2be-a132de866a2e-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"7a65af67-822b-44b8-a2be-a132de866a2e\") " pod="openstack/ovn-northd-0" Feb 02 10:56:33 crc kubenswrapper[4782]: I0202 10:56:33.214609 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xghs5\" (UniqueName: \"kubernetes.io/projected/7a65af67-822b-44b8-a2be-a132de866a2e-kube-api-access-xghs5\") pod \"ovn-northd-0\" (UID: \"7a65af67-822b-44b8-a2be-a132de866a2e\") " pod="openstack/ovn-northd-0" Feb 02 10:56:33 crc kubenswrapper[4782]: I0202 10:56:33.214628 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/7a65af67-822b-44b8-a2be-a132de866a2e-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"7a65af67-822b-44b8-a2be-a132de866a2e\") " pod="openstack/ovn-northd-0" Feb 02 10:56:33 crc kubenswrapper[4782]: I0202 10:56:33.214676 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a65af67-822b-44b8-a2be-a132de866a2e-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"7a65af67-822b-44b8-a2be-a132de866a2e\") " pod="openstack/ovn-northd-0" Feb 02 10:56:33 crc kubenswrapper[4782]: I0202 10:56:33.214728 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7a65af67-822b-44b8-a2be-a132de866a2e-scripts\") pod \"ovn-northd-0\" (UID: \"7a65af67-822b-44b8-a2be-a132de866a2e\") " pod="openstack/ovn-northd-0" Feb 02 10:56:33 crc kubenswrapper[4782]: I0202 10:56:33.214761 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a65af67-822b-44b8-a2be-a132de866a2e-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"7a65af67-822b-44b8-a2be-a132de866a2e\") " pod="openstack/ovn-northd-0" Feb 02 10:56:33 crc kubenswrapper[4782]: 
I0202 10:56:33.316395 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a65af67-822b-44b8-a2be-a132de866a2e-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"7a65af67-822b-44b8-a2be-a132de866a2e\") " pod="openstack/ovn-northd-0" Feb 02 10:56:33 crc kubenswrapper[4782]: I0202 10:56:33.316454 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a65af67-822b-44b8-a2be-a132de866a2e-config\") pod \"ovn-northd-0\" (UID: \"7a65af67-822b-44b8-a2be-a132de866a2e\") " pod="openstack/ovn-northd-0" Feb 02 10:56:33 crc kubenswrapper[4782]: I0202 10:56:33.316504 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a65af67-822b-44b8-a2be-a132de866a2e-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"7a65af67-822b-44b8-a2be-a132de866a2e\") " pod="openstack/ovn-northd-0" Feb 02 10:56:33 crc kubenswrapper[4782]: I0202 10:56:33.316572 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xghs5\" (UniqueName: \"kubernetes.io/projected/7a65af67-822b-44b8-a2be-a132de866a2e-kube-api-access-xghs5\") pod \"ovn-northd-0\" (UID: \"7a65af67-822b-44b8-a2be-a132de866a2e\") " pod="openstack/ovn-northd-0" Feb 02 10:56:33 crc kubenswrapper[4782]: I0202 10:56:33.316598 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/7a65af67-822b-44b8-a2be-a132de866a2e-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"7a65af67-822b-44b8-a2be-a132de866a2e\") " pod="openstack/ovn-northd-0" Feb 02 10:56:33 crc kubenswrapper[4782]: I0202 10:56:33.316637 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a65af67-822b-44b8-a2be-a132de866a2e-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"7a65af67-822b-44b8-a2be-a132de866a2e\") " pod="openstack/ovn-northd-0" Feb 02 10:56:33 crc kubenswrapper[4782]: I0202 10:56:33.316729 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7a65af67-822b-44b8-a2be-a132de866a2e-scripts\") pod \"ovn-northd-0\" (UID: \"7a65af67-822b-44b8-a2be-a132de866a2e\") " pod="openstack/ovn-northd-0" Feb 02 10:56:33 crc kubenswrapper[4782]: I0202 10:56:33.318018 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a65af67-822b-44b8-a2be-a132de866a2e-config\") pod \"ovn-northd-0\" (UID: \"7a65af67-822b-44b8-a2be-a132de866a2e\") " pod="openstack/ovn-northd-0" Feb 02 10:56:33 crc kubenswrapper[4782]: I0202 10:56:33.318146 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7a65af67-822b-44b8-a2be-a132de866a2e-scripts\") pod \"ovn-northd-0\" (UID: \"7a65af67-822b-44b8-a2be-a132de866a2e\") " pod="openstack/ovn-northd-0" Feb 02 10:56:33 crc kubenswrapper[4782]: I0202 10:56:33.318597 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/7a65af67-822b-44b8-a2be-a132de866a2e-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"7a65af67-822b-44b8-a2be-a132de866a2e\") " pod="openstack/ovn-northd-0" Feb 02 10:56:33 crc kubenswrapper[4782]: I0202 10:56:33.328309 4782 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a65af67-822b-44b8-a2be-a132de866a2e-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"7a65af67-822b-44b8-a2be-a132de866a2e\") " pod="openstack/ovn-northd-0" Feb 02 10:56:33 crc kubenswrapper[4782]: I0202 10:56:33.329536 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a65af67-822b-44b8-a2be-a132de866a2e-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"7a65af67-822b-44b8-a2be-a132de866a2e\") " pod="openstack/ovn-northd-0" Feb 02 10:56:33 crc kubenswrapper[4782]: I0202 10:56:33.347402 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a65af67-822b-44b8-a2be-a132de866a2e-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"7a65af67-822b-44b8-a2be-a132de866a2e\") " pod="openstack/ovn-northd-0" Feb 02 10:56:33 crc kubenswrapper[4782]: I0202 10:56:33.372460 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xghs5\" (UniqueName: \"kubernetes.io/projected/7a65af67-822b-44b8-a2be-a132de866a2e-kube-api-access-xghs5\") pod \"ovn-northd-0\" (UID: \"7a65af67-822b-44b8-a2be-a132de866a2e\") " pod="openstack/ovn-northd-0" Feb 02 10:56:33 crc kubenswrapper[4782]: I0202 10:56:33.455042 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Feb 02 10:56:34 crc kubenswrapper[4782]: I0202 10:56:34.120650 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 02 10:56:34 crc kubenswrapper[4782]: W0202 10:56:34.135036 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7a65af67_822b_44b8_a2be_a132de866a2e.slice/crio-0c87f29c2ea77b93f228a7a7b9efd267579a68755a99e8103d87a1855c896fa4 WatchSource:0}: Error finding container 0c87f29c2ea77b93f228a7a7b9efd267579a68755a99e8103d87a1855c896fa4: Status 404 returned error can't find the container with id 0c87f29c2ea77b93f228a7a7b9efd267579a68755a99e8103d87a1855c896fa4 Feb 02 10:56:34 crc kubenswrapper[4782]: I0202 10:56:34.156438 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Feb 02 10:56:34 crc kubenswrapper[4782]: I0202 10:56:34.156499 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Feb 02 10:56:34 crc kubenswrapper[4782]: I0202 10:56:34.927202 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"7a65af67-822b-44b8-a2be-a132de866a2e","Type":"ContainerStarted","Data":"0c87f29c2ea77b93f228a7a7b9efd267579a68755a99e8103d87a1855c896fa4"} Feb 02 10:56:35 crc kubenswrapper[4782]: I0202 10:56:35.168468 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Feb 02 10:56:35 crc kubenswrapper[4782]: I0202 10:56:35.267140 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Feb 02 10:56:36 crc kubenswrapper[4782]: I0202 10:56:36.125402 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Feb 02 10:56:36 crc kubenswrapper[4782]: I0202 10:56:36.753811 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openstack/openstack-cell1-galera-0" Feb 02 10:56:36 crc kubenswrapper[4782]: I0202 10:56:36.863536 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Feb 02 10:56:36 crc kubenswrapper[4782]: I0202 10:56:36.961895 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"7a65af67-822b-44b8-a2be-a132de866a2e","Type":"ContainerStarted","Data":"41fb462aa96c1db4145f05bb66a1ee1ffadd4a3ae7ec4774cfcdb2c7f533869e"} Feb 02 10:56:36 crc kubenswrapper[4782]: I0202 10:56:36.961960 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"7a65af67-822b-44b8-a2be-a132de866a2e","Type":"ContainerStarted","Data":"9ffa8fd5ae6a1851ac83572411be2d24ad12f2d84ad32dd2ea40289a0010c720"} Feb 02 10:56:36 crc kubenswrapper[4782]: I0202 10:56:36.986190 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.208092221 podStartE2EDuration="3.986173532s" podCreationTimestamp="2026-02-02 10:56:33 +0000 UTC" firstStartedPulling="2026-02-02 10:56:34.137589837 +0000 UTC m=+1074.021782553" lastFinishedPulling="2026-02-02 10:56:35.915671148 +0000 UTC m=+1075.799863864" observedRunningTime="2026-02-02 10:56:36.983477685 +0000 UTC m=+1076.867670401" watchObservedRunningTime="2026-02-02 10:56:36.986173532 +0000 UTC m=+1076.870366248" Feb 02 10:56:37 crc kubenswrapper[4782]: I0202 10:56:37.969257 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Feb 02 10:56:38 crc kubenswrapper[4782]: I0202 10:56:38.114862 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7f896c8c65-nmlpl" Feb 02 10:56:38 crc kubenswrapper[4782]: I0202 10:56:38.470773 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-86db49b7ff-tfpvt" Feb 02 10:56:38 crc kubenswrapper[4782]: I0202 10:56:38.527459 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-nmlpl"] Feb 02 10:56:38 crc kubenswrapper[4782]: I0202 10:56:38.981376 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7f896c8c65-nmlpl" podUID="ce7bfaff-9623-45e1-a146-6ea2e85691b8" containerName="dnsmasq-dns" containerID="cri-o://31576d51c9273329fb90622f340af26e410e6884093bd7d0731eb58f619ee308" gracePeriod=10 Feb 02 10:56:39 crc kubenswrapper[4782]: I0202 10:56:39.376364 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-377c-account-create-update-4zm4s"] Feb 02 10:56:39 crc kubenswrapper[4782]: I0202 10:56:39.381657 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-377c-account-create-update-4zm4s" Feb 02 10:56:39 crc kubenswrapper[4782]: I0202 10:56:39.388288 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Feb 02 10:56:39 crc kubenswrapper[4782]: I0202 10:56:39.403590 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-l6d9n"] Feb 02 10:56:39 crc kubenswrapper[4782]: I0202 10:56:39.405812 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-l6d9n" Feb 02 10:56:39 crc kubenswrapper[4782]: I0202 10:56:39.421844 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqnpd\" (UniqueName: \"kubernetes.io/projected/bfde9ba3-fda5-496b-8ee5-52430e61f02a-kube-api-access-cqnpd\") pod \"glance-377c-account-create-update-4zm4s\" (UID: \"bfde9ba3-fda5-496b-8ee5-52430e61f02a\") " pod="openstack/glance-377c-account-create-update-4zm4s" Feb 02 10:56:39 crc kubenswrapper[4782]: I0202 10:56:39.422172 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bfde9ba3-fda5-496b-8ee5-52430e61f02a-operator-scripts\") pod \"glance-377c-account-create-update-4zm4s\" (UID: \"bfde9ba3-fda5-496b-8ee5-52430e61f02a\") " pod="openstack/glance-377c-account-create-update-4zm4s" Feb 02 10:56:39 crc kubenswrapper[4782]: I0202 10:56:39.425878 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-l6d9n"] Feb 02 10:56:39 crc kubenswrapper[4782]: I0202 10:56:39.498282 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-377c-account-create-update-4zm4s"] Feb 02 10:56:39 crc kubenswrapper[4782]: I0202 10:56:39.523378 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvg7l\" (UniqueName: \"kubernetes.io/projected/ce57fffc-4d75-495f-b7ed-28676054f90e-kube-api-access-hvg7l\") pod \"glance-db-create-l6d9n\" (UID: \"ce57fffc-4d75-495f-b7ed-28676054f90e\") " pod="openstack/glance-db-create-l6d9n" Feb 02 10:56:39 crc kubenswrapper[4782]: I0202 10:56:39.523462 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ce57fffc-4d75-495f-b7ed-28676054f90e-operator-scripts\") pod \"glance-db-create-l6d9n\" (UID: \"ce57fffc-4d75-495f-b7ed-28676054f90e\") " pod="openstack/glance-db-create-l6d9n" Feb 02 10:56:39 crc kubenswrapper[4782]: I0202 10:56:39.523513 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqnpd\" (UniqueName: \"kubernetes.io/projected/bfde9ba3-fda5-496b-8ee5-52430e61f02a-kube-api-access-cqnpd\") pod \"glance-377c-account-create-update-4zm4s\" (UID: \"bfde9ba3-fda5-496b-8ee5-52430e61f02a\") " pod="openstack/glance-377c-account-create-update-4zm4s" Feb 02 10:56:39 crc kubenswrapper[4782]: I0202 10:56:39.523562 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bfde9ba3-fda5-496b-8ee5-52430e61f02a-operator-scripts\") pod \"glance-377c-account-create-update-4zm4s\" (UID: \"bfde9ba3-fda5-496b-8ee5-52430e61f02a\") " pod="openstack/glance-377c-account-create-update-4zm4s" Feb 02 10:56:39 crc kubenswrapper[4782]: I0202 10:56:39.524416 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bfde9ba3-fda5-496b-8ee5-52430e61f02a-operator-scripts\") pod \"glance-377c-account-create-update-4zm4s\" (UID: \"bfde9ba3-fda5-496b-8ee5-52430e61f02a\") " pod="openstack/glance-377c-account-create-update-4zm4s" Feb 02 10:56:39 crc kubenswrapper[4782]: I0202 10:56:39.567585 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqnpd\" (UniqueName: 
\"kubernetes.io/projected/bfde9ba3-fda5-496b-8ee5-52430e61f02a-kube-api-access-cqnpd\") pod \"glance-377c-account-create-update-4zm4s\" (UID: \"bfde9ba3-fda5-496b-8ee5-52430e61f02a\") " pod="openstack/glance-377c-account-create-update-4zm4s" Feb 02 10:56:39 crc kubenswrapper[4782]: I0202 10:56:39.624819 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ce57fffc-4d75-495f-b7ed-28676054f90e-operator-scripts\") pod \"glance-db-create-l6d9n\" (UID: \"ce57fffc-4d75-495f-b7ed-28676054f90e\") " pod="openstack/glance-db-create-l6d9n" Feb 02 10:56:39 crc kubenswrapper[4782]: I0202 10:56:39.625316 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hvg7l\" (UniqueName: \"kubernetes.io/projected/ce57fffc-4d75-495f-b7ed-28676054f90e-kube-api-access-hvg7l\") pod \"glance-db-create-l6d9n\" (UID: \"ce57fffc-4d75-495f-b7ed-28676054f90e\") " pod="openstack/glance-db-create-l6d9n" Feb 02 10:56:39 crc kubenswrapper[4782]: I0202 10:56:39.627044 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ce57fffc-4d75-495f-b7ed-28676054f90e-operator-scripts\") pod \"glance-db-create-l6d9n\" (UID: \"ce57fffc-4d75-495f-b7ed-28676054f90e\") " pod="openstack/glance-db-create-l6d9n" Feb 02 10:56:39 crc kubenswrapper[4782]: I0202 10:56:39.653589 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hvg7l\" (UniqueName: \"kubernetes.io/projected/ce57fffc-4d75-495f-b7ed-28676054f90e-kube-api-access-hvg7l\") pod \"glance-db-create-l6d9n\" (UID: \"ce57fffc-4d75-495f-b7ed-28676054f90e\") " pod="openstack/glance-db-create-l6d9n" Feb 02 10:56:39 crc kubenswrapper[4782]: I0202 10:56:39.701762 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-377c-account-create-update-4zm4s" Feb 02 10:56:39 crc kubenswrapper[4782]: I0202 10:56:39.738408 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-l6d9n" Feb 02 10:56:39 crc kubenswrapper[4782]: I0202 10:56:39.797201 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7f896c8c65-nmlpl" Feb 02 10:56:39 crc kubenswrapper[4782]: I0202 10:56:39.831372 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8x95d\" (UniqueName: \"kubernetes.io/projected/ce7bfaff-9623-45e1-a146-6ea2e85691b8-kube-api-access-8x95d\") pod \"ce7bfaff-9623-45e1-a146-6ea2e85691b8\" (UID: \"ce7bfaff-9623-45e1-a146-6ea2e85691b8\") " Feb 02 10:56:39 crc kubenswrapper[4782]: I0202 10:56:39.831447 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ce7bfaff-9623-45e1-a146-6ea2e85691b8-dns-svc\") pod \"ce7bfaff-9623-45e1-a146-6ea2e85691b8\" (UID: \"ce7bfaff-9623-45e1-a146-6ea2e85691b8\") " Feb 02 10:56:39 crc kubenswrapper[4782]: I0202 10:56:39.831496 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce7bfaff-9623-45e1-a146-6ea2e85691b8-config\") pod \"ce7bfaff-9623-45e1-a146-6ea2e85691b8\" (UID: \"ce7bfaff-9623-45e1-a146-6ea2e85691b8\") " Feb 02 10:56:39 crc kubenswrapper[4782]: I0202 10:56:39.831660 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ce7bfaff-9623-45e1-a146-6ea2e85691b8-ovsdbserver-sb\") pod \"ce7bfaff-9623-45e1-a146-6ea2e85691b8\" (UID: \"ce7bfaff-9623-45e1-a146-6ea2e85691b8\") " Feb 02 10:56:39 crc kubenswrapper[4782]: I0202 10:56:39.851109 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce7bfaff-9623-45e1-a146-6ea2e85691b8-kube-api-access-8x95d" (OuterVolumeSpecName: "kube-api-access-8x95d") pod "ce7bfaff-9623-45e1-a146-6ea2e85691b8" (UID: "ce7bfaff-9623-45e1-a146-6ea2e85691b8"). InnerVolumeSpecName "kube-api-access-8x95d". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:56:39 crc kubenswrapper[4782]: I0202 10:56:39.904026 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce7bfaff-9623-45e1-a146-6ea2e85691b8-config" (OuterVolumeSpecName: "config") pod "ce7bfaff-9623-45e1-a146-6ea2e85691b8" (UID: "ce7bfaff-9623-45e1-a146-6ea2e85691b8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:56:39 crc kubenswrapper[4782]: I0202 10:56:39.904968 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce7bfaff-9623-45e1-a146-6ea2e85691b8-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ce7bfaff-9623-45e1-a146-6ea2e85691b8" (UID: "ce7bfaff-9623-45e1-a146-6ea2e85691b8"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:56:39 crc kubenswrapper[4782]: I0202 10:56:39.910695 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce7bfaff-9623-45e1-a146-6ea2e85691b8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ce7bfaff-9623-45e1-a146-6ea2e85691b8" (UID: "ce7bfaff-9623-45e1-a146-6ea2e85691b8"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:56:39 crc kubenswrapper[4782]: I0202 10:56:39.933937 4782 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ce7bfaff-9623-45e1-a146-6ea2e85691b8-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 02 10:56:39 crc kubenswrapper[4782]: I0202 10:56:39.933982 4782 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce7bfaff-9623-45e1-a146-6ea2e85691b8-config\") on node \"crc\" DevicePath \"\"" Feb 02 10:56:39 crc kubenswrapper[4782]: I0202 10:56:39.933994 4782 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ce7bfaff-9623-45e1-a146-6ea2e85691b8-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 02 10:56:39 crc kubenswrapper[4782]: I0202 10:56:39.934007 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8x95d\" (UniqueName: \"kubernetes.io/projected/ce7bfaff-9623-45e1-a146-6ea2e85691b8-kube-api-access-8x95d\") on node \"crc\" DevicePath \"\"" Feb 02 10:56:39 crc kubenswrapper[4782]: I0202 10:56:39.988550 4782 generic.go:334] "Generic (PLEG): container finished" podID="ce7bfaff-9623-45e1-a146-6ea2e85691b8" containerID="31576d51c9273329fb90622f340af26e410e6884093bd7d0731eb58f619ee308" exitCode=0 Feb 02 10:56:39 crc kubenswrapper[4782]: I0202 10:56:39.988585 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f896c8c65-nmlpl" event={"ID":"ce7bfaff-9623-45e1-a146-6ea2e85691b8","Type":"ContainerDied","Data":"31576d51c9273329fb90622f340af26e410e6884093bd7d0731eb58f619ee308"} Feb 02 10:56:39 crc kubenswrapper[4782]: I0202 10:56:39.988608 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f896c8c65-nmlpl" event={"ID":"ce7bfaff-9623-45e1-a146-6ea2e85691b8","Type":"ContainerDied","Data":"cb78cf6d16099541f3ebf963695779d122ec9f7aca28fae1d462165f83b98ae3"} Feb 02 10:56:39 crc kubenswrapper[4782]: I0202 10:56:39.988626 4782 scope.go:117] "RemoveContainer" containerID="31576d51c9273329fb90622f340af26e410e6884093bd7d0731eb58f619ee308" Feb 02 10:56:39 crc kubenswrapper[4782]: I0202 10:56:39.988966 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7f896c8c65-nmlpl" Feb 02 10:56:40 crc kubenswrapper[4782]: I0202 10:56:40.020486 4782 scope.go:117] "RemoveContainer" containerID="4d742499f4f00577342e456e85946aa923f91edb7a9d4ea740da3a31b21f81d5" Feb 02 10:56:40 crc kubenswrapper[4782]: I0202 10:56:40.030515 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-nmlpl"] Feb 02 10:56:40 crc kubenswrapper[4782]: I0202 10:56:40.039167 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-nmlpl"] Feb 02 10:56:40 crc kubenswrapper[4782]: I0202 10:56:40.043315 4782 scope.go:117] "RemoveContainer" containerID="31576d51c9273329fb90622f340af26e410e6884093bd7d0731eb58f619ee308" Feb 02 10:56:40 crc kubenswrapper[4782]: E0202 10:56:40.043790 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"31576d51c9273329fb90622f340af26e410e6884093bd7d0731eb58f619ee308\": container with ID starting with 31576d51c9273329fb90622f340af26e410e6884093bd7d0731eb58f619ee308 not found: ID does not exist" containerID="31576d51c9273329fb90622f340af26e410e6884093bd7d0731eb58f619ee308" Feb 02 10:56:40 crc kubenswrapper[4782]: I0202 10:56:40.043828 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31576d51c9273329fb90622f340af26e410e6884093bd7d0731eb58f619ee308"} err="failed to get container status \"31576d51c9273329fb90622f340af26e410e6884093bd7d0731eb58f619ee308\": rpc error: code = NotFound desc = could not find container \"31576d51c9273329fb90622f340af26e410e6884093bd7d0731eb58f619ee308\": container with ID starting with 31576d51c9273329fb90622f340af26e410e6884093bd7d0731eb58f619ee308 not found: ID does not exist" Feb 02 10:56:40 crc kubenswrapper[4782]: I0202 10:56:40.043852 4782 scope.go:117] "RemoveContainer" containerID="4d742499f4f00577342e456e85946aa923f91edb7a9d4ea740da3a31b21f81d5" Feb 02 10:56:40 crc kubenswrapper[4782]: E0202 10:56:40.044222 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4d742499f4f00577342e456e85946aa923f91edb7a9d4ea740da3a31b21f81d5\": container with ID starting with 4d742499f4f00577342e456e85946aa923f91edb7a9d4ea740da3a31b21f81d5 not found: ID does not exist" containerID="4d742499f4f00577342e456e85946aa923f91edb7a9d4ea740da3a31b21f81d5" Feb 02 10:56:40 crc kubenswrapper[4782]: I0202 10:56:40.044246 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d742499f4f00577342e456e85946aa923f91edb7a9d4ea740da3a31b21f81d5"} err="failed to get container status \"4d742499f4f00577342e456e85946aa923f91edb7a9d4ea740da3a31b21f81d5\": rpc error: code = NotFound desc = could not find container \"4d742499f4f00577342e456e85946aa923f91edb7a9d4ea740da3a31b21f81d5\": container with ID starting with 4d742499f4f00577342e456e85946aa923f91edb7a9d4ea740da3a31b21f81d5 not found: ID does not exist" Feb 02 10:56:40 crc kubenswrapper[4782]: I0202 10:56:40.250085 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-377c-account-create-update-4zm4s"] Feb 02 10:56:40 crc kubenswrapper[4782]: I0202 10:56:40.346087 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-l6d9n"] Feb 02 10:56:40 crc kubenswrapper[4782]: I0202 10:56:40.833172 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="ce7bfaff-9623-45e1-a146-6ea2e85691b8" path="/var/lib/kubelet/pods/ce7bfaff-9623-45e1-a146-6ea2e85691b8/volumes" Feb 02 10:56:41 crc kubenswrapper[4782]: I0202 10:56:41.011518 4782 generic.go:334] "Generic (PLEG): container finished" podID="bfde9ba3-fda5-496b-8ee5-52430e61f02a" containerID="922a5052c537ca60debaeb30c310ad62b9d6cc2296c5f5cb93deeef6d784a0c2" exitCode=0 Feb 02 10:56:41 crc kubenswrapper[4782]: I0202 10:56:41.011589 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-377c-account-create-update-4zm4s" event={"ID":"bfde9ba3-fda5-496b-8ee5-52430e61f02a","Type":"ContainerDied","Data":"922a5052c537ca60debaeb30c310ad62b9d6cc2296c5f5cb93deeef6d784a0c2"} Feb 02 10:56:41 crc kubenswrapper[4782]: I0202 10:56:41.011617 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-377c-account-create-update-4zm4s" event={"ID":"bfde9ba3-fda5-496b-8ee5-52430e61f02a","Type":"ContainerStarted","Data":"6fa69f75d3ab61cbf3d27efd1496c902da7a79fa8b733a929d055ffcecfa49f8"} Feb 02 10:56:41 crc kubenswrapper[4782]: I0202 10:56:41.014969 4782 generic.go:334] "Generic (PLEG): container finished" podID="ce57fffc-4d75-495f-b7ed-28676054f90e" containerID="2a0dfecd12eefed7e04fa4bd8706afbc2a21a95326fc9fb0c721694048febe14" exitCode=0 Feb 02 10:56:41 crc kubenswrapper[4782]: I0202 10:56:41.015191 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-l6d9n" event={"ID":"ce57fffc-4d75-495f-b7ed-28676054f90e","Type":"ContainerDied","Data":"2a0dfecd12eefed7e04fa4bd8706afbc2a21a95326fc9fb0c721694048febe14"} Feb 02 10:56:41 crc kubenswrapper[4782]: I0202 10:56:41.015246 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-l6d9n" event={"ID":"ce57fffc-4d75-495f-b7ed-28676054f90e","Type":"ContainerStarted","Data":"4ee47379cf84f1e497897eaa6502d9f20f12d223c36ef2bb6fab28916234cc24"} Feb 02 10:56:41 crc kubenswrapper[4782]: I0202 10:56:41.022765 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-4d24k"] Feb 02 10:56:41 crc kubenswrapper[4782]: E0202 10:56:41.023272 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce7bfaff-9623-45e1-a146-6ea2e85691b8" containerName="dnsmasq-dns" Feb 02 10:56:41 crc kubenswrapper[4782]: I0202 10:56:41.023337 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce7bfaff-9623-45e1-a146-6ea2e85691b8" containerName="dnsmasq-dns" Feb 02 10:56:41 crc kubenswrapper[4782]: E0202 10:56:41.023406 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce7bfaff-9623-45e1-a146-6ea2e85691b8" containerName="init" Feb 02 10:56:41 crc kubenswrapper[4782]: I0202 10:56:41.023478 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce7bfaff-9623-45e1-a146-6ea2e85691b8" containerName="init" Feb 02 10:56:41 crc kubenswrapper[4782]: I0202 10:56:41.023717 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce7bfaff-9623-45e1-a146-6ea2e85691b8" containerName="dnsmasq-dns" Feb 02 10:56:41 crc kubenswrapper[4782]: I0202 10:56:41.024924 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-4d24k" Feb 02 10:56:41 crc kubenswrapper[4782]: I0202 10:56:41.032126 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-4d24k"] Feb 02 10:56:41 crc kubenswrapper[4782]: I0202 10:56:41.034148 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Feb 02 10:56:41 crc kubenswrapper[4782]: I0202 10:56:41.164443 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qw7cz\" (UniqueName: \"kubernetes.io/projected/e9ee52cc-7cc9-46d3-aed7-67cdc48551c7-kube-api-access-qw7cz\") pod \"root-account-create-update-4d24k\" (UID: \"e9ee52cc-7cc9-46d3-aed7-67cdc48551c7\") " pod="openstack/root-account-create-update-4d24k" Feb 02 10:56:41 crc kubenswrapper[4782]: I0202 10:56:41.164545 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e9ee52cc-7cc9-46d3-aed7-67cdc48551c7-operator-scripts\") pod \"root-account-create-update-4d24k\" (UID: \"e9ee52cc-7cc9-46d3-aed7-67cdc48551c7\") " pod="openstack/root-account-create-update-4d24k" Feb 02 10:56:41 crc kubenswrapper[4782]: I0202 10:56:41.266167 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qw7cz\" (UniqueName: \"kubernetes.io/projected/e9ee52cc-7cc9-46d3-aed7-67cdc48551c7-kube-api-access-qw7cz\") pod \"root-account-create-update-4d24k\" (UID: \"e9ee52cc-7cc9-46d3-aed7-67cdc48551c7\") " pod="openstack/root-account-create-update-4d24k" Feb 02 10:56:41 crc kubenswrapper[4782]: I0202 10:56:41.266242 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e9ee52cc-7cc9-46d3-aed7-67cdc48551c7-operator-scripts\") pod \"root-account-create-update-4d24k\" (UID: \"e9ee52cc-7cc9-46d3-aed7-67cdc48551c7\") " pod="openstack/root-account-create-update-4d24k" Feb 02 10:56:41 crc kubenswrapper[4782]: I0202 10:56:41.267015 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e9ee52cc-7cc9-46d3-aed7-67cdc48551c7-operator-scripts\") pod \"root-account-create-update-4d24k\" (UID: \"e9ee52cc-7cc9-46d3-aed7-67cdc48551c7\") " pod="openstack/root-account-create-update-4d24k" Feb 02 10:56:41 crc kubenswrapper[4782]: I0202 10:56:41.286833 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qw7cz\" (UniqueName: \"kubernetes.io/projected/e9ee52cc-7cc9-46d3-aed7-67cdc48551c7-kube-api-access-qw7cz\") pod \"root-account-create-update-4d24k\" (UID: \"e9ee52cc-7cc9-46d3-aed7-67cdc48551c7\") " pod="openstack/root-account-create-update-4d24k" Feb 02 10:56:41 crc kubenswrapper[4782]: I0202 10:56:41.359431 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-4d24k" Feb 02 10:56:41 crc kubenswrapper[4782]: I0202 10:56:41.818384 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-4d24k"] Feb 02 10:56:41 crc kubenswrapper[4782]: W0202 10:56:41.823020 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode9ee52cc_7cc9_46d3_aed7_67cdc48551c7.slice/crio-5ce4a8ae5f9d9582b5fa591cecf849d094c378872af79d893e6b03c4fdd01e43 WatchSource:0}: Error finding container 5ce4a8ae5f9d9582b5fa591cecf849d094c378872af79d893e6b03c4fdd01e43: Status 404 returned error can't find the container with id 5ce4a8ae5f9d9582b5fa591cecf849d094c378872af79d893e6b03c4fdd01e43 Feb 02 10:56:42 crc kubenswrapper[4782]: I0202 10:56:42.024626 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-4d24k" event={"ID":"e9ee52cc-7cc9-46d3-aed7-67cdc48551c7","Type":"ContainerStarted","Data":"ce6913cbdfadb84393c08d16197643efcccd14ec7c86e1016dba2acac54b37e6"} Feb 02 10:56:42 crc kubenswrapper[4782]: I0202 10:56:42.025237 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-4d24k" event={"ID":"e9ee52cc-7cc9-46d3-aed7-67cdc48551c7","Type":"ContainerStarted","Data":"5ce4a8ae5f9d9582b5fa591cecf849d094c378872af79d893e6b03c4fdd01e43"} Feb 02 10:56:42 crc kubenswrapper[4782]: I0202 10:56:42.051151 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-4d24k" podStartSLOduration=2.051132633 podStartE2EDuration="2.051132633s" podCreationTimestamp="2026-02-02 10:56:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:56:42.044510084 +0000 UTC m=+1081.928702800" watchObservedRunningTime="2026-02-02 10:56:42.051132633 +0000 UTC m=+1081.935325349" Feb 02 10:56:42 crc kubenswrapper[4782]: I0202 10:56:42.414403 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-377c-account-create-update-4zm4s" Feb 02 10:56:42 crc kubenswrapper[4782]: I0202 10:56:42.426967 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-l6d9n" Feb 02 10:56:42 crc kubenswrapper[4782]: I0202 10:56:42.483731 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bfde9ba3-fda5-496b-8ee5-52430e61f02a-operator-scripts\") pod \"bfde9ba3-fda5-496b-8ee5-52430e61f02a\" (UID: \"bfde9ba3-fda5-496b-8ee5-52430e61f02a\") " Feb 02 10:56:42 crc kubenswrapper[4782]: I0202 10:56:42.483886 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ce57fffc-4d75-495f-b7ed-28676054f90e-operator-scripts\") pod \"ce57fffc-4d75-495f-b7ed-28676054f90e\" (UID: \"ce57fffc-4d75-495f-b7ed-28676054f90e\") " Feb 02 10:56:42 crc kubenswrapper[4782]: I0202 10:56:42.483922 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hvg7l\" (UniqueName: \"kubernetes.io/projected/ce57fffc-4d75-495f-b7ed-28676054f90e-kube-api-access-hvg7l\") pod \"ce57fffc-4d75-495f-b7ed-28676054f90e\" (UID: \"ce57fffc-4d75-495f-b7ed-28676054f90e\") " Feb 02 10:56:42 crc kubenswrapper[4782]: I0202 10:56:42.483979 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cqnpd\" (UniqueName: \"kubernetes.io/projected/bfde9ba3-fda5-496b-8ee5-52430e61f02a-kube-api-access-cqnpd\") pod \"bfde9ba3-fda5-496b-8ee5-52430e61f02a\" (UID: \"bfde9ba3-fda5-496b-8ee5-52430e61f02a\") " Feb 02 10:56:42 crc kubenswrapper[4782]: I0202 10:56:42.484891 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bfde9ba3-fda5-496b-8ee5-52430e61f02a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bfde9ba3-fda5-496b-8ee5-52430e61f02a" (UID: "bfde9ba3-fda5-496b-8ee5-52430e61f02a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:56:42 crc kubenswrapper[4782]: I0202 10:56:42.484949 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce57fffc-4d75-495f-b7ed-28676054f90e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ce57fffc-4d75-495f-b7ed-28676054f90e" (UID: "ce57fffc-4d75-495f-b7ed-28676054f90e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:56:42 crc kubenswrapper[4782]: I0202 10:56:42.489761 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bfde9ba3-fda5-496b-8ee5-52430e61f02a-kube-api-access-cqnpd" (OuterVolumeSpecName: "kube-api-access-cqnpd") pod "bfde9ba3-fda5-496b-8ee5-52430e61f02a" (UID: "bfde9ba3-fda5-496b-8ee5-52430e61f02a"). InnerVolumeSpecName "kube-api-access-cqnpd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:56:42 crc kubenswrapper[4782]: I0202 10:56:42.506447 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce57fffc-4d75-495f-b7ed-28676054f90e-kube-api-access-hvg7l" (OuterVolumeSpecName: "kube-api-access-hvg7l") pod "ce57fffc-4d75-495f-b7ed-28676054f90e" (UID: "ce57fffc-4d75-495f-b7ed-28676054f90e"). InnerVolumeSpecName "kube-api-access-hvg7l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:56:42 crc kubenswrapper[4782]: I0202 10:56:42.585989 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cqnpd\" (UniqueName: \"kubernetes.io/projected/bfde9ba3-fda5-496b-8ee5-52430e61f02a-kube-api-access-cqnpd\") on node \"crc\" DevicePath \"\"" Feb 02 10:56:42 crc kubenswrapper[4782]: I0202 10:56:42.586024 4782 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bfde9ba3-fda5-496b-8ee5-52430e61f02a-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 10:56:42 crc kubenswrapper[4782]: I0202 10:56:42.586036 4782 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ce57fffc-4d75-495f-b7ed-28676054f90e-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 10:56:42 crc kubenswrapper[4782]: I0202 10:56:42.586044 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hvg7l\" (UniqueName: \"kubernetes.io/projected/ce57fffc-4d75-495f-b7ed-28676054f90e-kube-api-access-hvg7l\") on node \"crc\" DevicePath \"\"" Feb 02 10:56:43 crc kubenswrapper[4782]: I0202 10:56:43.031236 4782 generic.go:334] "Generic (PLEG): container finished" podID="e9ee52cc-7cc9-46d3-aed7-67cdc48551c7" containerID="ce6913cbdfadb84393c08d16197643efcccd14ec7c86e1016dba2acac54b37e6" exitCode=0 Feb 02 10:56:43 crc kubenswrapper[4782]: I0202 10:56:43.031308 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-4d24k" event={"ID":"e9ee52cc-7cc9-46d3-aed7-67cdc48551c7","Type":"ContainerDied","Data":"ce6913cbdfadb84393c08d16197643efcccd14ec7c86e1016dba2acac54b37e6"} Feb 02 10:56:43 crc kubenswrapper[4782]: I0202 10:56:43.033430 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-377c-account-create-update-4zm4s" event={"ID":"bfde9ba3-fda5-496b-8ee5-52430e61f02a","Type":"ContainerDied","Data":"6fa69f75d3ab61cbf3d27efd1496c902da7a79fa8b733a929d055ffcecfa49f8"} Feb 02 10:56:43 crc kubenswrapper[4782]: I0202 10:56:43.033455 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6fa69f75d3ab61cbf3d27efd1496c902da7a79fa8b733a929d055ffcecfa49f8" Feb 02 10:56:43 crc kubenswrapper[4782]: I0202 10:56:43.033489 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-377c-account-create-update-4zm4s" Feb 02 10:56:43 crc kubenswrapper[4782]: I0202 10:56:43.035631 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-l6d9n" event={"ID":"ce57fffc-4d75-495f-b7ed-28676054f90e","Type":"ContainerDied","Data":"4ee47379cf84f1e497897eaa6502d9f20f12d223c36ef2bb6fab28916234cc24"} Feb 02 10:56:43 crc kubenswrapper[4782]: I0202 10:56:43.035679 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-l6d9n" Feb 02 10:56:43 crc kubenswrapper[4782]: I0202 10:56:43.035685 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4ee47379cf84f1e497897eaa6502d9f20f12d223c36ef2bb6fab28916234cc24" Feb 02 10:56:43 crc kubenswrapper[4782]: I0202 10:56:43.670547 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-77ps5"] Feb 02 10:56:43 crc kubenswrapper[4782]: E0202 10:56:43.671438 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce57fffc-4d75-495f-b7ed-28676054f90e" containerName="mariadb-database-create" Feb 02 10:56:43 crc kubenswrapper[4782]: I0202 10:56:43.671546 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce57fffc-4d75-495f-b7ed-28676054f90e" containerName="mariadb-database-create" Feb 02 10:56:43 crc kubenswrapper[4782]: E0202 10:56:43.671614 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bfde9ba3-fda5-496b-8ee5-52430e61f02a" containerName="mariadb-account-create-update" Feb 02 10:56:43 crc kubenswrapper[4782]: I0202 10:56:43.671689 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfde9ba3-fda5-496b-8ee5-52430e61f02a" containerName="mariadb-account-create-update" Feb 02 10:56:43 crc kubenswrapper[4782]: I0202 10:56:43.671900 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce57fffc-4d75-495f-b7ed-28676054f90e" containerName="mariadb-database-create" Feb 02 10:56:43 crc kubenswrapper[4782]: I0202 10:56:43.671973 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="bfde9ba3-fda5-496b-8ee5-52430e61f02a" containerName="mariadb-account-create-update" Feb 02 10:56:43 crc kubenswrapper[4782]: I0202 10:56:43.672502 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-77ps5" Feb 02 10:56:43 crc kubenswrapper[4782]: I0202 10:56:43.684578 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-77ps5"] Feb 02 10:56:43 crc kubenswrapper[4782]: I0202 10:56:43.810175 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d561a4a7-bb99-43c6-859e-e3269a35a073-operator-scripts\") pod \"keystone-db-create-77ps5\" (UID: \"d561a4a7-bb99-43c6-859e-e3269a35a073\") " pod="openstack/keystone-db-create-77ps5" Feb 02 10:56:43 crc kubenswrapper[4782]: I0202 10:56:43.810540 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvgrt\" (UniqueName: \"kubernetes.io/projected/d561a4a7-bb99-43c6-859e-e3269a35a073-kube-api-access-mvgrt\") pod \"keystone-db-create-77ps5\" (UID: \"d561a4a7-bb99-43c6-859e-e3269a35a073\") " pod="openstack/keystone-db-create-77ps5" Feb 02 10:56:43 crc kubenswrapper[4782]: I0202 10:56:43.823948 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-0259-account-create-update-n5p89"] Feb 02 10:56:43 crc kubenswrapper[4782]: I0202 10:56:43.824993 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-0259-account-create-update-n5p89" Feb 02 10:56:43 crc kubenswrapper[4782]: I0202 10:56:43.826838 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Feb 02 10:56:43 crc kubenswrapper[4782]: I0202 10:56:43.833271 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-0259-account-create-update-n5p89"] Feb 02 10:56:43 crc kubenswrapper[4782]: I0202 10:56:43.913070 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d561a4a7-bb99-43c6-859e-e3269a35a073-operator-scripts\") pod \"keystone-db-create-77ps5\" (UID: \"d561a4a7-bb99-43c6-859e-e3269a35a073\") " pod="openstack/keystone-db-create-77ps5" Feb 02 10:56:43 crc kubenswrapper[4782]: I0202 10:56:43.915122 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d561a4a7-bb99-43c6-859e-e3269a35a073-operator-scripts\") pod \"keystone-db-create-77ps5\" (UID: \"d561a4a7-bb99-43c6-859e-e3269a35a073\") " pod="openstack/keystone-db-create-77ps5" Feb 02 10:56:43 crc kubenswrapper[4782]: I0202 10:56:43.915333 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ng8tj\" (UniqueName: \"kubernetes.io/projected/80dad8de-560e-4ff5-b196-aa0bbbc2be15-kube-api-access-ng8tj\") pod \"keystone-0259-account-create-update-n5p89\" (UID: \"80dad8de-560e-4ff5-b196-aa0bbbc2be15\") " pod="openstack/keystone-0259-account-create-update-n5p89" Feb 02 10:56:43 crc kubenswrapper[4782]: I0202 10:56:43.915428 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mvgrt\" (UniqueName: \"kubernetes.io/projected/d561a4a7-bb99-43c6-859e-e3269a35a073-kube-api-access-mvgrt\") pod \"keystone-db-create-77ps5\" (UID: \"d561a4a7-bb99-43c6-859e-e3269a35a073\") " pod="openstack/keystone-db-create-77ps5" Feb 02 10:56:43 crc kubenswrapper[4782]: I0202 10:56:43.915504 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/80dad8de-560e-4ff5-b196-aa0bbbc2be15-operator-scripts\") pod \"keystone-0259-account-create-update-n5p89\" (UID: \"80dad8de-560e-4ff5-b196-aa0bbbc2be15\") " pod="openstack/keystone-0259-account-create-update-n5p89" Feb 02 10:56:43 crc kubenswrapper[4782]: I0202 10:56:43.944231 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mvgrt\" (UniqueName: \"kubernetes.io/projected/d561a4a7-bb99-43c6-859e-e3269a35a073-kube-api-access-mvgrt\") pod \"keystone-db-create-77ps5\" (UID: \"d561a4a7-bb99-43c6-859e-e3269a35a073\") " pod="openstack/keystone-db-create-77ps5" Feb 02 10:56:43 crc kubenswrapper[4782]: I0202 10:56:43.987769 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-77ps5" Feb 02 10:56:44 crc kubenswrapper[4782]: I0202 10:56:44.017230 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/80dad8de-560e-4ff5-b196-aa0bbbc2be15-operator-scripts\") pod \"keystone-0259-account-create-update-n5p89\" (UID: \"80dad8de-560e-4ff5-b196-aa0bbbc2be15\") " pod="openstack/keystone-0259-account-create-update-n5p89" Feb 02 10:56:44 crc kubenswrapper[4782]: I0202 10:56:44.017370 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ng8tj\" (UniqueName: \"kubernetes.io/projected/80dad8de-560e-4ff5-b196-aa0bbbc2be15-kube-api-access-ng8tj\") pod \"keystone-0259-account-create-update-n5p89\" (UID: \"80dad8de-560e-4ff5-b196-aa0bbbc2be15\") " pod="openstack/keystone-0259-account-create-update-n5p89" Feb 02 10:56:44 crc kubenswrapper[4782]: I0202 10:56:44.018339 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/80dad8de-560e-4ff5-b196-aa0bbbc2be15-operator-scripts\") pod \"keystone-0259-account-create-update-n5p89\" (UID: \"80dad8de-560e-4ff5-b196-aa0bbbc2be15\") " pod="openstack/keystone-0259-account-create-update-n5p89" Feb 02 10:56:44 crc kubenswrapper[4782]: I0202 10:56:44.042091 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ng8tj\" (UniqueName: \"kubernetes.io/projected/80dad8de-560e-4ff5-b196-aa0bbbc2be15-kube-api-access-ng8tj\") pod \"keystone-0259-account-create-update-n5p89\" (UID: \"80dad8de-560e-4ff5-b196-aa0bbbc2be15\") " pod="openstack/keystone-0259-account-create-update-n5p89" Feb 02 10:56:44 crc kubenswrapper[4782]: I0202 10:56:44.130705 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-6cg8m"] Feb 02 10:56:44 crc kubenswrapper[4782]: I0202 10:56:44.131927 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-6cg8m" Feb 02 10:56:44 crc kubenswrapper[4782]: I0202 10:56:44.137453 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-6cg8m"] Feb 02 10:56:44 crc kubenswrapper[4782]: I0202 10:56:44.144377 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-0259-account-create-update-n5p89" Feb 02 10:56:44 crc kubenswrapper[4782]: I0202 10:56:44.220731 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vj25r\" (UniqueName: \"kubernetes.io/projected/1db12436-a377-40c9-bc4e-9fe301b0b4cb-kube-api-access-vj25r\") pod \"placement-db-create-6cg8m\" (UID: \"1db12436-a377-40c9-bc4e-9fe301b0b4cb\") " pod="openstack/placement-db-create-6cg8m" Feb 02 10:56:44 crc kubenswrapper[4782]: I0202 10:56:44.220850 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1db12436-a377-40c9-bc4e-9fe301b0b4cb-operator-scripts\") pod \"placement-db-create-6cg8m\" (UID: \"1db12436-a377-40c9-bc4e-9fe301b0b4cb\") " pod="openstack/placement-db-create-6cg8m" Feb 02 10:56:44 crc kubenswrapper[4782]: I0202 10:56:44.260066 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-2124-account-create-update-npd9h"] Feb 02 10:56:44 crc kubenswrapper[4782]: I0202 10:56:44.262149 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-2124-account-create-update-npd9h" Feb 02 10:56:44 crc kubenswrapper[4782]: I0202 10:56:44.269945 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Feb 02 10:56:44 crc kubenswrapper[4782]: I0202 10:56:44.274868 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-2124-account-create-update-npd9h"] Feb 02 10:56:44 crc kubenswrapper[4782]: I0202 10:56:44.329386 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b358cda4-3c47-4270-ada7-f7653d5da96f-operator-scripts\") pod \"placement-2124-account-create-update-npd9h\" (UID: \"b358cda4-3c47-4270-ada7-f7653d5da96f\") " pod="openstack/placement-2124-account-create-update-npd9h" Feb 02 10:56:44 crc kubenswrapper[4782]: I0202 10:56:44.329446 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1db12436-a377-40c9-bc4e-9fe301b0b4cb-operator-scripts\") pod \"placement-db-create-6cg8m\" (UID: \"1db12436-a377-40c9-bc4e-9fe301b0b4cb\") " pod="openstack/placement-db-create-6cg8m" Feb 02 10:56:44 crc kubenswrapper[4782]: I0202 10:56:44.329523 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vj25r\" (UniqueName: \"kubernetes.io/projected/1db12436-a377-40c9-bc4e-9fe301b0b4cb-kube-api-access-vj25r\") pod \"placement-db-create-6cg8m\" (UID: \"1db12436-a377-40c9-bc4e-9fe301b0b4cb\") " pod="openstack/placement-db-create-6cg8m" Feb 02 10:56:44 crc kubenswrapper[4782]: I0202 10:56:44.329660 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-knc62\" (UniqueName: \"kubernetes.io/projected/b358cda4-3c47-4270-ada7-f7653d5da96f-kube-api-access-knc62\") pod \"placement-2124-account-create-update-npd9h\" (UID: \"b358cda4-3c47-4270-ada7-f7653d5da96f\") " pod="openstack/placement-2124-account-create-update-npd9h" Feb 02 10:56:44 crc kubenswrapper[4782]: I0202 10:56:44.331381 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1db12436-a377-40c9-bc4e-9fe301b0b4cb-operator-scripts\") pod \"placement-db-create-6cg8m\" (UID: \"1db12436-a377-40c9-bc4e-9fe301b0b4cb\") " pod="openstack/placement-db-create-6cg8m" Feb 02 10:56:44 crc kubenswrapper[4782]: I0202 10:56:44.357211 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vj25r\" (UniqueName: \"kubernetes.io/projected/1db12436-a377-40c9-bc4e-9fe301b0b4cb-kube-api-access-vj25r\") pod \"placement-db-create-6cg8m\" (UID: \"1db12436-a377-40c9-bc4e-9fe301b0b4cb\") " pod="openstack/placement-db-create-6cg8m" Feb 02 10:56:44 crc kubenswrapper[4782]: I0202 10:56:44.436225 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-knc62\" (UniqueName: \"kubernetes.io/projected/b358cda4-3c47-4270-ada7-f7653d5da96f-kube-api-access-knc62\") pod \"placement-2124-account-create-update-npd9h\" (UID: \"b358cda4-3c47-4270-ada7-f7653d5da96f\") " pod="openstack/placement-2124-account-create-update-npd9h" Feb 02 10:56:44 crc kubenswrapper[4782]: I0202 10:56:44.436284 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/b358cda4-3c47-4270-ada7-f7653d5da96f-operator-scripts\") pod \"placement-2124-account-create-update-npd9h\" (UID: \"b358cda4-3c47-4270-ada7-f7653d5da96f\") " pod="openstack/placement-2124-account-create-update-npd9h" Feb 02 10:56:44 crc kubenswrapper[4782]: I0202 10:56:44.437383 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b358cda4-3c47-4270-ada7-f7653d5da96f-operator-scripts\") pod \"placement-2124-account-create-update-npd9h\" (UID: \"b358cda4-3c47-4270-ada7-f7653d5da96f\") " pod="openstack/placement-2124-account-create-update-npd9h" Feb 02 10:56:44 crc kubenswrapper[4782]: I0202 10:56:44.453529 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-6cg8m" Feb 02 10:56:44 crc kubenswrapper[4782]: I0202 10:56:44.463842 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-knc62\" (UniqueName: \"kubernetes.io/projected/b358cda4-3c47-4270-ada7-f7653d5da96f-kube-api-access-knc62\") pod \"placement-2124-account-create-update-npd9h\" (UID: \"b358cda4-3c47-4270-ada7-f7653d5da96f\") " pod="openstack/placement-2124-account-create-update-npd9h" Feb 02 10:56:44 crc kubenswrapper[4782]: I0202 10:56:44.480206 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-4d24k" Feb 02 10:56:44 crc kubenswrapper[4782]: I0202 10:56:44.504550 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-77ps5"] Feb 02 10:56:44 crc kubenswrapper[4782]: I0202 10:56:44.541762 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qw7cz\" (UniqueName: \"kubernetes.io/projected/e9ee52cc-7cc9-46d3-aed7-67cdc48551c7-kube-api-access-qw7cz\") pod \"e9ee52cc-7cc9-46d3-aed7-67cdc48551c7\" (UID: \"e9ee52cc-7cc9-46d3-aed7-67cdc48551c7\") " Feb 02 10:56:44 crc kubenswrapper[4782]: I0202 10:56:44.542032 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e9ee52cc-7cc9-46d3-aed7-67cdc48551c7-operator-scripts\") pod \"e9ee52cc-7cc9-46d3-aed7-67cdc48551c7\" (UID: \"e9ee52cc-7cc9-46d3-aed7-67cdc48551c7\") " Feb 02 10:56:44 crc kubenswrapper[4782]: I0202 10:56:44.543106 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e9ee52cc-7cc9-46d3-aed7-67cdc48551c7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e9ee52cc-7cc9-46d3-aed7-67cdc48551c7" (UID: "e9ee52cc-7cc9-46d3-aed7-67cdc48551c7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:56:44 crc kubenswrapper[4782]: I0202 10:56:44.545254 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9ee52cc-7cc9-46d3-aed7-67cdc48551c7-kube-api-access-qw7cz" (OuterVolumeSpecName: "kube-api-access-qw7cz") pod "e9ee52cc-7cc9-46d3-aed7-67cdc48551c7" (UID: "e9ee52cc-7cc9-46d3-aed7-67cdc48551c7"). InnerVolumeSpecName "kube-api-access-qw7cz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:56:44 crc kubenswrapper[4782]: I0202 10:56:44.591761 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-2124-account-create-update-npd9h" Feb 02 10:56:44 crc kubenswrapper[4782]: I0202 10:56:44.620985 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-bwx58"] Feb 02 10:56:44 crc kubenswrapper[4782]: E0202 10:56:44.621743 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9ee52cc-7cc9-46d3-aed7-67cdc48551c7" containerName="mariadb-account-create-update" Feb 02 10:56:44 crc kubenswrapper[4782]: I0202 10:56:44.621767 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9ee52cc-7cc9-46d3-aed7-67cdc48551c7" containerName="mariadb-account-create-update" Feb 02 10:56:44 crc kubenswrapper[4782]: I0202 10:56:44.622294 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9ee52cc-7cc9-46d3-aed7-67cdc48551c7" containerName="mariadb-account-create-update" Feb 02 10:56:44 crc kubenswrapper[4782]: I0202 10:56:44.622800 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-bwx58" Feb 02 10:56:44 crc kubenswrapper[4782]: I0202 10:56:44.629254 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Feb 02 10:56:44 crc kubenswrapper[4782]: I0202 10:56:44.629451 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-57vkh" Feb 02 10:56:44 crc kubenswrapper[4782]: I0202 10:56:44.645491 4782 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e9ee52cc-7cc9-46d3-aed7-67cdc48551c7-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 10:56:44 crc kubenswrapper[4782]: I0202 10:56:44.645523 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qw7cz\" (UniqueName: \"kubernetes.io/projected/e9ee52cc-7cc9-46d3-aed7-67cdc48551c7-kube-api-access-qw7cz\") on node \"crc\" DevicePath \"\"" Feb 02 10:56:44 crc kubenswrapper[4782]: I0202 10:56:44.647101 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-bwx58"] Feb 02 10:56:44 crc kubenswrapper[4782]: I0202 10:56:44.746648 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1f885e8a-3dc8-4c07-ae3c-4c8ab072abc0-db-sync-config-data\") pod \"glance-db-sync-bwx58\" (UID: \"1f885e8a-3dc8-4c07-ae3c-4c8ab072abc0\") " pod="openstack/glance-db-sync-bwx58" Feb 02 10:56:44 crc kubenswrapper[4782]: I0202 10:56:44.746984 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f885e8a-3dc8-4c07-ae3c-4c8ab072abc0-combined-ca-bundle\") pod \"glance-db-sync-bwx58\" (UID: \"1f885e8a-3dc8-4c07-ae3c-4c8ab072abc0\") " pod="openstack/glance-db-sync-bwx58" Feb 02 10:56:44 crc kubenswrapper[4782]: I0202 10:56:44.747006 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f885e8a-3dc8-4c07-ae3c-4c8ab072abc0-config-data\") pod \"glance-db-sync-bwx58\" (UID: \"1f885e8a-3dc8-4c07-ae3c-4c8ab072abc0\") " pod="openstack/glance-db-sync-bwx58" Feb 02 10:56:44 crc kubenswrapper[4782]: I0202 10:56:44.747037 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xmdb\" (UniqueName: 
\"kubernetes.io/projected/1f885e8a-3dc8-4c07-ae3c-4c8ab072abc0-kube-api-access-7xmdb\") pod \"glance-db-sync-bwx58\" (UID: \"1f885e8a-3dc8-4c07-ae3c-4c8ab072abc0\") " pod="openstack/glance-db-sync-bwx58" Feb 02 10:56:44 crc kubenswrapper[4782]: I0202 10:56:44.764384 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-0259-account-create-update-n5p89"] Feb 02 10:56:44 crc kubenswrapper[4782]: I0202 10:56:44.848186 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f885e8a-3dc8-4c07-ae3c-4c8ab072abc0-combined-ca-bundle\") pod \"glance-db-sync-bwx58\" (UID: \"1f885e8a-3dc8-4c07-ae3c-4c8ab072abc0\") " pod="openstack/glance-db-sync-bwx58" Feb 02 10:56:44 crc kubenswrapper[4782]: I0202 10:56:44.848233 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f885e8a-3dc8-4c07-ae3c-4c8ab072abc0-config-data\") pod \"glance-db-sync-bwx58\" (UID: \"1f885e8a-3dc8-4c07-ae3c-4c8ab072abc0\") " pod="openstack/glance-db-sync-bwx58" Feb 02 10:56:44 crc kubenswrapper[4782]: I0202 10:56:44.848263 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7xmdb\" (UniqueName: \"kubernetes.io/projected/1f885e8a-3dc8-4c07-ae3c-4c8ab072abc0-kube-api-access-7xmdb\") pod \"glance-db-sync-bwx58\" (UID: \"1f885e8a-3dc8-4c07-ae3c-4c8ab072abc0\") " pod="openstack/glance-db-sync-bwx58" Feb 02 10:56:44 crc kubenswrapper[4782]: I0202 10:56:44.848600 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1f885e8a-3dc8-4c07-ae3c-4c8ab072abc0-db-sync-config-data\") pod \"glance-db-sync-bwx58\" (UID: \"1f885e8a-3dc8-4c07-ae3c-4c8ab072abc0\") " pod="openstack/glance-db-sync-bwx58" Feb 02 10:56:44 crc kubenswrapper[4782]: I0202 10:56:44.855094 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1f885e8a-3dc8-4c07-ae3c-4c8ab072abc0-db-sync-config-data\") pod \"glance-db-sync-bwx58\" (UID: \"1f885e8a-3dc8-4c07-ae3c-4c8ab072abc0\") " pod="openstack/glance-db-sync-bwx58" Feb 02 10:56:44 crc kubenswrapper[4782]: I0202 10:56:44.864632 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f885e8a-3dc8-4c07-ae3c-4c8ab072abc0-combined-ca-bundle\") pod \"glance-db-sync-bwx58\" (UID: \"1f885e8a-3dc8-4c07-ae3c-4c8ab072abc0\") " pod="openstack/glance-db-sync-bwx58" Feb 02 10:56:44 crc kubenswrapper[4782]: I0202 10:56:44.873090 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f885e8a-3dc8-4c07-ae3c-4c8ab072abc0-config-data\") pod \"glance-db-sync-bwx58\" (UID: \"1f885e8a-3dc8-4c07-ae3c-4c8ab072abc0\") " pod="openstack/glance-db-sync-bwx58" Feb 02 10:56:44 crc kubenswrapper[4782]: I0202 10:56:44.879321 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7xmdb\" (UniqueName: \"kubernetes.io/projected/1f885e8a-3dc8-4c07-ae3c-4c8ab072abc0-kube-api-access-7xmdb\") pod \"glance-db-sync-bwx58\" (UID: \"1f885e8a-3dc8-4c07-ae3c-4c8ab072abc0\") " pod="openstack/glance-db-sync-bwx58" Feb 02 10:56:44 crc kubenswrapper[4782]: I0202 10:56:44.985112 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-bwx58" Feb 02 10:56:45 crc kubenswrapper[4782]: I0202 10:56:45.065264 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-0259-account-create-update-n5p89" event={"ID":"80dad8de-560e-4ff5-b196-aa0bbbc2be15","Type":"ContainerStarted","Data":"a7e9a4e8ac03aa75d7d2867e0b6e6e12cc8a9019e7c6d838c9869a17f5c4688b"} Feb 02 10:56:45 crc kubenswrapper[4782]: I0202 10:56:45.067749 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-0259-account-create-update-n5p89" event={"ID":"80dad8de-560e-4ff5-b196-aa0bbbc2be15","Type":"ContainerStarted","Data":"249c5a98a778b98fa438ebd5fb9b61464cc131eb48f90e0afab4c1117206b06b"} Feb 02 10:56:45 crc kubenswrapper[4782]: I0202 10:56:45.071490 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-6cg8m"] Feb 02 10:56:45 crc kubenswrapper[4782]: I0202 10:56:45.099843 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-77ps5" event={"ID":"d561a4a7-bb99-43c6-859e-e3269a35a073","Type":"ContainerStarted","Data":"6d8d47213c18788507ca77e5f6162eb6c017b157cfec70f1dfb0ba7075187097"} Feb 02 10:56:45 crc kubenswrapper[4782]: I0202 10:56:45.100067 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-77ps5" event={"ID":"d561a4a7-bb99-43c6-859e-e3269a35a073","Type":"ContainerStarted","Data":"affd51ba873c2ac717264a2c48b401c76e91abb5ac07f259af5a91e5d4c528f2"} Feb 02 10:56:45 crc kubenswrapper[4782]: I0202 10:56:45.107254 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-4d24k" event={"ID":"e9ee52cc-7cc9-46d3-aed7-67cdc48551c7","Type":"ContainerDied","Data":"5ce4a8ae5f9d9582b5fa591cecf849d094c378872af79d893e6b03c4fdd01e43"} Feb 02 10:56:45 crc kubenswrapper[4782]: I0202 10:56:45.107435 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5ce4a8ae5f9d9582b5fa591cecf849d094c378872af79d893e6b03c4fdd01e43" Feb 02 10:56:45 crc kubenswrapper[4782]: I0202 10:56:45.107570 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-4d24k" Feb 02 10:56:45 crc kubenswrapper[4782]: I0202 10:56:45.109465 4782 generic.go:334] "Generic (PLEG): container finished" podID="e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9" containerID="4b22530b4335201f0edeaaeb102aa0e0c1fe781965be9a91cd8a38308cd04cdb" exitCode=0 Feb 02 10:56:45 crc kubenswrapper[4782]: I0202 10:56:45.109528 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9","Type":"ContainerDied","Data":"4b22530b4335201f0edeaaeb102aa0e0c1fe781965be9a91cd8a38308cd04cdb"} Feb 02 10:56:45 crc kubenswrapper[4782]: I0202 10:56:45.111098 4782 generic.go:334] "Generic (PLEG): container finished" podID="02fc338c-2f8c-4e17-8d5f-7a919f4237a2" containerID="391b7b9a21ec8dc296a8482b1cb6f12d695c7d443dbc6915daefa5e4abc60ecb" exitCode=0 Feb 02 10:56:45 crc kubenswrapper[4782]: I0202 10:56:45.111124 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"02fc338c-2f8c-4e17-8d5f-7a919f4237a2","Type":"ContainerDied","Data":"391b7b9a21ec8dc296a8482b1cb6f12d695c7d443dbc6915daefa5e4abc60ecb"} Feb 02 10:56:45 crc kubenswrapper[4782]: I0202 10:56:45.116067 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-0259-account-create-update-n5p89" podStartSLOduration=2.11605171 podStartE2EDuration="2.11605171s" podCreationTimestamp="2026-02-02 10:56:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:56:45.095885673 +0000 UTC m=+1084.980078389" watchObservedRunningTime="2026-02-02 10:56:45.11605171 +0000 UTC m=+1085.000244426" Feb 02 10:56:45 crc kubenswrapper[4782]: W0202 10:56:45.265930 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb358cda4_3c47_4270_ada7_f7653d5da96f.slice/crio-1cd4bff856b66d107afe59a3f692ce638e8c700bf98010f119ac0ac84464d6ed WatchSource:0}: Error finding container 1cd4bff856b66d107afe59a3f692ce638e8c700bf98010f119ac0ac84464d6ed: Status 404 returned error can't find the container with id 1cd4bff856b66d107afe59a3f692ce638e8c700bf98010f119ac0ac84464d6ed Feb 02 10:56:45 crc kubenswrapper[4782]: I0202 10:56:45.326794 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-2124-account-create-update-npd9h"] Feb 02 10:56:45 crc kubenswrapper[4782]: I0202 10:56:45.755129 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-bwx58"] Feb 02 10:56:45 crc kubenswrapper[4782]: W0202 10:56:45.766291 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1f885e8a_3dc8_4c07_ae3c_4c8ab072abc0.slice/crio-4a3dc46d0663b3ed61812db769ddcfdd9d2a4e53a6bf12f869fa34d7038f5e54 WatchSource:0}: Error finding container 4a3dc46d0663b3ed61812db769ddcfdd9d2a4e53a6bf12f869fa34d7038f5e54: Status 404 returned error can't find the container with id 4a3dc46d0663b3ed61812db769ddcfdd9d2a4e53a6bf12f869fa34d7038f5e54 Feb 02 10:56:46 crc kubenswrapper[4782]: I0202 10:56:46.118116 4782 generic.go:334] "Generic (PLEG): container finished" podID="1db12436-a377-40c9-bc4e-9fe301b0b4cb" containerID="91ff00aa29fb6af4c20c4ab6c7010da35390db314e4a9d0dc6101bd74c8cfe7c" exitCode=0 Feb 02 10:56:46 crc kubenswrapper[4782]: I0202 10:56:46.118186 4782 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/placement-db-create-6cg8m" event={"ID":"1db12436-a377-40c9-bc4e-9fe301b0b4cb","Type":"ContainerDied","Data":"91ff00aa29fb6af4c20c4ab6c7010da35390db314e4a9d0dc6101bd74c8cfe7c"} Feb 02 10:56:46 crc kubenswrapper[4782]: I0202 10:56:46.118212 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-6cg8m" event={"ID":"1db12436-a377-40c9-bc4e-9fe301b0b4cb","Type":"ContainerStarted","Data":"df41be568da50845f36d15cb17ab4937618a8a95d19a89ea0f84aa0e72e17800"} Feb 02 10:56:46 crc kubenswrapper[4782]: I0202 10:56:46.120361 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"02fc338c-2f8c-4e17-8d5f-7a919f4237a2","Type":"ContainerStarted","Data":"1ee82a85497580a923b17b361c957e0e8638a120e7df5248f943224523148dd7"} Feb 02 10:56:46 crc kubenswrapper[4782]: I0202 10:56:46.120667 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 02 10:56:46 crc kubenswrapper[4782]: I0202 10:56:46.122905 4782 generic.go:334] "Generic (PLEG): container finished" podID="80dad8de-560e-4ff5-b196-aa0bbbc2be15" containerID="a7e9a4e8ac03aa75d7d2867e0b6e6e12cc8a9019e7c6d838c9869a17f5c4688b" exitCode=0 Feb 02 10:56:46 crc kubenswrapper[4782]: I0202 10:56:46.122954 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-0259-account-create-update-n5p89" event={"ID":"80dad8de-560e-4ff5-b196-aa0bbbc2be15","Type":"ContainerDied","Data":"a7e9a4e8ac03aa75d7d2867e0b6e6e12cc8a9019e7c6d838c9869a17f5c4688b"} Feb 02 10:56:46 crc kubenswrapper[4782]: I0202 10:56:46.124344 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-bwx58" event={"ID":"1f885e8a-3dc8-4c07-ae3c-4c8ab072abc0","Type":"ContainerStarted","Data":"4a3dc46d0663b3ed61812db769ddcfdd9d2a4e53a6bf12f869fa34d7038f5e54"} Feb 02 10:56:46 crc kubenswrapper[4782]: I0202 10:56:46.125858 4782 generic.go:334] "Generic (PLEG): container finished" podID="d561a4a7-bb99-43c6-859e-e3269a35a073" containerID="6d8d47213c18788507ca77e5f6162eb6c017b157cfec70f1dfb0ba7075187097" exitCode=0 Feb 02 10:56:46 crc kubenswrapper[4782]: I0202 10:56:46.125911 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-77ps5" event={"ID":"d561a4a7-bb99-43c6-859e-e3269a35a073","Type":"ContainerDied","Data":"6d8d47213c18788507ca77e5f6162eb6c017b157cfec70f1dfb0ba7075187097"} Feb 02 10:56:46 crc kubenswrapper[4782]: I0202 10:56:46.127962 4782 generic.go:334] "Generic (PLEG): container finished" podID="b358cda4-3c47-4270-ada7-f7653d5da96f" containerID="d918711ae10925784d0ab83a02dc8d40b553f98643dc5469d54bc38912d8020e" exitCode=0 Feb 02 10:56:46 crc kubenswrapper[4782]: I0202 10:56:46.128002 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-2124-account-create-update-npd9h" event={"ID":"b358cda4-3c47-4270-ada7-f7653d5da96f","Type":"ContainerDied","Data":"d918711ae10925784d0ab83a02dc8d40b553f98643dc5469d54bc38912d8020e"} Feb 02 10:56:46 crc kubenswrapper[4782]: I0202 10:56:46.128018 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-2124-account-create-update-npd9h" event={"ID":"b358cda4-3c47-4270-ada7-f7653d5da96f","Type":"ContainerStarted","Data":"1cd4bff856b66d107afe59a3f692ce638e8c700bf98010f119ac0ac84464d6ed"} Feb 02 10:56:46 crc kubenswrapper[4782]: I0202 10:56:46.130337 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" 
event={"ID":"e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9","Type":"ContainerStarted","Data":"8604d1c3c048376b925140bf95b36fa52a1c0e9474ad6d8f17f938507dee28c7"} Feb 02 10:56:46 crc kubenswrapper[4782]: I0202 10:56:46.130581 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Feb 02 10:56:46 crc kubenswrapper[4782]: I0202 10:56:46.171711 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=38.518983218 podStartE2EDuration="57.171697269s" podCreationTimestamp="2026-02-02 10:55:49 +0000 UTC" firstStartedPulling="2026-02-02 10:55:52.262822875 +0000 UTC m=+1032.147015591" lastFinishedPulling="2026-02-02 10:56:10.915536926 +0000 UTC m=+1050.799729642" observedRunningTime="2026-02-02 10:56:46.166688736 +0000 UTC m=+1086.050881452" watchObservedRunningTime="2026-02-02 10:56:46.171697269 +0000 UTC m=+1086.055889985" Feb 02 10:56:46 crc kubenswrapper[4782]: I0202 10:56:46.277147 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=38.075167277 podStartE2EDuration="57.277124336s" podCreationTimestamp="2026-02-02 10:55:49 +0000 UTC" firstStartedPulling="2026-02-02 10:55:51.707113722 +0000 UTC m=+1031.591306438" lastFinishedPulling="2026-02-02 10:56:10.909070781 +0000 UTC m=+1050.793263497" observedRunningTime="2026-02-02 10:56:46.26678492 +0000 UTC m=+1086.150977636" watchObservedRunningTime="2026-02-02 10:56:46.277124336 +0000 UTC m=+1086.161317052" Feb 02 10:56:46 crc kubenswrapper[4782]: I0202 10:56:46.528869 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-77ps5" Feb 02 10:56:46 crc kubenswrapper[4782]: I0202 10:56:46.687126 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mvgrt\" (UniqueName: \"kubernetes.io/projected/d561a4a7-bb99-43c6-859e-e3269a35a073-kube-api-access-mvgrt\") pod \"d561a4a7-bb99-43c6-859e-e3269a35a073\" (UID: \"d561a4a7-bb99-43c6-859e-e3269a35a073\") " Feb 02 10:56:46 crc kubenswrapper[4782]: I0202 10:56:46.687273 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d561a4a7-bb99-43c6-859e-e3269a35a073-operator-scripts\") pod \"d561a4a7-bb99-43c6-859e-e3269a35a073\" (UID: \"d561a4a7-bb99-43c6-859e-e3269a35a073\") " Feb 02 10:56:46 crc kubenswrapper[4782]: I0202 10:56:46.688191 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d561a4a7-bb99-43c6-859e-e3269a35a073-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d561a4a7-bb99-43c6-859e-e3269a35a073" (UID: "d561a4a7-bb99-43c6-859e-e3269a35a073"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:56:46 crc kubenswrapper[4782]: I0202 10:56:46.696866 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d561a4a7-bb99-43c6-859e-e3269a35a073-kube-api-access-mvgrt" (OuterVolumeSpecName: "kube-api-access-mvgrt") pod "d561a4a7-bb99-43c6-859e-e3269a35a073" (UID: "d561a4a7-bb99-43c6-859e-e3269a35a073"). InnerVolumeSpecName "kube-api-access-mvgrt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:56:46 crc kubenswrapper[4782]: I0202 10:56:46.790783 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mvgrt\" (UniqueName: \"kubernetes.io/projected/d561a4a7-bb99-43c6-859e-e3269a35a073-kube-api-access-mvgrt\") on node \"crc\" DevicePath \"\"" Feb 02 10:56:46 crc kubenswrapper[4782]: I0202 10:56:46.790829 4782 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d561a4a7-bb99-43c6-859e-e3269a35a073-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 10:56:47 crc kubenswrapper[4782]: I0202 10:56:47.139080 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-77ps5" Feb 02 10:56:47 crc kubenswrapper[4782]: I0202 10:56:47.139616 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-77ps5" event={"ID":"d561a4a7-bb99-43c6-859e-e3269a35a073","Type":"ContainerDied","Data":"affd51ba873c2ac717264a2c48b401c76e91abb5ac07f259af5a91e5d4c528f2"} Feb 02 10:56:47 crc kubenswrapper[4782]: I0202 10:56:47.139688 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="affd51ba873c2ac717264a2c48b401c76e91abb5ac07f259af5a91e5d4c528f2" Feb 02 10:56:47 crc kubenswrapper[4782]: I0202 10:56:47.472172 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-4d24k"] Feb 02 10:56:47 crc kubenswrapper[4782]: I0202 10:56:47.480328 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-4d24k"] Feb 02 10:56:48 crc kubenswrapper[4782]: I0202 10:56:48.127014 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-6cg8m" Feb 02 10:56:48 crc kubenswrapper[4782]: I0202 10:56:48.137973 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-0259-account-create-update-n5p89" Feb 02 10:56:48 crc kubenswrapper[4782]: I0202 10:56:48.150597 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-2124-account-create-update-npd9h" Feb 02 10:56:48 crc kubenswrapper[4782]: I0202 10:56:48.160963 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-0259-account-create-update-n5p89" event={"ID":"80dad8de-560e-4ff5-b196-aa0bbbc2be15","Type":"ContainerDied","Data":"249c5a98a778b98fa438ebd5fb9b61464cc131eb48f90e0afab4c1117206b06b"} Feb 02 10:56:48 crc kubenswrapper[4782]: I0202 10:56:48.161001 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="249c5a98a778b98fa438ebd5fb9b61464cc131eb48f90e0afab4c1117206b06b" Feb 02 10:56:48 crc kubenswrapper[4782]: I0202 10:56:48.161021 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-0259-account-create-update-n5p89" Feb 02 10:56:48 crc kubenswrapper[4782]: I0202 10:56:48.162515 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-2124-account-create-update-npd9h" event={"ID":"b358cda4-3c47-4270-ada7-f7653d5da96f","Type":"ContainerDied","Data":"1cd4bff856b66d107afe59a3f692ce638e8c700bf98010f119ac0ac84464d6ed"} Feb 02 10:56:48 crc kubenswrapper[4782]: I0202 10:56:48.162547 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1cd4bff856b66d107afe59a3f692ce638e8c700bf98010f119ac0ac84464d6ed" Feb 02 10:56:48 crc kubenswrapper[4782]: I0202 10:56:48.162563 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-2124-account-create-update-npd9h" Feb 02 10:56:48 crc kubenswrapper[4782]: I0202 10:56:48.163703 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-6cg8m" event={"ID":"1db12436-a377-40c9-bc4e-9fe301b0b4cb","Type":"ContainerDied","Data":"df41be568da50845f36d15cb17ab4937618a8a95d19a89ea0f84aa0e72e17800"} Feb 02 10:56:48 crc kubenswrapper[4782]: I0202 10:56:48.163719 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="df41be568da50845f36d15cb17ab4937618a8a95d19a89ea0f84aa0e72e17800" Feb 02 10:56:48 crc kubenswrapper[4782]: I0202 10:56:48.163753 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-6cg8m" Feb 02 10:56:48 crc kubenswrapper[4782]: I0202 10:56:48.219205 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1db12436-a377-40c9-bc4e-9fe301b0b4cb-operator-scripts\") pod \"1db12436-a377-40c9-bc4e-9fe301b0b4cb\" (UID: \"1db12436-a377-40c9-bc4e-9fe301b0b4cb\") " Feb 02 10:56:48 crc kubenswrapper[4782]: I0202 10:56:48.219271 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vj25r\" (UniqueName: \"kubernetes.io/projected/1db12436-a377-40c9-bc4e-9fe301b0b4cb-kube-api-access-vj25r\") pod \"1db12436-a377-40c9-bc4e-9fe301b0b4cb\" (UID: \"1db12436-a377-40c9-bc4e-9fe301b0b4cb\") " Feb 02 10:56:48 crc kubenswrapper[4782]: I0202 10:56:48.221025 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1db12436-a377-40c9-bc4e-9fe301b0b4cb-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1db12436-a377-40c9-bc4e-9fe301b0b4cb" (UID: "1db12436-a377-40c9-bc4e-9fe301b0b4cb"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:56:48 crc kubenswrapper[4782]: I0202 10:56:48.237924 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1db12436-a377-40c9-bc4e-9fe301b0b4cb-kube-api-access-vj25r" (OuterVolumeSpecName: "kube-api-access-vj25r") pod "1db12436-a377-40c9-bc4e-9fe301b0b4cb" (UID: "1db12436-a377-40c9-bc4e-9fe301b0b4cb"). InnerVolumeSpecName "kube-api-access-vj25r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:56:48 crc kubenswrapper[4782]: I0202 10:56:48.320935 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b358cda4-3c47-4270-ada7-f7653d5da96f-operator-scripts\") pod \"b358cda4-3c47-4270-ada7-f7653d5da96f\" (UID: \"b358cda4-3c47-4270-ada7-f7653d5da96f\") " Feb 02 10:56:48 crc kubenswrapper[4782]: I0202 10:56:48.321097 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/80dad8de-560e-4ff5-b196-aa0bbbc2be15-operator-scripts\") pod \"80dad8de-560e-4ff5-b196-aa0bbbc2be15\" (UID: \"80dad8de-560e-4ff5-b196-aa0bbbc2be15\") " Feb 02 10:56:48 crc kubenswrapper[4782]: I0202 10:56:48.321180 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-knc62\" (UniqueName: \"kubernetes.io/projected/b358cda4-3c47-4270-ada7-f7653d5da96f-kube-api-access-knc62\") pod \"b358cda4-3c47-4270-ada7-f7653d5da96f\" (UID: \"b358cda4-3c47-4270-ada7-f7653d5da96f\") " Feb 02 10:56:48 crc kubenswrapper[4782]: I0202 10:56:48.321207 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ng8tj\" (UniqueName: \"kubernetes.io/projected/80dad8de-560e-4ff5-b196-aa0bbbc2be15-kube-api-access-ng8tj\") pod \"80dad8de-560e-4ff5-b196-aa0bbbc2be15\" (UID: \"80dad8de-560e-4ff5-b196-aa0bbbc2be15\") " Feb 02 10:56:48 crc kubenswrapper[4782]: I0202 10:56:48.321664 4782 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1db12436-a377-40c9-bc4e-9fe301b0b4cb-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 10:56:48 crc kubenswrapper[4782]: I0202 10:56:48.321684 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vj25r\" (UniqueName: \"kubernetes.io/projected/1db12436-a377-40c9-bc4e-9fe301b0b4cb-kube-api-access-vj25r\") on node \"crc\" DevicePath \"\"" Feb 02 10:56:48 crc kubenswrapper[4782]: I0202 10:56:48.322566 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b358cda4-3c47-4270-ada7-f7653d5da96f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b358cda4-3c47-4270-ada7-f7653d5da96f" (UID: "b358cda4-3c47-4270-ada7-f7653d5da96f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:56:48 crc kubenswrapper[4782]: I0202 10:56:48.322847 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/80dad8de-560e-4ff5-b196-aa0bbbc2be15-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "80dad8de-560e-4ff5-b196-aa0bbbc2be15" (UID: "80dad8de-560e-4ff5-b196-aa0bbbc2be15"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:56:48 crc kubenswrapper[4782]: I0202 10:56:48.324946 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80dad8de-560e-4ff5-b196-aa0bbbc2be15-kube-api-access-ng8tj" (OuterVolumeSpecName: "kube-api-access-ng8tj") pod "80dad8de-560e-4ff5-b196-aa0bbbc2be15" (UID: "80dad8de-560e-4ff5-b196-aa0bbbc2be15"). InnerVolumeSpecName "kube-api-access-ng8tj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:56:48 crc kubenswrapper[4782]: I0202 10:56:48.325623 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b358cda4-3c47-4270-ada7-f7653d5da96f-kube-api-access-knc62" (OuterVolumeSpecName: "kube-api-access-knc62") pod "b358cda4-3c47-4270-ada7-f7653d5da96f" (UID: "b358cda4-3c47-4270-ada7-f7653d5da96f"). InnerVolumeSpecName "kube-api-access-knc62". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:56:48 crc kubenswrapper[4782]: I0202 10:56:48.423138 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-knc62\" (UniqueName: \"kubernetes.io/projected/b358cda4-3c47-4270-ada7-f7653d5da96f-kube-api-access-knc62\") on node \"crc\" DevicePath \"\"" Feb 02 10:56:48 crc kubenswrapper[4782]: I0202 10:56:48.423188 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ng8tj\" (UniqueName: \"kubernetes.io/projected/80dad8de-560e-4ff5-b196-aa0bbbc2be15-kube-api-access-ng8tj\") on node \"crc\" DevicePath \"\"" Feb 02 10:56:48 crc kubenswrapper[4782]: I0202 10:56:48.423201 4782 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b358cda4-3c47-4270-ada7-f7653d5da96f-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 10:56:48 crc kubenswrapper[4782]: I0202 10:56:48.423212 4782 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/80dad8de-560e-4ff5-b196-aa0bbbc2be15-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 10:56:48 crc kubenswrapper[4782]: I0202 10:56:48.857306 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e9ee52cc-7cc9-46d3-aed7-67cdc48551c7" path="/var/lib/kubelet/pods/e9ee52cc-7cc9-46d3-aed7-67cdc48551c7/volumes" Feb 02 10:56:52 crc kubenswrapper[4782]: I0202 10:56:52.478444 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-6jdgj"] Feb 02 10:56:52 crc kubenswrapper[4782]: E0202 10:56:52.479167 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1db12436-a377-40c9-bc4e-9fe301b0b4cb" containerName="mariadb-database-create" Feb 02 10:56:52 crc kubenswrapper[4782]: I0202 10:56:52.479182 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="1db12436-a377-40c9-bc4e-9fe301b0b4cb" containerName="mariadb-database-create" Feb 02 10:56:52 crc kubenswrapper[4782]: E0202 10:56:52.479215 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b358cda4-3c47-4270-ada7-f7653d5da96f" containerName="mariadb-account-create-update" Feb 02 10:56:52 crc kubenswrapper[4782]: I0202 10:56:52.479223 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="b358cda4-3c47-4270-ada7-f7653d5da96f" containerName="mariadb-account-create-update" Feb 02 10:56:52 crc kubenswrapper[4782]: E0202 10:56:52.479242 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80dad8de-560e-4ff5-b196-aa0bbbc2be15" containerName="mariadb-account-create-update" Feb 02 10:56:52 crc kubenswrapper[4782]: I0202 10:56:52.479250 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="80dad8de-560e-4ff5-b196-aa0bbbc2be15" containerName="mariadb-account-create-update" Feb 02 10:56:52 crc kubenswrapper[4782]: E0202 10:56:52.479282 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d561a4a7-bb99-43c6-859e-e3269a35a073" containerName="mariadb-database-create" Feb 02 10:56:52 crc 
kubenswrapper[4782]: I0202 10:56:52.479289 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="d561a4a7-bb99-43c6-859e-e3269a35a073" containerName="mariadb-database-create" Feb 02 10:56:52 crc kubenswrapper[4782]: I0202 10:56:52.479447 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="b358cda4-3c47-4270-ada7-f7653d5da96f" containerName="mariadb-account-create-update" Feb 02 10:56:52 crc kubenswrapper[4782]: I0202 10:56:52.479464 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="d561a4a7-bb99-43c6-859e-e3269a35a073" containerName="mariadb-database-create" Feb 02 10:56:52 crc kubenswrapper[4782]: I0202 10:56:52.479478 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="80dad8de-560e-4ff5-b196-aa0bbbc2be15" containerName="mariadb-account-create-update" Feb 02 10:56:52 crc kubenswrapper[4782]: I0202 10:56:52.479489 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="1db12436-a377-40c9-bc4e-9fe301b0b4cb" containerName="mariadb-database-create" Feb 02 10:56:52 crc kubenswrapper[4782]: I0202 10:56:52.480083 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-6jdgj" Feb 02 10:56:52 crc kubenswrapper[4782]: I0202 10:56:52.482313 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Feb 02 10:56:52 crc kubenswrapper[4782]: I0202 10:56:52.505596 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-6jdgj"] Feb 02 10:56:52 crc kubenswrapper[4782]: I0202 10:56:52.591208 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrhrq\" (UniqueName: \"kubernetes.io/projected/29024188-b374-45b7-ad85-b2d4ca88b485-kube-api-access-jrhrq\") pod \"root-account-create-update-6jdgj\" (UID: \"29024188-b374-45b7-ad85-b2d4ca88b485\") " pod="openstack/root-account-create-update-6jdgj" Feb 02 10:56:52 crc kubenswrapper[4782]: I0202 10:56:52.591263 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/29024188-b374-45b7-ad85-b2d4ca88b485-operator-scripts\") pod \"root-account-create-update-6jdgj\" (UID: \"29024188-b374-45b7-ad85-b2d4ca88b485\") " pod="openstack/root-account-create-update-6jdgj" Feb 02 10:56:52 crc kubenswrapper[4782]: I0202 10:56:52.693422 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jrhrq\" (UniqueName: \"kubernetes.io/projected/29024188-b374-45b7-ad85-b2d4ca88b485-kube-api-access-jrhrq\") pod \"root-account-create-update-6jdgj\" (UID: \"29024188-b374-45b7-ad85-b2d4ca88b485\") " pod="openstack/root-account-create-update-6jdgj" Feb 02 10:56:52 crc kubenswrapper[4782]: I0202 10:56:52.693492 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/29024188-b374-45b7-ad85-b2d4ca88b485-operator-scripts\") pod \"root-account-create-update-6jdgj\" (UID: \"29024188-b374-45b7-ad85-b2d4ca88b485\") " pod="openstack/root-account-create-update-6jdgj" Feb 02 10:56:52 crc kubenswrapper[4782]: I0202 10:56:52.694375 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/29024188-b374-45b7-ad85-b2d4ca88b485-operator-scripts\") pod \"root-account-create-update-6jdgj\" (UID: 
\"29024188-b374-45b7-ad85-b2d4ca88b485\") " pod="openstack/root-account-create-update-6jdgj" Feb 02 10:56:52 crc kubenswrapper[4782]: I0202 10:56:52.714965 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jrhrq\" (UniqueName: \"kubernetes.io/projected/29024188-b374-45b7-ad85-b2d4ca88b485-kube-api-access-jrhrq\") pod \"root-account-create-update-6jdgj\" (UID: \"29024188-b374-45b7-ad85-b2d4ca88b485\") " pod="openstack/root-account-create-update-6jdgj" Feb 02 10:56:52 crc kubenswrapper[4782]: I0202 10:56:52.796747 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-6jdgj" Feb 02 10:56:53 crc kubenswrapper[4782]: I0202 10:56:53.555849 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Feb 02 10:56:56 crc kubenswrapper[4782]: I0202 10:56:56.009346 4782 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-sv8l5" podUID="b009ca1c-fc93-4724-9275-c44039256469" containerName="ovn-controller" probeResult="failure" output=< Feb 02 10:56:56 crc kubenswrapper[4782]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Feb 02 10:56:56 crc kubenswrapper[4782]: > Feb 02 10:57:00 crc kubenswrapper[4782]: I0202 10:57:00.703804 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Feb 02 10:57:01 crc kubenswrapper[4782]: I0202 10:57:01.107824 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Feb 02 10:57:01 crc kubenswrapper[4782]: I0202 10:57:01.253753 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-zs65k" Feb 02 10:57:01 crc kubenswrapper[4782]: I0202 10:57:01.290911 4782 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-sv8l5" podUID="b009ca1c-fc93-4724-9275-c44039256469" containerName="ovn-controller" probeResult="failure" output=< Feb 02 10:57:01 crc kubenswrapper[4782]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Feb 02 10:57:01 crc kubenswrapper[4782]: > Feb 02 10:57:01 crc kubenswrapper[4782]: I0202 10:57:01.295208 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-zs65k" Feb 02 10:57:01 crc kubenswrapper[4782]: I0202 10:57:01.347779 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-xzm82"] Feb 02 10:57:01 crc kubenswrapper[4782]: I0202 10:57:01.348832 4782 util.go:30] "No sandbox for pod can be found. 
Feb 02 10:57:01 crc kubenswrapper[4782]: I0202 10:57:01.371011 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-xzm82"]
Feb 02 10:57:01 crc kubenswrapper[4782]: I0202 10:57:01.453375 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b78c9d8b-0793-4e57-8a3d-ba7303f12d37-operator-scripts\") pod \"cinder-db-create-xzm82\" (UID: \"b78c9d8b-0793-4e57-8a3d-ba7303f12d37\") " pod="openstack/cinder-db-create-xzm82"
Feb 02 10:57:01 crc kubenswrapper[4782]: I0202 10:57:01.453439 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d45c8\" (UniqueName: \"kubernetes.io/projected/b78c9d8b-0793-4e57-8a3d-ba7303f12d37-kube-api-access-d45c8\") pod \"cinder-db-create-xzm82\" (UID: \"b78c9d8b-0793-4e57-8a3d-ba7303f12d37\") " pod="openstack/cinder-db-create-xzm82"
Feb 02 10:57:01 crc kubenswrapper[4782]: I0202 10:57:01.555003 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d45c8\" (UniqueName: \"kubernetes.io/projected/b78c9d8b-0793-4e57-8a3d-ba7303f12d37-kube-api-access-d45c8\") pod \"cinder-db-create-xzm82\" (UID: \"b78c9d8b-0793-4e57-8a3d-ba7303f12d37\") " pod="openstack/cinder-db-create-xzm82"
Feb 02 10:57:01 crc kubenswrapper[4782]: I0202 10:57:01.555261 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b78c9d8b-0793-4e57-8a3d-ba7303f12d37-operator-scripts\") pod \"cinder-db-create-xzm82\" (UID: \"b78c9d8b-0793-4e57-8a3d-ba7303f12d37\") " pod="openstack/cinder-db-create-xzm82"
Feb 02 10:57:01 crc kubenswrapper[4782]: I0202 10:57:01.555568 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-q97pt"]
Feb 02 10:57:01 crc kubenswrapper[4782]: I0202 10:57:01.556450 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b78c9d8b-0793-4e57-8a3d-ba7303f12d37-operator-scripts\") pod \"cinder-db-create-xzm82\" (UID: \"b78c9d8b-0793-4e57-8a3d-ba7303f12d37\") " pod="openstack/cinder-db-create-xzm82"
Feb 02 10:57:01 crc kubenswrapper[4782]: I0202 10:57:01.557473 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-q97pt"
Feb 02 10:57:01 crc kubenswrapper[4782]: I0202 10:57:01.577664 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-q97pt"]
Feb 02 10:57:01 crc kubenswrapper[4782]: I0202 10:57:01.589576 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-sv8l5-config-h25fz"]
Feb 02 10:57:01 crc kubenswrapper[4782]: I0202 10:57:01.591681 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-sv8l5-config-h25fz"
Feb 02 10:57:01 crc kubenswrapper[4782]: I0202 10:57:01.622003 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts"
Feb 02 10:57:01 crc kubenswrapper[4782]: I0202 10:57:01.656769 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/68e5ac2b-72a8-46be-839a-fe639916a32e-operator-scripts\") pod \"barbican-db-create-q97pt\" (UID: \"68e5ac2b-72a8-46be-839a-fe639916a32e\") " pod="openstack/barbican-db-create-q97pt"
Feb 02 10:57:01 crc kubenswrapper[4782]: I0202 10:57:01.656990 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bc7t9\" (UniqueName: \"kubernetes.io/projected/68e5ac2b-72a8-46be-839a-fe639916a32e-kube-api-access-bc7t9\") pod \"barbican-db-create-q97pt\" (UID: \"68e5ac2b-72a8-46be-839a-fe639916a32e\") " pod="openstack/barbican-db-create-q97pt"
Feb 02 10:57:01 crc kubenswrapper[4782]: I0202 10:57:01.664412 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-sv8l5-config-h25fz"]
Feb 02 10:57:01 crc kubenswrapper[4782]: I0202 10:57:01.664585 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d45c8\" (UniqueName: \"kubernetes.io/projected/b78c9d8b-0793-4e57-8a3d-ba7303f12d37-kube-api-access-d45c8\") pod \"cinder-db-create-xzm82\" (UID: \"b78c9d8b-0793-4e57-8a3d-ba7303f12d37\") " pod="openstack/cinder-db-create-xzm82"
Feb 02 10:57:01 crc kubenswrapper[4782]: I0202 10:57:01.683368 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-xzm82"
Feb 02 10:57:01 crc kubenswrapper[4782]: E0202 10:57:01.717055 4782 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-glance-api:current-podified"
Feb 02 10:57:01 crc kubenswrapper[4782]: E0202 10:57:01.717204 4782 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:glance-db-sync,Image:quay.io/podified-antelope-centos9/openstack-glance-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/glance/glance.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7xmdb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42415,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42415,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-db-sync-bwx58_openstack(1f885e8a-3dc8-4c07-ae3c-4c8ab072abc0): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Feb 02 10:57:01 crc kubenswrapper[4782]: E0202 10:57:01.720714 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/glance-db-sync-bwx58" podUID="1f885e8a-3dc8-4c07-ae3c-4c8ab072abc0"
Feb 02 10:57:01 crc kubenswrapper[4782]: I0202 10:57:01.757970 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/e9cc96ce-182b-4231-a5e9-10197e083077-var-log-ovn\") pod \"ovn-controller-sv8l5-config-h25fz\" (UID: \"e9cc96ce-182b-4231-a5e9-10197e083077\") " pod="openstack/ovn-controller-sv8l5-config-h25fz"
Feb 02 10:57:01 crc kubenswrapper[4782]: I0202 10:57:01.758020 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bc7t9\" (UniqueName: \"kubernetes.io/projected/68e5ac2b-72a8-46be-839a-fe639916a32e-kube-api-access-bc7t9\") pod \"barbican-db-create-q97pt\" (UID: \"68e5ac2b-72a8-46be-839a-fe639916a32e\") " pod="openstack/barbican-db-create-q97pt"
Feb 02 10:57:01 crc kubenswrapper[4782]: I0202 10:57:01.758054 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e9cc96ce-182b-4231-a5e9-10197e083077-scripts\") pod \"ovn-controller-sv8l5-config-h25fz\" (UID: \"e9cc96ce-182b-4231-a5e9-10197e083077\") " pod="openstack/ovn-controller-sv8l5-config-h25fz"
Feb 02 10:57:01 crc kubenswrapper[4782]: I0202 10:57:01.758102 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e9cc96ce-182b-4231-a5e9-10197e083077-var-run\") pod \"ovn-controller-sv8l5-config-h25fz\" (UID: \"e9cc96ce-182b-4231-a5e9-10197e083077\") " pod="openstack/ovn-controller-sv8l5-config-h25fz"
Feb 02 10:57:01 crc kubenswrapper[4782]: I0202 10:57:01.758140 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/68e5ac2b-72a8-46be-839a-fe639916a32e-operator-scripts\") pod \"barbican-db-create-q97pt\" (UID: \"68e5ac2b-72a8-46be-839a-fe639916a32e\") " pod="openstack/barbican-db-create-q97pt"
Feb 02 10:57:01 crc kubenswrapper[4782]: I0202 10:57:01.758206 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/e9cc96ce-182b-4231-a5e9-10197e083077-additional-scripts\") pod \"ovn-controller-sv8l5-config-h25fz\" (UID: \"e9cc96ce-182b-4231-a5e9-10197e083077\") " pod="openstack/ovn-controller-sv8l5-config-h25fz"
Feb 02 10:57:01 crc kubenswrapper[4782]: I0202 10:57:01.758226 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/e9cc96ce-182b-4231-a5e9-10197e083077-var-run-ovn\") pod \"ovn-controller-sv8l5-config-h25fz\" (UID: \"e9cc96ce-182b-4231-a5e9-10197e083077\") " pod="openstack/ovn-controller-sv8l5-config-h25fz"
Feb 02 10:57:01 crc kubenswrapper[4782]: I0202 10:57:01.758257 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxbg5\" (UniqueName: \"kubernetes.io/projected/e9cc96ce-182b-4231-a5e9-10197e083077-kube-api-access-nxbg5\") pod \"ovn-controller-sv8l5-config-h25fz\" (UID: \"e9cc96ce-182b-4231-a5e9-10197e083077\") " pod="openstack/ovn-controller-sv8l5-config-h25fz"
Feb 02 10:57:01 crc kubenswrapper[4782]: I0202 10:57:01.759308 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/68e5ac2b-72a8-46be-839a-fe639916a32e-operator-scripts\") pod \"barbican-db-create-q97pt\" (UID: \"68e5ac2b-72a8-46be-839a-fe639916a32e\") " pod="openstack/barbican-db-create-q97pt"
Feb 02 10:57:01 crc kubenswrapper[4782]: I0202 10:57:01.800969 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-7dbcc"]
Feb 02 10:57:01 crc kubenswrapper[4782]: I0202 10:57:01.801985 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-7dbcc"
Feb 02 10:57:01 crc kubenswrapper[4782]: I0202 10:57:01.836610 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-8017-account-create-update-t6d9m"]
Feb 02 10:57:01 crc kubenswrapper[4782]: I0202 10:57:01.838111 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-8017-account-create-update-t6d9m"
Feb 02 10:57:01 crc kubenswrapper[4782]: I0202 10:57:01.859151 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-7dbcc"]
Feb 02 10:57:01 crc kubenswrapper[4782]: I0202 10:57:01.859409 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret"
Feb 02 10:57:01 crc kubenswrapper[4782]: I0202 10:57:01.859447 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e9cc96ce-182b-4231-a5e9-10197e083077-var-run\") pod \"ovn-controller-sv8l5-config-h25fz\" (UID: \"e9cc96ce-182b-4231-a5e9-10197e083077\") " pod="openstack/ovn-controller-sv8l5-config-h25fz"
Feb 02 10:57:01 crc kubenswrapper[4782]: I0202 10:57:01.859557 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/e9cc96ce-182b-4231-a5e9-10197e083077-additional-scripts\") pod \"ovn-controller-sv8l5-config-h25fz\" (UID: \"e9cc96ce-182b-4231-a5e9-10197e083077\") " pod="openstack/ovn-controller-sv8l5-config-h25fz"
Feb 02 10:57:01 crc kubenswrapper[4782]: I0202 10:57:01.859574 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/e9cc96ce-182b-4231-a5e9-10197e083077-var-run-ovn\") pod \"ovn-controller-sv8l5-config-h25fz\" (UID: \"e9cc96ce-182b-4231-a5e9-10197e083077\") " pod="openstack/ovn-controller-sv8l5-config-h25fz"
Feb 02 10:57:01 crc kubenswrapper[4782]: I0202 10:57:01.859608 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nxbg5\" (UniqueName: \"kubernetes.io/projected/e9cc96ce-182b-4231-a5e9-10197e083077-kube-api-access-nxbg5\") pod \"ovn-controller-sv8l5-config-h25fz\" (UID: \"e9cc96ce-182b-4231-a5e9-10197e083077\") " pod="openstack/ovn-controller-sv8l5-config-h25fz"
Feb 02 10:57:01 crc kubenswrapper[4782]: I0202 10:57:01.859631 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/e9cc96ce-182b-4231-a5e9-10197e083077-var-log-ovn\") pod \"ovn-controller-sv8l5-config-h25fz\" (UID: \"e9cc96ce-182b-4231-a5e9-10197e083077\") " pod="openstack/ovn-controller-sv8l5-config-h25fz"
Feb 02 10:57:01 crc kubenswrapper[4782]: I0202 10:57:01.859682 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e9cc96ce-182b-4231-a5e9-10197e083077-scripts\") pod \"ovn-controller-sv8l5-config-h25fz\" (UID: \"e9cc96ce-182b-4231-a5e9-10197e083077\") " pod="openstack/ovn-controller-sv8l5-config-h25fz"
Feb 02 10:57:01 crc kubenswrapper[4782]: I0202 10:57:01.859981 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/e9cc96ce-182b-4231-a5e9-10197e083077-var-run-ovn\") pod \"ovn-controller-sv8l5-config-h25fz\" (UID: \"e9cc96ce-182b-4231-a5e9-10197e083077\") " pod="openstack/ovn-controller-sv8l5-config-h25fz"
Feb 02 10:57:01 crc kubenswrapper[4782]: I0202 10:57:01.860033 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e9cc96ce-182b-4231-a5e9-10197e083077-var-run\") pod \"ovn-controller-sv8l5-config-h25fz\" (UID: \"e9cc96ce-182b-4231-a5e9-10197e083077\") " pod="openstack/ovn-controller-sv8l5-config-h25fz"
Feb 02 10:57:01 crc kubenswrapper[4782]: I0202 10:57:01.860693 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/e9cc96ce-182b-4231-a5e9-10197e083077-additional-scripts\") pod \"ovn-controller-sv8l5-config-h25fz\" (UID: \"e9cc96ce-182b-4231-a5e9-10197e083077\") " pod="openstack/ovn-controller-sv8l5-config-h25fz"
Feb 02 10:57:01 crc kubenswrapper[4782]: I0202 10:57:01.861222 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/e9cc96ce-182b-4231-a5e9-10197e083077-var-log-ovn\") pod \"ovn-controller-sv8l5-config-h25fz\" (UID: \"e9cc96ce-182b-4231-a5e9-10197e083077\") " pod="openstack/ovn-controller-sv8l5-config-h25fz"
Feb 02 10:57:01 crc kubenswrapper[4782]: I0202 10:57:01.862148 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e9cc96ce-182b-4231-a5e9-10197e083077-scripts\") pod \"ovn-controller-sv8l5-config-h25fz\" (UID: \"e9cc96ce-182b-4231-a5e9-10197e083077\") " pod="openstack/ovn-controller-sv8l5-config-h25fz"
Feb 02 10:57:01 crc kubenswrapper[4782]: I0202 10:57:01.865275 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bc7t9\" (UniqueName: \"kubernetes.io/projected/68e5ac2b-72a8-46be-839a-fe639916a32e-kube-api-access-bc7t9\") pod \"barbican-db-create-q97pt\" (UID: \"68e5ac2b-72a8-46be-839a-fe639916a32e\") " pod="openstack/barbican-db-create-q97pt"
Feb 02 10:57:01 crc kubenswrapper[4782]: I0202 10:57:01.871980 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-q97pt"
Feb 02 10:57:01 crc kubenswrapper[4782]: I0202 10:57:01.960691 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rtmll\" (UniqueName: \"kubernetes.io/projected/53ddb047-8931-415b-8d0f-d0f73b72c8b3-kube-api-access-rtmll\") pod \"barbican-8017-account-create-update-t6d9m\" (UID: \"53ddb047-8931-415b-8d0f-d0f73b72c8b3\") " pod="openstack/barbican-8017-account-create-update-t6d9m"
Feb 02 10:57:01 crc kubenswrapper[4782]: I0202 10:57:01.960789 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/53ddb047-8931-415b-8d0f-d0f73b72c8b3-operator-scripts\") pod \"barbican-8017-account-create-update-t6d9m\" (UID: \"53ddb047-8931-415b-8d0f-d0f73b72c8b3\") " pod="openstack/barbican-8017-account-create-update-t6d9m"
Feb 02 10:57:01 crc kubenswrapper[4782]: I0202 10:57:01.960872 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8w9w\" (UniqueName: \"kubernetes.io/projected/821635c8-3cf1-408b-8949-81dbc48b07b6-kube-api-access-f8w9w\") pod \"neutron-db-create-7dbcc\" (UID: \"821635c8-3cf1-408b-8949-81dbc48b07b6\") " pod="openstack/neutron-db-create-7dbcc"
Feb 02 10:57:01 crc kubenswrapper[4782]: I0202 10:57:01.960900 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/821635c8-3cf1-408b-8949-81dbc48b07b6-operator-scripts\") pod \"neutron-db-create-7dbcc\" (UID: \"821635c8-3cf1-408b-8949-81dbc48b07b6\") " pod="openstack/neutron-db-create-7dbcc"
Feb 02 10:57:01 crc kubenswrapper[4782]: I0202 10:57:01.964926 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nxbg5\" (UniqueName: \"kubernetes.io/projected/e9cc96ce-182b-4231-a5e9-10197e083077-kube-api-access-nxbg5\") pod \"ovn-controller-sv8l5-config-h25fz\" (UID: \"e9cc96ce-182b-4231-a5e9-10197e083077\") " pod="openstack/ovn-controller-sv8l5-config-h25fz"
Feb 02 10:57:01 crc kubenswrapper[4782]: I0202 10:57:01.965932 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-8017-account-create-update-t6d9m"]
Feb 02 10:57:02 crc kubenswrapper[4782]: I0202 10:57:02.046612 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-sv8l5-config-h25fz"
Feb 02 10:57:02 crc kubenswrapper[4782]: I0202 10:57:02.062576 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rtmll\" (UniqueName: \"kubernetes.io/projected/53ddb047-8931-415b-8d0f-d0f73b72c8b3-kube-api-access-rtmll\") pod \"barbican-8017-account-create-update-t6d9m\" (UID: \"53ddb047-8931-415b-8d0f-d0f73b72c8b3\") " pod="openstack/barbican-8017-account-create-update-t6d9m"
Feb 02 10:57:02 crc kubenswrapper[4782]: I0202 10:57:02.062695 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/53ddb047-8931-415b-8d0f-d0f73b72c8b3-operator-scripts\") pod \"barbican-8017-account-create-update-t6d9m\" (UID: \"53ddb047-8931-415b-8d0f-d0f73b72c8b3\") " pod="openstack/barbican-8017-account-create-update-t6d9m"
Feb 02 10:57:02 crc kubenswrapper[4782]: I0202 10:57:02.062756 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f8w9w\" (UniqueName: \"kubernetes.io/projected/821635c8-3cf1-408b-8949-81dbc48b07b6-kube-api-access-f8w9w\") pod \"neutron-db-create-7dbcc\" (UID: \"821635c8-3cf1-408b-8949-81dbc48b07b6\") " pod="openstack/neutron-db-create-7dbcc"
Feb 02 10:57:02 crc kubenswrapper[4782]: I0202 10:57:02.062786 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/821635c8-3cf1-408b-8949-81dbc48b07b6-operator-scripts\") pod \"neutron-db-create-7dbcc\" (UID: \"821635c8-3cf1-408b-8949-81dbc48b07b6\") " pod="openstack/neutron-db-create-7dbcc"
Feb 02 10:57:02 crc kubenswrapper[4782]: I0202 10:57:02.063819 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/53ddb047-8931-415b-8d0f-d0f73b72c8b3-operator-scripts\") pod \"barbican-8017-account-create-update-t6d9m\" (UID: \"53ddb047-8931-415b-8d0f-d0f73b72c8b3\") " pod="openstack/barbican-8017-account-create-update-t6d9m"
Feb 02 10:57:02 crc kubenswrapper[4782]: I0202 10:57:02.063822 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/821635c8-3cf1-408b-8949-81dbc48b07b6-operator-scripts\") pod \"neutron-db-create-7dbcc\" (UID: \"821635c8-3cf1-408b-8949-81dbc48b07b6\") " pod="openstack/neutron-db-create-7dbcc"
Feb 02 10:57:02 crc kubenswrapper[4782]: I0202 10:57:02.112204 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f8w9w\" (UniqueName: \"kubernetes.io/projected/821635c8-3cf1-408b-8949-81dbc48b07b6-kube-api-access-f8w9w\") pod \"neutron-db-create-7dbcc\" (UID: \"821635c8-3cf1-408b-8949-81dbc48b07b6\") " pod="openstack/neutron-db-create-7dbcc"
Feb 02 10:57:02 crc kubenswrapper[4782]: I0202 10:57:02.115866 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-v4g2v"]
Feb 02 10:57:02 crc kubenswrapper[4782]: I0202 10:57:02.116807 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-v4g2v"
Feb 02 10:57:02 crc kubenswrapper[4782]: I0202 10:57:02.121109 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Feb 02 10:57:02 crc kubenswrapper[4782]: I0202 10:57:02.121331 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Feb 02 10:57:02 crc kubenswrapper[4782]: I0202 10:57:02.121518 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-9pmlq"
Feb 02 10:57:02 crc kubenswrapper[4782]: I0202 10:57:02.121773 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Feb 02 10:57:02 crc kubenswrapper[4782]: I0202 10:57:02.139223 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rtmll\" (UniqueName: \"kubernetes.io/projected/53ddb047-8931-415b-8d0f-d0f73b72c8b3-kube-api-access-rtmll\") pod \"barbican-8017-account-create-update-t6d9m\" (UID: \"53ddb047-8931-415b-8d0f-d0f73b72c8b3\") " pod="openstack/barbican-8017-account-create-update-t6d9m"
Feb 02 10:57:02 crc kubenswrapper[4782]: I0202 10:57:02.156211 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-v4g2v"]
Feb 02 10:57:02 crc kubenswrapper[4782]: I0202 10:57:02.161979 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-7dbcc"
Feb 02 10:57:02 crc kubenswrapper[4782]: I0202 10:57:02.235619 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-8017-account-create-update-t6d9m"
Feb 02 10:57:02 crc kubenswrapper[4782]: I0202 10:57:02.242008 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-0e36-account-create-update-f5556"]
Feb 02 10:57:02 crc kubenswrapper[4782]: I0202 10:57:02.243200 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-0e36-account-create-update-f5556"
Feb 02 10:57:02 crc kubenswrapper[4782]: I0202 10:57:02.249097 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret"
Feb 02 10:57:02 crc kubenswrapper[4782]: I0202 10:57:02.252709 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-0e36-account-create-update-f5556"]
Feb 02 10:57:02 crc kubenswrapper[4782]: I0202 10:57:02.266366 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/843d8da2-ab8c-4938-be4b-aa67af531e1e-combined-ca-bundle\") pod \"keystone-db-sync-v4g2v\" (UID: \"843d8da2-ab8c-4938-be4b-aa67af531e1e\") " pod="openstack/keystone-db-sync-v4g2v"
Feb 02 10:57:02 crc kubenswrapper[4782]: I0202 10:57:02.266434 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvls2\" (UniqueName: \"kubernetes.io/projected/843d8da2-ab8c-4938-be4b-aa67af531e1e-kube-api-access-pvls2\") pod \"keystone-db-sync-v4g2v\" (UID: \"843d8da2-ab8c-4938-be4b-aa67af531e1e\") " pod="openstack/keystone-db-sync-v4g2v"
Feb 02 10:57:02 crc kubenswrapper[4782]: I0202 10:57:02.266486 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/843d8da2-ab8c-4938-be4b-aa67af531e1e-config-data\") pod \"keystone-db-sync-v4g2v\" (UID: \"843d8da2-ab8c-4938-be4b-aa67af531e1e\") " pod="openstack/keystone-db-sync-v4g2v"
Feb 02 10:57:02 crc kubenswrapper[4782]: E0202 10:57:02.341089 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-glance-api:current-podified\\\"\"" pod="openstack/glance-db-sync-bwx58" podUID="1f885e8a-3dc8-4c07-ae3c-4c8ab072abc0"
Feb 02 10:57:02 crc kubenswrapper[4782]: I0202 10:57:02.367700 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pvls2\" (UniqueName: \"kubernetes.io/projected/843d8da2-ab8c-4938-be4b-aa67af531e1e-kube-api-access-pvls2\") pod \"keystone-db-sync-v4g2v\" (UID: \"843d8da2-ab8c-4938-be4b-aa67af531e1e\") " pod="openstack/keystone-db-sync-v4g2v"
Feb 02 10:57:02 crc kubenswrapper[4782]: I0202 10:57:02.367778 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/843d8da2-ab8c-4938-be4b-aa67af531e1e-config-data\") pod \"keystone-db-sync-v4g2v\" (UID: \"843d8da2-ab8c-4938-be4b-aa67af531e1e\") " pod="openstack/keystone-db-sync-v4g2v"
Feb 02 10:57:02 crc kubenswrapper[4782]: I0202 10:57:02.367834 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c3c77267-9133-440d-9f4e-536b2a021fdc-operator-scripts\") pod \"cinder-0e36-account-create-update-f5556\" (UID: \"c3c77267-9133-440d-9f4e-536b2a021fdc\") " pod="openstack/cinder-0e36-account-create-update-f5556"
Feb 02 10:57:02 crc kubenswrapper[4782]: I0202 10:57:02.367862 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/843d8da2-ab8c-4938-be4b-aa67af531e1e-combined-ca-bundle\") pod \"keystone-db-sync-v4g2v\" (UID: \"843d8da2-ab8c-4938-be4b-aa67af531e1e\") " pod="openstack/keystone-db-sync-v4g2v"
Feb 02 10:57:02 crc kubenswrapper[4782]: I0202 10:57:02.367907 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gfpw2\" (UniqueName: \"kubernetes.io/projected/c3c77267-9133-440d-9f4e-536b2a021fdc-kube-api-access-gfpw2\") pod \"cinder-0e36-account-create-update-f5556\" (UID: \"c3c77267-9133-440d-9f4e-536b2a021fdc\") " pod="openstack/cinder-0e36-account-create-update-f5556"
Feb 02 10:57:02 crc kubenswrapper[4782]: I0202 10:57:02.380720 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/843d8da2-ab8c-4938-be4b-aa67af531e1e-config-data\") pod \"keystone-db-sync-v4g2v\" (UID: \"843d8da2-ab8c-4938-be4b-aa67af531e1e\") " pod="openstack/keystone-db-sync-v4g2v"
Feb 02 10:57:02 crc kubenswrapper[4782]: I0202 10:57:02.385364 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/843d8da2-ab8c-4938-be4b-aa67af531e1e-combined-ca-bundle\") pod \"keystone-db-sync-v4g2v\" (UID: \"843d8da2-ab8c-4938-be4b-aa67af531e1e\") " pod="openstack/keystone-db-sync-v4g2v"
Feb 02 10:57:02 crc kubenswrapper[4782]: I0202 10:57:02.412421 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pvls2\" (UniqueName: \"kubernetes.io/projected/843d8da2-ab8c-4938-be4b-aa67af531e1e-kube-api-access-pvls2\") pod \"keystone-db-sync-v4g2v\" (UID: \"843d8da2-ab8c-4938-be4b-aa67af531e1e\") " pod="openstack/keystone-db-sync-v4g2v"
Feb 02 10:57:02 crc kubenswrapper[4782]: I0202 10:57:02.441543 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-v4g2v"
Feb 02 10:57:02 crc kubenswrapper[4782]: I0202 10:57:02.470871 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gfpw2\" (UniqueName: \"kubernetes.io/projected/c3c77267-9133-440d-9f4e-536b2a021fdc-kube-api-access-gfpw2\") pod \"cinder-0e36-account-create-update-f5556\" (UID: \"c3c77267-9133-440d-9f4e-536b2a021fdc\") " pod="openstack/cinder-0e36-account-create-update-f5556"
Feb 02 10:57:02 crc kubenswrapper[4782]: I0202 10:57:02.471320 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c3c77267-9133-440d-9f4e-536b2a021fdc-operator-scripts\") pod \"cinder-0e36-account-create-update-f5556\" (UID: \"c3c77267-9133-440d-9f4e-536b2a021fdc\") " pod="openstack/cinder-0e36-account-create-update-f5556"
Feb 02 10:57:02 crc kubenswrapper[4782]: I0202 10:57:02.473382 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c3c77267-9133-440d-9f4e-536b2a021fdc-operator-scripts\") pod \"cinder-0e36-account-create-update-f5556\" (UID: \"c3c77267-9133-440d-9f4e-536b2a021fdc\") " pod="openstack/cinder-0e36-account-create-update-f5556"
Feb 02 10:57:02 crc kubenswrapper[4782]: I0202 10:57:02.511869 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-63a1-account-create-update-4kn5m"]
Feb 02 10:57:02 crc kubenswrapper[4782]: I0202 10:57:02.513873 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-63a1-account-create-update-4kn5m"
Feb 02 10:57:02 crc kubenswrapper[4782]: I0202 10:57:02.518751 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret"
Feb 02 10:57:02 crc kubenswrapper[4782]: I0202 10:57:02.520778 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gfpw2\" (UniqueName: \"kubernetes.io/projected/c3c77267-9133-440d-9f4e-536b2a021fdc-kube-api-access-gfpw2\") pod \"cinder-0e36-account-create-update-f5556\" (UID: \"c3c77267-9133-440d-9f4e-536b2a021fdc\") " pod="openstack/cinder-0e36-account-create-update-f5556"
Feb 02 10:57:02 crc kubenswrapper[4782]: I0202 10:57:02.565718 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-63a1-account-create-update-4kn5m"]
Feb 02 10:57:02 crc kubenswrapper[4782]: I0202 10:57:02.595133 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-0e36-account-create-update-f5556"
Feb 02 10:57:02 crc kubenswrapper[4782]: I0202 10:57:02.676107 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqd6d\" (UniqueName: \"kubernetes.io/projected/7f8a5cce-1311-4cb0-9a7b-d636e27d6e69-kube-api-access-mqd6d\") pod \"neutron-63a1-account-create-update-4kn5m\" (UID: \"7f8a5cce-1311-4cb0-9a7b-d636e27d6e69\") " pod="openstack/neutron-63a1-account-create-update-4kn5m"
Feb 02 10:57:02 crc kubenswrapper[4782]: I0202 10:57:02.676208 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7f8a5cce-1311-4cb0-9a7b-d636e27d6e69-operator-scripts\") pod \"neutron-63a1-account-create-update-4kn5m\" (UID: \"7f8a5cce-1311-4cb0-9a7b-d636e27d6e69\") " pod="openstack/neutron-63a1-account-create-update-4kn5m"
Feb 02 10:57:02 crc kubenswrapper[4782]: I0202 10:57:02.782360 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mqd6d\" (UniqueName: \"kubernetes.io/projected/7f8a5cce-1311-4cb0-9a7b-d636e27d6e69-kube-api-access-mqd6d\") pod \"neutron-63a1-account-create-update-4kn5m\" (UID: \"7f8a5cce-1311-4cb0-9a7b-d636e27d6e69\") " pod="openstack/neutron-63a1-account-create-update-4kn5m"
Feb 02 10:57:02 crc kubenswrapper[4782]: I0202 10:57:02.782966 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7f8a5cce-1311-4cb0-9a7b-d636e27d6e69-operator-scripts\") pod \"neutron-63a1-account-create-update-4kn5m\" (UID: \"7f8a5cce-1311-4cb0-9a7b-d636e27d6e69\") " pod="openstack/neutron-63a1-account-create-update-4kn5m"
Feb 02 10:57:02 crc kubenswrapper[4782]: I0202 10:57:02.783853 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7f8a5cce-1311-4cb0-9a7b-d636e27d6e69-operator-scripts\") pod \"neutron-63a1-account-create-update-4kn5m\" (UID: \"7f8a5cce-1311-4cb0-9a7b-d636e27d6e69\") " pod="openstack/neutron-63a1-account-create-update-4kn5m"
Feb 02 10:57:02 crc kubenswrapper[4782]: I0202 10:57:02.858845 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mqd6d\" (UniqueName: \"kubernetes.io/projected/7f8a5cce-1311-4cb0-9a7b-d636e27d6e69-kube-api-access-mqd6d\") pod \"neutron-63a1-account-create-update-4kn5m\" (UID: \"7f8a5cce-1311-4cb0-9a7b-d636e27d6e69\") "
pod="openstack/neutron-63a1-account-create-update-4kn5m" Feb 02 10:57:02 crc kubenswrapper[4782]: I0202 10:57:02.860004 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-63a1-account-create-update-4kn5m" Feb 02 10:57:02 crc kubenswrapper[4782]: I0202 10:57:02.974958 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-6jdgj"] Feb 02 10:57:03 crc kubenswrapper[4782]: I0202 10:57:03.320946 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-q97pt"] Feb 02 10:57:03 crc kubenswrapper[4782]: I0202 10:57:03.332346 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-sv8l5-config-h25fz"] Feb 02 10:57:03 crc kubenswrapper[4782]: I0202 10:57:03.348218 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-q97pt" event={"ID":"68e5ac2b-72a8-46be-839a-fe639916a32e","Type":"ContainerStarted","Data":"0a50f00bb9f097f6fa3bdc79dd91c54109f518e4daed13ca669b1c0b2aa64a56"} Feb 02 10:57:03 crc kubenswrapper[4782]: I0202 10:57:03.350999 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-6jdgj" event={"ID":"29024188-b374-45b7-ad85-b2d4ca88b485","Type":"ContainerStarted","Data":"59931c74c238f44b10e52c4d20d13f519cae8b3cf2b301df562ef56ffaee122d"} Feb 02 10:57:03 crc kubenswrapper[4782]: I0202 10:57:03.362910 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-xzm82"] Feb 02 10:57:03 crc kubenswrapper[4782]: W0202 10:57:03.370811 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode9cc96ce_182b_4231_a5e9_10197e083077.slice/crio-9391e44047f757697bd5504265794e7bacaa93018b4f7c703f10a49c0c3e6622 WatchSource:0}: Error finding container 9391e44047f757697bd5504265794e7bacaa93018b4f7c703f10a49c0c3e6622: Status 404 returned error can't find the container with id 9391e44047f757697bd5504265794e7bacaa93018b4f7c703f10a49c0c3e6622 Feb 02 10:57:03 crc kubenswrapper[4782]: I0202 10:57:03.603486 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-8017-account-create-update-t6d9m"] Feb 02 10:57:03 crc kubenswrapper[4782]: I0202 10:57:03.644284 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-7dbcc"] Feb 02 10:57:03 crc kubenswrapper[4782]: I0202 10:57:03.700154 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-v4g2v"] Feb 02 10:57:03 crc kubenswrapper[4782]: I0202 10:57:03.723706 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-0e36-account-create-update-f5556"] Feb 02 10:57:03 crc kubenswrapper[4782]: I0202 10:57:03.779310 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-63a1-account-create-update-4kn5m"] Feb 02 10:57:03 crc kubenswrapper[4782]: W0202 10:57:03.784979 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod843d8da2_ab8c_4938_be4b_aa67af531e1e.slice/crio-e20283ae35f20eb57ed6ab460abe821456fab6128c714eba5f3a94fb7961e8ce WatchSource:0}: Error finding container e20283ae35f20eb57ed6ab460abe821456fab6128c714eba5f3a94fb7961e8ce: Status 404 returned error can't find the container with id e20283ae35f20eb57ed6ab460abe821456fab6128c714eba5f3a94fb7961e8ce Feb 02 10:57:04 crc kubenswrapper[4782]: I0202 10:57:04.363113 4782 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/barbican-8017-account-create-update-t6d9m" event={"ID":"53ddb047-8931-415b-8d0f-d0f73b72c8b3","Type":"ContainerStarted","Data":"642f3a52732b34bccf9c9fbc304bd2cfce8dc967c11a5c31acc742832089e402"} Feb 02 10:57:04 crc kubenswrapper[4782]: I0202 10:57:04.363480 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-8017-account-create-update-t6d9m" event={"ID":"53ddb047-8931-415b-8d0f-d0f73b72c8b3","Type":"ContainerStarted","Data":"cf9abdd02e5074feef030e5dd66f4f015eb1117f9a0e9921a5c0f13149d8cfb5"} Feb 02 10:57:04 crc kubenswrapper[4782]: I0202 10:57:04.365093 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-v4g2v" event={"ID":"843d8da2-ab8c-4938-be4b-aa67af531e1e","Type":"ContainerStarted","Data":"e20283ae35f20eb57ed6ab460abe821456fab6128c714eba5f3a94fb7961e8ce"} Feb 02 10:57:04 crc kubenswrapper[4782]: I0202 10:57:04.367188 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-0e36-account-create-update-f5556" event={"ID":"c3c77267-9133-440d-9f4e-536b2a021fdc","Type":"ContainerStarted","Data":"8e0d398b0286ba353cd173b897d449b2563fd8596bcbf1161ae3a708c88b87ef"} Feb 02 10:57:04 crc kubenswrapper[4782]: I0202 10:57:04.367236 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-0e36-account-create-update-f5556" event={"ID":"c3c77267-9133-440d-9f4e-536b2a021fdc","Type":"ContainerStarted","Data":"03814b857f1ea9d519135baa096d1211b12fbe8c1aa5ce0a643e569e44214fa5"} Feb 02 10:57:04 crc kubenswrapper[4782]: I0202 10:57:04.371967 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-63a1-account-create-update-4kn5m" event={"ID":"7f8a5cce-1311-4cb0-9a7b-d636e27d6e69","Type":"ContainerStarted","Data":"f0e0c9ec29176a7805dbb55ca0554bf08656a1bb5da6a9295d6c51196f8c9acf"} Feb 02 10:57:04 crc kubenswrapper[4782]: I0202 10:57:04.372021 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-63a1-account-create-update-4kn5m" event={"ID":"7f8a5cce-1311-4cb0-9a7b-d636e27d6e69","Type":"ContainerStarted","Data":"4922ee8804a4e7e6bcbb57aadb57f32946333f0f7b3f9634c0cd775bf2cae365"} Feb 02 10:57:04 crc kubenswrapper[4782]: I0202 10:57:04.376089 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-6jdgj" event={"ID":"29024188-b374-45b7-ad85-b2d4ca88b485","Type":"ContainerDied","Data":"9024b9d44f2a9c5bd7aaa4dc9abd2a12f77d4a6bdfae488ee552a49cc6449554"} Feb 02 10:57:04 crc kubenswrapper[4782]: I0202 10:57:04.375915 4782 generic.go:334] "Generic (PLEG): container finished" podID="29024188-b374-45b7-ad85-b2d4ca88b485" containerID="9024b9d44f2a9c5bd7aaa4dc9abd2a12f77d4a6bdfae488ee552a49cc6449554" exitCode=0 Feb 02 10:57:04 crc kubenswrapper[4782]: I0202 10:57:04.379248 4782 generic.go:334] "Generic (PLEG): container finished" podID="b78c9d8b-0793-4e57-8a3d-ba7303f12d37" containerID="86c67676caca480b43ace8b3b556dc1c7777a8a4b569eb0de34ba6545c1ccf6c" exitCode=0 Feb 02 10:57:04 crc kubenswrapper[4782]: I0202 10:57:04.379343 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-xzm82" event={"ID":"b78c9d8b-0793-4e57-8a3d-ba7303f12d37","Type":"ContainerDied","Data":"86c67676caca480b43ace8b3b556dc1c7777a8a4b569eb0de34ba6545c1ccf6c"} Feb 02 10:57:04 crc kubenswrapper[4782]: I0202 10:57:04.379380 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-xzm82" 
event={"ID":"b78c9d8b-0793-4e57-8a3d-ba7303f12d37","Type":"ContainerStarted","Data":"96a9faaddad4cf88e7bfd92da474c6e44adac17cad51fedc68fc17de29153d22"} Feb 02 10:57:04 crc kubenswrapper[4782]: I0202 10:57:04.381266 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-sv8l5-config-h25fz" event={"ID":"e9cc96ce-182b-4231-a5e9-10197e083077","Type":"ContainerStarted","Data":"be4a6da8c7bc821537f4458c73cbb541469bc749b77b4d5f396a7e71bf22fd01"} Feb 02 10:57:04 crc kubenswrapper[4782]: I0202 10:57:04.381305 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-sv8l5-config-h25fz" event={"ID":"e9cc96ce-182b-4231-a5e9-10197e083077","Type":"ContainerStarted","Data":"9391e44047f757697bd5504265794e7bacaa93018b4f7c703f10a49c0c3e6622"} Feb 02 10:57:04 crc kubenswrapper[4782]: I0202 10:57:04.383409 4782 generic.go:334] "Generic (PLEG): container finished" podID="68e5ac2b-72a8-46be-839a-fe639916a32e" containerID="469ae18dd42598dba552dffdd5607faf35c16e63cd9d3d0f900d45cc0954f86f" exitCode=0 Feb 02 10:57:04 crc kubenswrapper[4782]: I0202 10:57:04.383585 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-q97pt" event={"ID":"68e5ac2b-72a8-46be-839a-fe639916a32e","Type":"ContainerDied","Data":"469ae18dd42598dba552dffdd5607faf35c16e63cd9d3d0f900d45cc0954f86f"} Feb 02 10:57:04 crc kubenswrapper[4782]: I0202 10:57:04.385467 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-7dbcc" event={"ID":"821635c8-3cf1-408b-8949-81dbc48b07b6","Type":"ContainerStarted","Data":"489df32e7e5f6a2407566ee0433e9eb8f24a84a3bc401deba1b69bf5b52b02e2"} Feb 02 10:57:04 crc kubenswrapper[4782]: I0202 10:57:04.385504 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-7dbcc" event={"ID":"821635c8-3cf1-408b-8949-81dbc48b07b6","Type":"ContainerStarted","Data":"70b6d641abeabe7c2253d823ad71d899e1791cab27661eb48d3cc66649421cca"} Feb 02 10:57:04 crc kubenswrapper[4782]: I0202 10:57:04.414270 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-8017-account-create-update-t6d9m" podStartSLOduration=3.414248333 podStartE2EDuration="3.414248333s" podCreationTimestamp="2026-02-02 10:57:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:57:04.396894237 +0000 UTC m=+1104.281086953" watchObservedRunningTime="2026-02-02 10:57:04.414248333 +0000 UTC m=+1104.298441049" Feb 02 10:57:04 crc kubenswrapper[4782]: I0202 10:57:04.416541 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-0e36-account-create-update-f5556" podStartSLOduration=2.416525418 podStartE2EDuration="2.416525418s" podCreationTimestamp="2026-02-02 10:57:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:57:04.415238122 +0000 UTC m=+1104.299430838" watchObservedRunningTime="2026-02-02 10:57:04.416525418 +0000 UTC m=+1104.300718134" Feb 02 10:57:04 crc kubenswrapper[4782]: I0202 10:57:04.442870 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-63a1-account-create-update-4kn5m" podStartSLOduration=2.442845802 podStartE2EDuration="2.442845802s" podCreationTimestamp="2026-02-02 10:57:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 
UTC" observedRunningTime="2026-02-02 10:57:04.441251166 +0000 UTC m=+1104.325443882" watchObservedRunningTime="2026-02-02 10:57:04.442845802 +0000 UTC m=+1104.327038518" Feb 02 10:57:04 crc kubenswrapper[4782]: I0202 10:57:04.575442 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-sv8l5-config-h25fz" podStartSLOduration=3.575417845 podStartE2EDuration="3.575417845s" podCreationTimestamp="2026-02-02 10:57:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:57:04.54725831 +0000 UTC m=+1104.431451026" watchObservedRunningTime="2026-02-02 10:57:04.575417845 +0000 UTC m=+1104.459610561" Feb 02 10:57:04 crc kubenswrapper[4782]: I0202 10:57:04.626917 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-create-7dbcc" podStartSLOduration=3.626883148 podStartE2EDuration="3.626883148s" podCreationTimestamp="2026-02-02 10:57:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:57:04.575017764 +0000 UTC m=+1104.459210480" watchObservedRunningTime="2026-02-02 10:57:04.626883148 +0000 UTC m=+1104.511075874" Feb 02 10:57:05 crc kubenswrapper[4782]: I0202 10:57:05.400262 4782 generic.go:334] "Generic (PLEG): container finished" podID="53ddb047-8931-415b-8d0f-d0f73b72c8b3" containerID="642f3a52732b34bccf9c9fbc304bd2cfce8dc967c11a5c31acc742832089e402" exitCode=0 Feb 02 10:57:05 crc kubenswrapper[4782]: I0202 10:57:05.400365 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-8017-account-create-update-t6d9m" event={"ID":"53ddb047-8931-415b-8d0f-d0f73b72c8b3","Type":"ContainerDied","Data":"642f3a52732b34bccf9c9fbc304bd2cfce8dc967c11a5c31acc742832089e402"} Feb 02 10:57:05 crc kubenswrapper[4782]: I0202 10:57:05.403240 4782 generic.go:334] "Generic (PLEG): container finished" podID="821635c8-3cf1-408b-8949-81dbc48b07b6" containerID="489df32e7e5f6a2407566ee0433e9eb8f24a84a3bc401deba1b69bf5b52b02e2" exitCode=0 Feb 02 10:57:05 crc kubenswrapper[4782]: I0202 10:57:05.403333 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-7dbcc" event={"ID":"821635c8-3cf1-408b-8949-81dbc48b07b6","Type":"ContainerDied","Data":"489df32e7e5f6a2407566ee0433e9eb8f24a84a3bc401deba1b69bf5b52b02e2"} Feb 02 10:57:05 crc kubenswrapper[4782]: I0202 10:57:05.405442 4782 generic.go:334] "Generic (PLEG): container finished" podID="c3c77267-9133-440d-9f4e-536b2a021fdc" containerID="8e0d398b0286ba353cd173b897d449b2563fd8596bcbf1161ae3a708c88b87ef" exitCode=0 Feb 02 10:57:05 crc kubenswrapper[4782]: I0202 10:57:05.405503 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-0e36-account-create-update-f5556" event={"ID":"c3c77267-9133-440d-9f4e-536b2a021fdc","Type":"ContainerDied","Data":"8e0d398b0286ba353cd173b897d449b2563fd8596bcbf1161ae3a708c88b87ef"} Feb 02 10:57:05 crc kubenswrapper[4782]: I0202 10:57:05.409100 4782 generic.go:334] "Generic (PLEG): container finished" podID="7f8a5cce-1311-4cb0-9a7b-d636e27d6e69" containerID="f0e0c9ec29176a7805dbb55ca0554bf08656a1bb5da6a9295d6c51196f8c9acf" exitCode=0 Feb 02 10:57:05 crc kubenswrapper[4782]: I0202 10:57:05.409174 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-63a1-account-create-update-4kn5m" 
event={"ID":"7f8a5cce-1311-4cb0-9a7b-d636e27d6e69","Type":"ContainerDied","Data":"f0e0c9ec29176a7805dbb55ca0554bf08656a1bb5da6a9295d6c51196f8c9acf"} Feb 02 10:57:05 crc kubenswrapper[4782]: I0202 10:57:05.411164 4782 generic.go:334] "Generic (PLEG): container finished" podID="e9cc96ce-182b-4231-a5e9-10197e083077" containerID="be4a6da8c7bc821537f4458c73cbb541469bc749b77b4d5f396a7e71bf22fd01" exitCode=0 Feb 02 10:57:05 crc kubenswrapper[4782]: I0202 10:57:05.411346 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-sv8l5-config-h25fz" event={"ID":"e9cc96ce-182b-4231-a5e9-10197e083077","Type":"ContainerDied","Data":"be4a6da8c7bc821537f4458c73cbb541469bc749b77b4d5f396a7e71bf22fd01"} Feb 02 10:57:06 crc kubenswrapper[4782]: I0202 10:57:06.260182 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-q97pt" Feb 02 10:57:06 crc kubenswrapper[4782]: I0202 10:57:06.270821 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-sv8l5" Feb 02 10:57:06 crc kubenswrapper[4782]: I0202 10:57:06.337554 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/68e5ac2b-72a8-46be-839a-fe639916a32e-operator-scripts\") pod \"68e5ac2b-72a8-46be-839a-fe639916a32e\" (UID: \"68e5ac2b-72a8-46be-839a-fe639916a32e\") " Feb 02 10:57:06 crc kubenswrapper[4782]: I0202 10:57:06.337694 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bc7t9\" (UniqueName: \"kubernetes.io/projected/68e5ac2b-72a8-46be-839a-fe639916a32e-kube-api-access-bc7t9\") pod \"68e5ac2b-72a8-46be-839a-fe639916a32e\" (UID: \"68e5ac2b-72a8-46be-839a-fe639916a32e\") " Feb 02 10:57:06 crc kubenswrapper[4782]: I0202 10:57:06.342440 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/68e5ac2b-72a8-46be-839a-fe639916a32e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "68e5ac2b-72a8-46be-839a-fe639916a32e" (UID: "68e5ac2b-72a8-46be-839a-fe639916a32e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:57:06 crc kubenswrapper[4782]: I0202 10:57:06.368927 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68e5ac2b-72a8-46be-839a-fe639916a32e-kube-api-access-bc7t9" (OuterVolumeSpecName: "kube-api-access-bc7t9") pod "68e5ac2b-72a8-46be-839a-fe639916a32e" (UID: "68e5ac2b-72a8-46be-839a-fe639916a32e"). InnerVolumeSpecName "kube-api-access-bc7t9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:57:06 crc kubenswrapper[4782]: I0202 10:57:06.440627 4782 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/68e5ac2b-72a8-46be-839a-fe639916a32e-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:06 crc kubenswrapper[4782]: I0202 10:57:06.440963 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bc7t9\" (UniqueName: \"kubernetes.io/projected/68e5ac2b-72a8-46be-839a-fe639916a32e-kube-api-access-bc7t9\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:06 crc kubenswrapper[4782]: I0202 10:57:06.456764 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-xzm82" event={"ID":"b78c9d8b-0793-4e57-8a3d-ba7303f12d37","Type":"ContainerDied","Data":"96a9faaddad4cf88e7bfd92da474c6e44adac17cad51fedc68fc17de29153d22"} Feb 02 10:57:06 crc kubenswrapper[4782]: I0202 10:57:06.456800 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="96a9faaddad4cf88e7bfd92da474c6e44adac17cad51fedc68fc17de29153d22" Feb 02 10:57:06 crc kubenswrapper[4782]: I0202 10:57:06.469100 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-q97pt" Feb 02 10:57:06 crc kubenswrapper[4782]: I0202 10:57:06.471228 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-q97pt" event={"ID":"68e5ac2b-72a8-46be-839a-fe639916a32e","Type":"ContainerDied","Data":"0a50f00bb9f097f6fa3bdc79dd91c54109f518e4daed13ca669b1c0b2aa64a56"} Feb 02 10:57:06 crc kubenswrapper[4782]: I0202 10:57:06.471272 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0a50f00bb9f097f6fa3bdc79dd91c54109f518e4daed13ca669b1c0b2aa64a56" Feb 02 10:57:06 crc kubenswrapper[4782]: I0202 10:57:06.480472 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-xzm82" Feb 02 10:57:06 crc kubenswrapper[4782]: I0202 10:57:06.657379 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b78c9d8b-0793-4e57-8a3d-ba7303f12d37-operator-scripts\") pod \"b78c9d8b-0793-4e57-8a3d-ba7303f12d37\" (UID: \"b78c9d8b-0793-4e57-8a3d-ba7303f12d37\") " Feb 02 10:57:06 crc kubenswrapper[4782]: I0202 10:57:06.657437 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d45c8\" (UniqueName: \"kubernetes.io/projected/b78c9d8b-0793-4e57-8a3d-ba7303f12d37-kube-api-access-d45c8\") pod \"b78c9d8b-0793-4e57-8a3d-ba7303f12d37\" (UID: \"b78c9d8b-0793-4e57-8a3d-ba7303f12d37\") " Feb 02 10:57:06 crc kubenswrapper[4782]: I0202 10:57:06.658318 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b78c9d8b-0793-4e57-8a3d-ba7303f12d37-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b78c9d8b-0793-4e57-8a3d-ba7303f12d37" (UID: "b78c9d8b-0793-4e57-8a3d-ba7303f12d37"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:57:06 crc kubenswrapper[4782]: I0202 10:57:06.673275 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b78c9d8b-0793-4e57-8a3d-ba7303f12d37-kube-api-access-d45c8" (OuterVolumeSpecName: "kube-api-access-d45c8") pod "b78c9d8b-0793-4e57-8a3d-ba7303f12d37" (UID: "b78c9d8b-0793-4e57-8a3d-ba7303f12d37"). InnerVolumeSpecName "kube-api-access-d45c8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:57:06 crc kubenswrapper[4782]: I0202 10:57:06.727975 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-6jdgj" Feb 02 10:57:06 crc kubenswrapper[4782]: I0202 10:57:06.759927 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/29024188-b374-45b7-ad85-b2d4ca88b485-operator-scripts\") pod \"29024188-b374-45b7-ad85-b2d4ca88b485\" (UID: \"29024188-b374-45b7-ad85-b2d4ca88b485\") " Feb 02 10:57:06 crc kubenswrapper[4782]: I0202 10:57:06.760036 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jrhrq\" (UniqueName: \"kubernetes.io/projected/29024188-b374-45b7-ad85-b2d4ca88b485-kube-api-access-jrhrq\") pod \"29024188-b374-45b7-ad85-b2d4ca88b485\" (UID: \"29024188-b374-45b7-ad85-b2d4ca88b485\") " Feb 02 10:57:06 crc kubenswrapper[4782]: I0202 10:57:06.760312 4782 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b78c9d8b-0793-4e57-8a3d-ba7303f12d37-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:06 crc kubenswrapper[4782]: I0202 10:57:06.760326 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d45c8\" (UniqueName: \"kubernetes.io/projected/b78c9d8b-0793-4e57-8a3d-ba7303f12d37-kube-api-access-d45c8\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:06 crc kubenswrapper[4782]: I0202 10:57:06.760579 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/29024188-b374-45b7-ad85-b2d4ca88b485-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "29024188-b374-45b7-ad85-b2d4ca88b485" (UID: "29024188-b374-45b7-ad85-b2d4ca88b485"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:57:06 crc kubenswrapper[4782]: I0202 10:57:06.768417 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29024188-b374-45b7-ad85-b2d4ca88b485-kube-api-access-jrhrq" (OuterVolumeSpecName: "kube-api-access-jrhrq") pod "29024188-b374-45b7-ad85-b2d4ca88b485" (UID: "29024188-b374-45b7-ad85-b2d4ca88b485"). InnerVolumeSpecName "kube-api-access-jrhrq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:57:06 crc kubenswrapper[4782]: I0202 10:57:06.861681 4782 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/29024188-b374-45b7-ad85-b2d4ca88b485-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:06 crc kubenswrapper[4782]: I0202 10:57:06.861717 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jrhrq\" (UniqueName: \"kubernetes.io/projected/29024188-b374-45b7-ad85-b2d4ca88b485-kube-api-access-jrhrq\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:06 crc kubenswrapper[4782]: I0202 10:57:06.903032 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-63a1-account-create-update-4kn5m" Feb 02 10:57:06 crc kubenswrapper[4782]: I0202 10:57:06.963285 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7f8a5cce-1311-4cb0-9a7b-d636e27d6e69-operator-scripts\") pod \"7f8a5cce-1311-4cb0-9a7b-d636e27d6e69\" (UID: \"7f8a5cce-1311-4cb0-9a7b-d636e27d6e69\") " Feb 02 10:57:06 crc kubenswrapper[4782]: I0202 10:57:06.963414 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mqd6d\" (UniqueName: \"kubernetes.io/projected/7f8a5cce-1311-4cb0-9a7b-d636e27d6e69-kube-api-access-mqd6d\") pod \"7f8a5cce-1311-4cb0-9a7b-d636e27d6e69\" (UID: \"7f8a5cce-1311-4cb0-9a7b-d636e27d6e69\") " Feb 02 10:57:06 crc kubenswrapper[4782]: I0202 10:57:06.963913 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f8a5cce-1311-4cb0-9a7b-d636e27d6e69-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7f8a5cce-1311-4cb0-9a7b-d636e27d6e69" (UID: "7f8a5cce-1311-4cb0-9a7b-d636e27d6e69"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:57:06 crc kubenswrapper[4782]: I0202 10:57:06.969872 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f8a5cce-1311-4cb0-9a7b-d636e27d6e69-kube-api-access-mqd6d" (OuterVolumeSpecName: "kube-api-access-mqd6d") pod "7f8a5cce-1311-4cb0-9a7b-d636e27d6e69" (UID: "7f8a5cce-1311-4cb0-9a7b-d636e27d6e69"). InnerVolumeSpecName "kube-api-access-mqd6d". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:57:07 crc kubenswrapper[4782]: I0202 10:57:07.065546 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mqd6d\" (UniqueName: \"kubernetes.io/projected/7f8a5cce-1311-4cb0-9a7b-d636e27d6e69-kube-api-access-mqd6d\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:07 crc kubenswrapper[4782]: I0202 10:57:07.065585 4782 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7f8a5cce-1311-4cb0-9a7b-d636e27d6e69-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:07 crc kubenswrapper[4782]: I0202 10:57:07.330192 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-7dbcc" Feb 02 10:57:07 crc kubenswrapper[4782]: I0202 10:57:07.340367 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-sv8l5-config-h25fz" Feb 02 10:57:07 crc kubenswrapper[4782]: I0202 10:57:07.350988 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-0e36-account-create-update-f5556" Feb 02 10:57:07 crc kubenswrapper[4782]: I0202 10:57:07.361788 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-8017-account-create-update-t6d9m" Feb 02 10:57:07 crc kubenswrapper[4782]: I0202 10:57:07.376294 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/e9cc96ce-182b-4231-a5e9-10197e083077-var-run-ovn\") pod \"e9cc96ce-182b-4231-a5e9-10197e083077\" (UID: \"e9cc96ce-182b-4231-a5e9-10197e083077\") " Feb 02 10:57:07 crc kubenswrapper[4782]: I0202 10:57:07.376354 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f8w9w\" (UniqueName: \"kubernetes.io/projected/821635c8-3cf1-408b-8949-81dbc48b07b6-kube-api-access-f8w9w\") pod \"821635c8-3cf1-408b-8949-81dbc48b07b6\" (UID: \"821635c8-3cf1-408b-8949-81dbc48b07b6\") " Feb 02 10:57:07 crc kubenswrapper[4782]: I0202 10:57:07.376425 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e9cc96ce-182b-4231-a5e9-10197e083077-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "e9cc96ce-182b-4231-a5e9-10197e083077" (UID: "e9cc96ce-182b-4231-a5e9-10197e083077"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 10:57:07 crc kubenswrapper[4782]: I0202 10:57:07.376430 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/e9cc96ce-182b-4231-a5e9-10197e083077-var-log-ovn\") pod \"e9cc96ce-182b-4231-a5e9-10197e083077\" (UID: \"e9cc96ce-182b-4231-a5e9-10197e083077\") " Feb 02 10:57:07 crc kubenswrapper[4782]: I0202 10:57:07.376459 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e9cc96ce-182b-4231-a5e9-10197e083077-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "e9cc96ce-182b-4231-a5e9-10197e083077" (UID: "e9cc96ce-182b-4231-a5e9-10197e083077"). InnerVolumeSpecName "var-run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 10:57:07 crc kubenswrapper[4782]: I0202 10:57:07.376503 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/821635c8-3cf1-408b-8949-81dbc48b07b6-operator-scripts\") pod \"821635c8-3cf1-408b-8949-81dbc48b07b6\" (UID: \"821635c8-3cf1-408b-8949-81dbc48b07b6\") " Feb 02 10:57:07 crc kubenswrapper[4782]: I0202 10:57:07.376540 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/53ddb047-8931-415b-8d0f-d0f73b72c8b3-operator-scripts\") pod \"53ddb047-8931-415b-8d0f-d0f73b72c8b3\" (UID: \"53ddb047-8931-415b-8d0f-d0f73b72c8b3\") " Feb 02 10:57:07 crc kubenswrapper[4782]: I0202 10:57:07.376697 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nxbg5\" (UniqueName: \"kubernetes.io/projected/e9cc96ce-182b-4231-a5e9-10197e083077-kube-api-access-nxbg5\") pod \"e9cc96ce-182b-4231-a5e9-10197e083077\" (UID: \"e9cc96ce-182b-4231-a5e9-10197e083077\") " Feb 02 10:57:07 crc kubenswrapper[4782]: I0202 10:57:07.376758 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c3c77267-9133-440d-9f4e-536b2a021fdc-operator-scripts\") pod \"c3c77267-9133-440d-9f4e-536b2a021fdc\" (UID: \"c3c77267-9133-440d-9f4e-536b2a021fdc\") " Feb 02 10:57:07 crc kubenswrapper[4782]: I0202 10:57:07.376820 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gfpw2\" (UniqueName: \"kubernetes.io/projected/c3c77267-9133-440d-9f4e-536b2a021fdc-kube-api-access-gfpw2\") pod \"c3c77267-9133-440d-9f4e-536b2a021fdc\" (UID: \"c3c77267-9133-440d-9f4e-536b2a021fdc\") " Feb 02 10:57:07 crc kubenswrapper[4782]: I0202 10:57:07.376903 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rtmll\" (UniqueName: \"kubernetes.io/projected/53ddb047-8931-415b-8d0f-d0f73b72c8b3-kube-api-access-rtmll\") pod \"53ddb047-8931-415b-8d0f-d0f73b72c8b3\" (UID: \"53ddb047-8931-415b-8d0f-d0f73b72c8b3\") " Feb 02 10:57:07 crc kubenswrapper[4782]: I0202 10:57:07.376934 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e9cc96ce-182b-4231-a5e9-10197e083077-var-run\") pod \"e9cc96ce-182b-4231-a5e9-10197e083077\" (UID: \"e9cc96ce-182b-4231-a5e9-10197e083077\") " Feb 02 10:57:07 crc kubenswrapper[4782]: I0202 10:57:07.376965 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/e9cc96ce-182b-4231-a5e9-10197e083077-additional-scripts\") pod \"e9cc96ce-182b-4231-a5e9-10197e083077\" (UID: \"e9cc96ce-182b-4231-a5e9-10197e083077\") " Feb 02 10:57:07 crc kubenswrapper[4782]: I0202 10:57:07.376991 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e9cc96ce-182b-4231-a5e9-10197e083077-scripts\") pod \"e9cc96ce-182b-4231-a5e9-10197e083077\" (UID: \"e9cc96ce-182b-4231-a5e9-10197e083077\") " Feb 02 10:57:07 crc kubenswrapper[4782]: I0202 10:57:07.377032 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/53ddb047-8931-415b-8d0f-d0f73b72c8b3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod 
"53ddb047-8931-415b-8d0f-d0f73b72c8b3" (UID: "53ddb047-8931-415b-8d0f-d0f73b72c8b3"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:57:07 crc kubenswrapper[4782]: I0202 10:57:07.377050 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/821635c8-3cf1-408b-8949-81dbc48b07b6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "821635c8-3cf1-408b-8949-81dbc48b07b6" (UID: "821635c8-3cf1-408b-8949-81dbc48b07b6"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:57:07 crc kubenswrapper[4782]: I0202 10:57:07.377740 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3c77267-9133-440d-9f4e-536b2a021fdc-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c3c77267-9133-440d-9f4e-536b2a021fdc" (UID: "c3c77267-9133-440d-9f4e-536b2a021fdc"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:57:07 crc kubenswrapper[4782]: I0202 10:57:07.377945 4782 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/e9cc96ce-182b-4231-a5e9-10197e083077-var-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:07 crc kubenswrapper[4782]: I0202 10:57:07.378192 4782 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/e9cc96ce-182b-4231-a5e9-10197e083077-var-log-ovn\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:07 crc kubenswrapper[4782]: I0202 10:57:07.378211 4782 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/821635c8-3cf1-408b-8949-81dbc48b07b6-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:07 crc kubenswrapper[4782]: I0202 10:57:07.378253 4782 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/53ddb047-8931-415b-8d0f-d0f73b72c8b3-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:07 crc kubenswrapper[4782]: I0202 10:57:07.378264 4782 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c3c77267-9133-440d-9f4e-536b2a021fdc-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:07 crc kubenswrapper[4782]: I0202 10:57:07.378392 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e9cc96ce-182b-4231-a5e9-10197e083077-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "e9cc96ce-182b-4231-a5e9-10197e083077" (UID: "e9cc96ce-182b-4231-a5e9-10197e083077"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:57:07 crc kubenswrapper[4782]: I0202 10:57:07.378449 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e9cc96ce-182b-4231-a5e9-10197e083077-var-run" (OuterVolumeSpecName: "var-run") pod "e9cc96ce-182b-4231-a5e9-10197e083077" (UID: "e9cc96ce-182b-4231-a5e9-10197e083077"). InnerVolumeSpecName "var-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 10:57:07 crc kubenswrapper[4782]: I0202 10:57:07.380674 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3c77267-9133-440d-9f4e-536b2a021fdc-kube-api-access-gfpw2" (OuterVolumeSpecName: "kube-api-access-gfpw2") pod "c3c77267-9133-440d-9f4e-536b2a021fdc" (UID: "c3c77267-9133-440d-9f4e-536b2a021fdc"). InnerVolumeSpecName "kube-api-access-gfpw2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:57:07 crc kubenswrapper[4782]: I0202 10:57:07.396234 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e9cc96ce-182b-4231-a5e9-10197e083077-scripts" (OuterVolumeSpecName: "scripts") pod "e9cc96ce-182b-4231-a5e9-10197e083077" (UID: "e9cc96ce-182b-4231-a5e9-10197e083077"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:57:07 crc kubenswrapper[4782]: I0202 10:57:07.398878 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9cc96ce-182b-4231-a5e9-10197e083077-kube-api-access-nxbg5" (OuterVolumeSpecName: "kube-api-access-nxbg5") pod "e9cc96ce-182b-4231-a5e9-10197e083077" (UID: "e9cc96ce-182b-4231-a5e9-10197e083077"). InnerVolumeSpecName "kube-api-access-nxbg5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:57:07 crc kubenswrapper[4782]: I0202 10:57:07.416300 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/821635c8-3cf1-408b-8949-81dbc48b07b6-kube-api-access-f8w9w" (OuterVolumeSpecName: "kube-api-access-f8w9w") pod "821635c8-3cf1-408b-8949-81dbc48b07b6" (UID: "821635c8-3cf1-408b-8949-81dbc48b07b6"). InnerVolumeSpecName "kube-api-access-f8w9w". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:57:07 crc kubenswrapper[4782]: I0202 10:57:07.416524 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53ddb047-8931-415b-8d0f-d0f73b72c8b3-kube-api-access-rtmll" (OuterVolumeSpecName: "kube-api-access-rtmll") pod "53ddb047-8931-415b-8d0f-d0f73b72c8b3" (UID: "53ddb047-8931-415b-8d0f-d0f73b72c8b3"). InnerVolumeSpecName "kube-api-access-rtmll". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:57:07 crc kubenswrapper[4782]: I0202 10:57:07.480679 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rtmll\" (UniqueName: \"kubernetes.io/projected/53ddb047-8931-415b-8d0f-d0f73b72c8b3-kube-api-access-rtmll\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:07 crc kubenswrapper[4782]: I0202 10:57:07.480716 4782 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e9cc96ce-182b-4231-a5e9-10197e083077-var-run\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:07 crc kubenswrapper[4782]: I0202 10:57:07.480729 4782 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/e9cc96ce-182b-4231-a5e9-10197e083077-additional-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:07 crc kubenswrapper[4782]: I0202 10:57:07.480740 4782 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e9cc96ce-182b-4231-a5e9-10197e083077-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:07 crc kubenswrapper[4782]: I0202 10:57:07.480750 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f8w9w\" (UniqueName: \"kubernetes.io/projected/821635c8-3cf1-408b-8949-81dbc48b07b6-kube-api-access-f8w9w\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:07 crc kubenswrapper[4782]: I0202 10:57:07.480760 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nxbg5\" (UniqueName: \"kubernetes.io/projected/e9cc96ce-182b-4231-a5e9-10197e083077-kube-api-access-nxbg5\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:07 crc kubenswrapper[4782]: I0202 10:57:07.480770 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gfpw2\" (UniqueName: \"kubernetes.io/projected/c3c77267-9133-440d-9f4e-536b2a021fdc-kube-api-access-gfpw2\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:07 crc kubenswrapper[4782]: I0202 10:57:07.489987 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-6jdgj" event={"ID":"29024188-b374-45b7-ad85-b2d4ca88b485","Type":"ContainerDied","Data":"59931c74c238f44b10e52c4d20d13f519cae8b3cf2b301df562ef56ffaee122d"} Feb 02 10:57:07 crc kubenswrapper[4782]: I0202 10:57:07.490046 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="59931c74c238f44b10e52c4d20d13f519cae8b3cf2b301df562ef56ffaee122d" Feb 02 10:57:07 crc kubenswrapper[4782]: I0202 10:57:07.490127 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-6jdgj" Feb 02 10:57:07 crc kubenswrapper[4782]: I0202 10:57:07.508020 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-sv8l5-config-h25fz" event={"ID":"e9cc96ce-182b-4231-a5e9-10197e083077","Type":"ContainerDied","Data":"9391e44047f757697bd5504265794e7bacaa93018b4f7c703f10a49c0c3e6622"} Feb 02 10:57:07 crc kubenswrapper[4782]: I0202 10:57:07.508087 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9391e44047f757697bd5504265794e7bacaa93018b4f7c703f10a49c0c3e6622" Feb 02 10:57:07 crc kubenswrapper[4782]: I0202 10:57:07.508194 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-sv8l5-config-h25fz" Feb 02 10:57:07 crc kubenswrapper[4782]: I0202 10:57:07.517291 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-8017-account-create-update-t6d9m" event={"ID":"53ddb047-8931-415b-8d0f-d0f73b72c8b3","Type":"ContainerDied","Data":"cf9abdd02e5074feef030e5dd66f4f015eb1117f9a0e9921a5c0f13149d8cfb5"} Feb 02 10:57:07 crc kubenswrapper[4782]: I0202 10:57:07.517318 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-8017-account-create-update-t6d9m" Feb 02 10:57:07 crc kubenswrapper[4782]: I0202 10:57:07.517334 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cf9abdd02e5074feef030e5dd66f4f015eb1117f9a0e9921a5c0f13149d8cfb5" Feb 02 10:57:07 crc kubenswrapper[4782]: I0202 10:57:07.519356 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-7dbcc" event={"ID":"821635c8-3cf1-408b-8949-81dbc48b07b6","Type":"ContainerDied","Data":"70b6d641abeabe7c2253d823ad71d899e1791cab27661eb48d3cc66649421cca"} Feb 02 10:57:07 crc kubenswrapper[4782]: I0202 10:57:07.519400 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="70b6d641abeabe7c2253d823ad71d899e1791cab27661eb48d3cc66649421cca" Feb 02 10:57:07 crc kubenswrapper[4782]: I0202 10:57:07.519474 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-7dbcc" Feb 02 10:57:07 crc kubenswrapper[4782]: I0202 10:57:07.531083 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-0e36-account-create-update-f5556" Feb 02 10:57:07 crc kubenswrapper[4782]: I0202 10:57:07.531081 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-0e36-account-create-update-f5556" event={"ID":"c3c77267-9133-440d-9f4e-536b2a021fdc","Type":"ContainerDied","Data":"03814b857f1ea9d519135baa096d1211b12fbe8c1aa5ce0a643e569e44214fa5"} Feb 02 10:57:07 crc kubenswrapper[4782]: I0202 10:57:07.531195 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="03814b857f1ea9d519135baa096d1211b12fbe8c1aa5ce0a643e569e44214fa5" Feb 02 10:57:07 crc kubenswrapper[4782]: I0202 10:57:07.539154 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-xzm82" Feb 02 10:57:07 crc kubenswrapper[4782]: I0202 10:57:07.539782 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-63a1-account-create-update-4kn5m" Feb 02 10:57:07 crc kubenswrapper[4782]: I0202 10:57:07.543800 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-63a1-account-create-update-4kn5m" event={"ID":"7f8a5cce-1311-4cb0-9a7b-d636e27d6e69","Type":"ContainerDied","Data":"4922ee8804a4e7e6bcbb57aadb57f32946333f0f7b3f9634c0cd775bf2cae365"} Feb 02 10:57:07 crc kubenswrapper[4782]: I0202 10:57:07.543838 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4922ee8804a4e7e6bcbb57aadb57f32946333f0f7b3f9634c0cd775bf2cae365" Feb 02 10:57:08 crc kubenswrapper[4782]: I0202 10:57:08.592880 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-sv8l5-config-h25fz"] Feb 02 10:57:08 crc kubenswrapper[4782]: I0202 10:57:08.603773 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-sv8l5-config-h25fz"] Feb 02 10:57:08 crc kubenswrapper[4782]: I0202 10:57:08.715883 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-sv8l5-config-qwn7m"] Feb 02 10:57:08 crc kubenswrapper[4782]: E0202 10:57:08.716807 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3c77267-9133-440d-9f4e-536b2a021fdc" containerName="mariadb-account-create-update" Feb 02 10:57:08 crc kubenswrapper[4782]: I0202 10:57:08.716889 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3c77267-9133-440d-9f4e-536b2a021fdc" containerName="mariadb-account-create-update" Feb 02 10:57:08 crc kubenswrapper[4782]: E0202 10:57:08.716966 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29024188-b374-45b7-ad85-b2d4ca88b485" containerName="mariadb-account-create-update" Feb 02 10:57:08 crc kubenswrapper[4782]: I0202 10:57:08.717020 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="29024188-b374-45b7-ad85-b2d4ca88b485" containerName="mariadb-account-create-update" Feb 02 10:57:08 crc kubenswrapper[4782]: E0202 10:57:08.717079 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68e5ac2b-72a8-46be-839a-fe639916a32e" containerName="mariadb-database-create" Feb 02 10:57:08 crc kubenswrapper[4782]: I0202 10:57:08.717164 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="68e5ac2b-72a8-46be-839a-fe639916a32e" containerName="mariadb-database-create" Feb 02 10:57:08 crc kubenswrapper[4782]: E0202 10:57:08.717221 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b78c9d8b-0793-4e57-8a3d-ba7303f12d37" containerName="mariadb-database-create" Feb 02 10:57:08 crc kubenswrapper[4782]: I0202 10:57:08.717270 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="b78c9d8b-0793-4e57-8a3d-ba7303f12d37" containerName="mariadb-database-create" Feb 02 10:57:08 crc kubenswrapper[4782]: E0202 10:57:08.717331 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53ddb047-8931-415b-8d0f-d0f73b72c8b3" containerName="mariadb-account-create-update" Feb 02 10:57:08 crc kubenswrapper[4782]: I0202 10:57:08.717392 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="53ddb047-8931-415b-8d0f-d0f73b72c8b3" containerName="mariadb-account-create-update" Feb 02 10:57:08 crc kubenswrapper[4782]: E0202 10:57:08.717457 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9cc96ce-182b-4231-a5e9-10197e083077" containerName="ovn-config" Feb 02 10:57:08 crc kubenswrapper[4782]: I0202 10:57:08.717533 4782 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="e9cc96ce-182b-4231-a5e9-10197e083077" containerName="ovn-config" Feb 02 10:57:08 crc kubenswrapper[4782]: E0202 10:57:08.717619 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f8a5cce-1311-4cb0-9a7b-d636e27d6e69" containerName="mariadb-account-create-update" Feb 02 10:57:08 crc kubenswrapper[4782]: I0202 10:57:08.717708 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f8a5cce-1311-4cb0-9a7b-d636e27d6e69" containerName="mariadb-account-create-update" Feb 02 10:57:08 crc kubenswrapper[4782]: E0202 10:57:08.717790 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="821635c8-3cf1-408b-8949-81dbc48b07b6" containerName="mariadb-database-create" Feb 02 10:57:08 crc kubenswrapper[4782]: I0202 10:57:08.717876 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="821635c8-3cf1-408b-8949-81dbc48b07b6" containerName="mariadb-database-create" Feb 02 10:57:08 crc kubenswrapper[4782]: I0202 10:57:08.718123 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9cc96ce-182b-4231-a5e9-10197e083077" containerName="ovn-config" Feb 02 10:57:08 crc kubenswrapper[4782]: I0202 10:57:08.718209 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="53ddb047-8931-415b-8d0f-d0f73b72c8b3" containerName="mariadb-account-create-update" Feb 02 10:57:08 crc kubenswrapper[4782]: I0202 10:57:08.718294 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="68e5ac2b-72a8-46be-839a-fe639916a32e" containerName="mariadb-database-create" Feb 02 10:57:08 crc kubenswrapper[4782]: I0202 10:57:08.718631 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="b78c9d8b-0793-4e57-8a3d-ba7303f12d37" containerName="mariadb-database-create" Feb 02 10:57:08 crc kubenswrapper[4782]: I0202 10:57:08.718727 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f8a5cce-1311-4cb0-9a7b-d636e27d6e69" containerName="mariadb-account-create-update" Feb 02 10:57:08 crc kubenswrapper[4782]: I0202 10:57:08.718804 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3c77267-9133-440d-9f4e-536b2a021fdc" containerName="mariadb-account-create-update" Feb 02 10:57:08 crc kubenswrapper[4782]: I0202 10:57:08.718891 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="821635c8-3cf1-408b-8949-81dbc48b07b6" containerName="mariadb-database-create" Feb 02 10:57:08 crc kubenswrapper[4782]: I0202 10:57:08.718973 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="29024188-b374-45b7-ad85-b2d4ca88b485" containerName="mariadb-account-create-update" Feb 02 10:57:08 crc kubenswrapper[4782]: I0202 10:57:08.719737 4782 util.go:30] "No sandbox for pod can be found. 
Feb 02 10:57:08 crc kubenswrapper[4782]: I0202 10:57:08.723241 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts"
Feb 02 10:57:08 crc kubenswrapper[4782]: I0202 10:57:08.734835 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-sv8l5-config-qwn7m"]
Feb 02 10:57:08 crc kubenswrapper[4782]: I0202 10:57:08.800219 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pzdkk\" (UniqueName: \"kubernetes.io/projected/099ffa73-778b-4dd4-acae-5efb663dfe17-kube-api-access-pzdkk\") pod \"ovn-controller-sv8l5-config-qwn7m\" (UID: \"099ffa73-778b-4dd4-acae-5efb663dfe17\") " pod="openstack/ovn-controller-sv8l5-config-qwn7m"
Feb 02 10:57:08 crc kubenswrapper[4782]: I0202 10:57:08.800264 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/099ffa73-778b-4dd4-acae-5efb663dfe17-var-run\") pod \"ovn-controller-sv8l5-config-qwn7m\" (UID: \"099ffa73-778b-4dd4-acae-5efb663dfe17\") " pod="openstack/ovn-controller-sv8l5-config-qwn7m"
Feb 02 10:57:08 crc kubenswrapper[4782]: I0202 10:57:08.800297 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/099ffa73-778b-4dd4-acae-5efb663dfe17-scripts\") pod \"ovn-controller-sv8l5-config-qwn7m\" (UID: \"099ffa73-778b-4dd4-acae-5efb663dfe17\") " pod="openstack/ovn-controller-sv8l5-config-qwn7m"
Feb 02 10:57:08 crc kubenswrapper[4782]: I0202 10:57:08.800401 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/099ffa73-778b-4dd4-acae-5efb663dfe17-var-log-ovn\") pod \"ovn-controller-sv8l5-config-qwn7m\" (UID: \"099ffa73-778b-4dd4-acae-5efb663dfe17\") " pod="openstack/ovn-controller-sv8l5-config-qwn7m"
Feb 02 10:57:08 crc kubenswrapper[4782]: I0202 10:57:08.800608 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/099ffa73-778b-4dd4-acae-5efb663dfe17-additional-scripts\") pod \"ovn-controller-sv8l5-config-qwn7m\" (UID: \"099ffa73-778b-4dd4-acae-5efb663dfe17\") " pod="openstack/ovn-controller-sv8l5-config-qwn7m"
Feb 02 10:57:08 crc kubenswrapper[4782]: I0202 10:57:08.800753 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/099ffa73-778b-4dd4-acae-5efb663dfe17-var-run-ovn\") pod \"ovn-controller-sv8l5-config-qwn7m\" (UID: \"099ffa73-778b-4dd4-acae-5efb663dfe17\") " pod="openstack/ovn-controller-sv8l5-config-qwn7m"
Feb 02 10:57:08 crc kubenswrapper[4782]: I0202 10:57:08.833836 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e9cc96ce-182b-4231-a5e9-10197e083077" path="/var/lib/kubelet/pods/e9cc96ce-182b-4231-a5e9-10197e083077/volumes"
Feb 02 10:57:08 crc kubenswrapper[4782]: I0202 10:57:08.901830 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/099ffa73-778b-4dd4-acae-5efb663dfe17-var-run-ovn\") pod \"ovn-controller-sv8l5-config-qwn7m\" (UID: \"099ffa73-778b-4dd4-acae-5efb663dfe17\") " pod="openstack/ovn-controller-sv8l5-config-qwn7m"
Feb 02 10:57:08 crc kubenswrapper[4782]: I0202 10:57:08.901917 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pzdkk\" (UniqueName: \"kubernetes.io/projected/099ffa73-778b-4dd4-acae-5efb663dfe17-kube-api-access-pzdkk\") pod \"ovn-controller-sv8l5-config-qwn7m\" (UID: \"099ffa73-778b-4dd4-acae-5efb663dfe17\") " pod="openstack/ovn-controller-sv8l5-config-qwn7m"
Feb 02 10:57:08 crc kubenswrapper[4782]: I0202 10:57:08.901939 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/099ffa73-778b-4dd4-acae-5efb663dfe17-var-run\") pod \"ovn-controller-sv8l5-config-qwn7m\" (UID: \"099ffa73-778b-4dd4-acae-5efb663dfe17\") " pod="openstack/ovn-controller-sv8l5-config-qwn7m"
Feb 02 10:57:08 crc kubenswrapper[4782]: I0202 10:57:08.901972 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/099ffa73-778b-4dd4-acae-5efb663dfe17-scripts\") pod \"ovn-controller-sv8l5-config-qwn7m\" (UID: \"099ffa73-778b-4dd4-acae-5efb663dfe17\") " pod="openstack/ovn-controller-sv8l5-config-qwn7m"
Feb 02 10:57:08 crc kubenswrapper[4782]: I0202 10:57:08.901989 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/099ffa73-778b-4dd4-acae-5efb663dfe17-var-log-ovn\") pod \"ovn-controller-sv8l5-config-qwn7m\" (UID: \"099ffa73-778b-4dd4-acae-5efb663dfe17\") " pod="openstack/ovn-controller-sv8l5-config-qwn7m"
Feb 02 10:57:08 crc kubenswrapper[4782]: I0202 10:57:08.902037 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/099ffa73-778b-4dd4-acae-5efb663dfe17-additional-scripts\") pod \"ovn-controller-sv8l5-config-qwn7m\" (UID: \"099ffa73-778b-4dd4-acae-5efb663dfe17\") " pod="openstack/ovn-controller-sv8l5-config-qwn7m"
Feb 02 10:57:08 crc kubenswrapper[4782]: I0202 10:57:08.902880 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/099ffa73-778b-4dd4-acae-5efb663dfe17-additional-scripts\") pod \"ovn-controller-sv8l5-config-qwn7m\" (UID: \"099ffa73-778b-4dd4-acae-5efb663dfe17\") " pod="openstack/ovn-controller-sv8l5-config-qwn7m"
Feb 02 10:57:08 crc kubenswrapper[4782]: I0202 10:57:08.903142 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/099ffa73-778b-4dd4-acae-5efb663dfe17-var-run-ovn\") pod \"ovn-controller-sv8l5-config-qwn7m\" (UID: \"099ffa73-778b-4dd4-acae-5efb663dfe17\") " pod="openstack/ovn-controller-sv8l5-config-qwn7m"
Feb 02 10:57:08 crc kubenswrapper[4782]: I0202 10:57:08.903164 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/099ffa73-778b-4dd4-acae-5efb663dfe17-var-run\") pod \"ovn-controller-sv8l5-config-qwn7m\" (UID: \"099ffa73-778b-4dd4-acae-5efb663dfe17\") " pod="openstack/ovn-controller-sv8l5-config-qwn7m"
Feb 02 10:57:08 crc kubenswrapper[4782]: I0202 10:57:08.903211 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/099ffa73-778b-4dd4-acae-5efb663dfe17-var-log-ovn\") pod \"ovn-controller-sv8l5-config-qwn7m\" (UID: \"099ffa73-778b-4dd4-acae-5efb663dfe17\") " pod="openstack/ovn-controller-sv8l5-config-qwn7m"
Feb 02 10:57:08 crc kubenswrapper[4782]: I0202 10:57:08.905545 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/099ffa73-778b-4dd4-acae-5efb663dfe17-scripts\") pod \"ovn-controller-sv8l5-config-qwn7m\" (UID: \"099ffa73-778b-4dd4-acae-5efb663dfe17\") " pod="openstack/ovn-controller-sv8l5-config-qwn7m"
Feb 02 10:57:08 crc kubenswrapper[4782]: I0202 10:57:08.924408 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pzdkk\" (UniqueName: \"kubernetes.io/projected/099ffa73-778b-4dd4-acae-5efb663dfe17-kube-api-access-pzdkk\") pod \"ovn-controller-sv8l5-config-qwn7m\" (UID: \"099ffa73-778b-4dd4-acae-5efb663dfe17\") " pod="openstack/ovn-controller-sv8l5-config-qwn7m"
Feb 02 10:57:09 crc kubenswrapper[4782]: I0202 10:57:09.040247 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-sv8l5-config-qwn7m"
Feb 02 10:57:11 crc kubenswrapper[4782]: I0202 10:57:11.971346 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-sv8l5-config-qwn7m"]
Feb 02 10:57:12 crc kubenswrapper[4782]: I0202 10:57:12.588799 4782 generic.go:334] "Generic (PLEG): container finished" podID="099ffa73-778b-4dd4-acae-5efb663dfe17" containerID="602ab9da4d7f46c94dc61771e2c8b8b42a379bdff5c5bea8faa66082cc751118" exitCode=0
Feb 02 10:57:12 crc kubenswrapper[4782]: I0202 10:57:12.588897 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-sv8l5-config-qwn7m" event={"ID":"099ffa73-778b-4dd4-acae-5efb663dfe17","Type":"ContainerDied","Data":"602ab9da4d7f46c94dc61771e2c8b8b42a379bdff5c5bea8faa66082cc751118"}
Feb 02 10:57:12 crc kubenswrapper[4782]: I0202 10:57:12.588931 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-sv8l5-config-qwn7m" event={"ID":"099ffa73-778b-4dd4-acae-5efb663dfe17","Type":"ContainerStarted","Data":"3bcb421a0ed0bb31e295fe54870b127532fe03089b480a8ab1e0677c96fadf81"}
Feb 02 10:57:12 crc kubenswrapper[4782]: I0202 10:57:12.590404 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-v4g2v" event={"ID":"843d8da2-ab8c-4938-be4b-aa67af531e1e","Type":"ContainerStarted","Data":"a2c467e584e5732352c3aaba01db962a0b1958e32d2c79a6365d1b8fe2d96e2c"}
Feb 02 10:57:12 crc kubenswrapper[4782]: I0202 10:57:12.660563 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-v4g2v" podStartSLOduration=2.747393455 podStartE2EDuration="10.660538394s" podCreationTimestamp="2026-02-02 10:57:02 +0000 UTC" firstStartedPulling="2026-02-02 10:57:03.790406442 +0000 UTC m=+1103.674599158" lastFinishedPulling="2026-02-02 10:57:11.703551381 +0000 UTC m=+1111.587744097" observedRunningTime="2026-02-02 10:57:12.653625666 +0000 UTC m=+1112.537818392" watchObservedRunningTime="2026-02-02 10:57:12.660538394 +0000 UTC m=+1112.544731110"
Feb 02 10:57:13 crc kubenswrapper[4782]: I0202 10:57:13.920344 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-sv8l5-config-qwn7m"
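The "Observed pod startup duration" entry above is internally consistent: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration equals that minus the image-pull window (lastFinishedPulling minus firstStartedPulling). A minimal Go cross-check using only timestamps copied from the log line; the SLO-excludes-pull-time relationship is inferred from these values, not quoted from kubelet documentation:

    package main

    import (
        "fmt"
        "time"
    )

    // mustParse reads timestamps in the format the kubelet prints them.
    func mustParse(s string) time.Time {
        t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
        if err != nil {
            panic(err)
        }
        return t
    }

    func main() {
        created := mustParse("2026-02-02 10:57:02 +0000 UTC")
        firstPull := mustParse("2026-02-02 10:57:03.790406442 +0000 UTC")
        lastPull := mustParse("2026-02-02 10:57:11.703551381 +0000 UTC")
        observed := mustParse("2026-02-02 10:57:12.660538394 +0000 UTC") // watchObservedRunningTime

        e2e := observed.Sub(created)    // 10.660538394s = podStartE2EDuration
        pull := lastPull.Sub(firstPull) // 7.913144939s spent pulling the image
        fmt.Println(e2e, pull, e2e-pull) // e2e-pull = 2.747393455s = podStartSLOduration
    }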
Feb 02 10:57:14 crc kubenswrapper[4782]: I0202 10:57:14.087473 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/099ffa73-778b-4dd4-acae-5efb663dfe17-var-run-ovn\") pod \"099ffa73-778b-4dd4-acae-5efb663dfe17\" (UID: \"099ffa73-778b-4dd4-acae-5efb663dfe17\") "
Feb 02 10:57:14 crc kubenswrapper[4782]: I0202 10:57:14.087576 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/099ffa73-778b-4dd4-acae-5efb663dfe17-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "099ffa73-778b-4dd4-acae-5efb663dfe17" (UID: "099ffa73-778b-4dd4-acae-5efb663dfe17"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 02 10:57:14 crc kubenswrapper[4782]: I0202 10:57:14.087588 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/099ffa73-778b-4dd4-acae-5efb663dfe17-var-run\") pod \"099ffa73-778b-4dd4-acae-5efb663dfe17\" (UID: \"099ffa73-778b-4dd4-acae-5efb663dfe17\") "
Feb 02 10:57:14 crc kubenswrapper[4782]: I0202 10:57:14.087633 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/099ffa73-778b-4dd4-acae-5efb663dfe17-var-run" (OuterVolumeSpecName: "var-run") pod "099ffa73-778b-4dd4-acae-5efb663dfe17" (UID: "099ffa73-778b-4dd4-acae-5efb663dfe17"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 02 10:57:14 crc kubenswrapper[4782]: I0202 10:57:14.087698 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/099ffa73-778b-4dd4-acae-5efb663dfe17-additional-scripts\") pod \"099ffa73-778b-4dd4-acae-5efb663dfe17\" (UID: \"099ffa73-778b-4dd4-acae-5efb663dfe17\") "
Feb 02 10:57:14 crc kubenswrapper[4782]: I0202 10:57:14.087790 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/099ffa73-778b-4dd4-acae-5efb663dfe17-scripts\") pod \"099ffa73-778b-4dd4-acae-5efb663dfe17\" (UID: \"099ffa73-778b-4dd4-acae-5efb663dfe17\") "
Feb 02 10:57:14 crc kubenswrapper[4782]: I0202 10:57:14.087831 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/099ffa73-778b-4dd4-acae-5efb663dfe17-var-log-ovn\") pod \"099ffa73-778b-4dd4-acae-5efb663dfe17\" (UID: \"099ffa73-778b-4dd4-acae-5efb663dfe17\") "
Feb 02 10:57:14 crc kubenswrapper[4782]: I0202 10:57:14.087866 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pzdkk\" (UniqueName: \"kubernetes.io/projected/099ffa73-778b-4dd4-acae-5efb663dfe17-kube-api-access-pzdkk\") pod \"099ffa73-778b-4dd4-acae-5efb663dfe17\" (UID: \"099ffa73-778b-4dd4-acae-5efb663dfe17\") "
Feb 02 10:57:14 crc kubenswrapper[4782]: I0202 10:57:14.088378 4782 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/099ffa73-778b-4dd4-acae-5efb663dfe17-var-run\") on node \"crc\" DevicePath \"\""
Feb 02 10:57:14 crc kubenswrapper[4782]: I0202 10:57:14.088396 4782 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/099ffa73-778b-4dd4-acae-5efb663dfe17-var-run-ovn\") on node \"crc\" DevicePath \"\""
Feb 02 10:57:14 crc kubenswrapper[4782]: I0202 10:57:14.089503 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/099ffa73-778b-4dd4-acae-5efb663dfe17-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "099ffa73-778b-4dd4-acae-5efb663dfe17" (UID: "099ffa73-778b-4dd4-acae-5efb663dfe17"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 02 10:57:14 crc kubenswrapper[4782]: I0202 10:57:14.089954 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/099ffa73-778b-4dd4-acae-5efb663dfe17-scripts" (OuterVolumeSpecName: "scripts") pod "099ffa73-778b-4dd4-acae-5efb663dfe17" (UID: "099ffa73-778b-4dd4-acae-5efb663dfe17"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 10:57:14 crc kubenswrapper[4782]: I0202 10:57:14.090269 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/099ffa73-778b-4dd4-acae-5efb663dfe17-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "099ffa73-778b-4dd4-acae-5efb663dfe17" (UID: "099ffa73-778b-4dd4-acae-5efb663dfe17"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 10:57:14 crc kubenswrapper[4782]: I0202 10:57:14.103352 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/099ffa73-778b-4dd4-acae-5efb663dfe17-kube-api-access-pzdkk" (OuterVolumeSpecName: "kube-api-access-pzdkk") pod "099ffa73-778b-4dd4-acae-5efb663dfe17" (UID: "099ffa73-778b-4dd4-acae-5efb663dfe17"). InnerVolumeSpecName "kube-api-access-pzdkk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 10:57:14 crc kubenswrapper[4782]: I0202 10:57:14.190454 4782 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/099ffa73-778b-4dd4-acae-5efb663dfe17-scripts\") on node \"crc\" DevicePath \"\""
Feb 02 10:57:14 crc kubenswrapper[4782]: I0202 10:57:14.190498 4782 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/099ffa73-778b-4dd4-acae-5efb663dfe17-var-log-ovn\") on node \"crc\" DevicePath \"\""
Feb 02 10:57:14 crc kubenswrapper[4782]: I0202 10:57:14.190511 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pzdkk\" (UniqueName: \"kubernetes.io/projected/099ffa73-778b-4dd4-acae-5efb663dfe17-kube-api-access-pzdkk\") on node \"crc\" DevicePath \"\""
Feb 02 10:57:14 crc kubenswrapper[4782]: I0202 10:57:14.190526 4782 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/099ffa73-778b-4dd4-acae-5efb663dfe17-additional-scripts\") on node \"crc\" DevicePath \"\""
Feb 02 10:57:14 crc kubenswrapper[4782]: I0202 10:57:14.611962 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-sv8l5-config-qwn7m" event={"ID":"099ffa73-778b-4dd4-acae-5efb663dfe17","Type":"ContainerDied","Data":"3bcb421a0ed0bb31e295fe54870b127532fe03089b480a8ab1e0677c96fadf81"}
Feb 02 10:57:14 crc kubenswrapper[4782]: I0202 10:57:14.612028 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3bcb421a0ed0bb31e295fe54870b127532fe03089b480a8ab1e0677c96fadf81"
Feb 02 10:57:14 crc kubenswrapper[4782]: I0202 10:57:14.612129 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-sv8l5-config-qwn7m"
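The "SyncLoop (PLEG)" entries above carry a JSON payload: event={"ID":"<pod UID>","Type":"ContainerDied","Data":"<container ID>"}. A minimal mirror type for pulling those fields out of extracted log text; it mirrors only what the log prints, not kubelet's internal PLEG type:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    type plegEvent struct {
        ID   string `json:"ID"`   // pod UID
        Type string `json:"Type"` // e.g. ContainerStarted, ContainerDied
        Data string `json:"Data"` // container or sandbox ID
    }

    func main() {
        // Payload copied from the ContainerDied entry above.
        raw := `{"ID":"099ffa73-778b-4dd4-acae-5efb663dfe17","Type":"ContainerDied","Data":"3bcb421a0ed0bb31e295fe54870b127532fe03089b480a8ab1e0677c96fadf81"}`
        var ev plegEvent
        if err := json.Unmarshal([]byte(raw), &ev); err != nil {
            panic(err)
        }
        fmt.Printf("pod %s: %s %s\n", ev.ID, ev.Type, ev.Data)
    }

Note the ordering visible in the log: the ContainerDied event for the sandbox ID 3bcb421a... is immediately followed by pod_container_deletor reporting that ID as "Container not found in pod's containers", i.e. the died "container" was the pod sandbox itself.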
Feb 02 10:57:15 crc kubenswrapper[4782]: I0202 10:57:15.000952 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-sv8l5-config-qwn7m"]
Feb 02 10:57:15 crc kubenswrapper[4782]: I0202 10:57:15.009838 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-sv8l5-config-qwn7m"]
Feb 02 10:57:15 crc kubenswrapper[4782]: I0202 10:57:15.623365 4782 generic.go:334] "Generic (PLEG): container finished" podID="843d8da2-ab8c-4938-be4b-aa67af531e1e" containerID="a2c467e584e5732352c3aaba01db962a0b1958e32d2c79a6365d1b8fe2d96e2c" exitCode=0
Feb 02 10:57:15 crc kubenswrapper[4782]: I0202 10:57:15.623418 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-v4g2v" event={"ID":"843d8da2-ab8c-4938-be4b-aa67af531e1e","Type":"ContainerDied","Data":"a2c467e584e5732352c3aaba01db962a0b1958e32d2c79a6365d1b8fe2d96e2c"}
Feb 02 10:57:16 crc kubenswrapper[4782]: I0202 10:57:16.837710 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="099ffa73-778b-4dd4-acae-5efb663dfe17" path="/var/lib/kubelet/pods/099ffa73-778b-4dd4-acae-5efb663dfe17/volumes"
Feb 02 10:57:16 crc kubenswrapper[4782]: I0202 10:57:16.891760 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-v4g2v"
Feb 02 10:57:17 crc kubenswrapper[4782]: I0202 10:57:17.037961 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pvls2\" (UniqueName: \"kubernetes.io/projected/843d8da2-ab8c-4938-be4b-aa67af531e1e-kube-api-access-pvls2\") pod \"843d8da2-ab8c-4938-be4b-aa67af531e1e\" (UID: \"843d8da2-ab8c-4938-be4b-aa67af531e1e\") "
Feb 02 10:57:17 crc kubenswrapper[4782]: I0202 10:57:17.038563 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/843d8da2-ab8c-4938-be4b-aa67af531e1e-combined-ca-bundle\") pod \"843d8da2-ab8c-4938-be4b-aa67af531e1e\" (UID: \"843d8da2-ab8c-4938-be4b-aa67af531e1e\") "
Feb 02 10:57:17 crc kubenswrapper[4782]: I0202 10:57:17.038861 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/843d8da2-ab8c-4938-be4b-aa67af531e1e-config-data\") pod \"843d8da2-ab8c-4938-be4b-aa67af531e1e\" (UID: \"843d8da2-ab8c-4938-be4b-aa67af531e1e\") "
Feb 02 10:57:17 crc kubenswrapper[4782]: I0202 10:57:17.044297 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/843d8da2-ab8c-4938-be4b-aa67af531e1e-kube-api-access-pvls2" (OuterVolumeSpecName: "kube-api-access-pvls2") pod "843d8da2-ab8c-4938-be4b-aa67af531e1e" (UID: "843d8da2-ab8c-4938-be4b-aa67af531e1e"). InnerVolumeSpecName "kube-api-access-pvls2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 10:57:17 crc kubenswrapper[4782]: I0202 10:57:17.061844 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/843d8da2-ab8c-4938-be4b-aa67af531e1e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "843d8da2-ab8c-4938-be4b-aa67af531e1e" (UID: "843d8da2-ab8c-4938-be4b-aa67af531e1e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 10:57:17 crc kubenswrapper[4782]: I0202 10:57:17.087631 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/843d8da2-ab8c-4938-be4b-aa67af531e1e-config-data" (OuterVolumeSpecName: "config-data") pod "843d8da2-ab8c-4938-be4b-aa67af531e1e" (UID: "843d8da2-ab8c-4938-be4b-aa67af531e1e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 10:57:17 crc kubenswrapper[4782]: I0202 10:57:17.142203 4782 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/843d8da2-ab8c-4938-be4b-aa67af531e1e-config-data\") on node \"crc\" DevicePath \"\""
Feb 02 10:57:17 crc kubenswrapper[4782]: I0202 10:57:17.142248 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pvls2\" (UniqueName: \"kubernetes.io/projected/843d8da2-ab8c-4938-be4b-aa67af531e1e-kube-api-access-pvls2\") on node \"crc\" DevicePath \"\""
Feb 02 10:57:17 crc kubenswrapper[4782]: I0202 10:57:17.142261 4782 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/843d8da2-ab8c-4938-be4b-aa67af531e1e-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 02 10:57:17 crc kubenswrapper[4782]: I0202 10:57:17.646142 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-v4g2v" event={"ID":"843d8da2-ab8c-4938-be4b-aa67af531e1e","Type":"ContainerDied","Data":"e20283ae35f20eb57ed6ab460abe821456fab6128c714eba5f3a94fb7961e8ce"}
Feb 02 10:57:17 crc kubenswrapper[4782]: I0202 10:57:17.646190 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e20283ae35f20eb57ed6ab460abe821456fab6128c714eba5f3a94fb7961e8ce"
Feb 02 10:57:17 crc kubenswrapper[4782]: I0202 10:57:17.646237 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-v4g2v"
Feb 02 10:57:17 crc kubenswrapper[4782]: I0202 10:57:17.937181 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-75bb4695fc-g7q5g"]
Feb 02 10:57:17 crc kubenswrapper[4782]: E0202 10:57:17.937900 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="843d8da2-ab8c-4938-be4b-aa67af531e1e" containerName="keystone-db-sync"
Feb 02 10:57:17 crc kubenswrapper[4782]: I0202 10:57:17.937918 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="843d8da2-ab8c-4938-be4b-aa67af531e1e" containerName="keystone-db-sync"
Feb 02 10:57:17 crc kubenswrapper[4782]: E0202 10:57:17.937952 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="099ffa73-778b-4dd4-acae-5efb663dfe17" containerName="ovn-config"
Feb 02 10:57:17 crc kubenswrapper[4782]: I0202 10:57:17.937969 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="099ffa73-778b-4dd4-acae-5efb663dfe17" containerName="ovn-config"
Feb 02 10:57:17 crc kubenswrapper[4782]: I0202 10:57:17.938335 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="099ffa73-778b-4dd4-acae-5efb663dfe17" containerName="ovn-config"
Feb 02 10:57:17 crc kubenswrapper[4782]: I0202 10:57:17.938352 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="843d8da2-ab8c-4938-be4b-aa67af531e1e" containerName="keystone-db-sync"
Feb 02 10:57:17 crc kubenswrapper[4782]: I0202 10:57:17.939546 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75bb4695fc-g7q5g"
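Each "UnmountVolume.TearDown succeeded" entry above names the plugin-qualified volume path, the OuterVolumeSpecName, and the pod UID. A small sketch for tallying those entries; the regexp is written for the exact format shown in this log:

    package main

    import (
        "fmt"
        "regexp"
    )

    var tearDownRe = regexp.MustCompile(
        `UnmountVolume\.TearDown succeeded for volume "(kubernetes\.io/[^/]+)/[^"]+" \(OuterVolumeSpecName: "([^"]+)"\) pod "([^"]+)"`)

    func main() {
        // Line copied (minus the journald prefix) from the entries above.
        line := `UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/843d8da2-ab8c-4938-be4b-aa67af531e1e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "843d8da2-ab8c-4938-be4b-aa67af531e1e" (UID: "843d8da2-ab8c-4938-be4b-aa67af531e1e").`
        if m := tearDownRe.FindStringSubmatch(line); m != nil {
            fmt.Printf("plugin=%s volume=%s pod=%s\n", m[1], m[2], m[3])
        }
    }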
Feb 02 10:57:17 crc kubenswrapper[4782]: I0202 10:57:17.950274 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-wv5hq"]
Feb 02 10:57:17 crc kubenswrapper[4782]: I0202 10:57:17.951706 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-wv5hq"
Feb 02 10:57:17 crc kubenswrapper[4782]: I0202 10:57:17.958842 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Feb 02 10:57:17 crc kubenswrapper[4782]: I0202 10:57:17.959367 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-9pmlq"
Feb 02 10:57:17 crc kubenswrapper[4782]: I0202 10:57:17.959496 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Feb 02 10:57:17 crc kubenswrapper[4782]: I0202 10:57:17.959585 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret"
Feb 02 10:57:17 crc kubenswrapper[4782]: I0202 10:57:17.959376 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-75bb4695fc-g7q5g"]
Feb 02 10:57:17 crc kubenswrapper[4782]: I0202 10:57:17.959730 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Feb 02 10:57:17 crc kubenswrapper[4782]: I0202 10:57:17.997092 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-wv5hq"]
Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.059353 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c847df54-dabc-4a1c-a7dc-4d5c69b548fe-fernet-keys\") pod \"keystone-bootstrap-wv5hq\" (UID: \"c847df54-dabc-4a1c-a7dc-4d5c69b548fe\") " pod="openstack/keystone-bootstrap-wv5hq"
Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.059400 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c847df54-dabc-4a1c-a7dc-4d5c69b548fe-credential-keys\") pod \"keystone-bootstrap-wv5hq\" (UID: \"c847df54-dabc-4a1c-a7dc-4d5c69b548fe\") " pod="openstack/keystone-bootstrap-wv5hq"
Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.059426 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/08c688f3-2e65-48cc-8394-c3b87053d840-ovsdbserver-sb\") pod \"dnsmasq-dns-75bb4695fc-g7q5g\" (UID: \"08c688f3-2e65-48cc-8394-c3b87053d840\") " pod="openstack/dnsmasq-dns-75bb4695fc-g7q5g"
Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.059457 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c847df54-dabc-4a1c-a7dc-4d5c69b548fe-scripts\") pod \"keystone-bootstrap-wv5hq\" (UID: \"c847df54-dabc-4a1c-a7dc-4d5c69b548fe\") " pod="openstack/keystone-bootstrap-wv5hq"
Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.059485 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c847df54-dabc-4a1c-a7dc-4d5c69b548fe-combined-ca-bundle\") pod \"keystone-bootstrap-wv5hq\" (UID: \"c847df54-dabc-4a1c-a7dc-4d5c69b548fe\") " pod="openstack/keystone-bootstrap-wv5hq"
Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.059510 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/08c688f3-2e65-48cc-8394-c3b87053d840-config\") pod \"dnsmasq-dns-75bb4695fc-g7q5g\" (UID: \"08c688f3-2e65-48cc-8394-c3b87053d840\") " pod="openstack/dnsmasq-dns-75bb4695fc-g7q5g"
Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.059535 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c847df54-dabc-4a1c-a7dc-4d5c69b548fe-config-data\") pod \"keystone-bootstrap-wv5hq\" (UID: \"c847df54-dabc-4a1c-a7dc-4d5c69b548fe\") " pod="openstack/keystone-bootstrap-wv5hq"
Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.059564 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29bhd\" (UniqueName: \"kubernetes.io/projected/c847df54-dabc-4a1c-a7dc-4d5c69b548fe-kube-api-access-29bhd\") pod \"keystone-bootstrap-wv5hq\" (UID: \"c847df54-dabc-4a1c-a7dc-4d5c69b548fe\") " pod="openstack/keystone-bootstrap-wv5hq"
Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.059584 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/08c688f3-2e65-48cc-8394-c3b87053d840-ovsdbserver-nb\") pod \"dnsmasq-dns-75bb4695fc-g7q5g\" (UID: \"08c688f3-2e65-48cc-8394-c3b87053d840\") " pod="openstack/dnsmasq-dns-75bb4695fc-g7q5g"
Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.059616 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9tgl\" (UniqueName: \"kubernetes.io/projected/08c688f3-2e65-48cc-8394-c3b87053d840-kube-api-access-v9tgl\") pod \"dnsmasq-dns-75bb4695fc-g7q5g\" (UID: \"08c688f3-2e65-48cc-8394-c3b87053d840\") " pod="openstack/dnsmasq-dns-75bb4695fc-g7q5g"
Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.059632 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/08c688f3-2e65-48cc-8394-c3b87053d840-dns-svc\") pod \"dnsmasq-dns-75bb4695fc-g7q5g\" (UID: \"08c688f3-2e65-48cc-8394-c3b87053d840\") " pod="openstack/dnsmasq-dns-75bb4695fc-g7q5g"
Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.161121 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v9tgl\" (UniqueName: \"kubernetes.io/projected/08c688f3-2e65-48cc-8394-c3b87053d840-kube-api-access-v9tgl\") pod \"dnsmasq-dns-75bb4695fc-g7q5g\" (UID: \"08c688f3-2e65-48cc-8394-c3b87053d840\") " pod="openstack/dnsmasq-dns-75bb4695fc-g7q5g"
Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.161162 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/08c688f3-2e65-48cc-8394-c3b87053d840-dns-svc\") pod \"dnsmasq-dns-75bb4695fc-g7q5g\" (UID: \"08c688f3-2e65-48cc-8394-c3b87053d840\") " pod="openstack/dnsmasq-dns-75bb4695fc-g7q5g"
Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.161185 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c847df54-dabc-4a1c-a7dc-4d5c69b548fe-fernet-keys\") pod \"keystone-bootstrap-wv5hq\" (UID: \"c847df54-dabc-4a1c-a7dc-4d5c69b548fe\") " pod="openstack/keystone-bootstrap-wv5hq"
Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.161205 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c847df54-dabc-4a1c-a7dc-4d5c69b548fe-credential-keys\") pod \"keystone-bootstrap-wv5hq\" (UID: \"c847df54-dabc-4a1c-a7dc-4d5c69b548fe\") " pod="openstack/keystone-bootstrap-wv5hq"
Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.161227 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/08c688f3-2e65-48cc-8394-c3b87053d840-ovsdbserver-sb\") pod \"dnsmasq-dns-75bb4695fc-g7q5g\" (UID: \"08c688f3-2e65-48cc-8394-c3b87053d840\") " pod="openstack/dnsmasq-dns-75bb4695fc-g7q5g"
Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.161253 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c847df54-dabc-4a1c-a7dc-4d5c69b548fe-scripts\") pod \"keystone-bootstrap-wv5hq\" (UID: \"c847df54-dabc-4a1c-a7dc-4d5c69b548fe\") " pod="openstack/keystone-bootstrap-wv5hq"
Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.161278 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c847df54-dabc-4a1c-a7dc-4d5c69b548fe-combined-ca-bundle\") pod \"keystone-bootstrap-wv5hq\" (UID: \"c847df54-dabc-4a1c-a7dc-4d5c69b548fe\") " pod="openstack/keystone-bootstrap-wv5hq"
Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.161302 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/08c688f3-2e65-48cc-8394-c3b87053d840-config\") pod \"dnsmasq-dns-75bb4695fc-g7q5g\" (UID: \"08c688f3-2e65-48cc-8394-c3b87053d840\") " pod="openstack/dnsmasq-dns-75bb4695fc-g7q5g"
Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.161328 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c847df54-dabc-4a1c-a7dc-4d5c69b548fe-config-data\") pod \"keystone-bootstrap-wv5hq\" (UID: \"c847df54-dabc-4a1c-a7dc-4d5c69b548fe\") " pod="openstack/keystone-bootstrap-wv5hq"
Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.161357 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-29bhd\" (UniqueName: \"kubernetes.io/projected/c847df54-dabc-4a1c-a7dc-4d5c69b548fe-kube-api-access-29bhd\") pod \"keystone-bootstrap-wv5hq\" (UID: \"c847df54-dabc-4a1c-a7dc-4d5c69b548fe\") " pod="openstack/keystone-bootstrap-wv5hq"
Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.161378 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/08c688f3-2e65-48cc-8394-c3b87053d840-ovsdbserver-nb\") pod \"dnsmasq-dns-75bb4695fc-g7q5g\" (UID: \"08c688f3-2e65-48cc-8394-c3b87053d840\") " pod="openstack/dnsmasq-dns-75bb4695fc-g7q5g"
Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.162253 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/08c688f3-2e65-48cc-8394-c3b87053d840-ovsdbserver-nb\") pod \"dnsmasq-dns-75bb4695fc-g7q5g\" (UID: \"08c688f3-2e65-48cc-8394-c3b87053d840\") " pod="openstack/dnsmasq-dns-75bb4695fc-g7q5g"
Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.162734 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/08c688f3-2e65-48cc-8394-c3b87053d840-config\") pod \"dnsmasq-dns-75bb4695fc-g7q5g\" (UID: \"08c688f3-2e65-48cc-8394-c3b87053d840\") " pod="openstack/dnsmasq-dns-75bb4695fc-g7q5g"
Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.162876 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/08c688f3-2e65-48cc-8394-c3b87053d840-dns-svc\") pod \"dnsmasq-dns-75bb4695fc-g7q5g\" (UID: \"08c688f3-2e65-48cc-8394-c3b87053d840\") " pod="openstack/dnsmasq-dns-75bb4695fc-g7q5g"
Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.163717 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/08c688f3-2e65-48cc-8394-c3b87053d840-ovsdbserver-sb\") pod \"dnsmasq-dns-75bb4695fc-g7q5g\" (UID: \"08c688f3-2e65-48cc-8394-c3b87053d840\") " pod="openstack/dnsmasq-dns-75bb4695fc-g7q5g"
Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.174968 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c847df54-dabc-4a1c-a7dc-4d5c69b548fe-combined-ca-bundle\") pod \"keystone-bootstrap-wv5hq\" (UID: \"c847df54-dabc-4a1c-a7dc-4d5c69b548fe\") " pod="openstack/keystone-bootstrap-wv5hq"
Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.176936 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c847df54-dabc-4a1c-a7dc-4d5c69b548fe-credential-keys\") pod \"keystone-bootstrap-wv5hq\" (UID: \"c847df54-dabc-4a1c-a7dc-4d5c69b548fe\") " pod="openstack/keystone-bootstrap-wv5hq"
Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.186128 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c847df54-dabc-4a1c-a7dc-4d5c69b548fe-scripts\") pod \"keystone-bootstrap-wv5hq\" (UID: \"c847df54-dabc-4a1c-a7dc-4d5c69b548fe\") " pod="openstack/keystone-bootstrap-wv5hq"
Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.187715 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c847df54-dabc-4a1c-a7dc-4d5c69b548fe-config-data\") pod \"keystone-bootstrap-wv5hq\" (UID: \"c847df54-dabc-4a1c-a7dc-4d5c69b548fe\") " pod="openstack/keystone-bootstrap-wv5hq"
Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.193869 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c847df54-dabc-4a1c-a7dc-4d5c69b548fe-fernet-keys\") pod \"keystone-bootstrap-wv5hq\" (UID: \"c847df54-dabc-4a1c-a7dc-4d5c69b548fe\") " pod="openstack/keystone-bootstrap-wv5hq"
Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.254710 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v9tgl\" (UniqueName: \"kubernetes.io/projected/08c688f3-2e65-48cc-8394-c3b87053d840-kube-api-access-v9tgl\") pod \"dnsmasq-dns-75bb4695fc-g7q5g\" (UID: \"08c688f3-2e65-48cc-8394-c3b87053d840\") " pod="openstack/dnsmasq-dns-75bb4695fc-g7q5g"
Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.255920 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-29bhd\" (UniqueName: \"kubernetes.io/projected/c847df54-dabc-4a1c-a7dc-4d5c69b548fe-kube-api-access-29bhd\") pod \"keystone-bootstrap-wv5hq\" (UID: \"c847df54-dabc-4a1c-a7dc-4d5c69b548fe\") " pod="openstack/keystone-bootstrap-wv5hq"
Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.272019 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75bb4695fc-g7q5g"
Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.275921 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-wv5hq"
Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.469464 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-qjtml"]
Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.473466 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-qjtml"
Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.478158 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data"
Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.478876 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-tpp6m"
Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.510834 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75bb4695fc-g7q5g"]
Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.579864 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-qjtml"]
Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.594252 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/14e3fab7-be93-409c-a88e-85c8d0ca533c-db-sync-config-data\") pod \"barbican-db-sync-qjtml\" (UID: \"14e3fab7-be93-409c-a88e-85c8d0ca533c\") " pod="openstack/barbican-db-sync-qjtml"
Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.594376 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14e3fab7-be93-409c-a88e-85c8d0ca533c-combined-ca-bundle\") pod \"barbican-db-sync-qjtml\" (UID: \"14e3fab7-be93-409c-a88e-85c8d0ca533c\") " pod="openstack/barbican-db-sync-qjtml"
Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.594490 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbbw7\" (UniqueName: \"kubernetes.io/projected/14e3fab7-be93-409c-a88e-85c8d0ca533c-kube-api-access-jbbw7\") pod \"barbican-db-sync-qjtml\" (UID: \"14e3fab7-be93-409c-a88e-85c8d0ca533c\") " pod="openstack/barbican-db-sync-qjtml"
Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.599765 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-9zhdd"]
Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.601050 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-9zhdd"
Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.612162 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data"
Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.612262 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts"
Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.612364 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-k6962"
Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.622049 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-9zhdd"]
Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.644729 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-rvrqj"]
Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.645697 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-rvrqj"
Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.649758 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data"
Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.649998 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts"
Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.650164 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-l47mf"
Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.699294 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-745b9ddc8c-tg7wz"]
Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.700116 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/173458b2-9a63-4456-9bc9-698d1414a679-scripts\") pod \"placement-db-sync-9zhdd\" (UID: \"173458b2-9a63-4456-9bc9-698d1414a679\") " pod="openstack/placement-db-sync-9zhdd"
Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.700173 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/14e3fab7-be93-409c-a88e-85c8d0ca533c-db-sync-config-data\") pod \"barbican-db-sync-qjtml\" (UID: \"14e3fab7-be93-409c-a88e-85c8d0ca533c\") " pod="openstack/barbican-db-sync-qjtml"
Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.700243 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14e3fab7-be93-409c-a88e-85c8d0ca533c-combined-ca-bundle\") pod \"barbican-db-sync-qjtml\" (UID: \"14e3fab7-be93-409c-a88e-85c8d0ca533c\") " pod="openstack/barbican-db-sync-qjtml"
Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.700286 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56spc\" (UniqueName: \"kubernetes.io/projected/173458b2-9a63-4456-9bc9-698d1414a679-kube-api-access-56spc\") pod \"placement-db-sync-9zhdd\" (UID: \"173458b2-9a63-4456-9bc9-698d1414a679\") " pod="openstack/placement-db-sync-9zhdd"
Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.700310 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/173458b2-9a63-4456-9bc9-698d1414a679-combined-ca-bundle\") pod \"placement-db-sync-9zhdd\" (UID: \"173458b2-9a63-4456-9bc9-698d1414a679\") " pod="openstack/placement-db-sync-9zhdd"
Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.700345 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/173458b2-9a63-4456-9bc9-698d1414a679-logs\") pod \"placement-db-sync-9zhdd\" (UID: \"173458b2-9a63-4456-9bc9-698d1414a679\") " pod="openstack/placement-db-sync-9zhdd"
Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.700363 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/173458b2-9a63-4456-9bc9-698d1414a679-config-data\") pod \"placement-db-sync-9zhdd\" (UID: \"173458b2-9a63-4456-9bc9-698d1414a679\") " pod="openstack/placement-db-sync-9zhdd"
Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.700388 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jbbw7\" (UniqueName: \"kubernetes.io/projected/14e3fab7-be93-409c-a88e-85c8d0ca533c-kube-api-access-jbbw7\") pod \"barbican-db-sync-qjtml\" (UID: \"14e3fab7-be93-409c-a88e-85c8d0ca533c\") " pod="openstack/barbican-db-sync-qjtml"
Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.701463 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-745b9ddc8c-tg7wz"
Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.717630 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-ztmll"]
Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.718522 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-ztmll"
Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.726898 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-ntkkh"
Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.727072 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config"
Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.727832 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config"
Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.728818 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/14e3fab7-be93-409c-a88e-85c8d0ca533c-db-sync-config-data\") pod \"barbican-db-sync-qjtml\" (UID: \"14e3fab7-be93-409c-a88e-85c8d0ca533c\") " pod="openstack/barbican-db-sync-qjtml"
Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.731515 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jbbw7\" (UniqueName: \"kubernetes.io/projected/14e3fab7-be93-409c-a88e-85c8d0ca533c-kube-api-access-jbbw7\") pod \"barbican-db-sync-qjtml\" (UID: \"14e3fab7-be93-409c-a88e-85c8d0ca533c\") " pod="openstack/barbican-db-sync-qjtml"
Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.753143 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-bwx58" event={"ID":"1f885e8a-3dc8-4c07-ae3c-4c8ab072abc0","Type":"ContainerStarted","Data":"266185adfe7e4eb354941537aab95c70eb532acbac93a799d1b437d19b25b6c7"}
Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.776028 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.777826 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.791203 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.791385 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.804534 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-rvrqj"]
Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.808756 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bf4fe919-15fe-4478-be0f-8e3bf00147b4-etc-machine-id\") pod \"cinder-db-sync-rvrqj\" (UID: \"bf4fe919-15fe-4478-be0f-8e3bf00147b4\") " pod="openstack/cinder-db-sync-rvrqj"
Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.808808 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bf4fe919-15fe-4478-be0f-8e3bf00147b4-scripts\") pod \"cinder-db-sync-rvrqj\" (UID: \"bf4fe919-15fe-4478-be0f-8e3bf00147b4\") " pod="openstack/cinder-db-sync-rvrqj"
Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.808852 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/bf4fe919-15fe-4478-be0f-8e3bf00147b4-db-sync-config-data\") pod \"cinder-db-sync-rvrqj\" (UID: \"bf4fe919-15fe-4478-be0f-8e3bf00147b4\") " pod="openstack/cinder-db-sync-rvrqj"
Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.808879 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-56spc\" (UniqueName: \"kubernetes.io/projected/173458b2-9a63-4456-9bc9-698d1414a679-kube-api-access-56spc\") pod \"placement-db-sync-9zhdd\" (UID: \"173458b2-9a63-4456-9bc9-698d1414a679\") " pod="openstack/placement-db-sync-9zhdd"
Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.808895 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/173458b2-9a63-4456-9bc9-698d1414a679-combined-ca-bundle\") pod \"placement-db-sync-9zhdd\" (UID: \"173458b2-9a63-4456-9bc9-698d1414a679\") " pod="openstack/placement-db-sync-9zhdd"
Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.808919 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bbab971e-9d4a-4d47-b466-ec2110de7dfb-ovsdbserver-nb\") pod \"dnsmasq-dns-745b9ddc8c-tg7wz\" (UID: \"bbab971e-9d4a-4d47-b466-ec2110de7dfb\") " pod="openstack/dnsmasq-dns-745b9ddc8c-tg7wz"
Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.808937 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9t5p\" (UniqueName: \"kubernetes.io/projected/bbab971e-9d4a-4d47-b466-ec2110de7dfb-kube-api-access-t9t5p\") pod \"dnsmasq-dns-745b9ddc8c-tg7wz\" (UID: \"bbab971e-9d4a-4d47-b466-ec2110de7dfb\") " pod="openstack/dnsmasq-dns-745b9ddc8c-tg7wz"
Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.808955 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/173458b2-9a63-4456-9bc9-698d1414a679-logs\") pod \"placement-db-sync-9zhdd\" (UID: \"173458b2-9a63-4456-9bc9-698d1414a679\") " pod="openstack/placement-db-sync-9zhdd"
Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.808977 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/173458b2-9a63-4456-9bc9-698d1414a679-config-data\") pod \"placement-db-sync-9zhdd\" (UID: \"173458b2-9a63-4456-9bc9-698d1414a679\") " pod="openstack/placement-db-sync-9zhdd"
Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.809012 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf4fe919-15fe-4478-be0f-8e3bf00147b4-config-data\") pod \"cinder-db-sync-rvrqj\" (UID: \"bf4fe919-15fe-4478-be0f-8e3bf00147b4\") " pod="openstack/cinder-db-sync-rvrqj"
Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.809067 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bbab971e-9d4a-4d47-b466-ec2110de7dfb-config\") pod \"dnsmasq-dns-745b9ddc8c-tg7wz\" (UID: \"bbab971e-9d4a-4d47-b466-ec2110de7dfb\") " pod="openstack/dnsmasq-dns-745b9ddc8c-tg7wz"
Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.809088 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/173458b2-9a63-4456-9bc9-698d1414a679-scripts\") pod \"placement-db-sync-9zhdd\" (UID: \"173458b2-9a63-4456-9bc9-698d1414a679\") " pod="openstack/placement-db-sync-9zhdd"
Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.809106 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf4fe919-15fe-4478-be0f-8e3bf00147b4-combined-ca-bundle\") pod \"cinder-db-sync-rvrqj\" (UID: \"bf4fe919-15fe-4478-be0f-8e3bf00147b4\") " pod="openstack/cinder-db-sync-rvrqj"
Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.809120 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cpd9r\" (UniqueName: \"kubernetes.io/projected/bf4fe919-15fe-4478-be0f-8e3bf00147b4-kube-api-access-cpd9r\") pod \"cinder-db-sync-rvrqj\" (UID: \"bf4fe919-15fe-4478-be0f-8e3bf00147b4\") " pod="openstack/cinder-db-sync-rvrqj"
Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.809139 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bbab971e-9d4a-4d47-b466-ec2110de7dfb-dns-svc\") pod \"dnsmasq-dns-745b9ddc8c-tg7wz\" (UID: \"bbab971e-9d4a-4d47-b466-ec2110de7dfb\") " pod="openstack/dnsmasq-dns-745b9ddc8c-tg7wz"
Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.809158 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bbab971e-9d4a-4d47-b466-ec2110de7dfb-ovsdbserver-sb\") pod \"dnsmasq-dns-745b9ddc8c-tg7wz\" (UID: \"bbab971e-9d4a-4d47-b466-ec2110de7dfb\") " pod="openstack/dnsmasq-dns-745b9ddc8c-tg7wz"
Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.809248 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14e3fab7-be93-409c-a88e-85c8d0ca533c-combined-ca-bundle\") pod \"barbican-db-sync-qjtml\" (UID: \"14e3fab7-be93-409c-a88e-85c8d0ca533c\") " pod="openstack/barbican-db-sync-qjtml"
Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.809774 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/173458b2-9a63-4456-9bc9-698d1414a679-logs\") pod \"placement-db-sync-9zhdd\" (UID: \"173458b2-9a63-4456-9bc9-698d1414a679\") " pod="openstack/placement-db-sync-9zhdd"
Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.822772 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.852195 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-qjtml"
Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.852583 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/173458b2-9a63-4456-9bc9-698d1414a679-combined-ca-bundle\") pod \"placement-db-sync-9zhdd\" (UID: \"173458b2-9a63-4456-9bc9-698d1414a679\") " pod="openstack/placement-db-sync-9zhdd"
Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.856634 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/173458b2-9a63-4456-9bc9-698d1414a679-config-data\") pod \"placement-db-sync-9zhdd\" (UID: \"173458b2-9a63-4456-9bc9-698d1414a679\") " pod="openstack/placement-db-sync-9zhdd"
Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.875460 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/173458b2-9a63-4456-9bc9-698d1414a679-scripts\") pod \"placement-db-sync-9zhdd\" (UID: \"173458b2-9a63-4456-9bc9-698d1414a679\") " pod="openstack/placement-db-sync-9zhdd"
Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.918410 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-56spc\" (UniqueName: \"kubernetes.io/projected/173458b2-9a63-4456-9bc9-698d1414a679-kube-api-access-56spc\") pod \"placement-db-sync-9zhdd\" (UID: \"173458b2-9a63-4456-9bc9-698d1414a679\") " pod="openstack/placement-db-sync-9zhdd"
Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.953901 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/bf4fe919-15fe-4478-be0f-8e3bf00147b4-db-sync-config-data\") pod \"cinder-db-sync-rvrqj\" (UID: \"bf4fe919-15fe-4478-be0f-8e3bf00147b4\") " pod="openstack/cinder-db-sync-rvrqj"
Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.954265 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frrfw\" (UniqueName: \"kubernetes.io/projected/f8943d8a-337b-4852-9c11-55191a08a850-kube-api-access-frrfw\") pod \"neutron-db-sync-ztmll\" (UID: \"f8943d8a-337b-4852-9c11-55191a08a850\") " pod="openstack/neutron-db-sync-ztmll"
Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.954319 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bbab971e-9d4a-4d47-b466-ec2110de7dfb-ovsdbserver-nb\") pod \"dnsmasq-dns-745b9ddc8c-tg7wz\" (UID: \"bbab971e-9d4a-4d47-b466-ec2110de7dfb\") " pod="openstack/dnsmasq-dns-745b9ddc8c-tg7wz"
Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.954349 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t9t5p\" (UniqueName: \"kubernetes.io/projected/bbab971e-9d4a-4d47-b466-ec2110de7dfb-kube-api-access-t9t5p\") pod \"dnsmasq-dns-745b9ddc8c-tg7wz\" (UID: \"bbab971e-9d4a-4d47-b466-ec2110de7dfb\") " pod="openstack/dnsmasq-dns-745b9ddc8c-tg7wz"
Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.954399 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8eb720ee-de8d-42e4-b189-aa3d58478ab9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8eb720ee-de8d-42e4-b189-aa3d58478ab9\") " pod="openstack/ceilometer-0"
Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.954443 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8eb720ee-de8d-42e4-b189-aa3d58478ab9-run-httpd\") pod \"ceilometer-0\" (UID: \"8eb720ee-de8d-42e4-b189-aa3d58478ab9\") " pod="openstack/ceilometer-0"
Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.954480 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5tht\" (UniqueName: \"kubernetes.io/projected/8eb720ee-de8d-42e4-b189-aa3d58478ab9-kube-api-access-g5tht\") pod \"ceilometer-0\" (UID: \"8eb720ee-de8d-42e4-b189-aa3d58478ab9\") " pod="openstack/ceilometer-0"
Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.954506 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f8943d8a-337b-4852-9c11-55191a08a850-config\") pod \"neutron-db-sync-ztmll\" (UID: \"f8943d8a-337b-4852-9c11-55191a08a850\") " pod="openstack/neutron-db-sync-ztmll"
Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.954536 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf4fe919-15fe-4478-be0f-8e3bf00147b4-config-data\") pod \"cinder-db-sync-rvrqj\" (UID: \"bf4fe919-15fe-4478-be0f-8e3bf00147b4\") " pod="openstack/cinder-db-sync-rvrqj"
Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.954631 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8eb720ee-de8d-42e4-b189-aa3d58478ab9-log-httpd\") pod \"ceilometer-0\" (UID: \"8eb720ee-de8d-42e4-b189-aa3d58478ab9\") " pod="openstack/ceilometer-0"
Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.954660 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8eb720ee-de8d-42e4-b189-aa3d58478ab9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8eb720ee-de8d-42e4-b189-aa3d58478ab9\") " pod="openstack/ceilometer-0"
Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.954748 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bbab971e-9d4a-4d47-b466-ec2110de7dfb-config\") pod \"dnsmasq-dns-745b9ddc8c-tg7wz\" (UID: \"bbab971e-9d4a-4d47-b466-ec2110de7dfb\") " pod="openstack/dnsmasq-dns-745b9ddc8c-tg7wz"
Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.954774 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8eb720ee-de8d-42e4-b189-aa3d58478ab9-scripts\") pod \"ceilometer-0\" (UID: \"8eb720ee-de8d-42e4-b189-aa3d58478ab9\") " pod="openstack/ceilometer-0"
Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.954807 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8eb720ee-de8d-42e4-b189-aa3d58478ab9-config-data\") pod \"ceilometer-0\" (UID: \"8eb720ee-de8d-42e4-b189-aa3d58478ab9\") " pod="openstack/ceilometer-0"
Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.954849 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf4fe919-15fe-4478-be0f-8e3bf00147b4-combined-ca-bundle\") pod \"cinder-db-sync-rvrqj\" (UID: \"bf4fe919-15fe-4478-be0f-8e3bf00147b4\") " pod="openstack/cinder-db-sync-rvrqj"
Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.954874 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cpd9r\" (UniqueName: \"kubernetes.io/projected/bf4fe919-15fe-4478-be0f-8e3bf00147b4-kube-api-access-cpd9r\") pod \"cinder-db-sync-rvrqj\" (UID: \"bf4fe919-15fe-4478-be0f-8e3bf00147b4\") " pod="openstack/cinder-db-sync-rvrqj"
Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.954902 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bbab971e-9d4a-4d47-b466-ec2110de7dfb-dns-svc\") pod \"dnsmasq-dns-745b9ddc8c-tg7wz\" (UID: \"bbab971e-9d4a-4d47-b466-ec2110de7dfb\") " pod="openstack/dnsmasq-dns-745b9ddc8c-tg7wz"
Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.954936 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bbab971e-9d4a-4d47-b466-ec2110de7dfb-ovsdbserver-sb\") pod \"dnsmasq-dns-745b9ddc8c-tg7wz\" (UID: \"bbab971e-9d4a-4d47-b466-ec2110de7dfb\") " pod="openstack/dnsmasq-dns-745b9ddc8c-tg7wz"
Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.954964 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bf4fe919-15fe-4478-be0f-8e3bf00147b4-etc-machine-id\") pod \"cinder-db-sync-rvrqj\" (UID: \"bf4fe919-15fe-4478-be0f-8e3bf00147b4\") " pod="openstack/cinder-db-sync-rvrqj"
Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.954996 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8943d8a-337b-4852-9c11-55191a08a850-combined-ca-bundle\") pod \"neutron-db-sync-ztmll\" (UID: \"f8943d8a-337b-4852-9c11-55191a08a850\") " pod="openstack/neutron-db-sync-ztmll"
Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.955031 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bf4fe919-15fe-4478-be0f-8e3bf00147b4-scripts\") pod \"cinder-db-sync-rvrqj\" (UID: \"bf4fe919-15fe-4478-be0f-8e3bf00147b4\") " pod="openstack/cinder-db-sync-rvrqj"
Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.964648 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bbab971e-9d4a-4d47-b466-ec2110de7dfb-config\") pod \"dnsmasq-dns-745b9ddc8c-tg7wz\" (UID: \"bbab971e-9d4a-4d47-b466-ec2110de7dfb\") " pod="openstack/dnsmasq-dns-745b9ddc8c-tg7wz"
Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.970534 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bbab971e-9d4a-4d47-b466-ec2110de7dfb-ovsdbserver-nb\") pod \"dnsmasq-dns-745b9ddc8c-tg7wz\" (UID: \"bbab971e-9d4a-4d47-b466-ec2110de7dfb\") " pod="openstack/dnsmasq-dns-745b9ddc8c-tg7wz"
Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.971549 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bbab971e-9d4a-4d47-b466-ec2110de7dfb-dns-svc\") pod \"dnsmasq-dns-745b9ddc8c-tg7wz\" (UID: \"bbab971e-9d4a-4d47-b466-ec2110de7dfb\") " pod="openstack/dnsmasq-dns-745b9ddc8c-tg7wz"
Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.972439 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bf4fe919-15fe-4478-be0f-8e3bf00147b4-etc-machine-id\") pod \"cinder-db-sync-rvrqj\" (UID: \"bf4fe919-15fe-4478-be0f-8e3bf00147b4\") " pod="openstack/cinder-db-sync-rvrqj"
Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.988138 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-9zhdd"
Feb 02 10:57:18 crc kubenswrapper[4782]: I0202 10:57:18.991276 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bbab971e-9d4a-4d47-b466-ec2110de7dfb-ovsdbserver-sb\") pod \"dnsmasq-dns-745b9ddc8c-tg7wz\" (UID: \"bbab971e-9d4a-4d47-b466-ec2110de7dfb\") " pod="openstack/dnsmasq-dns-745b9ddc8c-tg7wz"
Feb 02 10:57:19 crc kubenswrapper[4782]: I0202 10:57:19.006579 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bf4fe919-15fe-4478-be0f-8e3bf00147b4-scripts\") pod \"cinder-db-sync-rvrqj\" (UID: \"bf4fe919-15fe-4478-be0f-8e3bf00147b4\") " pod="openstack/cinder-db-sync-rvrqj"
Feb 02 10:57:19 crc kubenswrapper[4782]: I0202 10:57:19.007266 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf4fe919-15fe-4478-be0f-8e3bf00147b4-combined-ca-bundle\") pod \"cinder-db-sync-rvrqj\" (UID: \"bf4fe919-15fe-4478-be0f-8e3bf00147b4\") " pod="openstack/cinder-db-sync-rvrqj"
Feb 02 10:57:19 crc kubenswrapper[4782]: I0202 10:57:19.008607 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/bf4fe919-15fe-4478-be0f-8e3bf00147b4-db-sync-config-data\") pod \"cinder-db-sync-rvrqj\" (UID: \"bf4fe919-15fe-4478-be0f-8e3bf00147b4\") " pod="openstack/cinder-db-sync-rvrqj"
Feb 02 10:57:19 crc kubenswrapper[4782]: I0202 10:57:19.019686 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t9t5p\" (UniqueName: \"kubernetes.io/projected/bbab971e-9d4a-4d47-b466-ec2110de7dfb-kube-api-access-t9t5p\") pod \"dnsmasq-dns-745b9ddc8c-tg7wz\" (UID: \"bbab971e-9d4a-4d47-b466-ec2110de7dfb\") " pod="openstack/dnsmasq-dns-745b9ddc8c-tg7wz"
Feb 02 10:57:19 crc kubenswrapper[4782]: I0202 10:57:19.022807 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cpd9r\" (UniqueName: \"kubernetes.io/projected/bf4fe919-15fe-4478-be0f-8e3bf00147b4-kube-api-access-cpd9r\") pod \"cinder-db-sync-rvrqj\" (UID: \"bf4fe919-15fe-4478-be0f-8e3bf00147b4\") " pod="openstack/cinder-db-sync-rvrqj"
Feb 02 10:57:19 crc kubenswrapper[4782]: I0202 10:57:19.036805 4782
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf4fe919-15fe-4478-be0f-8e3bf00147b4-config-data\") pod \"cinder-db-sync-rvrqj\" (UID: \"bf4fe919-15fe-4478-be0f-8e3bf00147b4\") " pod="openstack/cinder-db-sync-rvrqj" Feb 02 10:57:19 crc kubenswrapper[4782]: I0202 10:57:19.066453 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-frrfw\" (UniqueName: \"kubernetes.io/projected/f8943d8a-337b-4852-9c11-55191a08a850-kube-api-access-frrfw\") pod \"neutron-db-sync-ztmll\" (UID: \"f8943d8a-337b-4852-9c11-55191a08a850\") " pod="openstack/neutron-db-sync-ztmll" Feb 02 10:57:19 crc kubenswrapper[4782]: I0202 10:57:19.066593 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8eb720ee-de8d-42e4-b189-aa3d58478ab9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8eb720ee-de8d-42e4-b189-aa3d58478ab9\") " pod="openstack/ceilometer-0" Feb 02 10:57:19 crc kubenswrapper[4782]: I0202 10:57:19.066624 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8eb720ee-de8d-42e4-b189-aa3d58478ab9-run-httpd\") pod \"ceilometer-0\" (UID: \"8eb720ee-de8d-42e4-b189-aa3d58478ab9\") " pod="openstack/ceilometer-0" Feb 02 10:57:19 crc kubenswrapper[4782]: I0202 10:57:19.066651 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g5tht\" (UniqueName: \"kubernetes.io/projected/8eb720ee-de8d-42e4-b189-aa3d58478ab9-kube-api-access-g5tht\") pod \"ceilometer-0\" (UID: \"8eb720ee-de8d-42e4-b189-aa3d58478ab9\") " pod="openstack/ceilometer-0" Feb 02 10:57:19 crc kubenswrapper[4782]: I0202 10:57:19.066687 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f8943d8a-337b-4852-9c11-55191a08a850-config\") pod \"neutron-db-sync-ztmll\" (UID: \"f8943d8a-337b-4852-9c11-55191a08a850\") " pod="openstack/neutron-db-sync-ztmll" Feb 02 10:57:19 crc kubenswrapper[4782]: I0202 10:57:19.066772 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8eb720ee-de8d-42e4-b189-aa3d58478ab9-log-httpd\") pod \"ceilometer-0\" (UID: \"8eb720ee-de8d-42e4-b189-aa3d58478ab9\") " pod="openstack/ceilometer-0" Feb 02 10:57:19 crc kubenswrapper[4782]: I0202 10:57:19.066791 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8eb720ee-de8d-42e4-b189-aa3d58478ab9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8eb720ee-de8d-42e4-b189-aa3d58478ab9\") " pod="openstack/ceilometer-0" Feb 02 10:57:19 crc kubenswrapper[4782]: I0202 10:57:19.066852 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8eb720ee-de8d-42e4-b189-aa3d58478ab9-scripts\") pod \"ceilometer-0\" (UID: \"8eb720ee-de8d-42e4-b189-aa3d58478ab9\") " pod="openstack/ceilometer-0" Feb 02 10:57:19 crc kubenswrapper[4782]: I0202 10:57:19.066880 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8eb720ee-de8d-42e4-b189-aa3d58478ab9-config-data\") pod \"ceilometer-0\" (UID: \"8eb720ee-de8d-42e4-b189-aa3d58478ab9\") " pod="openstack/ceilometer-0" Feb 02 10:57:19 crc kubenswrapper[4782]: I0202 
10:57:19.066956 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8943d8a-337b-4852-9c11-55191a08a850-combined-ca-bundle\") pod \"neutron-db-sync-ztmll\" (UID: \"f8943d8a-337b-4852-9c11-55191a08a850\") " pod="openstack/neutron-db-sync-ztmll" Feb 02 10:57:19 crc kubenswrapper[4782]: I0202 10:57:19.091045 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-745b9ddc8c-tg7wz"] Feb 02 10:57:19 crc kubenswrapper[4782]: I0202 10:57:19.091088 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-ztmll"] Feb 02 10:57:19 crc kubenswrapper[4782]: I0202 10:57:19.144013 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8eb720ee-de8d-42e4-b189-aa3d58478ab9-run-httpd\") pod \"ceilometer-0\" (UID: \"8eb720ee-de8d-42e4-b189-aa3d58478ab9\") " pod="openstack/ceilometer-0" Feb 02 10:57:19 crc kubenswrapper[4782]: I0202 10:57:19.144950 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-745b9ddc8c-tg7wz" Feb 02 10:57:19 crc kubenswrapper[4782]: I0202 10:57:19.146406 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8943d8a-337b-4852-9c11-55191a08a850-combined-ca-bundle\") pod \"neutron-db-sync-ztmll\" (UID: \"f8943d8a-337b-4852-9c11-55191a08a850\") " pod="openstack/neutron-db-sync-ztmll" Feb 02 10:57:19 crc kubenswrapper[4782]: I0202 10:57:19.153182 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8eb720ee-de8d-42e4-b189-aa3d58478ab9-log-httpd\") pod \"ceilometer-0\" (UID: \"8eb720ee-de8d-42e4-b189-aa3d58478ab9\") " pod="openstack/ceilometer-0" Feb 02 10:57:19 crc kubenswrapper[4782]: I0202 10:57:19.153717 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/f8943d8a-337b-4852-9c11-55191a08a850-config\") pod \"neutron-db-sync-ztmll\" (UID: \"f8943d8a-337b-4852-9c11-55191a08a850\") " pod="openstack/neutron-db-sync-ztmll" Feb 02 10:57:19 crc kubenswrapper[4782]: I0202 10:57:19.177498 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8eb720ee-de8d-42e4-b189-aa3d58478ab9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8eb720ee-de8d-42e4-b189-aa3d58478ab9\") " pod="openstack/ceilometer-0" Feb 02 10:57:19 crc kubenswrapper[4782]: I0202 10:57:19.178043 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8eb720ee-de8d-42e4-b189-aa3d58478ab9-scripts\") pod \"ceilometer-0\" (UID: \"8eb720ee-de8d-42e4-b189-aa3d58478ab9\") " pod="openstack/ceilometer-0" Feb 02 10:57:19 crc kubenswrapper[4782]: I0202 10:57:19.184762 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8eb720ee-de8d-42e4-b189-aa3d58478ab9-config-data\") pod \"ceilometer-0\" (UID: \"8eb720ee-de8d-42e4-b189-aa3d58478ab9\") " pod="openstack/ceilometer-0" Feb 02 10:57:19 crc kubenswrapper[4782]: I0202 10:57:19.185897 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-frrfw\" (UniqueName: \"kubernetes.io/projected/f8943d8a-337b-4852-9c11-55191a08a850-kube-api-access-frrfw\") pod \"neutron-db-sync-ztmll\" 
(UID: \"f8943d8a-337b-4852-9c11-55191a08a850\") " pod="openstack/neutron-db-sync-ztmll" Feb 02 10:57:19 crc kubenswrapper[4782]: I0202 10:57:19.197401 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8eb720ee-de8d-42e4-b189-aa3d58478ab9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8eb720ee-de8d-42e4-b189-aa3d58478ab9\") " pod="openstack/ceilometer-0" Feb 02 10:57:19 crc kubenswrapper[4782]: I0202 10:57:19.198057 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g5tht\" (UniqueName: \"kubernetes.io/projected/8eb720ee-de8d-42e4-b189-aa3d58478ab9-kube-api-access-g5tht\") pod \"ceilometer-0\" (UID: \"8eb720ee-de8d-42e4-b189-aa3d58478ab9\") " pod="openstack/ceilometer-0" Feb 02 10:57:19 crc kubenswrapper[4782]: I0202 10:57:19.240910 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-bwx58" podStartSLOduration=3.710937413 podStartE2EDuration="35.240888636s" podCreationTimestamp="2026-02-02 10:56:44 +0000 UTC" firstStartedPulling="2026-02-02 10:56:45.769292314 +0000 UTC m=+1085.653485030" lastFinishedPulling="2026-02-02 10:57:17.299243527 +0000 UTC m=+1117.183436253" observedRunningTime="2026-02-02 10:57:18.880086923 +0000 UTC m=+1118.764279639" watchObservedRunningTime="2026-02-02 10:57:19.240888636 +0000 UTC m=+1119.125081362" Feb 02 10:57:19 crc kubenswrapper[4782]: W0202 10:57:19.241071 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod08c688f3_2e65_48cc_8394_c3b87053d840.slice/crio-19b98df85c47864f53fe311acb515cf9579ed36a3ccfc4ca4e13276a52fab105 WatchSource:0}: Error finding container 19b98df85c47864f53fe311acb515cf9579ed36a3ccfc4ca4e13276a52fab105: Status 404 returned error can't find the container with id 19b98df85c47864f53fe311acb515cf9579ed36a3ccfc4ca4e13276a52fab105 Feb 02 10:57:19 crc kubenswrapper[4782]: I0202 10:57:19.286707 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-rvrqj" Feb 02 10:57:19 crc kubenswrapper[4782]: I0202 10:57:19.366276 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75bb4695fc-g7q5g"] Feb 02 10:57:19 crc kubenswrapper[4782]: I0202 10:57:19.405032 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-ztmll" Feb 02 10:57:19 crc kubenswrapper[4782]: I0202 10:57:19.419014 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 02 10:57:19 crc kubenswrapper[4782]: I0202 10:57:19.606056 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-wv5hq"] Feb 02 10:57:19 crc kubenswrapper[4782]: W0202 10:57:19.618496 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc847df54_dabc_4a1c_a7dc_4d5c69b548fe.slice/crio-2e18fa8df8a3d8e935c715dc0db626457c15bb2e1f177d6b66686bf1cf3ae873 WatchSource:0}: Error finding container 2e18fa8df8a3d8e935c715dc0db626457c15bb2e1f177d6b66686bf1cf3ae873: Status 404 returned error can't find the container with id 2e18fa8df8a3d8e935c715dc0db626457c15bb2e1f177d6b66686bf1cf3ae873 Feb 02 10:57:19 crc kubenswrapper[4782]: I0202 10:57:19.770141 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-wv5hq" event={"ID":"c847df54-dabc-4a1c-a7dc-4d5c69b548fe","Type":"ContainerStarted","Data":"2e18fa8df8a3d8e935c715dc0db626457c15bb2e1f177d6b66686bf1cf3ae873"} Feb 02 10:57:19 crc kubenswrapper[4782]: I0202 10:57:19.780602 4782 generic.go:334] "Generic (PLEG): container finished" podID="08c688f3-2e65-48cc-8394-c3b87053d840" containerID="9fd628845002c75a7f47756de2c6e38ec28b97e5347a7244221f8b582f05c57b" exitCode=0 Feb 02 10:57:19 crc kubenswrapper[4782]: I0202 10:57:19.780655 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75bb4695fc-g7q5g" event={"ID":"08c688f3-2e65-48cc-8394-c3b87053d840","Type":"ContainerDied","Data":"9fd628845002c75a7f47756de2c6e38ec28b97e5347a7244221f8b582f05c57b"} Feb 02 10:57:19 crc kubenswrapper[4782]: I0202 10:57:19.780818 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75bb4695fc-g7q5g" event={"ID":"08c688f3-2e65-48cc-8394-c3b87053d840","Type":"ContainerStarted","Data":"19b98df85c47864f53fe311acb515cf9579ed36a3ccfc4ca4e13276a52fab105"} Feb 02 10:57:19 crc kubenswrapper[4782]: I0202 10:57:19.905642 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-qjtml"] Feb 02 10:57:20 crc kubenswrapper[4782]: I0202 10:57:20.004155 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-9zhdd"] Feb 02 10:57:20 crc kubenswrapper[4782]: I0202 10:57:20.236790 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-745b9ddc8c-tg7wz"] Feb 02 10:57:20 crc kubenswrapper[4782]: W0202 10:57:20.249058 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbbab971e_9d4a_4d47_b466_ec2110de7dfb.slice/crio-1c11d42eec17c1c7713f79d1bb2871fdfc39558c452b6a5339d9e0c5f17ef2bf WatchSource:0}: Error finding container 1c11d42eec17c1c7713f79d1bb2871fdfc39558c452b6a5339d9e0c5f17ef2bf: Status 404 returned error can't find the container with id 1c11d42eec17c1c7713f79d1bb2871fdfc39558c452b6a5339d9e0c5f17ef2bf Feb 02 10:57:20 crc kubenswrapper[4782]: I0202 10:57:20.522196 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-75bb4695fc-g7q5g" Feb 02 10:57:20 crc kubenswrapper[4782]: I0202 10:57:20.622114 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/08c688f3-2e65-48cc-8394-c3b87053d840-dns-svc\") pod \"08c688f3-2e65-48cc-8394-c3b87053d840\" (UID: \"08c688f3-2e65-48cc-8394-c3b87053d840\") " Feb 02 10:57:20 crc kubenswrapper[4782]: I0202 10:57:20.622240 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/08c688f3-2e65-48cc-8394-c3b87053d840-config\") pod \"08c688f3-2e65-48cc-8394-c3b87053d840\" (UID: \"08c688f3-2e65-48cc-8394-c3b87053d840\") " Feb 02 10:57:20 crc kubenswrapper[4782]: I0202 10:57:20.622279 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/08c688f3-2e65-48cc-8394-c3b87053d840-ovsdbserver-sb\") pod \"08c688f3-2e65-48cc-8394-c3b87053d840\" (UID: \"08c688f3-2e65-48cc-8394-c3b87053d840\") " Feb 02 10:57:20 crc kubenswrapper[4782]: I0202 10:57:20.622342 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/08c688f3-2e65-48cc-8394-c3b87053d840-ovsdbserver-nb\") pod \"08c688f3-2e65-48cc-8394-c3b87053d840\" (UID: \"08c688f3-2e65-48cc-8394-c3b87053d840\") " Feb 02 10:57:20 crc kubenswrapper[4782]: I0202 10:57:20.622447 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v9tgl\" (UniqueName: \"kubernetes.io/projected/08c688f3-2e65-48cc-8394-c3b87053d840-kube-api-access-v9tgl\") pod \"08c688f3-2e65-48cc-8394-c3b87053d840\" (UID: \"08c688f3-2e65-48cc-8394-c3b87053d840\") " Feb 02 10:57:20 crc kubenswrapper[4782]: I0202 10:57:20.648661 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08c688f3-2e65-48cc-8394-c3b87053d840-kube-api-access-v9tgl" (OuterVolumeSpecName: "kube-api-access-v9tgl") pod "08c688f3-2e65-48cc-8394-c3b87053d840" (UID: "08c688f3-2e65-48cc-8394-c3b87053d840"). InnerVolumeSpecName "kube-api-access-v9tgl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:57:20 crc kubenswrapper[4782]: W0202 10:57:20.649234 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf8943d8a_337b_4852_9c11_55191a08a850.slice/crio-4eeaf35a3a5ede4d6f2a9d74b8e11dba599b16a4d837fbb2ca932a313cf40194 WatchSource:0}: Error finding container 4eeaf35a3a5ede4d6f2a9d74b8e11dba599b16a4d837fbb2ca932a313cf40194: Status 404 returned error can't find the container with id 4eeaf35a3a5ede4d6f2a9d74b8e11dba599b16a4d837fbb2ca932a313cf40194 Feb 02 10:57:20 crc kubenswrapper[4782]: I0202 10:57:20.658807 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 02 10:57:20 crc kubenswrapper[4782]: I0202 10:57:20.675073 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-ztmll"] Feb 02 10:57:20 crc kubenswrapper[4782]: I0202 10:57:20.678279 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/08c688f3-2e65-48cc-8394-c3b87053d840-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "08c688f3-2e65-48cc-8394-c3b87053d840" (UID: "08c688f3-2e65-48cc-8394-c3b87053d840"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:57:20 crc kubenswrapper[4782]: I0202 10:57:20.686151 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-rvrqj"] Feb 02 10:57:20 crc kubenswrapper[4782]: I0202 10:57:20.715236 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/08c688f3-2e65-48cc-8394-c3b87053d840-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "08c688f3-2e65-48cc-8394-c3b87053d840" (UID: "08c688f3-2e65-48cc-8394-c3b87053d840"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:57:20 crc kubenswrapper[4782]: I0202 10:57:20.724532 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/08c688f3-2e65-48cc-8394-c3b87053d840-config" (OuterVolumeSpecName: "config") pod "08c688f3-2e65-48cc-8394-c3b87053d840" (UID: "08c688f3-2e65-48cc-8394-c3b87053d840"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:57:20 crc kubenswrapper[4782]: I0202 10:57:20.724977 4782 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/08c688f3-2e65-48cc-8394-c3b87053d840-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:20 crc kubenswrapper[4782]: I0202 10:57:20.725014 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v9tgl\" (UniqueName: \"kubernetes.io/projected/08c688f3-2e65-48cc-8394-c3b87053d840-kube-api-access-v9tgl\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:20 crc kubenswrapper[4782]: I0202 10:57:20.725025 4782 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/08c688f3-2e65-48cc-8394-c3b87053d840-config\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:20 crc kubenswrapper[4782]: I0202 10:57:20.725036 4782 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/08c688f3-2e65-48cc-8394-c3b87053d840-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:20 crc kubenswrapper[4782]: I0202 10:57:20.741467 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/08c688f3-2e65-48cc-8394-c3b87053d840-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "08c688f3-2e65-48cc-8394-c3b87053d840" (UID: "08c688f3-2e65-48cc-8394-c3b87053d840"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:57:20 crc kubenswrapper[4782]: I0202 10:57:20.826556 4782 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/08c688f3-2e65-48cc-8394-c3b87053d840-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:20 crc kubenswrapper[4782]: I0202 10:57:20.910409 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 02 10:57:20 crc kubenswrapper[4782]: I0202 10:57:20.915883 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-qjtml" event={"ID":"14e3fab7-be93-409c-a88e-85c8d0ca533c","Type":"ContainerStarted","Data":"cf01f314448485ff21bcd2728c714dedb197b922c6d0f496ca141e9405a41bab"} Feb 02 10:57:20 crc kubenswrapper[4782]: I0202 10:57:20.939850 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75bb4695fc-g7q5g" event={"ID":"08c688f3-2e65-48cc-8394-c3b87053d840","Type":"ContainerDied","Data":"19b98df85c47864f53fe311acb515cf9579ed36a3ccfc4ca4e13276a52fab105"} Feb 02 10:57:20 crc kubenswrapper[4782]: I0202 10:57:20.939901 4782 scope.go:117] "RemoveContainer" containerID="9fd628845002c75a7f47756de2c6e38ec28b97e5347a7244221f8b582f05c57b" Feb 02 10:57:20 crc kubenswrapper[4782]: I0202 10:57:20.940289 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75bb4695fc-g7q5g" Feb 02 10:57:20 crc kubenswrapper[4782]: I0202 10:57:20.949337 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8eb720ee-de8d-42e4-b189-aa3d58478ab9","Type":"ContainerStarted","Data":"995e3d21dc8fc39f728f7ea640cf5b2814a34afafd0aba1f79572a1482443e61"} Feb 02 10:57:20 crc kubenswrapper[4782]: I0202 10:57:20.968459 4782 generic.go:334] "Generic (PLEG): container finished" podID="bbab971e-9d4a-4d47-b466-ec2110de7dfb" containerID="d612f10c6156f6cb4afac9aec45e071dc15d31ca60fe0b15bf367f1991040e4f" exitCode=0 Feb 02 10:57:20 crc kubenswrapper[4782]: I0202 10:57:20.968594 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-745b9ddc8c-tg7wz" event={"ID":"bbab971e-9d4a-4d47-b466-ec2110de7dfb","Type":"ContainerDied","Data":"d612f10c6156f6cb4afac9aec45e071dc15d31ca60fe0b15bf367f1991040e4f"} Feb 02 10:57:20 crc kubenswrapper[4782]: I0202 10:57:20.968624 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-745b9ddc8c-tg7wz" event={"ID":"bbab971e-9d4a-4d47-b466-ec2110de7dfb","Type":"ContainerStarted","Data":"1c11d42eec17c1c7713f79d1bb2871fdfc39558c452b6a5339d9e0c5f17ef2bf"} Feb 02 10:57:20 crc kubenswrapper[4782]: I0202 10:57:20.993274 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-rvrqj" event={"ID":"bf4fe919-15fe-4478-be0f-8e3bf00147b4","Type":"ContainerStarted","Data":"2b88f70f23d6438ad4880535e90d03f7ddcd1e6596512bcb845cbde82cf71a29"} Feb 02 10:57:21 crc kubenswrapper[4782]: I0202 10:57:21.010720 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-9zhdd" event={"ID":"173458b2-9a63-4456-9bc9-698d1414a679","Type":"ContainerStarted","Data":"84c341193c47fc4aa8a47eed674765c2cf34eb70060671ad9bf767eb2f34ee7a"} Feb 02 10:57:21 crc kubenswrapper[4782]: I0202 10:57:21.053881 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-ztmll" event={"ID":"f8943d8a-337b-4852-9c11-55191a08a850","Type":"ContainerStarted","Data":"4eeaf35a3a5ede4d6f2a9d74b8e11dba599b16a4d837fbb2ca932a313cf40194"} 
Feb 02 10:57:21 crc kubenswrapper[4782]: I0202 10:57:21.075191 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-wv5hq" event={"ID":"c847df54-dabc-4a1c-a7dc-4d5c69b548fe","Type":"ContainerStarted","Data":"d77ce47c81331449fe1e66732ce31fcd1c20618737ae71f8b83041e70b41f489"} Feb 02 10:57:21 crc kubenswrapper[4782]: I0202 10:57:21.249445 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75bb4695fc-g7q5g"] Feb 02 10:57:21 crc kubenswrapper[4782]: I0202 10:57:21.252875 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-75bb4695fc-g7q5g"] Feb 02 10:57:21 crc kubenswrapper[4782]: I0202 10:57:21.274179 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-wv5hq" podStartSLOduration=4.274158842 podStartE2EDuration="4.274158842s" podCreationTimestamp="2026-02-02 10:57:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:57:21.25317017 +0000 UTC m=+1121.137362896" watchObservedRunningTime="2026-02-02 10:57:21.274158842 +0000 UTC m=+1121.158351548" Feb 02 10:57:22 crc kubenswrapper[4782]: I0202 10:57:22.096975 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-745b9ddc8c-tg7wz" event={"ID":"bbab971e-9d4a-4d47-b466-ec2110de7dfb","Type":"ContainerStarted","Data":"8739c7eaea0f7605c65d98c62cce07647aacbed0043275eb2f4dd317c1bafd75"} Feb 02 10:57:22 crc kubenswrapper[4782]: I0202 10:57:22.097961 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-745b9ddc8c-tg7wz" Feb 02 10:57:22 crc kubenswrapper[4782]: I0202 10:57:22.100794 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-ztmll" event={"ID":"f8943d8a-337b-4852-9c11-55191a08a850","Type":"ContainerStarted","Data":"bb8bee75583f03091be99a3eb7b070a749409afcb16ccfe4ae7f61a996ce78c5"} Feb 02 10:57:22 crc kubenswrapper[4782]: I0202 10:57:22.144591 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-745b9ddc8c-tg7wz" podStartSLOduration=4.144575473 podStartE2EDuration="4.144575473s" podCreationTimestamp="2026-02-02 10:57:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:57:22.144184232 +0000 UTC m=+1122.028376948" watchObservedRunningTime="2026-02-02 10:57:22.144575473 +0000 UTC m=+1122.028768189" Feb 02 10:57:22 crc kubenswrapper[4782]: I0202 10:57:22.168927 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-ztmll" podStartSLOduration=4.16890975 podStartE2EDuration="4.16890975s" podCreationTimestamp="2026-02-02 10:57:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:57:22.165868353 +0000 UTC m=+1122.050061069" watchObservedRunningTime="2026-02-02 10:57:22.16890975 +0000 UTC m=+1122.053102456" Feb 02 10:57:22 crc kubenswrapper[4782]: I0202 10:57:22.833957 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="08c688f3-2e65-48cc-8394-c3b87053d840" path="/var/lib/kubelet/pods/08c688f3-2e65-48cc-8394-c3b87053d840/volumes" Feb 02 10:57:27 crc kubenswrapper[4782]: I0202 10:57:27.186441 4782 generic.go:334] "Generic (PLEG): container finished" podID="c847df54-dabc-4a1c-a7dc-4d5c69b548fe" 
containerID="d77ce47c81331449fe1e66732ce31fcd1c20618737ae71f8b83041e70b41f489" exitCode=0 Feb 02 10:57:27 crc kubenswrapper[4782]: I0202 10:57:27.186592 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-wv5hq" event={"ID":"c847df54-dabc-4a1c-a7dc-4d5c69b548fe","Type":"ContainerDied","Data":"d77ce47c81331449fe1e66732ce31fcd1c20618737ae71f8b83041e70b41f489"} Feb 02 10:57:29 crc kubenswrapper[4782]: I0202 10:57:29.147933 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-745b9ddc8c-tg7wz" Feb 02 10:57:29 crc kubenswrapper[4782]: I0202 10:57:29.217981 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-tfpvt"] Feb 02 10:57:29 crc kubenswrapper[4782]: I0202 10:57:29.218223 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-86db49b7ff-tfpvt" podUID="ede109fe-b194-4a02-992d-f1132849fc0d" containerName="dnsmasq-dns" containerID="cri-o://79490beed063daacfc93ca748659e8cb59165e9834ed5634b2a43d1cdfb9b23a" gracePeriod=10 Feb 02 10:57:30 crc kubenswrapper[4782]: I0202 10:57:30.251000 4782 generic.go:334] "Generic (PLEG): container finished" podID="ede109fe-b194-4a02-992d-f1132849fc0d" containerID="79490beed063daacfc93ca748659e8cb59165e9834ed5634b2a43d1cdfb9b23a" exitCode=0 Feb 02 10:57:30 crc kubenswrapper[4782]: I0202 10:57:30.251079 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-tfpvt" event={"ID":"ede109fe-b194-4a02-992d-f1132849fc0d","Type":"ContainerDied","Data":"79490beed063daacfc93ca748659e8cb59165e9834ed5634b2a43d1cdfb9b23a"} Feb 02 10:57:30 crc kubenswrapper[4782]: I0202 10:57:30.804164 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-wv5hq" Feb 02 10:57:30 crc kubenswrapper[4782]: I0202 10:57:30.944387 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c847df54-dabc-4a1c-a7dc-4d5c69b548fe-fernet-keys\") pod \"c847df54-dabc-4a1c-a7dc-4d5c69b548fe\" (UID: \"c847df54-dabc-4a1c-a7dc-4d5c69b548fe\") " Feb 02 10:57:30 crc kubenswrapper[4782]: I0202 10:57:30.944506 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-29bhd\" (UniqueName: \"kubernetes.io/projected/c847df54-dabc-4a1c-a7dc-4d5c69b548fe-kube-api-access-29bhd\") pod \"c847df54-dabc-4a1c-a7dc-4d5c69b548fe\" (UID: \"c847df54-dabc-4a1c-a7dc-4d5c69b548fe\") " Feb 02 10:57:30 crc kubenswrapper[4782]: I0202 10:57:30.944589 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c847df54-dabc-4a1c-a7dc-4d5c69b548fe-config-data\") pod \"c847df54-dabc-4a1c-a7dc-4d5c69b548fe\" (UID: \"c847df54-dabc-4a1c-a7dc-4d5c69b548fe\") " Feb 02 10:57:30 crc kubenswrapper[4782]: I0202 10:57:30.944985 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c847df54-dabc-4a1c-a7dc-4d5c69b548fe-scripts\") pod \"c847df54-dabc-4a1c-a7dc-4d5c69b548fe\" (UID: \"c847df54-dabc-4a1c-a7dc-4d5c69b548fe\") " Feb 02 10:57:30 crc kubenswrapper[4782]: I0202 10:57:30.945086 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c847df54-dabc-4a1c-a7dc-4d5c69b548fe-credential-keys\") pod \"c847df54-dabc-4a1c-a7dc-4d5c69b548fe\" (UID: \"c847df54-dabc-4a1c-a7dc-4d5c69b548fe\") " Feb 02 10:57:30 crc kubenswrapper[4782]: I0202 10:57:30.945171 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c847df54-dabc-4a1c-a7dc-4d5c69b548fe-combined-ca-bundle\") pod \"c847df54-dabc-4a1c-a7dc-4d5c69b548fe\" (UID: \"c847df54-dabc-4a1c-a7dc-4d5c69b548fe\") " Feb 02 10:57:30 crc kubenswrapper[4782]: I0202 10:57:30.950392 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c847df54-dabc-4a1c-a7dc-4d5c69b548fe-scripts" (OuterVolumeSpecName: "scripts") pod "c847df54-dabc-4a1c-a7dc-4d5c69b548fe" (UID: "c847df54-dabc-4a1c-a7dc-4d5c69b548fe"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:57:30 crc kubenswrapper[4782]: I0202 10:57:30.954618 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c847df54-dabc-4a1c-a7dc-4d5c69b548fe-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "c847df54-dabc-4a1c-a7dc-4d5c69b548fe" (UID: "c847df54-dabc-4a1c-a7dc-4d5c69b548fe"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:57:30 crc kubenswrapper[4782]: I0202 10:57:30.956912 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c847df54-dabc-4a1c-a7dc-4d5c69b548fe-kube-api-access-29bhd" (OuterVolumeSpecName: "kube-api-access-29bhd") pod "c847df54-dabc-4a1c-a7dc-4d5c69b548fe" (UID: "c847df54-dabc-4a1c-a7dc-4d5c69b548fe"). InnerVolumeSpecName "kube-api-access-29bhd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:57:30 crc kubenswrapper[4782]: I0202 10:57:30.972437 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c847df54-dabc-4a1c-a7dc-4d5c69b548fe-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "c847df54-dabc-4a1c-a7dc-4d5c69b548fe" (UID: "c847df54-dabc-4a1c-a7dc-4d5c69b548fe"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:57:30 crc kubenswrapper[4782]: I0202 10:57:30.974879 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c847df54-dabc-4a1c-a7dc-4d5c69b548fe-config-data" (OuterVolumeSpecName: "config-data") pod "c847df54-dabc-4a1c-a7dc-4d5c69b548fe" (UID: "c847df54-dabc-4a1c-a7dc-4d5c69b548fe"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:57:30 crc kubenswrapper[4782]: I0202 10:57:30.977206 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c847df54-dabc-4a1c-a7dc-4d5c69b548fe-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c847df54-dabc-4a1c-a7dc-4d5c69b548fe" (UID: "c847df54-dabc-4a1c-a7dc-4d5c69b548fe"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:57:31 crc kubenswrapper[4782]: I0202 10:57:31.047750 4782 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c847df54-dabc-4a1c-a7dc-4d5c69b548fe-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4782]: I0202 10:57:31.048074 4782 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c847df54-dabc-4a1c-a7dc-4d5c69b548fe-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4782]: I0202 10:57:31.048085 4782 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c847df54-dabc-4a1c-a7dc-4d5c69b548fe-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4782]: I0202 10:57:31.048096 4782 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c847df54-dabc-4a1c-a7dc-4d5c69b548fe-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4782]: I0202 10:57:31.048105 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-29bhd\" (UniqueName: \"kubernetes.io/projected/c847df54-dabc-4a1c-a7dc-4d5c69b548fe-kube-api-access-29bhd\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4782]: I0202 10:57:31.048114 4782 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c847df54-dabc-4a1c-a7dc-4d5c69b548fe-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:31 crc kubenswrapper[4782]: I0202 10:57:31.262691 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-wv5hq" event={"ID":"c847df54-dabc-4a1c-a7dc-4d5c69b548fe","Type":"ContainerDied","Data":"2e18fa8df8a3d8e935c715dc0db626457c15bb2e1f177d6b66686bf1cf3ae873"} Feb 02 10:57:31 crc kubenswrapper[4782]: I0202 10:57:31.262736 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2e18fa8df8a3d8e935c715dc0db626457c15bb2e1f177d6b66686bf1cf3ae873" Feb 02 10:57:31 crc kubenswrapper[4782]: I0202 10:57:31.262790 4782 util.go:48] 
"No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-wv5hq" Feb 02 10:57:31 crc kubenswrapper[4782]: I0202 10:57:31.886841 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-wv5hq"] Feb 02 10:57:31 crc kubenswrapper[4782]: I0202 10:57:31.892590 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-wv5hq"] Feb 02 10:57:31 crc kubenswrapper[4782]: I0202 10:57:31.987592 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-t58qc"] Feb 02 10:57:31 crc kubenswrapper[4782]: E0202 10:57:31.988090 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c847df54-dabc-4a1c-a7dc-4d5c69b548fe" containerName="keystone-bootstrap" Feb 02 10:57:31 crc kubenswrapper[4782]: I0202 10:57:31.988114 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="c847df54-dabc-4a1c-a7dc-4d5c69b548fe" containerName="keystone-bootstrap" Feb 02 10:57:31 crc kubenswrapper[4782]: E0202 10:57:31.988126 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08c688f3-2e65-48cc-8394-c3b87053d840" containerName="init" Feb 02 10:57:31 crc kubenswrapper[4782]: I0202 10:57:31.988133 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="08c688f3-2e65-48cc-8394-c3b87053d840" containerName="init" Feb 02 10:57:31 crc kubenswrapper[4782]: I0202 10:57:31.988318 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="08c688f3-2e65-48cc-8394-c3b87053d840" containerName="init" Feb 02 10:57:31 crc kubenswrapper[4782]: I0202 10:57:31.988347 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="c847df54-dabc-4a1c-a7dc-4d5c69b548fe" containerName="keystone-bootstrap" Feb 02 10:57:31 crc kubenswrapper[4782]: I0202 10:57:31.989138 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-t58qc" Feb 02 10:57:31 crc kubenswrapper[4782]: I0202 10:57:31.991447 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 02 10:57:31 crc kubenswrapper[4782]: I0202 10:57:31.991668 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 02 10:57:31 crc kubenswrapper[4782]: I0202 10:57:31.991870 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 02 10:57:31 crc kubenswrapper[4782]: I0202 10:57:31.993926 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-9pmlq" Feb 02 10:57:31 crc kubenswrapper[4782]: I0202 10:57:31.994091 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 02 10:57:32 crc kubenswrapper[4782]: I0202 10:57:32.002474 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-t58qc"] Feb 02 10:57:32 crc kubenswrapper[4782]: I0202 10:57:32.163739 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f45d6513-2de0-4ece-bbbc-26c6780cd145-config-data\") pod \"keystone-bootstrap-t58qc\" (UID: \"f45d6513-2de0-4ece-bbbc-26c6780cd145\") " pod="openstack/keystone-bootstrap-t58qc" Feb 02 10:57:32 crc kubenswrapper[4782]: I0202 10:57:32.163835 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f45d6513-2de0-4ece-bbbc-26c6780cd145-combined-ca-bundle\") pod \"keystone-bootstrap-t58qc\" (UID: \"f45d6513-2de0-4ece-bbbc-26c6780cd145\") " pod="openstack/keystone-bootstrap-t58qc" Feb 02 10:57:32 crc kubenswrapper[4782]: I0202 10:57:32.163862 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f45d6513-2de0-4ece-bbbc-26c6780cd145-fernet-keys\") pod \"keystone-bootstrap-t58qc\" (UID: \"f45d6513-2de0-4ece-bbbc-26c6780cd145\") " pod="openstack/keystone-bootstrap-t58qc" Feb 02 10:57:32 crc kubenswrapper[4782]: I0202 10:57:32.164404 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f45d6513-2de0-4ece-bbbc-26c6780cd145-scripts\") pod \"keystone-bootstrap-t58qc\" (UID: \"f45d6513-2de0-4ece-bbbc-26c6780cd145\") " pod="openstack/keystone-bootstrap-t58qc" Feb 02 10:57:32 crc kubenswrapper[4782]: I0202 10:57:32.164480 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-464nh\" (UniqueName: \"kubernetes.io/projected/f45d6513-2de0-4ece-bbbc-26c6780cd145-kube-api-access-464nh\") pod \"keystone-bootstrap-t58qc\" (UID: \"f45d6513-2de0-4ece-bbbc-26c6780cd145\") " pod="openstack/keystone-bootstrap-t58qc" Feb 02 10:57:32 crc kubenswrapper[4782]: I0202 10:57:32.164531 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f45d6513-2de0-4ece-bbbc-26c6780cd145-credential-keys\") pod \"keystone-bootstrap-t58qc\" (UID: \"f45d6513-2de0-4ece-bbbc-26c6780cd145\") " pod="openstack/keystone-bootstrap-t58qc" Feb 02 10:57:32 crc kubenswrapper[4782]: I0202 10:57:32.267251 4782 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f45d6513-2de0-4ece-bbbc-26c6780cd145-scripts\") pod \"keystone-bootstrap-t58qc\" (UID: \"f45d6513-2de0-4ece-bbbc-26c6780cd145\") " pod="openstack/keystone-bootstrap-t58qc" Feb 02 10:57:32 crc kubenswrapper[4782]: I0202 10:57:32.267358 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-464nh\" (UniqueName: \"kubernetes.io/projected/f45d6513-2de0-4ece-bbbc-26c6780cd145-kube-api-access-464nh\") pod \"keystone-bootstrap-t58qc\" (UID: \"f45d6513-2de0-4ece-bbbc-26c6780cd145\") " pod="openstack/keystone-bootstrap-t58qc" Feb 02 10:57:32 crc kubenswrapper[4782]: I0202 10:57:32.267422 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f45d6513-2de0-4ece-bbbc-26c6780cd145-credential-keys\") pod \"keystone-bootstrap-t58qc\" (UID: \"f45d6513-2de0-4ece-bbbc-26c6780cd145\") " pod="openstack/keystone-bootstrap-t58qc" Feb 02 10:57:32 crc kubenswrapper[4782]: I0202 10:57:32.267736 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f45d6513-2de0-4ece-bbbc-26c6780cd145-config-data\") pod \"keystone-bootstrap-t58qc\" (UID: \"f45d6513-2de0-4ece-bbbc-26c6780cd145\") " pod="openstack/keystone-bootstrap-t58qc" Feb 02 10:57:32 crc kubenswrapper[4782]: I0202 10:57:32.267875 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f45d6513-2de0-4ece-bbbc-26c6780cd145-fernet-keys\") pod \"keystone-bootstrap-t58qc\" (UID: \"f45d6513-2de0-4ece-bbbc-26c6780cd145\") " pod="openstack/keystone-bootstrap-t58qc" Feb 02 10:57:32 crc kubenswrapper[4782]: I0202 10:57:32.267891 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f45d6513-2de0-4ece-bbbc-26c6780cd145-combined-ca-bundle\") pod \"keystone-bootstrap-t58qc\" (UID: \"f45d6513-2de0-4ece-bbbc-26c6780cd145\") " pod="openstack/keystone-bootstrap-t58qc" Feb 02 10:57:32 crc kubenswrapper[4782]: I0202 10:57:32.275227 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f45d6513-2de0-4ece-bbbc-26c6780cd145-scripts\") pod \"keystone-bootstrap-t58qc\" (UID: \"f45d6513-2de0-4ece-bbbc-26c6780cd145\") " pod="openstack/keystone-bootstrap-t58qc" Feb 02 10:57:32 crc kubenswrapper[4782]: I0202 10:57:32.275601 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f45d6513-2de0-4ece-bbbc-26c6780cd145-combined-ca-bundle\") pod \"keystone-bootstrap-t58qc\" (UID: \"f45d6513-2de0-4ece-bbbc-26c6780cd145\") " pod="openstack/keystone-bootstrap-t58qc" Feb 02 10:57:32 crc kubenswrapper[4782]: I0202 10:57:32.283414 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f45d6513-2de0-4ece-bbbc-26c6780cd145-fernet-keys\") pod \"keystone-bootstrap-t58qc\" (UID: \"f45d6513-2de0-4ece-bbbc-26c6780cd145\") " pod="openstack/keystone-bootstrap-t58qc" Feb 02 10:57:32 crc kubenswrapper[4782]: I0202 10:57:32.283789 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f45d6513-2de0-4ece-bbbc-26c6780cd145-credential-keys\") pod \"keystone-bootstrap-t58qc\" (UID: 
\"f45d6513-2de0-4ece-bbbc-26c6780cd145\") " pod="openstack/keystone-bootstrap-t58qc" Feb 02 10:57:32 crc kubenswrapper[4782]: I0202 10:57:32.288435 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-464nh\" (UniqueName: \"kubernetes.io/projected/f45d6513-2de0-4ece-bbbc-26c6780cd145-kube-api-access-464nh\") pod \"keystone-bootstrap-t58qc\" (UID: \"f45d6513-2de0-4ece-bbbc-26c6780cd145\") " pod="openstack/keystone-bootstrap-t58qc" Feb 02 10:57:32 crc kubenswrapper[4782]: I0202 10:57:32.296374 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f45d6513-2de0-4ece-bbbc-26c6780cd145-config-data\") pod \"keystone-bootstrap-t58qc\" (UID: \"f45d6513-2de0-4ece-bbbc-26c6780cd145\") " pod="openstack/keystone-bootstrap-t58qc" Feb 02 10:57:32 crc kubenswrapper[4782]: I0202 10:57:32.315510 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-t58qc" Feb 02 10:57:32 crc kubenswrapper[4782]: I0202 10:57:32.833771 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c847df54-dabc-4a1c-a7dc-4d5c69b548fe" path="/var/lib/kubelet/pods/c847df54-dabc-4a1c-a7dc-4d5c69b548fe/volumes" Feb 02 10:57:33 crc kubenswrapper[4782]: I0202 10:57:33.469946 4782 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-86db49b7ff-tfpvt" podUID="ede109fe-b194-4a02-992d-f1132849fc0d" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.109:5353: connect: connection refused" Feb 02 10:57:37 crc kubenswrapper[4782]: I0202 10:57:37.332097 4782 generic.go:334] "Generic (PLEG): container finished" podID="1f885e8a-3dc8-4c07-ae3c-4c8ab072abc0" containerID="266185adfe7e4eb354941537aab95c70eb532acbac93a799d1b437d19b25b6c7" exitCode=0 Feb 02 10:57:37 crc kubenswrapper[4782]: I0202 10:57:37.332168 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-bwx58" event={"ID":"1f885e8a-3dc8-4c07-ae3c-4c8ab072abc0","Type":"ContainerDied","Data":"266185adfe7e4eb354941537aab95c70eb532acbac93a799d1b437d19b25b6c7"} Feb 02 10:57:42 crc kubenswrapper[4782]: E0202 10:57:42.621674 4782 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Feb 02 10:57:42 crc kubenswrapper[4782]: E0202 10:57:42.622249 4782 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db 
upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jbbw7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-qjtml_openstack(14e3fab7-be93-409c-a88e-85c8d0ca533c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 02 10:57:42 crc kubenswrapper[4782]: E0202 10:57:42.623403 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-qjtml" podUID="14e3fab7-be93-409c-a88e-85c8d0ca533c" Feb 02 10:57:42 crc kubenswrapper[4782]: I0202 10:57:42.723617 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-tfpvt" Feb 02 10:57:42 crc kubenswrapper[4782]: I0202 10:57:42.731293 4782 util.go:48] "No ready sandbox for pod can be found. 
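The three error records above are one failed image pull surfacing at three layers: the CRI client (log.go) reports the canceled copy, kuberuntime_manager dumps the full Container spec it was trying to start, and pod_workers gives up on this sync with ErrImagePull, leaving the pod to be retried later. The error is an rpc Canceled (the pull was interrupted mid-copy rather than rejected by the registry), so a later retry can plausibly succeed. When triaging a dump like this one, a throwaway filter is usually enough to collect every pull failure in one pass; the sketch below is hypothetical tooling (not part of the kubelet) and assumes one journal record per line on stdin:

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "regexp"
    )

    // Hypothetical helper: scan a kubelet journal dump on stdin and print the
    // image and error of every "PullImage from image service failed" record,
    // like the openstack-barbican-api failure above.
    func main() {
        re := regexp.MustCompile(`"PullImage from image service failed" err="([^"]+)" image="([^"]+)"`)
        sc := bufio.NewScanner(os.Stdin)
        sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // container-spec records can be very long
        for sc.Scan() {
            if m := re.FindStringSubmatch(sc.Text()); m != nil {
                fmt.Printf("image=%s\terr=%s\n", m[2], m[1])
            }
        }
        if err := sc.Err(); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }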
Need to start a new one" pod="openstack/glance-db-sync-bwx58" Feb 02 10:57:42 crc kubenswrapper[4782]: I0202 10:57:42.887546 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f885e8a-3dc8-4c07-ae3c-4c8ab072abc0-config-data\") pod \"1f885e8a-3dc8-4c07-ae3c-4c8ab072abc0\" (UID: \"1f885e8a-3dc8-4c07-ae3c-4c8ab072abc0\") " Feb 02 10:57:42 crc kubenswrapper[4782]: I0202 10:57:42.887740 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1f885e8a-3dc8-4c07-ae3c-4c8ab072abc0-db-sync-config-data\") pod \"1f885e8a-3dc8-4c07-ae3c-4c8ab072abc0\" (UID: \"1f885e8a-3dc8-4c07-ae3c-4c8ab072abc0\") " Feb 02 10:57:42 crc kubenswrapper[4782]: I0202 10:57:42.887850 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f885e8a-3dc8-4c07-ae3c-4c8ab072abc0-combined-ca-bundle\") pod \"1f885e8a-3dc8-4c07-ae3c-4c8ab072abc0\" (UID: \"1f885e8a-3dc8-4c07-ae3c-4c8ab072abc0\") " Feb 02 10:57:42 crc kubenswrapper[4782]: I0202 10:57:42.888563 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7xmdb\" (UniqueName: \"kubernetes.io/projected/1f885e8a-3dc8-4c07-ae3c-4c8ab072abc0-kube-api-access-7xmdb\") pod \"1f885e8a-3dc8-4c07-ae3c-4c8ab072abc0\" (UID: \"1f885e8a-3dc8-4c07-ae3c-4c8ab072abc0\") " Feb 02 10:57:42 crc kubenswrapper[4782]: I0202 10:57:42.888729 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ede109fe-b194-4a02-992d-f1132849fc0d-ovsdbserver-nb\") pod \"ede109fe-b194-4a02-992d-f1132849fc0d\" (UID: \"ede109fe-b194-4a02-992d-f1132849fc0d\") " Feb 02 10:57:42 crc kubenswrapper[4782]: I0202 10:57:42.888822 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ede109fe-b194-4a02-992d-f1132849fc0d-dns-svc\") pod \"ede109fe-b194-4a02-992d-f1132849fc0d\" (UID: \"ede109fe-b194-4a02-992d-f1132849fc0d\") " Feb 02 10:57:42 crc kubenswrapper[4782]: I0202 10:57:42.888901 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wkjmn\" (UniqueName: \"kubernetes.io/projected/ede109fe-b194-4a02-992d-f1132849fc0d-kube-api-access-wkjmn\") pod \"ede109fe-b194-4a02-992d-f1132849fc0d\" (UID: \"ede109fe-b194-4a02-992d-f1132849fc0d\") " Feb 02 10:57:42 crc kubenswrapper[4782]: I0202 10:57:42.888930 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ede109fe-b194-4a02-992d-f1132849fc0d-ovsdbserver-sb\") pod \"ede109fe-b194-4a02-992d-f1132849fc0d\" (UID: \"ede109fe-b194-4a02-992d-f1132849fc0d\") " Feb 02 10:57:42 crc kubenswrapper[4782]: I0202 10:57:42.889047 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ede109fe-b194-4a02-992d-f1132849fc0d-config\") pod \"ede109fe-b194-4a02-992d-f1132849fc0d\" (UID: \"ede109fe-b194-4a02-992d-f1132849fc0d\") " Feb 02 10:57:42 crc kubenswrapper[4782]: I0202 10:57:42.893908 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f885e8a-3dc8-4c07-ae3c-4c8ab072abc0-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod 
"1f885e8a-3dc8-4c07-ae3c-4c8ab072abc0" (UID: "1f885e8a-3dc8-4c07-ae3c-4c8ab072abc0"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:57:42 crc kubenswrapper[4782]: I0202 10:57:42.894033 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f885e8a-3dc8-4c07-ae3c-4c8ab072abc0-kube-api-access-7xmdb" (OuterVolumeSpecName: "kube-api-access-7xmdb") pod "1f885e8a-3dc8-4c07-ae3c-4c8ab072abc0" (UID: "1f885e8a-3dc8-4c07-ae3c-4c8ab072abc0"). InnerVolumeSpecName "kube-api-access-7xmdb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:57:42 crc kubenswrapper[4782]: I0202 10:57:42.895436 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ede109fe-b194-4a02-992d-f1132849fc0d-kube-api-access-wkjmn" (OuterVolumeSpecName: "kube-api-access-wkjmn") pod "ede109fe-b194-4a02-992d-f1132849fc0d" (UID: "ede109fe-b194-4a02-992d-f1132849fc0d"). InnerVolumeSpecName "kube-api-access-wkjmn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:57:42 crc kubenswrapper[4782]: I0202 10:57:42.914338 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f885e8a-3dc8-4c07-ae3c-4c8ab072abc0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1f885e8a-3dc8-4c07-ae3c-4c8ab072abc0" (UID: "1f885e8a-3dc8-4c07-ae3c-4c8ab072abc0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:57:42 crc kubenswrapper[4782]: I0202 10:57:42.943902 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ede109fe-b194-4a02-992d-f1132849fc0d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ede109fe-b194-4a02-992d-f1132849fc0d" (UID: "ede109fe-b194-4a02-992d-f1132849fc0d"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:57:42 crc kubenswrapper[4782]: I0202 10:57:42.946461 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ede109fe-b194-4a02-992d-f1132849fc0d-config" (OuterVolumeSpecName: "config") pod "ede109fe-b194-4a02-992d-f1132849fc0d" (UID: "ede109fe-b194-4a02-992d-f1132849fc0d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:57:42 crc kubenswrapper[4782]: I0202 10:57:42.956316 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ede109fe-b194-4a02-992d-f1132849fc0d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ede109fe-b194-4a02-992d-f1132849fc0d" (UID: "ede109fe-b194-4a02-992d-f1132849fc0d"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:57:42 crc kubenswrapper[4782]: I0202 10:57:42.962117 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f885e8a-3dc8-4c07-ae3c-4c8ab072abc0-config-data" (OuterVolumeSpecName: "config-data") pod "1f885e8a-3dc8-4c07-ae3c-4c8ab072abc0" (UID: "1f885e8a-3dc8-4c07-ae3c-4c8ab072abc0"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:57:42 crc kubenswrapper[4782]: I0202 10:57:42.969705 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ede109fe-b194-4a02-992d-f1132849fc0d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ede109fe-b194-4a02-992d-f1132849fc0d" (UID: "ede109fe-b194-4a02-992d-f1132849fc0d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:57:42 crc kubenswrapper[4782]: I0202 10:57:42.991369 4782 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ede109fe-b194-4a02-992d-f1132849fc0d-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:42 crc kubenswrapper[4782]: I0202 10:57:42.991723 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wkjmn\" (UniqueName: \"kubernetes.io/projected/ede109fe-b194-4a02-992d-f1132849fc0d-kube-api-access-wkjmn\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:42 crc kubenswrapper[4782]: I0202 10:57:42.991824 4782 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ede109fe-b194-4a02-992d-f1132849fc0d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:42 crc kubenswrapper[4782]: I0202 10:57:42.991915 4782 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ede109fe-b194-4a02-992d-f1132849fc0d-config\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:42 crc kubenswrapper[4782]: I0202 10:57:42.991995 4782 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f885e8a-3dc8-4c07-ae3c-4c8ab072abc0-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:42 crc kubenswrapper[4782]: I0202 10:57:42.992070 4782 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1f885e8a-3dc8-4c07-ae3c-4c8ab072abc0-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:42 crc kubenswrapper[4782]: I0202 10:57:42.992153 4782 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f885e8a-3dc8-4c07-ae3c-4c8ab072abc0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:42 crc kubenswrapper[4782]: I0202 10:57:42.992246 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7xmdb\" (UniqueName: \"kubernetes.io/projected/1f885e8a-3dc8-4c07-ae3c-4c8ab072abc0-kube-api-access-7xmdb\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:42 crc kubenswrapper[4782]: I0202 10:57:42.992331 4782 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ede109fe-b194-4a02-992d-f1132849fc0d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:43 crc kubenswrapper[4782]: I0202 10:57:43.384288 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-bwx58" event={"ID":"1f885e8a-3dc8-4c07-ae3c-4c8ab072abc0","Type":"ContainerDied","Data":"4a3dc46d0663b3ed61812db769ddcfdd9d2a4e53a6bf12f869fa34d7038f5e54"} Feb 02 10:57:43 crc kubenswrapper[4782]: I0202 10:57:43.384336 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4a3dc46d0663b3ed61812db769ddcfdd9d2a4e53a6bf12f869fa34d7038f5e54" Feb 02 10:57:43 crc kubenswrapper[4782]: I0202 10:57:43.384478 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-bwx58" Feb 02 10:57:43 crc kubenswrapper[4782]: I0202 10:57:43.387866 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-tfpvt" event={"ID":"ede109fe-b194-4a02-992d-f1132849fc0d","Type":"ContainerDied","Data":"70e38c18e7eeadf50540fc012954a93846a0fd6be83565a64c6ee300da9f11db"} Feb 02 10:57:43 crc kubenswrapper[4782]: I0202 10:57:43.387915 4782 scope.go:117] "RemoveContainer" containerID="79490beed063daacfc93ca748659e8cb59165e9834ed5634b2a43d1cdfb9b23a" Feb 02 10:57:43 crc kubenswrapper[4782]: I0202 10:57:43.388020 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-tfpvt" Feb 02 10:57:43 crc kubenswrapper[4782]: E0202 10:57:43.389808 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-qjtml" podUID="14e3fab7-be93-409c-a88e-85c8d0ca533c" Feb 02 10:57:43 crc kubenswrapper[4782]: I0202 10:57:43.438389 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-tfpvt"] Feb 02 10:57:43 crc kubenswrapper[4782]: I0202 10:57:43.445448 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-tfpvt"] Feb 02 10:57:43 crc kubenswrapper[4782]: I0202 10:57:43.470174 4782 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-86db49b7ff-tfpvt" podUID="ede109fe-b194-4a02-992d-f1132849fc0d" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.109:5353: i/o timeout" Feb 02 10:57:44 crc kubenswrapper[4782]: I0202 10:57:44.317712 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7987f74bbc-d8v8s"] Feb 02 10:57:44 crc kubenswrapper[4782]: E0202 10:57:44.318150 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ede109fe-b194-4a02-992d-f1132849fc0d" containerName="init" Feb 02 10:57:44 crc kubenswrapper[4782]: I0202 10:57:44.318165 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="ede109fe-b194-4a02-992d-f1132849fc0d" containerName="init" Feb 02 10:57:44 crc kubenswrapper[4782]: E0202 10:57:44.318175 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ede109fe-b194-4a02-992d-f1132849fc0d" containerName="dnsmasq-dns" Feb 02 10:57:44 crc kubenswrapper[4782]: I0202 10:57:44.318183 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="ede109fe-b194-4a02-992d-f1132849fc0d" containerName="dnsmasq-dns" Feb 02 10:57:44 crc kubenswrapper[4782]: E0202 10:57:44.318200 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f885e8a-3dc8-4c07-ae3c-4c8ab072abc0" containerName="glance-db-sync" Feb 02 10:57:44 crc kubenswrapper[4782]: I0202 10:57:44.318209 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f885e8a-3dc8-4c07-ae3c-4c8ab072abc0" containerName="glance-db-sync" Feb 02 10:57:44 crc kubenswrapper[4782]: I0202 10:57:44.318407 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="ede109fe-b194-4a02-992d-f1132849fc0d" containerName="dnsmasq-dns" Feb 02 10:57:44 crc kubenswrapper[4782]: I0202 10:57:44.318428 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f885e8a-3dc8-4c07-ae3c-4c8ab072abc0" containerName="glance-db-sync" Feb 02 10:57:44 crc kubenswrapper[4782]: I0202 
10:57:44.320031 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7987f74bbc-d8v8s" Feb 02 10:57:44 crc kubenswrapper[4782]: I0202 10:57:44.372913 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7987f74bbc-d8v8s"] Feb 02 10:57:44 crc kubenswrapper[4782]: I0202 10:57:44.422237 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3a78ac20-6473-4217-aa2d-3d2b4f03023b-dns-svc\") pod \"dnsmasq-dns-7987f74bbc-d8v8s\" (UID: \"3a78ac20-6473-4217-aa2d-3d2b4f03023b\") " pod="openstack/dnsmasq-dns-7987f74bbc-d8v8s" Feb 02 10:57:44 crc kubenswrapper[4782]: I0202 10:57:44.422295 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3a78ac20-6473-4217-aa2d-3d2b4f03023b-ovsdbserver-sb\") pod \"dnsmasq-dns-7987f74bbc-d8v8s\" (UID: \"3a78ac20-6473-4217-aa2d-3d2b4f03023b\") " pod="openstack/dnsmasq-dns-7987f74bbc-d8v8s" Feb 02 10:57:44 crc kubenswrapper[4782]: I0202 10:57:44.422326 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a78ac20-6473-4217-aa2d-3d2b4f03023b-config\") pod \"dnsmasq-dns-7987f74bbc-d8v8s\" (UID: \"3a78ac20-6473-4217-aa2d-3d2b4f03023b\") " pod="openstack/dnsmasq-dns-7987f74bbc-d8v8s" Feb 02 10:57:44 crc kubenswrapper[4782]: I0202 10:57:44.422386 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ch7gz\" (UniqueName: \"kubernetes.io/projected/3a78ac20-6473-4217-aa2d-3d2b4f03023b-kube-api-access-ch7gz\") pod \"dnsmasq-dns-7987f74bbc-d8v8s\" (UID: \"3a78ac20-6473-4217-aa2d-3d2b4f03023b\") " pod="openstack/dnsmasq-dns-7987f74bbc-d8v8s" Feb 02 10:57:44 crc kubenswrapper[4782]: I0202 10:57:44.422450 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3a78ac20-6473-4217-aa2d-3d2b4f03023b-ovsdbserver-nb\") pod \"dnsmasq-dns-7987f74bbc-d8v8s\" (UID: \"3a78ac20-6473-4217-aa2d-3d2b4f03023b\") " pod="openstack/dnsmasq-dns-7987f74bbc-d8v8s" Feb 02 10:57:44 crc kubenswrapper[4782]: I0202 10:57:44.523968 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3a78ac20-6473-4217-aa2d-3d2b4f03023b-ovsdbserver-nb\") pod \"dnsmasq-dns-7987f74bbc-d8v8s\" (UID: \"3a78ac20-6473-4217-aa2d-3d2b4f03023b\") " pod="openstack/dnsmasq-dns-7987f74bbc-d8v8s" Feb 02 10:57:44 crc kubenswrapper[4782]: I0202 10:57:44.524062 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3a78ac20-6473-4217-aa2d-3d2b4f03023b-dns-svc\") pod \"dnsmasq-dns-7987f74bbc-d8v8s\" (UID: \"3a78ac20-6473-4217-aa2d-3d2b4f03023b\") " pod="openstack/dnsmasq-dns-7987f74bbc-d8v8s" Feb 02 10:57:44 crc kubenswrapper[4782]: I0202 10:57:44.524082 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3a78ac20-6473-4217-aa2d-3d2b4f03023b-ovsdbserver-sb\") pod \"dnsmasq-dns-7987f74bbc-d8v8s\" (UID: \"3a78ac20-6473-4217-aa2d-3d2b4f03023b\") " pod="openstack/dnsmasq-dns-7987f74bbc-d8v8s" Feb 02 10:57:44 crc kubenswrapper[4782]: I0202 10:57:44.524101 4782 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a78ac20-6473-4217-aa2d-3d2b4f03023b-config\") pod \"dnsmasq-dns-7987f74bbc-d8v8s\" (UID: \"3a78ac20-6473-4217-aa2d-3d2b4f03023b\") " pod="openstack/dnsmasq-dns-7987f74bbc-d8v8s" Feb 02 10:57:44 crc kubenswrapper[4782]: I0202 10:57:44.524146 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ch7gz\" (UniqueName: \"kubernetes.io/projected/3a78ac20-6473-4217-aa2d-3d2b4f03023b-kube-api-access-ch7gz\") pod \"dnsmasq-dns-7987f74bbc-d8v8s\" (UID: \"3a78ac20-6473-4217-aa2d-3d2b4f03023b\") " pod="openstack/dnsmasq-dns-7987f74bbc-d8v8s" Feb 02 10:57:44 crc kubenswrapper[4782]: I0202 10:57:44.524921 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3a78ac20-6473-4217-aa2d-3d2b4f03023b-ovsdbserver-nb\") pod \"dnsmasq-dns-7987f74bbc-d8v8s\" (UID: \"3a78ac20-6473-4217-aa2d-3d2b4f03023b\") " pod="openstack/dnsmasq-dns-7987f74bbc-d8v8s" Feb 02 10:57:44 crc kubenswrapper[4782]: I0202 10:57:44.525151 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3a78ac20-6473-4217-aa2d-3d2b4f03023b-ovsdbserver-sb\") pod \"dnsmasq-dns-7987f74bbc-d8v8s\" (UID: \"3a78ac20-6473-4217-aa2d-3d2b4f03023b\") " pod="openstack/dnsmasq-dns-7987f74bbc-d8v8s" Feb 02 10:57:44 crc kubenswrapper[4782]: I0202 10:57:44.525163 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a78ac20-6473-4217-aa2d-3d2b4f03023b-config\") pod \"dnsmasq-dns-7987f74bbc-d8v8s\" (UID: \"3a78ac20-6473-4217-aa2d-3d2b4f03023b\") " pod="openstack/dnsmasq-dns-7987f74bbc-d8v8s" Feb 02 10:57:44 crc kubenswrapper[4782]: I0202 10:57:44.525589 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3a78ac20-6473-4217-aa2d-3d2b4f03023b-dns-svc\") pod \"dnsmasq-dns-7987f74bbc-d8v8s\" (UID: \"3a78ac20-6473-4217-aa2d-3d2b4f03023b\") " pod="openstack/dnsmasq-dns-7987f74bbc-d8v8s" Feb 02 10:57:44 crc kubenswrapper[4782]: I0202 10:57:44.548887 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ch7gz\" (UniqueName: \"kubernetes.io/projected/3a78ac20-6473-4217-aa2d-3d2b4f03023b-kube-api-access-ch7gz\") pod \"dnsmasq-dns-7987f74bbc-d8v8s\" (UID: \"3a78ac20-6473-4217-aa2d-3d2b4f03023b\") " pod="openstack/dnsmasq-dns-7987f74bbc-d8v8s" Feb 02 10:57:44 crc kubenswrapper[4782]: I0202 10:57:44.666268 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7987f74bbc-d8v8s" Feb 02 10:57:44 crc kubenswrapper[4782]: I0202 10:57:44.830411 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ede109fe-b194-4a02-992d-f1132849fc0d" path="/var/lib/kubelet/pods/ede109fe-b194-4a02-992d-f1132849fc0d/volumes" Feb 02 10:57:45 crc kubenswrapper[4782]: E0202 10:57:45.125319 4782 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Feb 02 10:57:45 crc kubenswrapper[4782]: E0202 10:57:45.125496 4782 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cpd9r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-rvrqj_openstack(bf4fe919-15fe-4478-be0f-8e3bf00147b4): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 02 10:57:45 crc kubenswrapper[4782]: E0202 10:57:45.126774 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack/cinder-db-sync-rvrqj" podUID="bf4fe919-15fe-4478-be0f-8e3bf00147b4" Feb 02 10:57:45 crc kubenswrapper[4782]: I0202 10:57:45.151093 4782 scope.go:117] "RemoveContainer" containerID="ab258d10ba96d70b4cfb3ed122e2e498a9dd3427d5d89bbbbf656b5efa7359c1" Feb 02 10:57:45 crc kubenswrapper[4782]: I0202 10:57:45.451719 4782 generic.go:334] "Generic (PLEG): container finished" podID="f8943d8a-337b-4852-9c11-55191a08a850" containerID="bb8bee75583f03091be99a3eb7b070a749409afcb16ccfe4ae7f61a996ce78c5" exitCode=0 Feb 02 10:57:45 crc kubenswrapper[4782]: I0202 10:57:45.451777 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-ztmll" event={"ID":"f8943d8a-337b-4852-9c11-55191a08a850","Type":"ContainerDied","Data":"bb8bee75583f03091be99a3eb7b070a749409afcb16ccfe4ae7f61a996ce78c5"} Feb 02 10:57:45 crc kubenswrapper[4782]: E0202 10:57:45.453790 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-rvrqj" podUID="bf4fe919-15fe-4478-be0f-8e3bf00147b4" Feb 02 10:57:45 crc kubenswrapper[4782]: W0202 10:57:45.655298 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf45d6513_2de0_4ece_bbbc_26c6780cd145.slice/crio-f82234fec9aa85ae5a4b7eb150c4046735b933ad52170747b999d2700ebd8ccb WatchSource:0}: Error finding container f82234fec9aa85ae5a4b7eb150c4046735b933ad52170747b999d2700ebd8ccb: Status 404 returned error can't find the container with id f82234fec9aa85ae5a4b7eb150c4046735b933ad52170747b999d2700ebd8ccb Feb 02 10:57:45 crc kubenswrapper[4782]: I0202 10:57:45.661039 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 02 10:57:45 crc kubenswrapper[4782]: I0202 10:57:45.663448 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-t58qc"] Feb 02 10:57:45 crc kubenswrapper[4782]: I0202 10:57:45.757798 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7987f74bbc-d8v8s"] Feb 02 10:57:46 crc kubenswrapper[4782]: I0202 10:57:46.465928 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8eb720ee-de8d-42e4-b189-aa3d58478ab9","Type":"ContainerStarted","Data":"cff050bea02cab179349ed9f4910ee4f8ce16895bfa3f74bcd1eb0342c469f08"} Feb 02 10:57:46 crc kubenswrapper[4782]: I0202 10:57:46.472866 4782 generic.go:334] "Generic (PLEG): container finished" podID="3a78ac20-6473-4217-aa2d-3d2b4f03023b" containerID="8a0872c1a29cedeb49eb6758f04a657ecac417d050239d59568226587c752e27" exitCode=0 Feb 02 10:57:46 crc kubenswrapper[4782]: I0202 10:57:46.473003 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7987f74bbc-d8v8s" event={"ID":"3a78ac20-6473-4217-aa2d-3d2b4f03023b","Type":"ContainerDied","Data":"8a0872c1a29cedeb49eb6758f04a657ecac417d050239d59568226587c752e27"} Feb 02 10:57:46 crc kubenswrapper[4782]: I0202 10:57:46.473075 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7987f74bbc-d8v8s" event={"ID":"3a78ac20-6473-4217-aa2d-3d2b4f03023b","Type":"ContainerStarted","Data":"4d3202753bbc7ad4f1069d7c505ccba805f85fd5770fe99dd65abc48e1c13646"} Feb 02 10:57:46 crc kubenswrapper[4782]: I0202 10:57:46.476290 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/keystone-bootstrap-t58qc" event={"ID":"f45d6513-2de0-4ece-bbbc-26c6780cd145","Type":"ContainerStarted","Data":"f96dc9d1eca03acac5731eacf624fbd7091513cfed0cc461bda4976a5d7b4254"} Feb 02 10:57:46 crc kubenswrapper[4782]: I0202 10:57:46.476362 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-t58qc" event={"ID":"f45d6513-2de0-4ece-bbbc-26c6780cd145","Type":"ContainerStarted","Data":"f82234fec9aa85ae5a4b7eb150c4046735b933ad52170747b999d2700ebd8ccb"} Feb 02 10:57:46 crc kubenswrapper[4782]: I0202 10:57:46.479319 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-9zhdd" event={"ID":"173458b2-9a63-4456-9bc9-698d1414a679","Type":"ContainerStarted","Data":"882286d92ef94b177095925f1761989436448214282f382f07a04e273ec62549"} Feb 02 10:57:46 crc kubenswrapper[4782]: I0202 10:57:46.537398 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-t58qc" podStartSLOduration=15.537380106 podStartE2EDuration="15.537380106s" podCreationTimestamp="2026-02-02 10:57:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:57:46.533796244 +0000 UTC m=+1146.417988980" watchObservedRunningTime="2026-02-02 10:57:46.537380106 +0000 UTC m=+1146.421572822" Feb 02 10:57:46 crc kubenswrapper[4782]: I0202 10:57:46.551214 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-9zhdd" podStartSLOduration=3.426953622 podStartE2EDuration="28.551195412s" podCreationTimestamp="2026-02-02 10:57:18 +0000 UTC" firstStartedPulling="2026-02-02 10:57:20.000443449 +0000 UTC m=+1119.884636175" lastFinishedPulling="2026-02-02 10:57:45.124685249 +0000 UTC m=+1145.008877965" observedRunningTime="2026-02-02 10:57:46.547677192 +0000 UTC m=+1146.431869918" watchObservedRunningTime="2026-02-02 10:57:46.551195412 +0000 UTC m=+1146.435388128" Feb 02 10:57:46 crc kubenswrapper[4782]: I0202 10:57:46.950585 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-ztmll" Feb 02 10:57:47 crc kubenswrapper[4782]: I0202 10:57:47.077658 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8943d8a-337b-4852-9c11-55191a08a850-combined-ca-bundle\") pod \"f8943d8a-337b-4852-9c11-55191a08a850\" (UID: \"f8943d8a-337b-4852-9c11-55191a08a850\") " Feb 02 10:57:47 crc kubenswrapper[4782]: I0202 10:57:47.077763 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-frrfw\" (UniqueName: \"kubernetes.io/projected/f8943d8a-337b-4852-9c11-55191a08a850-kube-api-access-frrfw\") pod \"f8943d8a-337b-4852-9c11-55191a08a850\" (UID: \"f8943d8a-337b-4852-9c11-55191a08a850\") " Feb 02 10:57:47 crc kubenswrapper[4782]: I0202 10:57:47.077882 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f8943d8a-337b-4852-9c11-55191a08a850-config\") pod \"f8943d8a-337b-4852-9c11-55191a08a850\" (UID: \"f8943d8a-337b-4852-9c11-55191a08a850\") " Feb 02 10:57:47 crc kubenswrapper[4782]: I0202 10:57:47.081563 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8943d8a-337b-4852-9c11-55191a08a850-kube-api-access-frrfw" (OuterVolumeSpecName: "kube-api-access-frrfw") pod "f8943d8a-337b-4852-9c11-55191a08a850" (UID: "f8943d8a-337b-4852-9c11-55191a08a850"). InnerVolumeSpecName "kube-api-access-frrfw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:57:47 crc kubenswrapper[4782]: I0202 10:57:47.104335 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8943d8a-337b-4852-9c11-55191a08a850-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f8943d8a-337b-4852-9c11-55191a08a850" (UID: "f8943d8a-337b-4852-9c11-55191a08a850"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:57:47 crc kubenswrapper[4782]: I0202 10:57:47.107109 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8943d8a-337b-4852-9c11-55191a08a850-config" (OuterVolumeSpecName: "config") pod "f8943d8a-337b-4852-9c11-55191a08a850" (UID: "f8943d8a-337b-4852-9c11-55191a08a850"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:57:47 crc kubenswrapper[4782]: I0202 10:57:47.180243 4782 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8943d8a-337b-4852-9c11-55191a08a850-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:47 crc kubenswrapper[4782]: I0202 10:57:47.180285 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-frrfw\" (UniqueName: \"kubernetes.io/projected/f8943d8a-337b-4852-9c11-55191a08a850-kube-api-access-frrfw\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:47 crc kubenswrapper[4782]: I0202 10:57:47.180296 4782 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/f8943d8a-337b-4852-9c11-55191a08a850-config\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:47 crc kubenswrapper[4782]: I0202 10:57:47.490507 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8eb720ee-de8d-42e4-b189-aa3d58478ab9","Type":"ContainerStarted","Data":"64ab0cbbeed3f64299d16361c7ebfd14f8590d54efef7b63fa8c440f0b029ef9"} Feb 02 10:57:47 crc kubenswrapper[4782]: I0202 10:57:47.496792 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7987f74bbc-d8v8s" event={"ID":"3a78ac20-6473-4217-aa2d-3d2b4f03023b","Type":"ContainerStarted","Data":"49be3972beea78b8ed02b289136357ca04f17b61fc078a2a47e64b8b6ad531bf"} Feb 02 10:57:47 crc kubenswrapper[4782]: I0202 10:57:47.497383 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7987f74bbc-d8v8s" Feb 02 10:57:47 crc kubenswrapper[4782]: I0202 10:57:47.499852 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-ztmll" event={"ID":"f8943d8a-337b-4852-9c11-55191a08a850","Type":"ContainerDied","Data":"4eeaf35a3a5ede4d6f2a9d74b8e11dba599b16a4d837fbb2ca932a313cf40194"} Feb 02 10:57:47 crc kubenswrapper[4782]: I0202 10:57:47.499908 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4eeaf35a3a5ede4d6f2a9d74b8e11dba599b16a4d837fbb2ca932a313cf40194" Feb 02 10:57:47 crc kubenswrapper[4782]: I0202 10:57:47.500094 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-ztmll" Feb 02 10:57:47 crc kubenswrapper[4782]: I0202 10:57:47.543885 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7987f74bbc-d8v8s" podStartSLOduration=3.543864417 podStartE2EDuration="3.543864417s" podCreationTimestamp="2026-02-02 10:57:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:57:47.528367353 +0000 UTC m=+1147.412560089" watchObservedRunningTime="2026-02-02 10:57:47.543864417 +0000 UTC m=+1147.428057133" Feb 02 10:57:47 crc kubenswrapper[4782]: I0202 10:57:47.718719 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7987f74bbc-d8v8s"] Feb 02 10:57:47 crc kubenswrapper[4782]: I0202 10:57:47.775757 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7b946d459c-pbdmr"] Feb 02 10:57:47 crc kubenswrapper[4782]: E0202 10:57:47.776093 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8943d8a-337b-4852-9c11-55191a08a850" containerName="neutron-db-sync" Feb 02 10:57:47 crc kubenswrapper[4782]: I0202 10:57:47.776106 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8943d8a-337b-4852-9c11-55191a08a850" containerName="neutron-db-sync" Feb 02 10:57:47 crc kubenswrapper[4782]: I0202 10:57:47.776260 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8943d8a-337b-4852-9c11-55191a08a850" containerName="neutron-db-sync" Feb 02 10:57:47 crc kubenswrapper[4782]: I0202 10:57:47.787164 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7b946d459c-pbdmr" Feb 02 10:57:47 crc kubenswrapper[4782]: I0202 10:57:47.825656 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7b946d459c-pbdmr"] Feb 02 10:57:47 crc kubenswrapper[4782]: I0202 10:57:47.898778 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7f6a8ebb-a211-4505-b934-3048a67b2f47-dns-svc\") pod \"dnsmasq-dns-7b946d459c-pbdmr\" (UID: \"7f6a8ebb-a211-4505-b934-3048a67b2f47\") " pod="openstack/dnsmasq-dns-7b946d459c-pbdmr" Feb 02 10:57:47 crc kubenswrapper[4782]: I0202 10:57:47.898870 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f6a8ebb-a211-4505-b934-3048a67b2f47-config\") pod \"dnsmasq-dns-7b946d459c-pbdmr\" (UID: \"7f6a8ebb-a211-4505-b934-3048a67b2f47\") " pod="openstack/dnsmasq-dns-7b946d459c-pbdmr" Feb 02 10:57:47 crc kubenswrapper[4782]: I0202 10:57:47.898903 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmddp\" (UniqueName: \"kubernetes.io/projected/7f6a8ebb-a211-4505-b934-3048a67b2f47-kube-api-access-rmddp\") pod \"dnsmasq-dns-7b946d459c-pbdmr\" (UID: \"7f6a8ebb-a211-4505-b934-3048a67b2f47\") " pod="openstack/dnsmasq-dns-7b946d459c-pbdmr" Feb 02 10:57:47 crc kubenswrapper[4782]: I0202 10:57:47.899020 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7f6a8ebb-a211-4505-b934-3048a67b2f47-ovsdbserver-sb\") pod \"dnsmasq-dns-7b946d459c-pbdmr\" (UID: \"7f6a8ebb-a211-4505-b934-3048a67b2f47\") " pod="openstack/dnsmasq-dns-7b946d459c-pbdmr" Feb 02 10:57:47 crc 
kubenswrapper[4782]: I0202 10:57:47.899046 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7f6a8ebb-a211-4505-b934-3048a67b2f47-ovsdbserver-nb\") pod \"dnsmasq-dns-7b946d459c-pbdmr\" (UID: \"7f6a8ebb-a211-4505-b934-3048a67b2f47\") " pod="openstack/dnsmasq-dns-7b946d459c-pbdmr" Feb 02 10:57:47 crc kubenswrapper[4782]: I0202 10:57:47.925228 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-6c4497f454-mphzd"] Feb 02 10:57:47 crc kubenswrapper[4782]: I0202 10:57:47.927565 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6c4497f454-mphzd" Feb 02 10:57:47 crc kubenswrapper[4782]: I0202 10:57:47.936315 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-ntkkh" Feb 02 10:57:47 crc kubenswrapper[4782]: I0202 10:57:47.936675 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Feb 02 10:57:47 crc kubenswrapper[4782]: I0202 10:57:47.936832 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 02 10:57:47 crc kubenswrapper[4782]: I0202 10:57:47.936978 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 02 10:57:47 crc kubenswrapper[4782]: I0202 10:57:47.979011 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6c4497f454-mphzd"] Feb 02 10:57:48 crc kubenswrapper[4782]: I0202 10:57:48.001439 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/64a58e87-7403-40ee-804f-3ddd256a166a-httpd-config\") pod \"neutron-6c4497f454-mphzd\" (UID: \"64a58e87-7403-40ee-804f-3ddd256a166a\") " pod="openstack/neutron-6c4497f454-mphzd" Feb 02 10:57:48 crc kubenswrapper[4782]: I0202 10:57:48.001497 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7f6a8ebb-a211-4505-b934-3048a67b2f47-dns-svc\") pod \"dnsmasq-dns-7b946d459c-pbdmr\" (UID: \"7f6a8ebb-a211-4505-b934-3048a67b2f47\") " pod="openstack/dnsmasq-dns-7b946d459c-pbdmr" Feb 02 10:57:48 crc kubenswrapper[4782]: I0202 10:57:48.002457 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7f6a8ebb-a211-4505-b934-3048a67b2f47-dns-svc\") pod \"dnsmasq-dns-7b946d459c-pbdmr\" (UID: \"7f6a8ebb-a211-4505-b934-3048a67b2f47\") " pod="openstack/dnsmasq-dns-7b946d459c-pbdmr" Feb 02 10:57:48 crc kubenswrapper[4782]: I0202 10:57:48.002544 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f6a8ebb-a211-4505-b934-3048a67b2f47-config\") pod \"dnsmasq-dns-7b946d459c-pbdmr\" (UID: \"7f6a8ebb-a211-4505-b934-3048a67b2f47\") " pod="openstack/dnsmasq-dns-7b946d459c-pbdmr" Feb 02 10:57:48 crc kubenswrapper[4782]: I0202 10:57:48.003342 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rmddp\" (UniqueName: \"kubernetes.io/projected/7f6a8ebb-a211-4505-b934-3048a67b2f47-kube-api-access-rmddp\") pod \"dnsmasq-dns-7b946d459c-pbdmr\" (UID: \"7f6a8ebb-a211-4505-b934-3048a67b2f47\") " pod="openstack/dnsmasq-dns-7b946d459c-pbdmr" Feb 02 10:57:48 crc kubenswrapper[4782]: I0202 10:57:48.003282 4782 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f6a8ebb-a211-4505-b934-3048a67b2f47-config\") pod \"dnsmasq-dns-7b946d459c-pbdmr\" (UID: \"7f6a8ebb-a211-4505-b934-3048a67b2f47\") " pod="openstack/dnsmasq-dns-7b946d459c-pbdmr" Feb 02 10:57:48 crc kubenswrapper[4782]: I0202 10:57:48.003925 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nj6hh\" (UniqueName: \"kubernetes.io/projected/64a58e87-7403-40ee-804f-3ddd256a166a-kube-api-access-nj6hh\") pod \"neutron-6c4497f454-mphzd\" (UID: \"64a58e87-7403-40ee-804f-3ddd256a166a\") " pod="openstack/neutron-6c4497f454-mphzd" Feb 02 10:57:48 crc kubenswrapper[4782]: I0202 10:57:48.003975 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/64a58e87-7403-40ee-804f-3ddd256a166a-ovndb-tls-certs\") pod \"neutron-6c4497f454-mphzd\" (UID: \"64a58e87-7403-40ee-804f-3ddd256a166a\") " pod="openstack/neutron-6c4497f454-mphzd" Feb 02 10:57:48 crc kubenswrapper[4782]: I0202 10:57:48.004054 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7f6a8ebb-a211-4505-b934-3048a67b2f47-ovsdbserver-sb\") pod \"dnsmasq-dns-7b946d459c-pbdmr\" (UID: \"7f6a8ebb-a211-4505-b934-3048a67b2f47\") " pod="openstack/dnsmasq-dns-7b946d459c-pbdmr" Feb 02 10:57:48 crc kubenswrapper[4782]: I0202 10:57:48.004085 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7f6a8ebb-a211-4505-b934-3048a67b2f47-ovsdbserver-nb\") pod \"dnsmasq-dns-7b946d459c-pbdmr\" (UID: \"7f6a8ebb-a211-4505-b934-3048a67b2f47\") " pod="openstack/dnsmasq-dns-7b946d459c-pbdmr" Feb 02 10:57:48 crc kubenswrapper[4782]: I0202 10:57:48.004157 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64a58e87-7403-40ee-804f-3ddd256a166a-combined-ca-bundle\") pod \"neutron-6c4497f454-mphzd\" (UID: \"64a58e87-7403-40ee-804f-3ddd256a166a\") " pod="openstack/neutron-6c4497f454-mphzd" Feb 02 10:57:48 crc kubenswrapper[4782]: I0202 10:57:48.004219 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/64a58e87-7403-40ee-804f-3ddd256a166a-config\") pod \"neutron-6c4497f454-mphzd\" (UID: \"64a58e87-7403-40ee-804f-3ddd256a166a\") " pod="openstack/neutron-6c4497f454-mphzd" Feb 02 10:57:48 crc kubenswrapper[4782]: I0202 10:57:48.005154 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7f6a8ebb-a211-4505-b934-3048a67b2f47-ovsdbserver-nb\") pod \"dnsmasq-dns-7b946d459c-pbdmr\" (UID: \"7f6a8ebb-a211-4505-b934-3048a67b2f47\") " pod="openstack/dnsmasq-dns-7b946d459c-pbdmr" Feb 02 10:57:48 crc kubenswrapper[4782]: I0202 10:57:48.010383 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7f6a8ebb-a211-4505-b934-3048a67b2f47-ovsdbserver-sb\") pod \"dnsmasq-dns-7b946d459c-pbdmr\" (UID: \"7f6a8ebb-a211-4505-b934-3048a67b2f47\") " pod="openstack/dnsmasq-dns-7b946d459c-pbdmr" Feb 02 10:57:48 crc kubenswrapper[4782]: I0202 10:57:48.049688 4782 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-rmddp\" (UniqueName: \"kubernetes.io/projected/7f6a8ebb-a211-4505-b934-3048a67b2f47-kube-api-access-rmddp\") pod \"dnsmasq-dns-7b946d459c-pbdmr\" (UID: \"7f6a8ebb-a211-4505-b934-3048a67b2f47\") " pod="openstack/dnsmasq-dns-7b946d459c-pbdmr" Feb 02 10:57:48 crc kubenswrapper[4782]: I0202 10:57:48.105898 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/64a58e87-7403-40ee-804f-3ddd256a166a-httpd-config\") pod \"neutron-6c4497f454-mphzd\" (UID: \"64a58e87-7403-40ee-804f-3ddd256a166a\") " pod="openstack/neutron-6c4497f454-mphzd" Feb 02 10:57:48 crc kubenswrapper[4782]: I0202 10:57:48.105978 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nj6hh\" (UniqueName: \"kubernetes.io/projected/64a58e87-7403-40ee-804f-3ddd256a166a-kube-api-access-nj6hh\") pod \"neutron-6c4497f454-mphzd\" (UID: \"64a58e87-7403-40ee-804f-3ddd256a166a\") " pod="openstack/neutron-6c4497f454-mphzd" Feb 02 10:57:48 crc kubenswrapper[4782]: I0202 10:57:48.106004 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/64a58e87-7403-40ee-804f-3ddd256a166a-ovndb-tls-certs\") pod \"neutron-6c4497f454-mphzd\" (UID: \"64a58e87-7403-40ee-804f-3ddd256a166a\") " pod="openstack/neutron-6c4497f454-mphzd" Feb 02 10:57:48 crc kubenswrapper[4782]: I0202 10:57:48.106041 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64a58e87-7403-40ee-804f-3ddd256a166a-combined-ca-bundle\") pod \"neutron-6c4497f454-mphzd\" (UID: \"64a58e87-7403-40ee-804f-3ddd256a166a\") " pod="openstack/neutron-6c4497f454-mphzd" Feb 02 10:57:48 crc kubenswrapper[4782]: I0202 10:57:48.106067 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/64a58e87-7403-40ee-804f-3ddd256a166a-config\") pod \"neutron-6c4497f454-mphzd\" (UID: \"64a58e87-7403-40ee-804f-3ddd256a166a\") " pod="openstack/neutron-6c4497f454-mphzd" Feb 02 10:57:48 crc kubenswrapper[4782]: I0202 10:57:48.113028 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/64a58e87-7403-40ee-804f-3ddd256a166a-httpd-config\") pod \"neutron-6c4497f454-mphzd\" (UID: \"64a58e87-7403-40ee-804f-3ddd256a166a\") " pod="openstack/neutron-6c4497f454-mphzd" Feb 02 10:57:48 crc kubenswrapper[4782]: I0202 10:57:48.113819 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7b946d459c-pbdmr" Feb 02 10:57:48 crc kubenswrapper[4782]: I0202 10:57:48.128299 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64a58e87-7403-40ee-804f-3ddd256a166a-combined-ca-bundle\") pod \"neutron-6c4497f454-mphzd\" (UID: \"64a58e87-7403-40ee-804f-3ddd256a166a\") " pod="openstack/neutron-6c4497f454-mphzd" Feb 02 10:57:48 crc kubenswrapper[4782]: I0202 10:57:48.128893 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/64a58e87-7403-40ee-804f-3ddd256a166a-ovndb-tls-certs\") pod \"neutron-6c4497f454-mphzd\" (UID: \"64a58e87-7403-40ee-804f-3ddd256a166a\") " pod="openstack/neutron-6c4497f454-mphzd" Feb 02 10:57:48 crc kubenswrapper[4782]: I0202 10:57:48.130580 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/64a58e87-7403-40ee-804f-3ddd256a166a-config\") pod \"neutron-6c4497f454-mphzd\" (UID: \"64a58e87-7403-40ee-804f-3ddd256a166a\") " pod="openstack/neutron-6c4497f454-mphzd" Feb 02 10:57:48 crc kubenswrapper[4782]: I0202 10:57:48.136877 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nj6hh\" (UniqueName: \"kubernetes.io/projected/64a58e87-7403-40ee-804f-3ddd256a166a-kube-api-access-nj6hh\") pod \"neutron-6c4497f454-mphzd\" (UID: \"64a58e87-7403-40ee-804f-3ddd256a166a\") " pod="openstack/neutron-6c4497f454-mphzd" Feb 02 10:57:48 crc kubenswrapper[4782]: I0202 10:57:48.267236 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6c4497f454-mphzd" Feb 02 10:57:48 crc kubenswrapper[4782]: I0202 10:57:48.948175 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7b946d459c-pbdmr"] Feb 02 10:57:49 crc kubenswrapper[4782]: I0202 10:57:49.276075 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6c4497f454-mphzd"] Feb 02 10:57:49 crc kubenswrapper[4782]: W0202 10:57:49.282559 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod64a58e87_7403_40ee_804f_3ddd256a166a.slice/crio-320ca372c7bfc8f61ec1c100d757a206e5a44d87197850166ccabc894748efbf WatchSource:0}: Error finding container 320ca372c7bfc8f61ec1c100d757a206e5a44d87197850166ccabc894748efbf: Status 404 returned error can't find the container with id 320ca372c7bfc8f61ec1c100d757a206e5a44d87197850166ccabc894748efbf Feb 02 10:57:49 crc kubenswrapper[4782]: I0202 10:57:49.530119 4782 generic.go:334] "Generic (PLEG): container finished" podID="7f6a8ebb-a211-4505-b934-3048a67b2f47" containerID="307a5c49cf6e90bbab2ae7599314e3e57ac09662374b82e043747eec646d2bdd" exitCode=0 Feb 02 10:57:49 crc kubenswrapper[4782]: I0202 10:57:49.530169 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b946d459c-pbdmr" event={"ID":"7f6a8ebb-a211-4505-b934-3048a67b2f47","Type":"ContainerDied","Data":"307a5c49cf6e90bbab2ae7599314e3e57ac09662374b82e043747eec646d2bdd"} Feb 02 10:57:49 crc kubenswrapper[4782]: I0202 10:57:49.530212 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b946d459c-pbdmr" event={"ID":"7f6a8ebb-a211-4505-b934-3048a67b2f47","Type":"ContainerStarted","Data":"d2d57fff99a40c3d971a276c962ad40364a2dc18610c2d3bd9d74bd06dd02f62"} Feb 02 10:57:49 crc kubenswrapper[4782]: I0202 10:57:49.544293 
4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6c4497f454-mphzd" event={"ID":"64a58e87-7403-40ee-804f-3ddd256a166a","Type":"ContainerStarted","Data":"320ca372c7bfc8f61ec1c100d757a206e5a44d87197850166ccabc894748efbf"} Feb 02 10:57:49 crc kubenswrapper[4782]: I0202 10:57:49.560556 4782 generic.go:334] "Generic (PLEG): container finished" podID="173458b2-9a63-4456-9bc9-698d1414a679" containerID="882286d92ef94b177095925f1761989436448214282f382f07a04e273ec62549" exitCode=0 Feb 02 10:57:49 crc kubenswrapper[4782]: I0202 10:57:49.560757 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7987f74bbc-d8v8s" podUID="3a78ac20-6473-4217-aa2d-3d2b4f03023b" containerName="dnsmasq-dns" containerID="cri-o://49be3972beea78b8ed02b289136357ca04f17b61fc078a2a47e64b8b6ad531bf" gracePeriod=10 Feb 02 10:57:49 crc kubenswrapper[4782]: I0202 10:57:49.560971 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-9zhdd" event={"ID":"173458b2-9a63-4456-9bc9-698d1414a679","Type":"ContainerDied","Data":"882286d92ef94b177095925f1761989436448214282f382f07a04e273ec62549"} Feb 02 10:57:50 crc kubenswrapper[4782]: I0202 10:57:50.038591 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7987f74bbc-d8v8s" Feb 02 10:57:50 crc kubenswrapper[4782]: I0202 10:57:50.068527 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3a78ac20-6473-4217-aa2d-3d2b4f03023b-ovsdbserver-nb\") pod \"3a78ac20-6473-4217-aa2d-3d2b4f03023b\" (UID: \"3a78ac20-6473-4217-aa2d-3d2b4f03023b\") " Feb 02 10:57:50 crc kubenswrapper[4782]: I0202 10:57:50.068620 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3a78ac20-6473-4217-aa2d-3d2b4f03023b-ovsdbserver-sb\") pod \"3a78ac20-6473-4217-aa2d-3d2b4f03023b\" (UID: \"3a78ac20-6473-4217-aa2d-3d2b4f03023b\") " Feb 02 10:57:50 crc kubenswrapper[4782]: I0202 10:57:50.068668 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ch7gz\" (UniqueName: \"kubernetes.io/projected/3a78ac20-6473-4217-aa2d-3d2b4f03023b-kube-api-access-ch7gz\") pod \"3a78ac20-6473-4217-aa2d-3d2b4f03023b\" (UID: \"3a78ac20-6473-4217-aa2d-3d2b4f03023b\") " Feb 02 10:57:50 crc kubenswrapper[4782]: I0202 10:57:50.068710 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3a78ac20-6473-4217-aa2d-3d2b4f03023b-dns-svc\") pod \"3a78ac20-6473-4217-aa2d-3d2b4f03023b\" (UID: \"3a78ac20-6473-4217-aa2d-3d2b4f03023b\") " Feb 02 10:57:50 crc kubenswrapper[4782]: I0202 10:57:50.068759 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a78ac20-6473-4217-aa2d-3d2b4f03023b-config\") pod \"3a78ac20-6473-4217-aa2d-3d2b4f03023b\" (UID: \"3a78ac20-6473-4217-aa2d-3d2b4f03023b\") " Feb 02 10:57:50 crc kubenswrapper[4782]: I0202 10:57:50.081484 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a78ac20-6473-4217-aa2d-3d2b4f03023b-kube-api-access-ch7gz" (OuterVolumeSpecName: "kube-api-access-ch7gz") pod "3a78ac20-6473-4217-aa2d-3d2b4f03023b" (UID: "3a78ac20-6473-4217-aa2d-3d2b4f03023b"). InnerVolumeSpecName "kube-api-access-ch7gz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:57:50 crc kubenswrapper[4782]: I0202 10:57:50.174527 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ch7gz\" (UniqueName: \"kubernetes.io/projected/3a78ac20-6473-4217-aa2d-3d2b4f03023b-kube-api-access-ch7gz\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:50 crc kubenswrapper[4782]: I0202 10:57:50.189108 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3a78ac20-6473-4217-aa2d-3d2b4f03023b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "3a78ac20-6473-4217-aa2d-3d2b4f03023b" (UID: "3a78ac20-6473-4217-aa2d-3d2b4f03023b"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:57:50 crc kubenswrapper[4782]: I0202 10:57:50.192914 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3a78ac20-6473-4217-aa2d-3d2b4f03023b-config" (OuterVolumeSpecName: "config") pod "3a78ac20-6473-4217-aa2d-3d2b4f03023b" (UID: "3a78ac20-6473-4217-aa2d-3d2b4f03023b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:57:50 crc kubenswrapper[4782]: I0202 10:57:50.196518 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3a78ac20-6473-4217-aa2d-3d2b4f03023b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3a78ac20-6473-4217-aa2d-3d2b4f03023b" (UID: "3a78ac20-6473-4217-aa2d-3d2b4f03023b"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:57:50 crc kubenswrapper[4782]: I0202 10:57:50.197447 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3a78ac20-6473-4217-aa2d-3d2b4f03023b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3a78ac20-6473-4217-aa2d-3d2b4f03023b" (UID: "3a78ac20-6473-4217-aa2d-3d2b4f03023b"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:57:50 crc kubenswrapper[4782]: I0202 10:57:50.276996 4782 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3a78ac20-6473-4217-aa2d-3d2b4f03023b-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:50 crc kubenswrapper[4782]: I0202 10:57:50.277031 4782 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3a78ac20-6473-4217-aa2d-3d2b4f03023b-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:50 crc kubenswrapper[4782]: I0202 10:57:50.277048 4782 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a78ac20-6473-4217-aa2d-3d2b4f03023b-config\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:50 crc kubenswrapper[4782]: I0202 10:57:50.277057 4782 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3a78ac20-6473-4217-aa2d-3d2b4f03023b-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:50 crc kubenswrapper[4782]: I0202 10:57:50.466793 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-644b87c8cc-7cfbr"] Feb 02 10:57:50 crc kubenswrapper[4782]: E0202 10:57:50.467160 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a78ac20-6473-4217-aa2d-3d2b4f03023b" containerName="init" Feb 02 10:57:50 crc kubenswrapper[4782]: I0202 10:57:50.467186 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a78ac20-6473-4217-aa2d-3d2b4f03023b" containerName="init" Feb 02 10:57:50 crc kubenswrapper[4782]: E0202 10:57:50.467211 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a78ac20-6473-4217-aa2d-3d2b4f03023b" containerName="dnsmasq-dns" Feb 02 10:57:50 crc kubenswrapper[4782]: I0202 10:57:50.467219 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a78ac20-6473-4217-aa2d-3d2b4f03023b" containerName="dnsmasq-dns" Feb 02 10:57:50 crc kubenswrapper[4782]: I0202 10:57:50.471497 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a78ac20-6473-4217-aa2d-3d2b4f03023b" containerName="dnsmasq-dns" Feb 02 10:57:50 crc kubenswrapper[4782]: I0202 10:57:50.472402 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-644b87c8cc-7cfbr" Feb 02 10:57:50 crc kubenswrapper[4782]: I0202 10:57:50.475305 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Feb 02 10:57:50 crc kubenswrapper[4782]: I0202 10:57:50.475462 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Feb 02 10:57:50 crc kubenswrapper[4782]: I0202 10:57:50.479962 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/c1b76222-36df-45a6-ac9f-edb412c8a2ad-httpd-config\") pod \"neutron-644b87c8cc-7cfbr\" (UID: \"c1b76222-36df-45a6-ac9f-edb412c8a2ad\") " pod="openstack/neutron-644b87c8cc-7cfbr" Feb 02 10:57:50 crc kubenswrapper[4782]: I0202 10:57:50.480139 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7d72m\" (UniqueName: \"kubernetes.io/projected/c1b76222-36df-45a6-ac9f-edb412c8a2ad-kube-api-access-7d72m\") pod \"neutron-644b87c8cc-7cfbr\" (UID: \"c1b76222-36df-45a6-ac9f-edb412c8a2ad\") " pod="openstack/neutron-644b87c8cc-7cfbr" Feb 02 10:57:50 crc kubenswrapper[4782]: I0202 10:57:50.480236 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1b76222-36df-45a6-ac9f-edb412c8a2ad-combined-ca-bundle\") pod \"neutron-644b87c8cc-7cfbr\" (UID: \"c1b76222-36df-45a6-ac9f-edb412c8a2ad\") " pod="openstack/neutron-644b87c8cc-7cfbr" Feb 02 10:57:50 crc kubenswrapper[4782]: I0202 10:57:50.480352 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1b76222-36df-45a6-ac9f-edb412c8a2ad-ovndb-tls-certs\") pod \"neutron-644b87c8cc-7cfbr\" (UID: \"c1b76222-36df-45a6-ac9f-edb412c8a2ad\") " pod="openstack/neutron-644b87c8cc-7cfbr" Feb 02 10:57:50 crc kubenswrapper[4782]: I0202 10:57:50.480427 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1b76222-36df-45a6-ac9f-edb412c8a2ad-internal-tls-certs\") pod \"neutron-644b87c8cc-7cfbr\" (UID: \"c1b76222-36df-45a6-ac9f-edb412c8a2ad\") " pod="openstack/neutron-644b87c8cc-7cfbr" Feb 02 10:57:50 crc kubenswrapper[4782]: I0202 10:57:50.480572 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1b76222-36df-45a6-ac9f-edb412c8a2ad-public-tls-certs\") pod \"neutron-644b87c8cc-7cfbr\" (UID: \"c1b76222-36df-45a6-ac9f-edb412c8a2ad\") " pod="openstack/neutron-644b87c8cc-7cfbr" Feb 02 10:57:50 crc kubenswrapper[4782]: I0202 10:57:50.480691 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c1b76222-36df-45a6-ac9f-edb412c8a2ad-config\") pod \"neutron-644b87c8cc-7cfbr\" (UID: \"c1b76222-36df-45a6-ac9f-edb412c8a2ad\") " pod="openstack/neutron-644b87c8cc-7cfbr" Feb 02 10:57:50 crc kubenswrapper[4782]: I0202 10:57:50.485425 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-644b87c8cc-7cfbr"] Feb 02 10:57:50 crc kubenswrapper[4782]: I0202 10:57:50.580783 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b946d459c-pbdmr" 
event={"ID":"7f6a8ebb-a211-4505-b934-3048a67b2f47","Type":"ContainerStarted","Data":"57ad28ba710e6e1893b164e98b4a9426687f44b060fefafca3e1d4c41edc3143"} Feb 02 10:57:50 crc kubenswrapper[4782]: I0202 10:57:50.581118 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7b946d459c-pbdmr" Feb 02 10:57:50 crc kubenswrapper[4782]: I0202 10:57:50.581829 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1b76222-36df-45a6-ac9f-edb412c8a2ad-ovndb-tls-certs\") pod \"neutron-644b87c8cc-7cfbr\" (UID: \"c1b76222-36df-45a6-ac9f-edb412c8a2ad\") " pod="openstack/neutron-644b87c8cc-7cfbr" Feb 02 10:57:50 crc kubenswrapper[4782]: I0202 10:57:50.581860 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1b76222-36df-45a6-ac9f-edb412c8a2ad-internal-tls-certs\") pod \"neutron-644b87c8cc-7cfbr\" (UID: \"c1b76222-36df-45a6-ac9f-edb412c8a2ad\") " pod="openstack/neutron-644b87c8cc-7cfbr" Feb 02 10:57:50 crc kubenswrapper[4782]: I0202 10:57:50.581943 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1b76222-36df-45a6-ac9f-edb412c8a2ad-public-tls-certs\") pod \"neutron-644b87c8cc-7cfbr\" (UID: \"c1b76222-36df-45a6-ac9f-edb412c8a2ad\") " pod="openstack/neutron-644b87c8cc-7cfbr" Feb 02 10:57:50 crc kubenswrapper[4782]: I0202 10:57:50.581967 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c1b76222-36df-45a6-ac9f-edb412c8a2ad-config\") pod \"neutron-644b87c8cc-7cfbr\" (UID: \"c1b76222-36df-45a6-ac9f-edb412c8a2ad\") " pod="openstack/neutron-644b87c8cc-7cfbr" Feb 02 10:57:50 crc kubenswrapper[4782]: I0202 10:57:50.582019 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/c1b76222-36df-45a6-ac9f-edb412c8a2ad-httpd-config\") pod \"neutron-644b87c8cc-7cfbr\" (UID: \"c1b76222-36df-45a6-ac9f-edb412c8a2ad\") " pod="openstack/neutron-644b87c8cc-7cfbr" Feb 02 10:57:50 crc kubenswrapper[4782]: I0202 10:57:50.582050 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7d72m\" (UniqueName: \"kubernetes.io/projected/c1b76222-36df-45a6-ac9f-edb412c8a2ad-kube-api-access-7d72m\") pod \"neutron-644b87c8cc-7cfbr\" (UID: \"c1b76222-36df-45a6-ac9f-edb412c8a2ad\") " pod="openstack/neutron-644b87c8cc-7cfbr" Feb 02 10:57:50 crc kubenswrapper[4782]: I0202 10:57:50.582084 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1b76222-36df-45a6-ac9f-edb412c8a2ad-combined-ca-bundle\") pod \"neutron-644b87c8cc-7cfbr\" (UID: \"c1b76222-36df-45a6-ac9f-edb412c8a2ad\") " pod="openstack/neutron-644b87c8cc-7cfbr" Feb 02 10:57:50 crc kubenswrapper[4782]: I0202 10:57:50.590037 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1b76222-36df-45a6-ac9f-edb412c8a2ad-public-tls-certs\") pod \"neutron-644b87c8cc-7cfbr\" (UID: \"c1b76222-36df-45a6-ac9f-edb412c8a2ad\") " pod="openstack/neutron-644b87c8cc-7cfbr" Feb 02 10:57:50 crc kubenswrapper[4782]: I0202 10:57:50.590798 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6c4497f454-mphzd" 
event={"ID":"64a58e87-7403-40ee-804f-3ddd256a166a","Type":"ContainerStarted","Data":"0986fa20c95b296f1e3d0bb4136c8a84c1e716858b0306720e5061305da8efda"} Feb 02 10:57:50 crc kubenswrapper[4782]: I0202 10:57:50.590831 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6c4497f454-mphzd" event={"ID":"64a58e87-7403-40ee-804f-3ddd256a166a","Type":"ContainerStarted","Data":"d5712d2140fc769d95cc498077c5b51dd674fa5d0e2d88428c1fd7cfbc99eafe"} Feb 02 10:57:50 crc kubenswrapper[4782]: I0202 10:57:50.591253 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1b76222-36df-45a6-ac9f-edb412c8a2ad-combined-ca-bundle\") pod \"neutron-644b87c8cc-7cfbr\" (UID: \"c1b76222-36df-45a6-ac9f-edb412c8a2ad\") " pod="openstack/neutron-644b87c8cc-7cfbr" Feb 02 10:57:50 crc kubenswrapper[4782]: I0202 10:57:50.591387 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-6c4497f454-mphzd" Feb 02 10:57:50 crc kubenswrapper[4782]: I0202 10:57:50.592057 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/c1b76222-36df-45a6-ac9f-edb412c8a2ad-httpd-config\") pod \"neutron-644b87c8cc-7cfbr\" (UID: \"c1b76222-36df-45a6-ac9f-edb412c8a2ad\") " pod="openstack/neutron-644b87c8cc-7cfbr" Feb 02 10:57:50 crc kubenswrapper[4782]: I0202 10:57:50.598817 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/c1b76222-36df-45a6-ac9f-edb412c8a2ad-config\") pod \"neutron-644b87c8cc-7cfbr\" (UID: \"c1b76222-36df-45a6-ac9f-edb412c8a2ad\") " pod="openstack/neutron-644b87c8cc-7cfbr" Feb 02 10:57:50 crc kubenswrapper[4782]: I0202 10:57:50.603706 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1b76222-36df-45a6-ac9f-edb412c8a2ad-ovndb-tls-certs\") pod \"neutron-644b87c8cc-7cfbr\" (UID: \"c1b76222-36df-45a6-ac9f-edb412c8a2ad\") " pod="openstack/neutron-644b87c8cc-7cfbr" Feb 02 10:57:50 crc kubenswrapper[4782]: I0202 10:57:50.609712 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1b76222-36df-45a6-ac9f-edb412c8a2ad-internal-tls-certs\") pod \"neutron-644b87c8cc-7cfbr\" (UID: \"c1b76222-36df-45a6-ac9f-edb412c8a2ad\") " pod="openstack/neutron-644b87c8cc-7cfbr" Feb 02 10:57:50 crc kubenswrapper[4782]: I0202 10:57:50.625195 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7b946d459c-pbdmr" podStartSLOduration=3.625177067 podStartE2EDuration="3.625177067s" podCreationTimestamp="2026-02-02 10:57:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:57:50.62388586 +0000 UTC m=+1150.508078586" watchObservedRunningTime="2026-02-02 10:57:50.625177067 +0000 UTC m=+1150.509369773" Feb 02 10:57:50 crc kubenswrapper[4782]: I0202 10:57:50.628981 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7d72m\" (UniqueName: \"kubernetes.io/projected/c1b76222-36df-45a6-ac9f-edb412c8a2ad-kube-api-access-7d72m\") pod \"neutron-644b87c8cc-7cfbr\" (UID: \"c1b76222-36df-45a6-ac9f-edb412c8a2ad\") " pod="openstack/neutron-644b87c8cc-7cfbr" Feb 02 10:57:50 crc kubenswrapper[4782]: I0202 10:57:50.640853 4782 generic.go:334] "Generic (PLEG): container 
finished" podID="3a78ac20-6473-4217-aa2d-3d2b4f03023b" containerID="49be3972beea78b8ed02b289136357ca04f17b61fc078a2a47e64b8b6ad531bf" exitCode=0 Feb 02 10:57:50 crc kubenswrapper[4782]: I0202 10:57:50.641017 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7987f74bbc-d8v8s" Feb 02 10:57:50 crc kubenswrapper[4782]: I0202 10:57:50.641074 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7987f74bbc-d8v8s" event={"ID":"3a78ac20-6473-4217-aa2d-3d2b4f03023b","Type":"ContainerDied","Data":"49be3972beea78b8ed02b289136357ca04f17b61fc078a2a47e64b8b6ad531bf"} Feb 02 10:57:50 crc kubenswrapper[4782]: I0202 10:57:50.641106 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7987f74bbc-d8v8s" event={"ID":"3a78ac20-6473-4217-aa2d-3d2b4f03023b","Type":"ContainerDied","Data":"4d3202753bbc7ad4f1069d7c505ccba805f85fd5770fe99dd65abc48e1c13646"} Feb 02 10:57:50 crc kubenswrapper[4782]: I0202 10:57:50.641129 4782 scope.go:117] "RemoveContainer" containerID="49be3972beea78b8ed02b289136357ca04f17b61fc078a2a47e64b8b6ad531bf" Feb 02 10:57:50 crc kubenswrapper[4782]: I0202 10:57:50.727732 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-6c4497f454-mphzd" podStartSLOduration=3.727709506 podStartE2EDuration="3.727709506s" podCreationTimestamp="2026-02-02 10:57:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:57:50.65319539 +0000 UTC m=+1150.537388106" watchObservedRunningTime="2026-02-02 10:57:50.727709506 +0000 UTC m=+1150.611902222" Feb 02 10:57:50 crc kubenswrapper[4782]: I0202 10:57:50.758553 4782 scope.go:117] "RemoveContainer" containerID="8a0872c1a29cedeb49eb6758f04a657ecac417d050239d59568226587c752e27" Feb 02 10:57:50 crc kubenswrapper[4782]: I0202 10:57:50.770239 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7987f74bbc-d8v8s"] Feb 02 10:57:50 crc kubenswrapper[4782]: I0202 10:57:50.779557 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7987f74bbc-d8v8s"] Feb 02 10:57:50 crc kubenswrapper[4782]: I0202 10:57:50.793469 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-644b87c8cc-7cfbr" Feb 02 10:57:50 crc kubenswrapper[4782]: I0202 10:57:50.828786 4782 scope.go:117] "RemoveContainer" containerID="49be3972beea78b8ed02b289136357ca04f17b61fc078a2a47e64b8b6ad531bf" Feb 02 10:57:50 crc kubenswrapper[4782]: E0202 10:57:50.830267 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"49be3972beea78b8ed02b289136357ca04f17b61fc078a2a47e64b8b6ad531bf\": container with ID starting with 49be3972beea78b8ed02b289136357ca04f17b61fc078a2a47e64b8b6ad531bf not found: ID does not exist" containerID="49be3972beea78b8ed02b289136357ca04f17b61fc078a2a47e64b8b6ad531bf" Feb 02 10:57:50 crc kubenswrapper[4782]: I0202 10:57:50.830329 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"49be3972beea78b8ed02b289136357ca04f17b61fc078a2a47e64b8b6ad531bf"} err="failed to get container status \"49be3972beea78b8ed02b289136357ca04f17b61fc078a2a47e64b8b6ad531bf\": rpc error: code = NotFound desc = could not find container \"49be3972beea78b8ed02b289136357ca04f17b61fc078a2a47e64b8b6ad531bf\": container with ID starting with 49be3972beea78b8ed02b289136357ca04f17b61fc078a2a47e64b8b6ad531bf not found: ID does not exist" Feb 02 10:57:50 crc kubenswrapper[4782]: I0202 10:57:50.830354 4782 scope.go:117] "RemoveContainer" containerID="8a0872c1a29cedeb49eb6758f04a657ecac417d050239d59568226587c752e27" Feb 02 10:57:50 crc kubenswrapper[4782]: E0202 10:57:50.847456 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8a0872c1a29cedeb49eb6758f04a657ecac417d050239d59568226587c752e27\": container with ID starting with 8a0872c1a29cedeb49eb6758f04a657ecac417d050239d59568226587c752e27 not found: ID does not exist" containerID="8a0872c1a29cedeb49eb6758f04a657ecac417d050239d59568226587c752e27" Feb 02 10:57:50 crc kubenswrapper[4782]: I0202 10:57:50.847518 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a0872c1a29cedeb49eb6758f04a657ecac417d050239d59568226587c752e27"} err="failed to get container status \"8a0872c1a29cedeb49eb6758f04a657ecac417d050239d59568226587c752e27\": rpc error: code = NotFound desc = could not find container \"8a0872c1a29cedeb49eb6758f04a657ecac417d050239d59568226587c752e27\": container with ID starting with 8a0872c1a29cedeb49eb6758f04a657ecac417d050239d59568226587c752e27 not found: ID does not exist" Feb 02 10:57:50 crc kubenswrapper[4782]: I0202 10:57:50.896279 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a78ac20-6473-4217-aa2d-3d2b4f03023b" path="/var/lib/kubelet/pods/3a78ac20-6473-4217-aa2d-3d2b4f03023b/volumes" Feb 02 10:57:51 crc kubenswrapper[4782]: I0202 10:57:51.145448 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-9zhdd" Feb 02 10:57:51 crc kubenswrapper[4782]: I0202 10:57:51.316140 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/173458b2-9a63-4456-9bc9-698d1414a679-config-data\") pod \"173458b2-9a63-4456-9bc9-698d1414a679\" (UID: \"173458b2-9a63-4456-9bc9-698d1414a679\") " Feb 02 10:57:51 crc kubenswrapper[4782]: I0202 10:57:51.316191 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/173458b2-9a63-4456-9bc9-698d1414a679-scripts\") pod \"173458b2-9a63-4456-9bc9-698d1414a679\" (UID: \"173458b2-9a63-4456-9bc9-698d1414a679\") " Feb 02 10:57:51 crc kubenswrapper[4782]: I0202 10:57:51.316313 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/173458b2-9a63-4456-9bc9-698d1414a679-logs\") pod \"173458b2-9a63-4456-9bc9-698d1414a679\" (UID: \"173458b2-9a63-4456-9bc9-698d1414a679\") " Feb 02 10:57:51 crc kubenswrapper[4782]: I0202 10:57:51.316387 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/173458b2-9a63-4456-9bc9-698d1414a679-combined-ca-bundle\") pod \"173458b2-9a63-4456-9bc9-698d1414a679\" (UID: \"173458b2-9a63-4456-9bc9-698d1414a679\") " Feb 02 10:57:51 crc kubenswrapper[4782]: I0202 10:57:51.316421 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-56spc\" (UniqueName: \"kubernetes.io/projected/173458b2-9a63-4456-9bc9-698d1414a679-kube-api-access-56spc\") pod \"173458b2-9a63-4456-9bc9-698d1414a679\" (UID: \"173458b2-9a63-4456-9bc9-698d1414a679\") " Feb 02 10:57:51 crc kubenswrapper[4782]: I0202 10:57:51.318918 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/173458b2-9a63-4456-9bc9-698d1414a679-logs" (OuterVolumeSpecName: "logs") pod "173458b2-9a63-4456-9bc9-698d1414a679" (UID: "173458b2-9a63-4456-9bc9-698d1414a679"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:57:51 crc kubenswrapper[4782]: I0202 10:57:51.328844 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/173458b2-9a63-4456-9bc9-698d1414a679-kube-api-access-56spc" (OuterVolumeSpecName: "kube-api-access-56spc") pod "173458b2-9a63-4456-9bc9-698d1414a679" (UID: "173458b2-9a63-4456-9bc9-698d1414a679"). InnerVolumeSpecName "kube-api-access-56spc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:57:51 crc kubenswrapper[4782]: I0202 10:57:51.330910 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/173458b2-9a63-4456-9bc9-698d1414a679-scripts" (OuterVolumeSpecName: "scripts") pod "173458b2-9a63-4456-9bc9-698d1414a679" (UID: "173458b2-9a63-4456-9bc9-698d1414a679"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:57:51 crc kubenswrapper[4782]: I0202 10:57:51.358158 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/173458b2-9a63-4456-9bc9-698d1414a679-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "173458b2-9a63-4456-9bc9-698d1414a679" (UID: "173458b2-9a63-4456-9bc9-698d1414a679"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:57:51 crc kubenswrapper[4782]: I0202 10:57:51.374929 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/173458b2-9a63-4456-9bc9-698d1414a679-config-data" (OuterVolumeSpecName: "config-data") pod "173458b2-9a63-4456-9bc9-698d1414a679" (UID: "173458b2-9a63-4456-9bc9-698d1414a679"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:57:51 crc kubenswrapper[4782]: I0202 10:57:51.419429 4782 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/173458b2-9a63-4456-9bc9-698d1414a679-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:51 crc kubenswrapper[4782]: I0202 10:57:51.419456 4782 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/173458b2-9a63-4456-9bc9-698d1414a679-logs\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:51 crc kubenswrapper[4782]: I0202 10:57:51.419466 4782 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/173458b2-9a63-4456-9bc9-698d1414a679-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:51 crc kubenswrapper[4782]: I0202 10:57:51.419477 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-56spc\" (UniqueName: \"kubernetes.io/projected/173458b2-9a63-4456-9bc9-698d1414a679-kube-api-access-56spc\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:51 crc kubenswrapper[4782]: I0202 10:57:51.419486 4782 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/173458b2-9a63-4456-9bc9-698d1414a679-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:51 crc kubenswrapper[4782]: I0202 10:57:51.662565 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-9zhdd" event={"ID":"173458b2-9a63-4456-9bc9-698d1414a679","Type":"ContainerDied","Data":"84c341193c47fc4aa8a47eed674765c2cf34eb70060671ad9bf767eb2f34ee7a"} Feb 02 10:57:51 crc kubenswrapper[4782]: I0202 10:57:51.662889 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="84c341193c47fc4aa8a47eed674765c2cf34eb70060671ad9bf767eb2f34ee7a" Feb 02 10:57:51 crc kubenswrapper[4782]: I0202 10:57:51.662956 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-9zhdd" Feb 02 10:57:51 crc kubenswrapper[4782]: I0202 10:57:51.695897 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-54577c875b-pcjgd"] Feb 02 10:57:51 crc kubenswrapper[4782]: E0202 10:57:51.696228 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="173458b2-9a63-4456-9bc9-698d1414a679" containerName="placement-db-sync" Feb 02 10:57:51 crc kubenswrapper[4782]: I0202 10:57:51.696243 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="173458b2-9a63-4456-9bc9-698d1414a679" containerName="placement-db-sync" Feb 02 10:57:51 crc kubenswrapper[4782]: I0202 10:57:51.696465 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="173458b2-9a63-4456-9bc9-698d1414a679" containerName="placement-db-sync" Feb 02 10:57:51 crc kubenswrapper[4782]: I0202 10:57:51.697941 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-54577c875b-pcjgd" Feb 02 10:57:51 crc kubenswrapper[4782]: I0202 10:57:51.704554 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 02 10:57:51 crc kubenswrapper[4782]: I0202 10:57:51.705317 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 02 10:57:51 crc kubenswrapper[4782]: I0202 10:57:51.705491 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Feb 02 10:57:51 crc kubenswrapper[4782]: I0202 10:57:51.705677 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Feb 02 10:57:51 crc kubenswrapper[4782]: I0202 10:57:51.706957 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-k6962" Feb 02 10:57:51 crc kubenswrapper[4782]: I0202 10:57:51.728008 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-54577c875b-pcjgd"] Feb 02 10:57:51 crc kubenswrapper[4782]: I0202 10:57:51.834215 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6nnm\" (UniqueName: \"kubernetes.io/projected/060c1eb2-7773-4122-8725-bf421f0feaac-kube-api-access-p6nnm\") pod \"placement-54577c875b-pcjgd\" (UID: \"060c1eb2-7773-4122-8725-bf421f0feaac\") " pod="openstack/placement-54577c875b-pcjgd" Feb 02 10:57:51 crc kubenswrapper[4782]: I0202 10:57:51.834270 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/060c1eb2-7773-4122-8725-bf421f0feaac-combined-ca-bundle\") pod \"placement-54577c875b-pcjgd\" (UID: \"060c1eb2-7773-4122-8725-bf421f0feaac\") " pod="openstack/placement-54577c875b-pcjgd" Feb 02 10:57:51 crc kubenswrapper[4782]: I0202 10:57:51.834321 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/060c1eb2-7773-4122-8725-bf421f0feaac-public-tls-certs\") pod \"placement-54577c875b-pcjgd\" (UID: \"060c1eb2-7773-4122-8725-bf421f0feaac\") " pod="openstack/placement-54577c875b-pcjgd" Feb 02 10:57:51 crc kubenswrapper[4782]: I0202 10:57:51.834402 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/060c1eb2-7773-4122-8725-bf421f0feaac-scripts\") pod \"placement-54577c875b-pcjgd\" (UID: \"060c1eb2-7773-4122-8725-bf421f0feaac\") " pod="openstack/placement-54577c875b-pcjgd" Feb 02 10:57:51 crc kubenswrapper[4782]: I0202 10:57:51.834442 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/060c1eb2-7773-4122-8725-bf421f0feaac-logs\") pod \"placement-54577c875b-pcjgd\" (UID: \"060c1eb2-7773-4122-8725-bf421f0feaac\") " pod="openstack/placement-54577c875b-pcjgd" Feb 02 10:57:51 crc kubenswrapper[4782]: I0202 10:57:51.834497 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/060c1eb2-7773-4122-8725-bf421f0feaac-internal-tls-certs\") pod \"placement-54577c875b-pcjgd\" (UID: \"060c1eb2-7773-4122-8725-bf421f0feaac\") " pod="openstack/placement-54577c875b-pcjgd" Feb 02 10:57:51 crc kubenswrapper[4782]: I0202 10:57:51.834512 
4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/060c1eb2-7773-4122-8725-bf421f0feaac-config-data\") pod \"placement-54577c875b-pcjgd\" (UID: \"060c1eb2-7773-4122-8725-bf421f0feaac\") " pod="openstack/placement-54577c875b-pcjgd" Feb 02 10:57:51 crc kubenswrapper[4782]: I0202 10:57:51.936231 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/060c1eb2-7773-4122-8725-bf421f0feaac-combined-ca-bundle\") pod \"placement-54577c875b-pcjgd\" (UID: \"060c1eb2-7773-4122-8725-bf421f0feaac\") " pod="openstack/placement-54577c875b-pcjgd" Feb 02 10:57:51 crc kubenswrapper[4782]: I0202 10:57:51.936293 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/060c1eb2-7773-4122-8725-bf421f0feaac-public-tls-certs\") pod \"placement-54577c875b-pcjgd\" (UID: \"060c1eb2-7773-4122-8725-bf421f0feaac\") " pod="openstack/placement-54577c875b-pcjgd" Feb 02 10:57:51 crc kubenswrapper[4782]: I0202 10:57:51.936329 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/060c1eb2-7773-4122-8725-bf421f0feaac-scripts\") pod \"placement-54577c875b-pcjgd\" (UID: \"060c1eb2-7773-4122-8725-bf421f0feaac\") " pod="openstack/placement-54577c875b-pcjgd" Feb 02 10:57:51 crc kubenswrapper[4782]: I0202 10:57:51.936370 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/060c1eb2-7773-4122-8725-bf421f0feaac-logs\") pod \"placement-54577c875b-pcjgd\" (UID: \"060c1eb2-7773-4122-8725-bf421f0feaac\") " pod="openstack/placement-54577c875b-pcjgd" Feb 02 10:57:51 crc kubenswrapper[4782]: I0202 10:57:51.936409 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/060c1eb2-7773-4122-8725-bf421f0feaac-internal-tls-certs\") pod \"placement-54577c875b-pcjgd\" (UID: \"060c1eb2-7773-4122-8725-bf421f0feaac\") " pod="openstack/placement-54577c875b-pcjgd" Feb 02 10:57:51 crc kubenswrapper[4782]: I0202 10:57:51.936440 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/060c1eb2-7773-4122-8725-bf421f0feaac-config-data\") pod \"placement-54577c875b-pcjgd\" (UID: \"060c1eb2-7773-4122-8725-bf421f0feaac\") " pod="openstack/placement-54577c875b-pcjgd" Feb 02 10:57:51 crc kubenswrapper[4782]: I0202 10:57:51.936487 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p6nnm\" (UniqueName: \"kubernetes.io/projected/060c1eb2-7773-4122-8725-bf421f0feaac-kube-api-access-p6nnm\") pod \"placement-54577c875b-pcjgd\" (UID: \"060c1eb2-7773-4122-8725-bf421f0feaac\") " pod="openstack/placement-54577c875b-pcjgd" Feb 02 10:57:51 crc kubenswrapper[4782]: I0202 10:57:51.937854 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/060c1eb2-7773-4122-8725-bf421f0feaac-logs\") pod \"placement-54577c875b-pcjgd\" (UID: \"060c1eb2-7773-4122-8725-bf421f0feaac\") " pod="openstack/placement-54577c875b-pcjgd" Feb 02 10:57:51 crc kubenswrapper[4782]: I0202 10:57:51.943539 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/060c1eb2-7773-4122-8725-bf421f0feaac-internal-tls-certs\") pod \"placement-54577c875b-pcjgd\" (UID: \"060c1eb2-7773-4122-8725-bf421f0feaac\") " pod="openstack/placement-54577c875b-pcjgd" Feb 02 10:57:51 crc kubenswrapper[4782]: I0202 10:57:51.945208 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/060c1eb2-7773-4122-8725-bf421f0feaac-combined-ca-bundle\") pod \"placement-54577c875b-pcjgd\" (UID: \"060c1eb2-7773-4122-8725-bf421f0feaac\") " pod="openstack/placement-54577c875b-pcjgd" Feb 02 10:57:51 crc kubenswrapper[4782]: I0202 10:57:51.945791 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/060c1eb2-7773-4122-8725-bf421f0feaac-scripts\") pod \"placement-54577c875b-pcjgd\" (UID: \"060c1eb2-7773-4122-8725-bf421f0feaac\") " pod="openstack/placement-54577c875b-pcjgd" Feb 02 10:57:51 crc kubenswrapper[4782]: I0202 10:57:51.946444 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/060c1eb2-7773-4122-8725-bf421f0feaac-public-tls-certs\") pod \"placement-54577c875b-pcjgd\" (UID: \"060c1eb2-7773-4122-8725-bf421f0feaac\") " pod="openstack/placement-54577c875b-pcjgd" Feb 02 10:57:51 crc kubenswrapper[4782]: I0202 10:57:51.955341 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p6nnm\" (UniqueName: \"kubernetes.io/projected/060c1eb2-7773-4122-8725-bf421f0feaac-kube-api-access-p6nnm\") pod \"placement-54577c875b-pcjgd\" (UID: \"060c1eb2-7773-4122-8725-bf421f0feaac\") " pod="openstack/placement-54577c875b-pcjgd" Feb 02 10:57:51 crc kubenswrapper[4782]: I0202 10:57:51.957796 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/060c1eb2-7773-4122-8725-bf421f0feaac-config-data\") pod \"placement-54577c875b-pcjgd\" (UID: \"060c1eb2-7773-4122-8725-bf421f0feaac\") " pod="openstack/placement-54577c875b-pcjgd" Feb 02 10:57:52 crc kubenswrapper[4782]: I0202 10:57:52.037287 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-54577c875b-pcjgd" Feb 02 10:57:52 crc kubenswrapper[4782]: I0202 10:57:52.138975 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-644b87c8cc-7cfbr"] Feb 02 10:57:52 crc kubenswrapper[4782]: I0202 10:57:52.671976 4782 generic.go:334] "Generic (PLEG): container finished" podID="f45d6513-2de0-4ece-bbbc-26c6780cd145" containerID="f96dc9d1eca03acac5731eacf624fbd7091513cfed0cc461bda4976a5d7b4254" exitCode=0 Feb 02 10:57:52 crc kubenswrapper[4782]: I0202 10:57:52.672059 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-t58qc" event={"ID":"f45d6513-2de0-4ece-bbbc-26c6780cd145","Type":"ContainerDied","Data":"f96dc9d1eca03acac5731eacf624fbd7091513cfed0cc461bda4976a5d7b4254"} Feb 02 10:57:54 crc kubenswrapper[4782]: I0202 10:57:54.695122 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-t58qc" event={"ID":"f45d6513-2de0-4ece-bbbc-26c6780cd145","Type":"ContainerDied","Data":"f82234fec9aa85ae5a4b7eb150c4046735b933ad52170747b999d2700ebd8ccb"} Feb 02 10:57:54 crc kubenswrapper[4782]: I0202 10:57:54.695635 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f82234fec9aa85ae5a4b7eb150c4046735b933ad52170747b999d2700ebd8ccb" Feb 02 10:57:54 crc kubenswrapper[4782]: I0202 10:57:54.723820 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-644b87c8cc-7cfbr" event={"ID":"c1b76222-36df-45a6-ac9f-edb412c8a2ad","Type":"ContainerStarted","Data":"8889fd515dbda34f10b34a65e145848adfe2d17e55c2e3acb24297eefee67df3"} Feb 02 10:57:54 crc kubenswrapper[4782]: I0202 10:57:54.808069 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-t58qc" Feb 02 10:57:54 crc kubenswrapper[4782]: I0202 10:57:54.991210 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f45d6513-2de0-4ece-bbbc-26c6780cd145-config-data\") pod \"f45d6513-2de0-4ece-bbbc-26c6780cd145\" (UID: \"f45d6513-2de0-4ece-bbbc-26c6780cd145\") " Feb 02 10:57:54 crc kubenswrapper[4782]: I0202 10:57:54.991651 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f45d6513-2de0-4ece-bbbc-26c6780cd145-scripts\") pod \"f45d6513-2de0-4ece-bbbc-26c6780cd145\" (UID: \"f45d6513-2de0-4ece-bbbc-26c6780cd145\") " Feb 02 10:57:54 crc kubenswrapper[4782]: I0202 10:57:54.991687 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-464nh\" (UniqueName: \"kubernetes.io/projected/f45d6513-2de0-4ece-bbbc-26c6780cd145-kube-api-access-464nh\") pod \"f45d6513-2de0-4ece-bbbc-26c6780cd145\" (UID: \"f45d6513-2de0-4ece-bbbc-26c6780cd145\") " Feb 02 10:57:54 crc kubenswrapper[4782]: I0202 10:57:54.991740 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f45d6513-2de0-4ece-bbbc-26c6780cd145-combined-ca-bundle\") pod \"f45d6513-2de0-4ece-bbbc-26c6780cd145\" (UID: \"f45d6513-2de0-4ece-bbbc-26c6780cd145\") " Feb 02 10:57:54 crc kubenswrapper[4782]: I0202 10:57:54.991782 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f45d6513-2de0-4ece-bbbc-26c6780cd145-credential-keys\") pod \"f45d6513-2de0-4ece-bbbc-26c6780cd145\" (UID: 
\"f45d6513-2de0-4ece-bbbc-26c6780cd145\") " Feb 02 10:57:54 crc kubenswrapper[4782]: I0202 10:57:54.991809 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f45d6513-2de0-4ece-bbbc-26c6780cd145-fernet-keys\") pod \"f45d6513-2de0-4ece-bbbc-26c6780cd145\" (UID: \"f45d6513-2de0-4ece-bbbc-26c6780cd145\") " Feb 02 10:57:54 crc kubenswrapper[4782]: I0202 10:57:54.995780 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f45d6513-2de0-4ece-bbbc-26c6780cd145-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "f45d6513-2de0-4ece-bbbc-26c6780cd145" (UID: "f45d6513-2de0-4ece-bbbc-26c6780cd145"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:57:54 crc kubenswrapper[4782]: I0202 10:57:54.997301 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f45d6513-2de0-4ece-bbbc-26c6780cd145-scripts" (OuterVolumeSpecName: "scripts") pod "f45d6513-2de0-4ece-bbbc-26c6780cd145" (UID: "f45d6513-2de0-4ece-bbbc-26c6780cd145"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:57:55 crc kubenswrapper[4782]: I0202 10:57:55.006761 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f45d6513-2de0-4ece-bbbc-26c6780cd145-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "f45d6513-2de0-4ece-bbbc-26c6780cd145" (UID: "f45d6513-2de0-4ece-bbbc-26c6780cd145"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:57:55 crc kubenswrapper[4782]: I0202 10:57:55.011961 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f45d6513-2de0-4ece-bbbc-26c6780cd145-kube-api-access-464nh" (OuterVolumeSpecName: "kube-api-access-464nh") pod "f45d6513-2de0-4ece-bbbc-26c6780cd145" (UID: "f45d6513-2de0-4ece-bbbc-26c6780cd145"). InnerVolumeSpecName "kube-api-access-464nh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:57:55 crc kubenswrapper[4782]: I0202 10:57:55.022336 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f45d6513-2de0-4ece-bbbc-26c6780cd145-config-data" (OuterVolumeSpecName: "config-data") pod "f45d6513-2de0-4ece-bbbc-26c6780cd145" (UID: "f45d6513-2de0-4ece-bbbc-26c6780cd145"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:57:55 crc kubenswrapper[4782]: I0202 10:57:55.028415 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f45d6513-2de0-4ece-bbbc-26c6780cd145-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f45d6513-2de0-4ece-bbbc-26c6780cd145" (UID: "f45d6513-2de0-4ece-bbbc-26c6780cd145"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:57:55 crc kubenswrapper[4782]: I0202 10:57:55.093978 4782 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f45d6513-2de0-4ece-bbbc-26c6780cd145-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:55 crc kubenswrapper[4782]: I0202 10:57:55.094013 4782 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f45d6513-2de0-4ece-bbbc-26c6780cd145-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:55 crc kubenswrapper[4782]: I0202 10:57:55.094023 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-464nh\" (UniqueName: \"kubernetes.io/projected/f45d6513-2de0-4ece-bbbc-26c6780cd145-kube-api-access-464nh\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:55 crc kubenswrapper[4782]: I0202 10:57:55.094032 4782 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f45d6513-2de0-4ece-bbbc-26c6780cd145-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:55 crc kubenswrapper[4782]: I0202 10:57:55.094041 4782 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f45d6513-2de0-4ece-bbbc-26c6780cd145-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:55 crc kubenswrapper[4782]: I0202 10:57:55.094048 4782 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f45d6513-2de0-4ece-bbbc-26c6780cd145-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:55 crc kubenswrapper[4782]: I0202 10:57:55.124307 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-54577c875b-pcjgd"] Feb 02 10:57:55 crc kubenswrapper[4782]: W0202 10:57:55.125608 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod060c1eb2_7773_4122_8725_bf421f0feaac.slice/crio-cc82b2ae4f32dd0c9b66e6fd35aca7b63dafb5ecb405dc4f5284a0fab8cac1a7 WatchSource:0}: Error finding container cc82b2ae4f32dd0c9b66e6fd35aca7b63dafb5ecb405dc4f5284a0fab8cac1a7: Status 404 returned error can't find the container with id cc82b2ae4f32dd0c9b66e6fd35aca7b63dafb5ecb405dc4f5284a0fab8cac1a7 Feb 02 10:57:55 crc kubenswrapper[4782]: I0202 10:57:55.735183 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8eb720ee-de8d-42e4-b189-aa3d58478ab9","Type":"ContainerStarted","Data":"204f6396c819d71a327699ccfeca1a155dda1d800805c4fde5bc58682ccb702f"} Feb 02 10:57:55 crc kubenswrapper[4782]: I0202 10:57:55.737830 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-54577c875b-pcjgd" event={"ID":"060c1eb2-7773-4122-8725-bf421f0feaac","Type":"ContainerStarted","Data":"b6a520c74e05f199aa469fca465c59c45232f5f041f26f9b6fdd8b54d56904e5"} Feb 02 10:57:55 crc kubenswrapper[4782]: I0202 10:57:55.737868 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-54577c875b-pcjgd" event={"ID":"060c1eb2-7773-4122-8725-bf421f0feaac","Type":"ContainerStarted","Data":"ec97c894219369704f6d577f2813b7df3479ffdca3180ed17dd5502cd5bb558d"} Feb 02 10:57:55 crc kubenswrapper[4782]: I0202 10:57:55.737883 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-54577c875b-pcjgd" 
event={"ID":"060c1eb2-7773-4122-8725-bf421f0feaac","Type":"ContainerStarted","Data":"cc82b2ae4f32dd0c9b66e6fd35aca7b63dafb5ecb405dc4f5284a0fab8cac1a7"} Feb 02 10:57:55 crc kubenswrapper[4782]: I0202 10:57:55.738203 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-54577c875b-pcjgd" Feb 02 10:57:55 crc kubenswrapper[4782]: I0202 10:57:55.738229 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-54577c875b-pcjgd" Feb 02 10:57:55 crc kubenswrapper[4782]: I0202 10:57:55.739965 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-t58qc" Feb 02 10:57:55 crc kubenswrapper[4782]: I0202 10:57:55.744012 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-644b87c8cc-7cfbr" event={"ID":"c1b76222-36df-45a6-ac9f-edb412c8a2ad","Type":"ContainerStarted","Data":"f08c19bb6d845c3aebb5e1bac413c2ca39e5ea569262e16c1b854d1f98b2646c"} Feb 02 10:57:55 crc kubenswrapper[4782]: I0202 10:57:55.744049 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-644b87c8cc-7cfbr" event={"ID":"c1b76222-36df-45a6-ac9f-edb412c8a2ad","Type":"ContainerStarted","Data":"786fa8a7409c75cc654771de8b93722936d3a222c6348896fdf1e92677c32d53"} Feb 02 10:57:55 crc kubenswrapper[4782]: I0202 10:57:55.744179 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-644b87c8cc-7cfbr" Feb 02 10:57:55 crc kubenswrapper[4782]: I0202 10:57:55.763194 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-54577c875b-pcjgd" podStartSLOduration=4.763175421 podStartE2EDuration="4.763175421s" podCreationTimestamp="2026-02-02 10:57:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:57:55.763095099 +0000 UTC m=+1155.647287835" watchObservedRunningTime="2026-02-02 10:57:55.763175421 +0000 UTC m=+1155.647368137" Feb 02 10:57:55 crc kubenswrapper[4782]: I0202 10:57:55.802369 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-644b87c8cc-7cfbr" podStartSLOduration=5.802348224 podStartE2EDuration="5.802348224s" podCreationTimestamp="2026-02-02 10:57:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:57:55.791807422 +0000 UTC m=+1155.676000138" watchObservedRunningTime="2026-02-02 10:57:55.802348224 +0000 UTC m=+1155.686540950" Feb 02 10:57:55 crc kubenswrapper[4782]: I0202 10:57:55.987198 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-79d66b847-whsks"] Feb 02 10:57:55 crc kubenswrapper[4782]: E0202 10:57:55.987876 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f45d6513-2de0-4ece-bbbc-26c6780cd145" containerName="keystone-bootstrap" Feb 02 10:57:55 crc kubenswrapper[4782]: I0202 10:57:55.987977 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="f45d6513-2de0-4ece-bbbc-26c6780cd145" containerName="keystone-bootstrap" Feb 02 10:57:55 crc kubenswrapper[4782]: I0202 10:57:55.988180 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="f45d6513-2de0-4ece-bbbc-26c6780cd145" containerName="keystone-bootstrap" Feb 02 10:57:55 crc kubenswrapper[4782]: I0202 10:57:55.988743 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-79d66b847-whsks" Feb 02 10:57:56 crc kubenswrapper[4782]: I0202 10:57:56.002088 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Feb 02 10:57:56 crc kubenswrapper[4782]: I0202 10:57:56.002235 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 02 10:57:56 crc kubenswrapper[4782]: I0202 10:57:56.002333 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 02 10:57:56 crc kubenswrapper[4782]: I0202 10:57:56.002590 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 02 10:57:56 crc kubenswrapper[4782]: I0202 10:57:56.002850 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Feb 02 10:57:56 crc kubenswrapper[4782]: I0202 10:57:56.002977 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-9pmlq" Feb 02 10:57:56 crc kubenswrapper[4782]: I0202 10:57:56.084691 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-79d66b847-whsks"] Feb 02 10:57:56 crc kubenswrapper[4782]: I0202 10:57:56.119539 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df4aa6a3-22bf-459c-becf-3685a170ae22-scripts\") pod \"keystone-79d66b847-whsks\" (UID: \"df4aa6a3-22bf-459c-becf-3685a170ae22\") " pod="openstack/keystone-79d66b847-whsks" Feb 02 10:57:56 crc kubenswrapper[4782]: I0202 10:57:56.119873 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/df4aa6a3-22bf-459c-becf-3685a170ae22-credential-keys\") pod \"keystone-79d66b847-whsks\" (UID: \"df4aa6a3-22bf-459c-becf-3685a170ae22\") " pod="openstack/keystone-79d66b847-whsks" Feb 02 10:57:56 crc kubenswrapper[4782]: I0202 10:57:56.119891 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/df4aa6a3-22bf-459c-becf-3685a170ae22-internal-tls-certs\") pod \"keystone-79d66b847-whsks\" (UID: \"df4aa6a3-22bf-459c-becf-3685a170ae22\") " pod="openstack/keystone-79d66b847-whsks" Feb 02 10:57:56 crc kubenswrapper[4782]: I0202 10:57:56.119934 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df4aa6a3-22bf-459c-becf-3685a170ae22-config-data\") pod \"keystone-79d66b847-whsks\" (UID: \"df4aa6a3-22bf-459c-becf-3685a170ae22\") " pod="openstack/keystone-79d66b847-whsks" Feb 02 10:57:56 crc kubenswrapper[4782]: I0202 10:57:56.120009 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/df4aa6a3-22bf-459c-becf-3685a170ae22-fernet-keys\") pod \"keystone-79d66b847-whsks\" (UID: \"df4aa6a3-22bf-459c-becf-3685a170ae22\") " pod="openstack/keystone-79d66b847-whsks" Feb 02 10:57:56 crc kubenswrapper[4782]: I0202 10:57:56.120030 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wl5sf\" (UniqueName: \"kubernetes.io/projected/df4aa6a3-22bf-459c-becf-3685a170ae22-kube-api-access-wl5sf\") pod \"keystone-79d66b847-whsks\" (UID: 
\"df4aa6a3-22bf-459c-becf-3685a170ae22\") " pod="openstack/keystone-79d66b847-whsks" Feb 02 10:57:56 crc kubenswrapper[4782]: I0202 10:57:56.120050 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/df4aa6a3-22bf-459c-becf-3685a170ae22-public-tls-certs\") pod \"keystone-79d66b847-whsks\" (UID: \"df4aa6a3-22bf-459c-becf-3685a170ae22\") " pod="openstack/keystone-79d66b847-whsks" Feb 02 10:57:56 crc kubenswrapper[4782]: I0202 10:57:56.120091 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df4aa6a3-22bf-459c-becf-3685a170ae22-combined-ca-bundle\") pod \"keystone-79d66b847-whsks\" (UID: \"df4aa6a3-22bf-459c-becf-3685a170ae22\") " pod="openstack/keystone-79d66b847-whsks" Feb 02 10:57:56 crc kubenswrapper[4782]: I0202 10:57:56.236045 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df4aa6a3-22bf-459c-becf-3685a170ae22-config-data\") pod \"keystone-79d66b847-whsks\" (UID: \"df4aa6a3-22bf-459c-becf-3685a170ae22\") " pod="openstack/keystone-79d66b847-whsks" Feb 02 10:57:56 crc kubenswrapper[4782]: I0202 10:57:56.236225 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/df4aa6a3-22bf-459c-becf-3685a170ae22-fernet-keys\") pod \"keystone-79d66b847-whsks\" (UID: \"df4aa6a3-22bf-459c-becf-3685a170ae22\") " pod="openstack/keystone-79d66b847-whsks" Feb 02 10:57:56 crc kubenswrapper[4782]: I0202 10:57:56.236292 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/df4aa6a3-22bf-459c-becf-3685a170ae22-public-tls-certs\") pod \"keystone-79d66b847-whsks\" (UID: \"df4aa6a3-22bf-459c-becf-3685a170ae22\") " pod="openstack/keystone-79d66b847-whsks" Feb 02 10:57:56 crc kubenswrapper[4782]: I0202 10:57:56.236386 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wl5sf\" (UniqueName: \"kubernetes.io/projected/df4aa6a3-22bf-459c-becf-3685a170ae22-kube-api-access-wl5sf\") pod \"keystone-79d66b847-whsks\" (UID: \"df4aa6a3-22bf-459c-becf-3685a170ae22\") " pod="openstack/keystone-79d66b847-whsks" Feb 02 10:57:56 crc kubenswrapper[4782]: I0202 10:57:56.236681 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df4aa6a3-22bf-459c-becf-3685a170ae22-combined-ca-bundle\") pod \"keystone-79d66b847-whsks\" (UID: \"df4aa6a3-22bf-459c-becf-3685a170ae22\") " pod="openstack/keystone-79d66b847-whsks" Feb 02 10:57:56 crc kubenswrapper[4782]: I0202 10:57:56.236878 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df4aa6a3-22bf-459c-becf-3685a170ae22-scripts\") pod \"keystone-79d66b847-whsks\" (UID: \"df4aa6a3-22bf-459c-becf-3685a170ae22\") " pod="openstack/keystone-79d66b847-whsks" Feb 02 10:57:56 crc kubenswrapper[4782]: I0202 10:57:56.237004 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/df4aa6a3-22bf-459c-becf-3685a170ae22-credential-keys\") pod \"keystone-79d66b847-whsks\" (UID: \"df4aa6a3-22bf-459c-becf-3685a170ae22\") " pod="openstack/keystone-79d66b847-whsks" Feb 02 
10:57:56 crc kubenswrapper[4782]: I0202 10:57:56.237051 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/df4aa6a3-22bf-459c-becf-3685a170ae22-internal-tls-certs\") pod \"keystone-79d66b847-whsks\" (UID: \"df4aa6a3-22bf-459c-becf-3685a170ae22\") " pod="openstack/keystone-79d66b847-whsks" Feb 02 10:57:56 crc kubenswrapper[4782]: I0202 10:57:56.247339 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df4aa6a3-22bf-459c-becf-3685a170ae22-combined-ca-bundle\") pod \"keystone-79d66b847-whsks\" (UID: \"df4aa6a3-22bf-459c-becf-3685a170ae22\") " pod="openstack/keystone-79d66b847-whsks" Feb 02 10:57:56 crc kubenswrapper[4782]: I0202 10:57:56.248292 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df4aa6a3-22bf-459c-becf-3685a170ae22-config-data\") pod \"keystone-79d66b847-whsks\" (UID: \"df4aa6a3-22bf-459c-becf-3685a170ae22\") " pod="openstack/keystone-79d66b847-whsks" Feb 02 10:57:56 crc kubenswrapper[4782]: I0202 10:57:56.248947 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/df4aa6a3-22bf-459c-becf-3685a170ae22-fernet-keys\") pod \"keystone-79d66b847-whsks\" (UID: \"df4aa6a3-22bf-459c-becf-3685a170ae22\") " pod="openstack/keystone-79d66b847-whsks" Feb 02 10:57:56 crc kubenswrapper[4782]: I0202 10:57:56.250092 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/df4aa6a3-22bf-459c-becf-3685a170ae22-public-tls-certs\") pod \"keystone-79d66b847-whsks\" (UID: \"df4aa6a3-22bf-459c-becf-3685a170ae22\") " pod="openstack/keystone-79d66b847-whsks" Feb 02 10:57:56 crc kubenswrapper[4782]: I0202 10:57:56.255890 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df4aa6a3-22bf-459c-becf-3685a170ae22-scripts\") pod \"keystone-79d66b847-whsks\" (UID: \"df4aa6a3-22bf-459c-becf-3685a170ae22\") " pod="openstack/keystone-79d66b847-whsks" Feb 02 10:57:56 crc kubenswrapper[4782]: I0202 10:57:56.255894 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/df4aa6a3-22bf-459c-becf-3685a170ae22-credential-keys\") pod \"keystone-79d66b847-whsks\" (UID: \"df4aa6a3-22bf-459c-becf-3685a170ae22\") " pod="openstack/keystone-79d66b847-whsks" Feb 02 10:57:56 crc kubenswrapper[4782]: I0202 10:57:56.256164 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/df4aa6a3-22bf-459c-becf-3685a170ae22-internal-tls-certs\") pod \"keystone-79d66b847-whsks\" (UID: \"df4aa6a3-22bf-459c-becf-3685a170ae22\") " pod="openstack/keystone-79d66b847-whsks" Feb 02 10:57:56 crc kubenswrapper[4782]: I0202 10:57:56.259088 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wl5sf\" (UniqueName: \"kubernetes.io/projected/df4aa6a3-22bf-459c-becf-3685a170ae22-kube-api-access-wl5sf\") pod \"keystone-79d66b847-whsks\" (UID: \"df4aa6a3-22bf-459c-becf-3685a170ae22\") " pod="openstack/keystone-79d66b847-whsks" Feb 02 10:57:56 crc kubenswrapper[4782]: I0202 10:57:56.318208 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-79d66b847-whsks" Feb 02 10:57:57 crc kubenswrapper[4782]: I0202 10:57:57.459003 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-79d66b847-whsks"] Feb 02 10:57:57 crc kubenswrapper[4782]: W0202 10:57:57.463310 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddf4aa6a3_22bf_459c_becf_3685a170ae22.slice/crio-e94bcc6fccde95efd30deab4e735959e10cbdfa223d7a197a11fa7bd2dc73b7b WatchSource:0}: Error finding container e94bcc6fccde95efd30deab4e735959e10cbdfa223d7a197a11fa7bd2dc73b7b: Status 404 returned error can't find the container with id e94bcc6fccde95efd30deab4e735959e10cbdfa223d7a197a11fa7bd2dc73b7b Feb 02 10:57:57 crc kubenswrapper[4782]: I0202 10:57:57.765729 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-79d66b847-whsks" event={"ID":"df4aa6a3-22bf-459c-becf-3685a170ae22","Type":"ContainerStarted","Data":"56a1f84f5103d341659155875850f80e7d181fa0691ff0a747d748709cb782f0"} Feb 02 10:57:57 crc kubenswrapper[4782]: I0202 10:57:57.765784 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-79d66b847-whsks" event={"ID":"df4aa6a3-22bf-459c-becf-3685a170ae22","Type":"ContainerStarted","Data":"e94bcc6fccde95efd30deab4e735959e10cbdfa223d7a197a11fa7bd2dc73b7b"} Feb 02 10:57:57 crc kubenswrapper[4782]: I0202 10:57:57.766058 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-79d66b847-whsks" Feb 02 10:57:57 crc kubenswrapper[4782]: I0202 10:57:57.808286 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-79d66b847-whsks" podStartSLOduration=2.808260476 podStartE2EDuration="2.808260476s" podCreationTimestamp="2026-02-02 10:57:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:57:57.798196958 +0000 UTC m=+1157.682389694" watchObservedRunningTime="2026-02-02 10:57:57.808260476 +0000 UTC m=+1157.692453202" Feb 02 10:57:58 crc kubenswrapper[4782]: I0202 10:57:58.116074 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7b946d459c-pbdmr" Feb 02 10:57:58 crc kubenswrapper[4782]: I0202 10:57:58.197019 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-745b9ddc8c-tg7wz"] Feb 02 10:57:58 crc kubenswrapper[4782]: I0202 10:57:58.197352 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-745b9ddc8c-tg7wz" podUID="bbab971e-9d4a-4d47-b466-ec2110de7dfb" containerName="dnsmasq-dns" containerID="cri-o://8739c7eaea0f7605c65d98c62cce07647aacbed0043275eb2f4dd317c1bafd75" gracePeriod=10 Feb 02 10:57:58 crc kubenswrapper[4782]: I0202 10:57:58.782787 4782 generic.go:334] "Generic (PLEG): container finished" podID="bbab971e-9d4a-4d47-b466-ec2110de7dfb" containerID="8739c7eaea0f7605c65d98c62cce07647aacbed0043275eb2f4dd317c1bafd75" exitCode=0 Feb 02 10:57:58 crc kubenswrapper[4782]: I0202 10:57:58.782868 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-745b9ddc8c-tg7wz" event={"ID":"bbab971e-9d4a-4d47-b466-ec2110de7dfb","Type":"ContainerDied","Data":"8739c7eaea0f7605c65d98c62cce07647aacbed0043275eb2f4dd317c1bafd75"} Feb 02 10:57:59 crc kubenswrapper[4782]: I0202 10:57:59.020209 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-745b9ddc8c-tg7wz" Feb 02 10:57:59 crc kubenswrapper[4782]: I0202 10:57:59.118564 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bbab971e-9d4a-4d47-b466-ec2110de7dfb-dns-svc\") pod \"bbab971e-9d4a-4d47-b466-ec2110de7dfb\" (UID: \"bbab971e-9d4a-4d47-b466-ec2110de7dfb\") " Feb 02 10:57:59 crc kubenswrapper[4782]: I0202 10:57:59.118665 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bbab971e-9d4a-4d47-b466-ec2110de7dfb-ovsdbserver-nb\") pod \"bbab971e-9d4a-4d47-b466-ec2110de7dfb\" (UID: \"bbab971e-9d4a-4d47-b466-ec2110de7dfb\") " Feb 02 10:57:59 crc kubenswrapper[4782]: I0202 10:57:59.118708 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t9t5p\" (UniqueName: \"kubernetes.io/projected/bbab971e-9d4a-4d47-b466-ec2110de7dfb-kube-api-access-t9t5p\") pod \"bbab971e-9d4a-4d47-b466-ec2110de7dfb\" (UID: \"bbab971e-9d4a-4d47-b466-ec2110de7dfb\") " Feb 02 10:57:59 crc kubenswrapper[4782]: I0202 10:57:59.118759 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bbab971e-9d4a-4d47-b466-ec2110de7dfb-ovsdbserver-sb\") pod \"bbab971e-9d4a-4d47-b466-ec2110de7dfb\" (UID: \"bbab971e-9d4a-4d47-b466-ec2110de7dfb\") " Feb 02 10:57:59 crc kubenswrapper[4782]: I0202 10:57:59.118808 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bbab971e-9d4a-4d47-b466-ec2110de7dfb-config\") pod \"bbab971e-9d4a-4d47-b466-ec2110de7dfb\" (UID: \"bbab971e-9d4a-4d47-b466-ec2110de7dfb\") " Feb 02 10:57:59 crc kubenswrapper[4782]: I0202 10:57:59.124083 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bbab971e-9d4a-4d47-b466-ec2110de7dfb-kube-api-access-t9t5p" (OuterVolumeSpecName: "kube-api-access-t9t5p") pod "bbab971e-9d4a-4d47-b466-ec2110de7dfb" (UID: "bbab971e-9d4a-4d47-b466-ec2110de7dfb"). InnerVolumeSpecName "kube-api-access-t9t5p". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:57:59 crc kubenswrapper[4782]: I0202 10:57:59.196534 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bbab971e-9d4a-4d47-b466-ec2110de7dfb-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "bbab971e-9d4a-4d47-b466-ec2110de7dfb" (UID: "bbab971e-9d4a-4d47-b466-ec2110de7dfb"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:57:59 crc kubenswrapper[4782]: I0202 10:57:59.206249 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bbab971e-9d4a-4d47-b466-ec2110de7dfb-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "bbab971e-9d4a-4d47-b466-ec2110de7dfb" (UID: "bbab971e-9d4a-4d47-b466-ec2110de7dfb"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:57:59 crc kubenswrapper[4782]: I0202 10:57:59.211057 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bbab971e-9d4a-4d47-b466-ec2110de7dfb-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "bbab971e-9d4a-4d47-b466-ec2110de7dfb" (UID: "bbab971e-9d4a-4d47-b466-ec2110de7dfb"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:57:59 crc kubenswrapper[4782]: I0202 10:57:59.221066 4782 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bbab971e-9d4a-4d47-b466-ec2110de7dfb-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:59 crc kubenswrapper[4782]: I0202 10:57:59.221111 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t9t5p\" (UniqueName: \"kubernetes.io/projected/bbab971e-9d4a-4d47-b466-ec2110de7dfb-kube-api-access-t9t5p\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:59 crc kubenswrapper[4782]: I0202 10:57:59.221128 4782 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bbab971e-9d4a-4d47-b466-ec2110de7dfb-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:59 crc kubenswrapper[4782]: I0202 10:57:59.221180 4782 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bbab971e-9d4a-4d47-b466-ec2110de7dfb-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:59 crc kubenswrapper[4782]: I0202 10:57:59.292015 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bbab971e-9d4a-4d47-b466-ec2110de7dfb-config" (OuterVolumeSpecName: "config") pod "bbab971e-9d4a-4d47-b466-ec2110de7dfb" (UID: "bbab971e-9d4a-4d47-b466-ec2110de7dfb"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:57:59 crc kubenswrapper[4782]: I0202 10:57:59.322839 4782 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bbab971e-9d4a-4d47-b466-ec2110de7dfb-config\") on node \"crc\" DevicePath \"\"" Feb 02 10:57:59 crc kubenswrapper[4782]: I0202 10:57:59.797895 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-745b9ddc8c-tg7wz" event={"ID":"bbab971e-9d4a-4d47-b466-ec2110de7dfb","Type":"ContainerDied","Data":"1c11d42eec17c1c7713f79d1bb2871fdfc39558c452b6a5339d9e0c5f17ef2bf"} Feb 02 10:57:59 crc kubenswrapper[4782]: I0202 10:57:59.797961 4782 scope.go:117] "RemoveContainer" containerID="8739c7eaea0f7605c65d98c62cce07647aacbed0043275eb2f4dd317c1bafd75" Feb 02 10:57:59 crc kubenswrapper[4782]: I0202 10:57:59.798087 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-745b9ddc8c-tg7wz" Feb 02 10:57:59 crc kubenswrapper[4782]: I0202 10:57:59.804350 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-qjtml" event={"ID":"14e3fab7-be93-409c-a88e-85c8d0ca533c","Type":"ContainerStarted","Data":"86ae63a42dd213a82d90c920d379402488562da05112fd3a36da50fdfc632f7d"} Feb 02 10:57:59 crc kubenswrapper[4782]: I0202 10:57:59.837102 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-qjtml" podStartSLOduration=2.290081222 podStartE2EDuration="41.837079724s" podCreationTimestamp="2026-02-02 10:57:18 +0000 UTC" firstStartedPulling="2026-02-02 10:57:19.958810526 +0000 UTC m=+1119.843003242" lastFinishedPulling="2026-02-02 10:57:59.505809028 +0000 UTC m=+1159.390001744" observedRunningTime="2026-02-02 10:57:59.816678759 +0000 UTC m=+1159.700871485" watchObservedRunningTime="2026-02-02 10:57:59.837079724 +0000 UTC m=+1159.721272460" Feb 02 10:57:59 crc kubenswrapper[4782]: I0202 10:57:59.846956 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-745b9ddc8c-tg7wz"] Feb 02 10:57:59 crc kubenswrapper[4782]: I0202 10:57:59.853885 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-745b9ddc8c-tg7wz"] Feb 02 10:57:59 crc kubenswrapper[4782]: I0202 10:57:59.857693 4782 scope.go:117] "RemoveContainer" containerID="d612f10c6156f6cb4afac9aec45e071dc15d31ca60fe0b15bf367f1991040e4f" Feb 02 10:58:00 crc kubenswrapper[4782]: I0202 10:58:00.833858 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bbab971e-9d4a-4d47-b466-ec2110de7dfb" path="/var/lib/kubelet/pods/bbab971e-9d4a-4d47-b466-ec2110de7dfb/volumes" Feb 02 10:58:00 crc kubenswrapper[4782]: I0202 10:58:00.834413 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-rvrqj" event={"ID":"bf4fe919-15fe-4478-be0f-8e3bf00147b4","Type":"ContainerStarted","Data":"e47203a1a44b3b88fecb28ffdf42000d4b85a4d8f915c7dc05cd21438f5304c4"} Feb 02 10:58:00 crc kubenswrapper[4782]: I0202 10:58:00.863045 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-rvrqj" podStartSLOduration=4.099556453 podStartE2EDuration="42.863022124s" podCreationTimestamp="2026-02-02 10:57:18 +0000 UTC" firstStartedPulling="2026-02-02 10:57:20.740264177 +0000 UTC m=+1120.624456893" lastFinishedPulling="2026-02-02 10:57:59.503729838 +0000 UTC m=+1159.387922564" observedRunningTime="2026-02-02 10:58:00.85520343 +0000 UTC m=+1160.739396146" watchObservedRunningTime="2026-02-02 10:58:00.863022124 +0000 UTC m=+1160.747214840" Feb 02 10:58:03 crc kubenswrapper[4782]: I0202 10:58:03.851772 4782 generic.go:334] "Generic (PLEG): container finished" podID="14e3fab7-be93-409c-a88e-85c8d0ca533c" containerID="86ae63a42dd213a82d90c920d379402488562da05112fd3a36da50fdfc632f7d" exitCode=0 Feb 02 10:58:03 crc kubenswrapper[4782]: I0202 10:58:03.851825 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-qjtml" event={"ID":"14e3fab7-be93-409c-a88e-85c8d0ca533c","Type":"ContainerDied","Data":"86ae63a42dd213a82d90c920d379402488562da05112fd3a36da50fdfc632f7d"} Feb 02 10:58:06 crc kubenswrapper[4782]: I0202 10:58:06.408072 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-qjtml" Feb 02 10:58:06 crc kubenswrapper[4782]: I0202 10:58:06.555107 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/14e3fab7-be93-409c-a88e-85c8d0ca533c-db-sync-config-data\") pod \"14e3fab7-be93-409c-a88e-85c8d0ca533c\" (UID: \"14e3fab7-be93-409c-a88e-85c8d0ca533c\") " Feb 02 10:58:06 crc kubenswrapper[4782]: I0202 10:58:06.555699 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jbbw7\" (UniqueName: \"kubernetes.io/projected/14e3fab7-be93-409c-a88e-85c8d0ca533c-kube-api-access-jbbw7\") pod \"14e3fab7-be93-409c-a88e-85c8d0ca533c\" (UID: \"14e3fab7-be93-409c-a88e-85c8d0ca533c\") " Feb 02 10:58:06 crc kubenswrapper[4782]: I0202 10:58:06.555730 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14e3fab7-be93-409c-a88e-85c8d0ca533c-combined-ca-bundle\") pod \"14e3fab7-be93-409c-a88e-85c8d0ca533c\" (UID: \"14e3fab7-be93-409c-a88e-85c8d0ca533c\") " Feb 02 10:58:06 crc kubenswrapper[4782]: I0202 10:58:06.559220 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14e3fab7-be93-409c-a88e-85c8d0ca533c-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "14e3fab7-be93-409c-a88e-85c8d0ca533c" (UID: "14e3fab7-be93-409c-a88e-85c8d0ca533c"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:58:06 crc kubenswrapper[4782]: I0202 10:58:06.559374 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14e3fab7-be93-409c-a88e-85c8d0ca533c-kube-api-access-jbbw7" (OuterVolumeSpecName: "kube-api-access-jbbw7") pod "14e3fab7-be93-409c-a88e-85c8d0ca533c" (UID: "14e3fab7-be93-409c-a88e-85c8d0ca533c"). InnerVolumeSpecName "kube-api-access-jbbw7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:58:06 crc kubenswrapper[4782]: I0202 10:58:06.578025 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14e3fab7-be93-409c-a88e-85c8d0ca533c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "14e3fab7-be93-409c-a88e-85c8d0ca533c" (UID: "14e3fab7-be93-409c-a88e-85c8d0ca533c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:58:06 crc kubenswrapper[4782]: I0202 10:58:06.658093 4782 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/14e3fab7-be93-409c-a88e-85c8d0ca533c-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 10:58:06 crc kubenswrapper[4782]: I0202 10:58:06.658139 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jbbw7\" (UniqueName: \"kubernetes.io/projected/14e3fab7-be93-409c-a88e-85c8d0ca533c-kube-api-access-jbbw7\") on node \"crc\" DevicePath \"\"" Feb 02 10:58:06 crc kubenswrapper[4782]: I0202 10:58:06.658149 4782 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14e3fab7-be93-409c-a88e-85c8d0ca533c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 10:58:06 crc kubenswrapper[4782]: I0202 10:58:06.881150 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-qjtml" event={"ID":"14e3fab7-be93-409c-a88e-85c8d0ca533c","Type":"ContainerDied","Data":"cf01f314448485ff21bcd2728c714dedb197b922c6d0f496ca141e9405a41bab"} Feb 02 10:58:06 crc kubenswrapper[4782]: I0202 10:58:06.881227 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cf01f314448485ff21bcd2728c714dedb197b922c6d0f496ca141e9405a41bab" Feb 02 10:58:06 crc kubenswrapper[4782]: I0202 10:58:06.881255 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-qjtml" Feb 02 10:58:06 crc kubenswrapper[4782]: I0202 10:58:06.885995 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8eb720ee-de8d-42e4-b189-aa3d58478ab9","Type":"ContainerStarted","Data":"140a927fa6b2c1e23687d54be409e1628753f55ba914f147e7bf8b40aeda5b96"} Feb 02 10:58:06 crc kubenswrapper[4782]: I0202 10:58:06.886232 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8eb720ee-de8d-42e4-b189-aa3d58478ab9" containerName="proxy-httpd" containerID="cri-o://140a927fa6b2c1e23687d54be409e1628753f55ba914f147e7bf8b40aeda5b96" gracePeriod=30 Feb 02 10:58:06 crc kubenswrapper[4782]: I0202 10:58:06.886255 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8eb720ee-de8d-42e4-b189-aa3d58478ab9" containerName="sg-core" containerID="cri-o://204f6396c819d71a327699ccfeca1a155dda1d800805c4fde5bc58682ccb702f" gracePeriod=30 Feb 02 10:58:06 crc kubenswrapper[4782]: I0202 10:58:06.886247 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8eb720ee-de8d-42e4-b189-aa3d58478ab9" containerName="ceilometer-central-agent" containerID="cri-o://cff050bea02cab179349ed9f4910ee4f8ce16895bfa3f74bcd1eb0342c469f08" gracePeriod=30 Feb 02 10:58:06 crc kubenswrapper[4782]: I0202 10:58:06.886240 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 02 10:58:06 crc kubenswrapper[4782]: I0202 10:58:06.886280 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8eb720ee-de8d-42e4-b189-aa3d58478ab9" containerName="ceilometer-notification-agent" containerID="cri-o://64ab0cbbeed3f64299d16361c7ebfd14f8590d54efef7b63fa8c440f0b029ef9" gracePeriod=30 Feb 02 10:58:06 crc kubenswrapper[4782]: I0202 10:58:06.927206 4782 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.157918549 podStartE2EDuration="48.926992622s" podCreationTimestamp="2026-02-02 10:57:18 +0000 UTC" firstStartedPulling="2026-02-02 10:57:20.701943799 +0000 UTC m=+1120.586136515" lastFinishedPulling="2026-02-02 10:58:06.471017872 +0000 UTC m=+1166.355210588" observedRunningTime="2026-02-02 10:58:06.924336346 +0000 UTC m=+1166.808529062" watchObservedRunningTime="2026-02-02 10:58:06.926992622 +0000 UTC m=+1166.811185338" Feb 02 10:58:07 crc kubenswrapper[4782]: E0202 10:58:07.226406 4782 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8eb720ee_de8d_42e4_b189_aa3d58478ab9.slice/crio-conmon-cff050bea02cab179349ed9f4910ee4f8ce16895bfa3f74bcd1eb0342c469f08.scope\": RecentStats: unable to find data in memory cache]" Feb 02 10:58:07 crc kubenswrapper[4782]: I0202 10:58:07.729164 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-6b54d776c6-xrdvf"] Feb 02 10:58:07 crc kubenswrapper[4782]: E0202 10:58:07.729836 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14e3fab7-be93-409c-a88e-85c8d0ca533c" containerName="barbican-db-sync" Feb 02 10:58:07 crc kubenswrapper[4782]: I0202 10:58:07.729849 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="14e3fab7-be93-409c-a88e-85c8d0ca533c" containerName="barbican-db-sync" Feb 02 10:58:07 crc kubenswrapper[4782]: E0202 10:58:07.729870 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbab971e-9d4a-4d47-b466-ec2110de7dfb" containerName="dnsmasq-dns" Feb 02 10:58:07 crc kubenswrapper[4782]: I0202 10:58:07.729878 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbab971e-9d4a-4d47-b466-ec2110de7dfb" containerName="dnsmasq-dns" Feb 02 10:58:07 crc kubenswrapper[4782]: E0202 10:58:07.729890 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbab971e-9d4a-4d47-b466-ec2110de7dfb" containerName="init" Feb 02 10:58:07 crc kubenswrapper[4782]: I0202 10:58:07.729896 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbab971e-9d4a-4d47-b466-ec2110de7dfb" containerName="init" Feb 02 10:58:07 crc kubenswrapper[4782]: I0202 10:58:07.730060 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="14e3fab7-be93-409c-a88e-85c8d0ca533c" containerName="barbican-db-sync" Feb 02 10:58:07 crc kubenswrapper[4782]: I0202 10:58:07.730080 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="bbab971e-9d4a-4d47-b466-ec2110de7dfb" containerName="dnsmasq-dns" Feb 02 10:58:07 crc kubenswrapper[4782]: I0202 10:58:07.730951 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-6b54d776c6-xrdvf" Feb 02 10:58:07 crc kubenswrapper[4782]: I0202 10:58:07.740506 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-tpp6m" Feb 02 10:58:07 crc kubenswrapper[4782]: I0202 10:58:07.740748 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Feb 02 10:58:07 crc kubenswrapper[4782]: I0202 10:58:07.740989 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 02 10:58:07 crc kubenswrapper[4782]: I0202 10:58:07.753535 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-5bbfd966d5-c6jc5"] Feb 02 10:58:07 crc kubenswrapper[4782]: I0202 10:58:07.755166 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-5bbfd966d5-c6jc5" Feb 02 10:58:07 crc kubenswrapper[4782]: I0202 10:58:07.757894 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Feb 02 10:58:07 crc kubenswrapper[4782]: I0202 10:58:07.777750 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ea0f5849-bbf6-4184-8b8c-8e11cd8da661-logs\") pod \"barbican-keystone-listener-6b54d776c6-xrdvf\" (UID: \"ea0f5849-bbf6-4184-8b8c-8e11cd8da661\") " pod="openstack/barbican-keystone-listener-6b54d776c6-xrdvf" Feb 02 10:58:07 crc kubenswrapper[4782]: I0202 10:58:07.777806 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea0f5849-bbf6-4184-8b8c-8e11cd8da661-config-data\") pod \"barbican-keystone-listener-6b54d776c6-xrdvf\" (UID: \"ea0f5849-bbf6-4184-8b8c-8e11cd8da661\") " pod="openstack/barbican-keystone-listener-6b54d776c6-xrdvf" Feb 02 10:58:07 crc kubenswrapper[4782]: I0202 10:58:07.777851 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2f7g\" (UniqueName: \"kubernetes.io/projected/ea0f5849-bbf6-4184-8b8c-8e11cd8da661-kube-api-access-r2f7g\") pod \"barbican-keystone-listener-6b54d776c6-xrdvf\" (UID: \"ea0f5849-bbf6-4184-8b8c-8e11cd8da661\") " pod="openstack/barbican-keystone-listener-6b54d776c6-xrdvf" Feb 02 10:58:07 crc kubenswrapper[4782]: I0202 10:58:07.777903 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea0f5849-bbf6-4184-8b8c-8e11cd8da661-combined-ca-bundle\") pod \"barbican-keystone-listener-6b54d776c6-xrdvf\" (UID: \"ea0f5849-bbf6-4184-8b8c-8e11cd8da661\") " pod="openstack/barbican-keystone-listener-6b54d776c6-xrdvf" Feb 02 10:58:07 crc kubenswrapper[4782]: I0202 10:58:07.777937 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ea0f5849-bbf6-4184-8b8c-8e11cd8da661-config-data-custom\") pod \"barbican-keystone-listener-6b54d776c6-xrdvf\" (UID: \"ea0f5849-bbf6-4184-8b8c-8e11cd8da661\") " pod="openstack/barbican-keystone-listener-6b54d776c6-xrdvf" Feb 02 10:58:07 crc kubenswrapper[4782]: I0202 10:58:07.790397 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-6b54d776c6-xrdvf"] Feb 02 10:58:07 crc kubenswrapper[4782]: I0202 
Feb 02 10:58:07 crc kubenswrapper[4782]: I0202 10:58:07.877713 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6bb684768f-q57sq"]
Feb 02 10:58:07 crc kubenswrapper[4782]: I0202 10:58:07.878995 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ea0f5849-bbf6-4184-8b8c-8e11cd8da661-config-data-custom\") pod \"barbican-keystone-listener-6b54d776c6-xrdvf\" (UID: \"ea0f5849-bbf6-4184-8b8c-8e11cd8da661\") " pod="openstack/barbican-keystone-listener-6b54d776c6-xrdvf"
Feb 02 10:58:07 crc kubenswrapper[4782]: I0202 10:58:07.879018 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bb684768f-q57sq"
Feb 02 10:58:07 crc kubenswrapper[4782]: I0202 10:58:07.879044 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/141e9d68-e6ef-441d-aede-3bb1fdcc4d5f-config-data\") pod \"barbican-worker-5bbfd966d5-c6jc5\" (UID: \"141e9d68-e6ef-441d-aede-3bb1fdcc4d5f\") " pod="openstack/barbican-worker-5bbfd966d5-c6jc5"
Feb 02 10:58:07 crc kubenswrapper[4782]: I0202 10:58:07.879112 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ea0f5849-bbf6-4184-8b8c-8e11cd8da661-logs\") pod \"barbican-keystone-listener-6b54d776c6-xrdvf\" (UID: \"ea0f5849-bbf6-4184-8b8c-8e11cd8da661\") " pod="openstack/barbican-keystone-listener-6b54d776c6-xrdvf"
Feb 02 10:58:07 crc kubenswrapper[4782]: I0202 10:58:07.879138 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/141e9d68-e6ef-441d-aede-3bb1fdcc4d5f-combined-ca-bundle\") pod \"barbican-worker-5bbfd966d5-c6jc5\" (UID: \"141e9d68-e6ef-441d-aede-3bb1fdcc4d5f\") " pod="openstack/barbican-worker-5bbfd966d5-c6jc5"
Feb 02 10:58:07 crc kubenswrapper[4782]: I0202 10:58:07.879173 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-658rt\" (UniqueName: \"kubernetes.io/projected/141e9d68-e6ef-441d-aede-3bb1fdcc4d5f-kube-api-access-658rt\") pod \"barbican-worker-5bbfd966d5-c6jc5\" (UID: \"141e9d68-e6ef-441d-aede-3bb1fdcc4d5f\") " pod="openstack/barbican-worker-5bbfd966d5-c6jc5"
Feb 02 10:58:07 crc kubenswrapper[4782]: I0202 10:58:07.879204 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea0f5849-bbf6-4184-8b8c-8e11cd8da661-config-data\") pod \"barbican-keystone-listener-6b54d776c6-xrdvf\" (UID: \"ea0f5849-bbf6-4184-8b8c-8e11cd8da661\") " pod="openstack/barbican-keystone-listener-6b54d776c6-xrdvf"
Feb 02 10:58:07 crc kubenswrapper[4782]: I0202 10:58:07.879243 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/141e9d68-e6ef-441d-aede-3bb1fdcc4d5f-logs\") pod \"barbican-worker-5bbfd966d5-c6jc5\" (UID: \"141e9d68-e6ef-441d-aede-3bb1fdcc4d5f\") " pod="openstack/barbican-worker-5bbfd966d5-c6jc5"
Feb 02 10:58:07 crc kubenswrapper[4782]: I0202 10:58:07.879296 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r2f7g\" (UniqueName: \"kubernetes.io/projected/ea0f5849-bbf6-4184-8b8c-8e11cd8da661-kube-api-access-r2f7g\") pod \"barbican-keystone-listener-6b54d776c6-xrdvf\" (UID: \"ea0f5849-bbf6-4184-8b8c-8e11cd8da661\") " pod="openstack/barbican-keystone-listener-6b54d776c6-xrdvf"
\"kubernetes.io/projected/ea0f5849-bbf6-4184-8b8c-8e11cd8da661-kube-api-access-r2f7g\") pod \"barbican-keystone-listener-6b54d776c6-xrdvf\" (UID: \"ea0f5849-bbf6-4184-8b8c-8e11cd8da661\") " pod="openstack/barbican-keystone-listener-6b54d776c6-xrdvf" Feb 02 10:58:07 crc kubenswrapper[4782]: I0202 10:58:07.879369 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/141e9d68-e6ef-441d-aede-3bb1fdcc4d5f-config-data-custom\") pod \"barbican-worker-5bbfd966d5-c6jc5\" (UID: \"141e9d68-e6ef-441d-aede-3bb1fdcc4d5f\") " pod="openstack/barbican-worker-5bbfd966d5-c6jc5" Feb 02 10:58:07 crc kubenswrapper[4782]: I0202 10:58:07.879409 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea0f5849-bbf6-4184-8b8c-8e11cd8da661-combined-ca-bundle\") pod \"barbican-keystone-listener-6b54d776c6-xrdvf\" (UID: \"ea0f5849-bbf6-4184-8b8c-8e11cd8da661\") " pod="openstack/barbican-keystone-listener-6b54d776c6-xrdvf" Feb 02 10:58:07 crc kubenswrapper[4782]: I0202 10:58:07.886377 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ea0f5849-bbf6-4184-8b8c-8e11cd8da661-config-data-custom\") pod \"barbican-keystone-listener-6b54d776c6-xrdvf\" (UID: \"ea0f5849-bbf6-4184-8b8c-8e11cd8da661\") " pod="openstack/barbican-keystone-listener-6b54d776c6-xrdvf" Feb 02 10:58:07 crc kubenswrapper[4782]: I0202 10:58:07.888198 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ea0f5849-bbf6-4184-8b8c-8e11cd8da661-logs\") pod \"barbican-keystone-listener-6b54d776c6-xrdvf\" (UID: \"ea0f5849-bbf6-4184-8b8c-8e11cd8da661\") " pod="openstack/barbican-keystone-listener-6b54d776c6-xrdvf" Feb 02 10:58:07 crc kubenswrapper[4782]: I0202 10:58:07.893070 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea0f5849-bbf6-4184-8b8c-8e11cd8da661-config-data\") pod \"barbican-keystone-listener-6b54d776c6-xrdvf\" (UID: \"ea0f5849-bbf6-4184-8b8c-8e11cd8da661\") " pod="openstack/barbican-keystone-listener-6b54d776c6-xrdvf" Feb 02 10:58:07 crc kubenswrapper[4782]: I0202 10:58:07.897116 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bb684768f-q57sq"] Feb 02 10:58:07 crc kubenswrapper[4782]: I0202 10:58:07.901520 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea0f5849-bbf6-4184-8b8c-8e11cd8da661-combined-ca-bundle\") pod \"barbican-keystone-listener-6b54d776c6-xrdvf\" (UID: \"ea0f5849-bbf6-4184-8b8c-8e11cd8da661\") " pod="openstack/barbican-keystone-listener-6b54d776c6-xrdvf" Feb 02 10:58:07 crc kubenswrapper[4782]: I0202 10:58:07.917287 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r2f7g\" (UniqueName: \"kubernetes.io/projected/ea0f5849-bbf6-4184-8b8c-8e11cd8da661-kube-api-access-r2f7g\") pod \"barbican-keystone-listener-6b54d776c6-xrdvf\" (UID: \"ea0f5849-bbf6-4184-8b8c-8e11cd8da661\") " pod="openstack/barbican-keystone-listener-6b54d776c6-xrdvf" Feb 02 10:58:07 crc kubenswrapper[4782]: I0202 10:58:07.917653 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"8eb720ee-de8d-42e4-b189-aa3d58478ab9","Type":"ContainerDied","Data":"204f6396c819d71a327699ccfeca1a155dda1d800805c4fde5bc58682ccb702f"} Feb 02 10:58:07 crc kubenswrapper[4782]: I0202 10:58:07.919817 4782 generic.go:334] "Generic (PLEG): container finished" podID="8eb720ee-de8d-42e4-b189-aa3d58478ab9" containerID="204f6396c819d71a327699ccfeca1a155dda1d800805c4fde5bc58682ccb702f" exitCode=2 Feb 02 10:58:07 crc kubenswrapper[4782]: I0202 10:58:07.919851 4782 generic.go:334] "Generic (PLEG): container finished" podID="8eb720ee-de8d-42e4-b189-aa3d58478ab9" containerID="cff050bea02cab179349ed9f4910ee4f8ce16895bfa3f74bcd1eb0342c469f08" exitCode=0 Feb 02 10:58:07 crc kubenswrapper[4782]: I0202 10:58:07.919883 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8eb720ee-de8d-42e4-b189-aa3d58478ab9","Type":"ContainerDied","Data":"cff050bea02cab179349ed9f4910ee4f8ce16895bfa3f74bcd1eb0342c469f08"} Feb 02 10:58:07 crc kubenswrapper[4782]: I0202 10:58:07.982681 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/141e9d68-e6ef-441d-aede-3bb1fdcc4d5f-combined-ca-bundle\") pod \"barbican-worker-5bbfd966d5-c6jc5\" (UID: \"141e9d68-e6ef-441d-aede-3bb1fdcc4d5f\") " pod="openstack/barbican-worker-5bbfd966d5-c6jc5" Feb 02 10:58:07 crc kubenswrapper[4782]: I0202 10:58:07.983017 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-658rt\" (UniqueName: \"kubernetes.io/projected/141e9d68-e6ef-441d-aede-3bb1fdcc4d5f-kube-api-access-658rt\") pod \"barbican-worker-5bbfd966d5-c6jc5\" (UID: \"141e9d68-e6ef-441d-aede-3bb1fdcc4d5f\") " pod="openstack/barbican-worker-5bbfd966d5-c6jc5" Feb 02 10:58:07 crc kubenswrapper[4782]: I0202 10:58:07.983252 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/141e9d68-e6ef-441d-aede-3bb1fdcc4d5f-logs\") pod \"barbican-worker-5bbfd966d5-c6jc5\" (UID: \"141e9d68-e6ef-441d-aede-3bb1fdcc4d5f\") " pod="openstack/barbican-worker-5bbfd966d5-c6jc5" Feb 02 10:58:07 crc kubenswrapper[4782]: I0202 10:58:07.983358 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b226dd37-b5b5-4514-9495-944db6e760ed-ovsdbserver-nb\") pod \"dnsmasq-dns-6bb684768f-q57sq\" (UID: \"b226dd37-b5b5-4514-9495-944db6e760ed\") " pod="openstack/dnsmasq-dns-6bb684768f-q57sq" Feb 02 10:58:07 crc kubenswrapper[4782]: I0202 10:58:07.983445 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b226dd37-b5b5-4514-9495-944db6e760ed-ovsdbserver-sb\") pod \"dnsmasq-dns-6bb684768f-q57sq\" (UID: \"b226dd37-b5b5-4514-9495-944db6e760ed\") " pod="openstack/dnsmasq-dns-6bb684768f-q57sq" Feb 02 10:58:07 crc kubenswrapper[4782]: I0202 10:58:07.983507 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b226dd37-b5b5-4514-9495-944db6e760ed-config\") pod \"dnsmasq-dns-6bb684768f-q57sq\" (UID: \"b226dd37-b5b5-4514-9495-944db6e760ed\") " pod="openstack/dnsmasq-dns-6bb684768f-q57sq" Feb 02 10:58:07 crc kubenswrapper[4782]: I0202 10:58:07.983580 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x24sn\" 
(UniqueName: \"kubernetes.io/projected/b226dd37-b5b5-4514-9495-944db6e760ed-kube-api-access-x24sn\") pod \"dnsmasq-dns-6bb684768f-q57sq\" (UID: \"b226dd37-b5b5-4514-9495-944db6e760ed\") " pod="openstack/dnsmasq-dns-6bb684768f-q57sq" Feb 02 10:58:07 crc kubenswrapper[4782]: I0202 10:58:07.983681 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/141e9d68-e6ef-441d-aede-3bb1fdcc4d5f-config-data-custom\") pod \"barbican-worker-5bbfd966d5-c6jc5\" (UID: \"141e9d68-e6ef-441d-aede-3bb1fdcc4d5f\") " pod="openstack/barbican-worker-5bbfd966d5-c6jc5" Feb 02 10:58:07 crc kubenswrapper[4782]: I0202 10:58:07.983768 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b226dd37-b5b5-4514-9495-944db6e760ed-dns-svc\") pod \"dnsmasq-dns-6bb684768f-q57sq\" (UID: \"b226dd37-b5b5-4514-9495-944db6e760ed\") " pod="openstack/dnsmasq-dns-6bb684768f-q57sq" Feb 02 10:58:07 crc kubenswrapper[4782]: I0202 10:58:07.983913 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/141e9d68-e6ef-441d-aede-3bb1fdcc4d5f-config-data\") pod \"barbican-worker-5bbfd966d5-c6jc5\" (UID: \"141e9d68-e6ef-441d-aede-3bb1fdcc4d5f\") " pod="openstack/barbican-worker-5bbfd966d5-c6jc5" Feb 02 10:58:07 crc kubenswrapper[4782]: I0202 10:58:07.985099 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/141e9d68-e6ef-441d-aede-3bb1fdcc4d5f-logs\") pod \"barbican-worker-5bbfd966d5-c6jc5\" (UID: \"141e9d68-e6ef-441d-aede-3bb1fdcc4d5f\") " pod="openstack/barbican-worker-5bbfd966d5-c6jc5" Feb 02 10:58:07 crc kubenswrapper[4782]: I0202 10:58:07.991517 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/141e9d68-e6ef-441d-aede-3bb1fdcc4d5f-combined-ca-bundle\") pod \"barbican-worker-5bbfd966d5-c6jc5\" (UID: \"141e9d68-e6ef-441d-aede-3bb1fdcc4d5f\") " pod="openstack/barbican-worker-5bbfd966d5-c6jc5" Feb 02 10:58:07 crc kubenswrapper[4782]: I0202 10:58:07.996105 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/141e9d68-e6ef-441d-aede-3bb1fdcc4d5f-config-data\") pod \"barbican-worker-5bbfd966d5-c6jc5\" (UID: \"141e9d68-e6ef-441d-aede-3bb1fdcc4d5f\") " pod="openstack/barbican-worker-5bbfd966d5-c6jc5" Feb 02 10:58:08 crc kubenswrapper[4782]: I0202 10:58:08.010379 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/141e9d68-e6ef-441d-aede-3bb1fdcc4d5f-config-data-custom\") pod \"barbican-worker-5bbfd966d5-c6jc5\" (UID: \"141e9d68-e6ef-441d-aede-3bb1fdcc4d5f\") " pod="openstack/barbican-worker-5bbfd966d5-c6jc5" Feb 02 10:58:08 crc kubenswrapper[4782]: I0202 10:58:08.020489 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-658rt\" (UniqueName: \"kubernetes.io/projected/141e9d68-e6ef-441d-aede-3bb1fdcc4d5f-kube-api-access-658rt\") pod \"barbican-worker-5bbfd966d5-c6jc5\" (UID: \"141e9d68-e6ef-441d-aede-3bb1fdcc4d5f\") " pod="openstack/barbican-worker-5bbfd966d5-c6jc5" Feb 02 10:58:08 crc kubenswrapper[4782]: I0202 10:58:08.029891 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-5b7797d578-tmg69"] Feb 02 10:58:08 crc 
Feb 02 10:58:08 crc kubenswrapper[4782]: I0202 10:58:08.035617 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data"
Feb 02 10:58:08 crc kubenswrapper[4782]: I0202 10:58:08.053079 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5b7797d578-tmg69"]
Feb 02 10:58:08 crc kubenswrapper[4782]: I0202 10:58:08.073398 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-6b54d776c6-xrdvf"
Feb 02 10:58:08 crc kubenswrapper[4782]: I0202 10:58:08.084000 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-5bbfd966d5-c6jc5"
Feb 02 10:58:08 crc kubenswrapper[4782]: I0202 10:58:08.085886 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b226dd37-b5b5-4514-9495-944db6e760ed-ovsdbserver-nb\") pod \"dnsmasq-dns-6bb684768f-q57sq\" (UID: \"b226dd37-b5b5-4514-9495-944db6e760ed\") " pod="openstack/dnsmasq-dns-6bb684768f-q57sq"
Feb 02 10:58:08 crc kubenswrapper[4782]: I0202 10:58:08.085937 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/faa1074a-6af5-41a7-bfe0-0dc771e9dbf0-config-data\") pod \"barbican-api-5b7797d578-tmg69\" (UID: \"faa1074a-6af5-41a7-bfe0-0dc771e9dbf0\") " pod="openstack/barbican-api-5b7797d578-tmg69"
Feb 02 10:58:08 crc kubenswrapper[4782]: I0202 10:58:08.085980 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b226dd37-b5b5-4514-9495-944db6e760ed-ovsdbserver-sb\") pod \"dnsmasq-dns-6bb684768f-q57sq\" (UID: \"b226dd37-b5b5-4514-9495-944db6e760ed\") " pod="openstack/dnsmasq-dns-6bb684768f-q57sq"
Feb 02 10:58:08 crc kubenswrapper[4782]: I0202 10:58:08.086015 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b226dd37-b5b5-4514-9495-944db6e760ed-config\") pod \"dnsmasq-dns-6bb684768f-q57sq\" (UID: \"b226dd37-b5b5-4514-9495-944db6e760ed\") " pod="openstack/dnsmasq-dns-6bb684768f-q57sq"
Feb 02 10:58:08 crc kubenswrapper[4782]: I0202 10:58:08.086045 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x24sn\" (UniqueName: \"kubernetes.io/projected/b226dd37-b5b5-4514-9495-944db6e760ed-kube-api-access-x24sn\") pod \"dnsmasq-dns-6bb684768f-q57sq\" (UID: \"b226dd37-b5b5-4514-9495-944db6e760ed\") " pod="openstack/dnsmasq-dns-6bb684768f-q57sq"
Feb 02 10:58:08 crc kubenswrapper[4782]: I0202 10:58:08.086061 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/faa1074a-6af5-41a7-bfe0-0dc771e9dbf0-logs\") pod \"barbican-api-5b7797d578-tmg69\" (UID: \"faa1074a-6af5-41a7-bfe0-0dc771e9dbf0\") " pod="openstack/barbican-api-5b7797d578-tmg69"
Feb 02 10:58:08 crc kubenswrapper[4782]: I0202 10:58:08.086086 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/faa1074a-6af5-41a7-bfe0-0dc771e9dbf0-combined-ca-bundle\") pod \"barbican-api-5b7797d578-tmg69\" (UID: \"faa1074a-6af5-41a7-bfe0-0dc771e9dbf0\") " pod="openstack/barbican-api-5b7797d578-tmg69"
(UID: \"faa1074a-6af5-41a7-bfe0-0dc771e9dbf0\") " pod="openstack/barbican-api-5b7797d578-tmg69" Feb 02 10:58:08 crc kubenswrapper[4782]: I0202 10:58:08.086114 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/faa1074a-6af5-41a7-bfe0-0dc771e9dbf0-config-data-custom\") pod \"barbican-api-5b7797d578-tmg69\" (UID: \"faa1074a-6af5-41a7-bfe0-0dc771e9dbf0\") " pod="openstack/barbican-api-5b7797d578-tmg69" Feb 02 10:58:08 crc kubenswrapper[4782]: I0202 10:58:08.086143 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b226dd37-b5b5-4514-9495-944db6e760ed-dns-svc\") pod \"dnsmasq-dns-6bb684768f-q57sq\" (UID: \"b226dd37-b5b5-4514-9495-944db6e760ed\") " pod="openstack/dnsmasq-dns-6bb684768f-q57sq" Feb 02 10:58:08 crc kubenswrapper[4782]: I0202 10:58:08.086206 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkkzp\" (UniqueName: \"kubernetes.io/projected/faa1074a-6af5-41a7-bfe0-0dc771e9dbf0-kube-api-access-xkkzp\") pod \"barbican-api-5b7797d578-tmg69\" (UID: \"faa1074a-6af5-41a7-bfe0-0dc771e9dbf0\") " pod="openstack/barbican-api-5b7797d578-tmg69" Feb 02 10:58:08 crc kubenswrapper[4782]: I0202 10:58:08.094629 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b226dd37-b5b5-4514-9495-944db6e760ed-dns-svc\") pod \"dnsmasq-dns-6bb684768f-q57sq\" (UID: \"b226dd37-b5b5-4514-9495-944db6e760ed\") " pod="openstack/dnsmasq-dns-6bb684768f-q57sq" Feb 02 10:58:08 crc kubenswrapper[4782]: I0202 10:58:08.094706 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b226dd37-b5b5-4514-9495-944db6e760ed-ovsdbserver-sb\") pod \"dnsmasq-dns-6bb684768f-q57sq\" (UID: \"b226dd37-b5b5-4514-9495-944db6e760ed\") " pod="openstack/dnsmasq-dns-6bb684768f-q57sq" Feb 02 10:58:08 crc kubenswrapper[4782]: I0202 10:58:08.094896 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b226dd37-b5b5-4514-9495-944db6e760ed-config\") pod \"dnsmasq-dns-6bb684768f-q57sq\" (UID: \"b226dd37-b5b5-4514-9495-944db6e760ed\") " pod="openstack/dnsmasq-dns-6bb684768f-q57sq" Feb 02 10:58:08 crc kubenswrapper[4782]: I0202 10:58:08.098498 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b226dd37-b5b5-4514-9495-944db6e760ed-ovsdbserver-nb\") pod \"dnsmasq-dns-6bb684768f-q57sq\" (UID: \"b226dd37-b5b5-4514-9495-944db6e760ed\") " pod="openstack/dnsmasq-dns-6bb684768f-q57sq" Feb 02 10:58:08 crc kubenswrapper[4782]: I0202 10:58:08.128385 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x24sn\" (UniqueName: \"kubernetes.io/projected/b226dd37-b5b5-4514-9495-944db6e760ed-kube-api-access-x24sn\") pod \"dnsmasq-dns-6bb684768f-q57sq\" (UID: \"b226dd37-b5b5-4514-9495-944db6e760ed\") " pod="openstack/dnsmasq-dns-6bb684768f-q57sq" Feb 02 10:58:08 crc kubenswrapper[4782]: I0202 10:58:08.187665 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xkkzp\" (UniqueName: \"kubernetes.io/projected/faa1074a-6af5-41a7-bfe0-0dc771e9dbf0-kube-api-access-xkkzp\") pod \"barbican-api-5b7797d578-tmg69\" (UID: 
\"faa1074a-6af5-41a7-bfe0-0dc771e9dbf0\") " pod="openstack/barbican-api-5b7797d578-tmg69" Feb 02 10:58:08 crc kubenswrapper[4782]: I0202 10:58:08.187774 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/faa1074a-6af5-41a7-bfe0-0dc771e9dbf0-config-data\") pod \"barbican-api-5b7797d578-tmg69\" (UID: \"faa1074a-6af5-41a7-bfe0-0dc771e9dbf0\") " pod="openstack/barbican-api-5b7797d578-tmg69" Feb 02 10:58:08 crc kubenswrapper[4782]: I0202 10:58:08.187810 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/faa1074a-6af5-41a7-bfe0-0dc771e9dbf0-logs\") pod \"barbican-api-5b7797d578-tmg69\" (UID: \"faa1074a-6af5-41a7-bfe0-0dc771e9dbf0\") " pod="openstack/barbican-api-5b7797d578-tmg69" Feb 02 10:58:08 crc kubenswrapper[4782]: I0202 10:58:08.187831 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/faa1074a-6af5-41a7-bfe0-0dc771e9dbf0-combined-ca-bundle\") pod \"barbican-api-5b7797d578-tmg69\" (UID: \"faa1074a-6af5-41a7-bfe0-0dc771e9dbf0\") " pod="openstack/barbican-api-5b7797d578-tmg69" Feb 02 10:58:08 crc kubenswrapper[4782]: I0202 10:58:08.187852 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/faa1074a-6af5-41a7-bfe0-0dc771e9dbf0-config-data-custom\") pod \"barbican-api-5b7797d578-tmg69\" (UID: \"faa1074a-6af5-41a7-bfe0-0dc771e9dbf0\") " pod="openstack/barbican-api-5b7797d578-tmg69" Feb 02 10:58:08 crc kubenswrapper[4782]: I0202 10:58:08.189413 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/faa1074a-6af5-41a7-bfe0-0dc771e9dbf0-logs\") pod \"barbican-api-5b7797d578-tmg69\" (UID: \"faa1074a-6af5-41a7-bfe0-0dc771e9dbf0\") " pod="openstack/barbican-api-5b7797d578-tmg69" Feb 02 10:58:08 crc kubenswrapper[4782]: I0202 10:58:08.195948 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/faa1074a-6af5-41a7-bfe0-0dc771e9dbf0-config-data\") pod \"barbican-api-5b7797d578-tmg69\" (UID: \"faa1074a-6af5-41a7-bfe0-0dc771e9dbf0\") " pod="openstack/barbican-api-5b7797d578-tmg69" Feb 02 10:58:08 crc kubenswrapper[4782]: I0202 10:58:08.197755 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/faa1074a-6af5-41a7-bfe0-0dc771e9dbf0-config-data-custom\") pod \"barbican-api-5b7797d578-tmg69\" (UID: \"faa1074a-6af5-41a7-bfe0-0dc771e9dbf0\") " pod="openstack/barbican-api-5b7797d578-tmg69" Feb 02 10:58:08 crc kubenswrapper[4782]: I0202 10:58:08.199249 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/faa1074a-6af5-41a7-bfe0-0dc771e9dbf0-combined-ca-bundle\") pod \"barbican-api-5b7797d578-tmg69\" (UID: \"faa1074a-6af5-41a7-bfe0-0dc771e9dbf0\") " pod="openstack/barbican-api-5b7797d578-tmg69" Feb 02 10:58:08 crc kubenswrapper[4782]: I0202 10:58:08.221230 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xkkzp\" (UniqueName: \"kubernetes.io/projected/faa1074a-6af5-41a7-bfe0-0dc771e9dbf0-kube-api-access-xkkzp\") pod \"barbican-api-5b7797d578-tmg69\" (UID: \"faa1074a-6af5-41a7-bfe0-0dc771e9dbf0\") " pod="openstack/barbican-api-5b7797d578-tmg69" Feb 02 
Feb 02 10:58:08 crc kubenswrapper[4782]: I0202 10:58:08.424584 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5b7797d578-tmg69"
Feb 02 10:58:08 crc kubenswrapper[4782]: I0202 10:58:08.930459 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-6b54d776c6-xrdvf"]
Feb 02 10:58:08 crc kubenswrapper[4782]: I0202 10:58:08.935679 4782 generic.go:334] "Generic (PLEG): container finished" podID="8eb720ee-de8d-42e4-b189-aa3d58478ab9" containerID="64ab0cbbeed3f64299d16361c7ebfd14f8590d54efef7b63fa8c440f0b029ef9" exitCode=0
Feb 02 10:58:08 crc kubenswrapper[4782]: I0202 10:58:08.935727 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8eb720ee-de8d-42e4-b189-aa3d58478ab9","Type":"ContainerDied","Data":"64ab0cbbeed3f64299d16361c7ebfd14f8590d54efef7b63fa8c440f0b029ef9"}
Feb 02 10:58:08 crc kubenswrapper[4782]: W0202 10:58:08.944630 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podea0f5849_bbf6_4184_8b8c_8e11cd8da661.slice/crio-4ba958d5e10b634a80212fd66245e6abcc7f4c4345be8d8ee5c1dfc0dd0a0985 WatchSource:0}: Error finding container 4ba958d5e10b634a80212fd66245e6abcc7f4c4345be8d8ee5c1dfc0dd0a0985: Status 404 returned error can't find the container with id 4ba958d5e10b634a80212fd66245e6abcc7f4c4345be8d8ee5c1dfc0dd0a0985
Feb 02 10:58:08 crc kubenswrapper[4782]: I0202 10:58:08.980504 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-5bbfd966d5-c6jc5"]
Feb 02 10:58:09 crc kubenswrapper[4782]: I0202 10:58:09.184722 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bb684768f-q57sq"]
Feb 02 10:58:09 crc kubenswrapper[4782]: I0202 10:58:09.283091 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5b7797d578-tmg69"]
Feb 02 10:58:09 crc kubenswrapper[4782]: I0202 10:58:09.955507 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5b7797d578-tmg69" event={"ID":"faa1074a-6af5-41a7-bfe0-0dc771e9dbf0","Type":"ContainerStarted","Data":"de993ad71b2389fa8f527a4a099b49faf994e7a1a4f1e91b9ec465c30000ae3f"}
Feb 02 10:58:09 crc kubenswrapper[4782]: I0202 10:58:09.956047 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5b7797d578-tmg69" event={"ID":"faa1074a-6af5-41a7-bfe0-0dc771e9dbf0","Type":"ContainerStarted","Data":"985a045376b8765a9f6e8767fddd038e288b28f20d40fab7634f3c8194dfd573"}
Feb 02 10:58:09 crc kubenswrapper[4782]: I0202 10:58:09.956075 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5b7797d578-tmg69"
Feb 02 10:58:09 crc kubenswrapper[4782]: I0202 10:58:09.956088 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5b7797d578-tmg69" event={"ID":"faa1074a-6af5-41a7-bfe0-0dc771e9dbf0","Type":"ContainerStarted","Data":"486044541d5c070266883b9d8c5a598bb41438b6bc2f68afedb4cd643ff3c9ee"}
Feb 02 10:58:09 crc kubenswrapper[4782]: I0202 10:58:09.958232 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5bbfd966d5-c6jc5" event={"ID":"141e9d68-e6ef-441d-aede-3bb1fdcc4d5f","Type":"ContainerStarted","Data":"5c3401638f23752d95df9b5b67a3bf8e7509f28b511cb239856400db8f006025"}
event={"ID":"141e9d68-e6ef-441d-aede-3bb1fdcc4d5f","Type":"ContainerStarted","Data":"5c3401638f23752d95df9b5b67a3bf8e7509f28b511cb239856400db8f006025"} Feb 02 10:58:09 crc kubenswrapper[4782]: I0202 10:58:09.962617 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6b54d776c6-xrdvf" event={"ID":"ea0f5849-bbf6-4184-8b8c-8e11cd8da661","Type":"ContainerStarted","Data":"4ba958d5e10b634a80212fd66245e6abcc7f4c4345be8d8ee5c1dfc0dd0a0985"} Feb 02 10:58:09 crc kubenswrapper[4782]: I0202 10:58:09.965874 4782 generic.go:334] "Generic (PLEG): container finished" podID="bf4fe919-15fe-4478-be0f-8e3bf00147b4" containerID="e47203a1a44b3b88fecb28ffdf42000d4b85a4d8f915c7dc05cd21438f5304c4" exitCode=0 Feb 02 10:58:09 crc kubenswrapper[4782]: I0202 10:58:09.966035 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-rvrqj" event={"ID":"bf4fe919-15fe-4478-be0f-8e3bf00147b4","Type":"ContainerDied","Data":"e47203a1a44b3b88fecb28ffdf42000d4b85a4d8f915c7dc05cd21438f5304c4"} Feb 02 10:58:09 crc kubenswrapper[4782]: I0202 10:58:09.967402 4782 generic.go:334] "Generic (PLEG): container finished" podID="b226dd37-b5b5-4514-9495-944db6e760ed" containerID="fd4e9ce89e9962fa17cc57f028ec76e94e318eabc07d088e7fecd0989f0c912c" exitCode=0 Feb 02 10:58:09 crc kubenswrapper[4782]: I0202 10:58:09.967447 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb684768f-q57sq" event={"ID":"b226dd37-b5b5-4514-9495-944db6e760ed","Type":"ContainerDied","Data":"fd4e9ce89e9962fa17cc57f028ec76e94e318eabc07d088e7fecd0989f0c912c"} Feb 02 10:58:09 crc kubenswrapper[4782]: I0202 10:58:09.967468 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb684768f-q57sq" event={"ID":"b226dd37-b5b5-4514-9495-944db6e760ed","Type":"ContainerStarted","Data":"42049f171b77a573283352f44491fb48841eb34d9cc4039ea25a8c1b150ccf44"} Feb 02 10:58:10 crc kubenswrapper[4782]: I0202 10:58:10.001736 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-5b7797d578-tmg69" podStartSLOduration=3.001716502 podStartE2EDuration="3.001716502s" podCreationTimestamp="2026-02-02 10:58:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:58:09.983028846 +0000 UTC m=+1169.867221552" watchObservedRunningTime="2026-02-02 10:58:10.001716502 +0000 UTC m=+1169.885909218" Feb 02 10:58:10 crc kubenswrapper[4782]: I0202 10:58:10.982138 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5b7797d578-tmg69" Feb 02 10:58:11 crc kubenswrapper[4782]: I0202 10:58:11.101127 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-77c4d8f8d8-7qmjv"] Feb 02 10:58:11 crc kubenswrapper[4782]: I0202 10:58:11.103384 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-77c4d8f8d8-7qmjv" Feb 02 10:58:11 crc kubenswrapper[4782]: I0202 10:58:11.106306 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Feb 02 10:58:11 crc kubenswrapper[4782]: I0202 10:58:11.107546 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Feb 02 10:58:11 crc kubenswrapper[4782]: I0202 10:58:11.144554 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-77c4d8f8d8-7qmjv"] Feb 02 10:58:11 crc kubenswrapper[4782]: I0202 10:58:11.175615 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52b9ad9f-f95d-4839-9531-4f0f11ca86ff-config-data\") pod \"barbican-api-77c4d8f8d8-7qmjv\" (UID: \"52b9ad9f-f95d-4839-9531-4f0f11ca86ff\") " pod="openstack/barbican-api-77c4d8f8d8-7qmjv" Feb 02 10:58:11 crc kubenswrapper[4782]: I0202 10:58:11.175711 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/52b9ad9f-f95d-4839-9531-4f0f11ca86ff-public-tls-certs\") pod \"barbican-api-77c4d8f8d8-7qmjv\" (UID: \"52b9ad9f-f95d-4839-9531-4f0f11ca86ff\") " pod="openstack/barbican-api-77c4d8f8d8-7qmjv" Feb 02 10:58:11 crc kubenswrapper[4782]: I0202 10:58:11.175742 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/52b9ad9f-f95d-4839-9531-4f0f11ca86ff-config-data-custom\") pod \"barbican-api-77c4d8f8d8-7qmjv\" (UID: \"52b9ad9f-f95d-4839-9531-4f0f11ca86ff\") " pod="openstack/barbican-api-77c4d8f8d8-7qmjv" Feb 02 10:58:11 crc kubenswrapper[4782]: I0202 10:58:11.175777 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98nrz\" (UniqueName: \"kubernetes.io/projected/52b9ad9f-f95d-4839-9531-4f0f11ca86ff-kube-api-access-98nrz\") pod \"barbican-api-77c4d8f8d8-7qmjv\" (UID: \"52b9ad9f-f95d-4839-9531-4f0f11ca86ff\") " pod="openstack/barbican-api-77c4d8f8d8-7qmjv" Feb 02 10:58:11 crc kubenswrapper[4782]: I0202 10:58:11.175843 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/52b9ad9f-f95d-4839-9531-4f0f11ca86ff-internal-tls-certs\") pod \"barbican-api-77c4d8f8d8-7qmjv\" (UID: \"52b9ad9f-f95d-4839-9531-4f0f11ca86ff\") " pod="openstack/barbican-api-77c4d8f8d8-7qmjv" Feb 02 10:58:11 crc kubenswrapper[4782]: I0202 10:58:11.175870 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52b9ad9f-f95d-4839-9531-4f0f11ca86ff-combined-ca-bundle\") pod \"barbican-api-77c4d8f8d8-7qmjv\" (UID: \"52b9ad9f-f95d-4839-9531-4f0f11ca86ff\") " pod="openstack/barbican-api-77c4d8f8d8-7qmjv" Feb 02 10:58:11 crc kubenswrapper[4782]: I0202 10:58:11.175889 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/52b9ad9f-f95d-4839-9531-4f0f11ca86ff-logs\") pod \"barbican-api-77c4d8f8d8-7qmjv\" (UID: \"52b9ad9f-f95d-4839-9531-4f0f11ca86ff\") " pod="openstack/barbican-api-77c4d8f8d8-7qmjv" Feb 02 10:58:11 crc kubenswrapper[4782]: I0202 10:58:11.277692 4782 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/52b9ad9f-f95d-4839-9531-4f0f11ca86ff-internal-tls-certs\") pod \"barbican-api-77c4d8f8d8-7qmjv\" (UID: \"52b9ad9f-f95d-4839-9531-4f0f11ca86ff\") " pod="openstack/barbican-api-77c4d8f8d8-7qmjv" Feb 02 10:58:11 crc kubenswrapper[4782]: I0202 10:58:11.277750 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52b9ad9f-f95d-4839-9531-4f0f11ca86ff-combined-ca-bundle\") pod \"barbican-api-77c4d8f8d8-7qmjv\" (UID: \"52b9ad9f-f95d-4839-9531-4f0f11ca86ff\") " pod="openstack/barbican-api-77c4d8f8d8-7qmjv" Feb 02 10:58:11 crc kubenswrapper[4782]: I0202 10:58:11.277775 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/52b9ad9f-f95d-4839-9531-4f0f11ca86ff-logs\") pod \"barbican-api-77c4d8f8d8-7qmjv\" (UID: \"52b9ad9f-f95d-4839-9531-4f0f11ca86ff\") " pod="openstack/barbican-api-77c4d8f8d8-7qmjv" Feb 02 10:58:11 crc kubenswrapper[4782]: I0202 10:58:11.277819 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52b9ad9f-f95d-4839-9531-4f0f11ca86ff-config-data\") pod \"barbican-api-77c4d8f8d8-7qmjv\" (UID: \"52b9ad9f-f95d-4839-9531-4f0f11ca86ff\") " pod="openstack/barbican-api-77c4d8f8d8-7qmjv" Feb 02 10:58:11 crc kubenswrapper[4782]: I0202 10:58:11.277851 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/52b9ad9f-f95d-4839-9531-4f0f11ca86ff-public-tls-certs\") pod \"barbican-api-77c4d8f8d8-7qmjv\" (UID: \"52b9ad9f-f95d-4839-9531-4f0f11ca86ff\") " pod="openstack/barbican-api-77c4d8f8d8-7qmjv" Feb 02 10:58:11 crc kubenswrapper[4782]: I0202 10:58:11.277873 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/52b9ad9f-f95d-4839-9531-4f0f11ca86ff-config-data-custom\") pod \"barbican-api-77c4d8f8d8-7qmjv\" (UID: \"52b9ad9f-f95d-4839-9531-4f0f11ca86ff\") " pod="openstack/barbican-api-77c4d8f8d8-7qmjv" Feb 02 10:58:11 crc kubenswrapper[4782]: I0202 10:58:11.278240 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98nrz\" (UniqueName: \"kubernetes.io/projected/52b9ad9f-f95d-4839-9531-4f0f11ca86ff-kube-api-access-98nrz\") pod \"barbican-api-77c4d8f8d8-7qmjv\" (UID: \"52b9ad9f-f95d-4839-9531-4f0f11ca86ff\") " pod="openstack/barbican-api-77c4d8f8d8-7qmjv" Feb 02 10:58:11 crc kubenswrapper[4782]: I0202 10:58:11.290674 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/52b9ad9f-f95d-4839-9531-4f0f11ca86ff-logs\") pod \"barbican-api-77c4d8f8d8-7qmjv\" (UID: \"52b9ad9f-f95d-4839-9531-4f0f11ca86ff\") " pod="openstack/barbican-api-77c4d8f8d8-7qmjv" Feb 02 10:58:11 crc kubenswrapper[4782]: I0202 10:58:11.300564 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52b9ad9f-f95d-4839-9531-4f0f11ca86ff-combined-ca-bundle\") pod \"barbican-api-77c4d8f8d8-7qmjv\" (UID: \"52b9ad9f-f95d-4839-9531-4f0f11ca86ff\") " pod="openstack/barbican-api-77c4d8f8d8-7qmjv" Feb 02 10:58:11 crc kubenswrapper[4782]: I0202 10:58:11.320950 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-98nrz\" (UniqueName: \"kubernetes.io/projected/52b9ad9f-f95d-4839-9531-4f0f11ca86ff-kube-api-access-98nrz\") pod \"barbican-api-77c4d8f8d8-7qmjv\" (UID: \"52b9ad9f-f95d-4839-9531-4f0f11ca86ff\") " pod="openstack/barbican-api-77c4d8f8d8-7qmjv" Feb 02 10:58:11 crc kubenswrapper[4782]: I0202 10:58:11.321789 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/52b9ad9f-f95d-4839-9531-4f0f11ca86ff-public-tls-certs\") pod \"barbican-api-77c4d8f8d8-7qmjv\" (UID: \"52b9ad9f-f95d-4839-9531-4f0f11ca86ff\") " pod="openstack/barbican-api-77c4d8f8d8-7qmjv" Feb 02 10:58:11 crc kubenswrapper[4782]: I0202 10:58:11.322718 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/52b9ad9f-f95d-4839-9531-4f0f11ca86ff-config-data-custom\") pod \"barbican-api-77c4d8f8d8-7qmjv\" (UID: \"52b9ad9f-f95d-4839-9531-4f0f11ca86ff\") " pod="openstack/barbican-api-77c4d8f8d8-7qmjv" Feb 02 10:58:11 crc kubenswrapper[4782]: I0202 10:58:11.323714 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52b9ad9f-f95d-4839-9531-4f0f11ca86ff-config-data\") pod \"barbican-api-77c4d8f8d8-7qmjv\" (UID: \"52b9ad9f-f95d-4839-9531-4f0f11ca86ff\") " pod="openstack/barbican-api-77c4d8f8d8-7qmjv" Feb 02 10:58:11 crc kubenswrapper[4782]: I0202 10:58:11.339829 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/52b9ad9f-f95d-4839-9531-4f0f11ca86ff-internal-tls-certs\") pod \"barbican-api-77c4d8f8d8-7qmjv\" (UID: \"52b9ad9f-f95d-4839-9531-4f0f11ca86ff\") " pod="openstack/barbican-api-77c4d8f8d8-7qmjv" Feb 02 10:58:11 crc kubenswrapper[4782]: I0202 10:58:11.534542 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-77c4d8f8d8-7qmjv" Feb 02 10:58:11 crc kubenswrapper[4782]: I0202 10:58:11.727658 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-rvrqj" Feb 02 10:58:11 crc kubenswrapper[4782]: I0202 10:58:11.802744 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf4fe919-15fe-4478-be0f-8e3bf00147b4-combined-ca-bundle\") pod \"bf4fe919-15fe-4478-be0f-8e3bf00147b4\" (UID: \"bf4fe919-15fe-4478-be0f-8e3bf00147b4\") " Feb 02 10:58:11 crc kubenswrapper[4782]: I0202 10:58:11.802880 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cpd9r\" (UniqueName: \"kubernetes.io/projected/bf4fe919-15fe-4478-be0f-8e3bf00147b4-kube-api-access-cpd9r\") pod \"bf4fe919-15fe-4478-be0f-8e3bf00147b4\" (UID: \"bf4fe919-15fe-4478-be0f-8e3bf00147b4\") " Feb 02 10:58:11 crc kubenswrapper[4782]: I0202 10:58:11.802938 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bf4fe919-15fe-4478-be0f-8e3bf00147b4-scripts\") pod \"bf4fe919-15fe-4478-be0f-8e3bf00147b4\" (UID: \"bf4fe919-15fe-4478-be0f-8e3bf00147b4\") " Feb 02 10:58:11 crc kubenswrapper[4782]: I0202 10:58:11.802990 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bf4fe919-15fe-4478-be0f-8e3bf00147b4-etc-machine-id\") pod \"bf4fe919-15fe-4478-be0f-8e3bf00147b4\" (UID: \"bf4fe919-15fe-4478-be0f-8e3bf00147b4\") " Feb 02 10:58:11 crc kubenswrapper[4782]: I0202 10:58:11.803010 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf4fe919-15fe-4478-be0f-8e3bf00147b4-config-data\") pod \"bf4fe919-15fe-4478-be0f-8e3bf00147b4\" (UID: \"bf4fe919-15fe-4478-be0f-8e3bf00147b4\") " Feb 02 10:58:11 crc kubenswrapper[4782]: I0202 10:58:11.803097 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/bf4fe919-15fe-4478-be0f-8e3bf00147b4-db-sync-config-data\") pod \"bf4fe919-15fe-4478-be0f-8e3bf00147b4\" (UID: \"bf4fe919-15fe-4478-be0f-8e3bf00147b4\") " Feb 02 10:58:11 crc kubenswrapper[4782]: I0202 10:58:11.804309 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bf4fe919-15fe-4478-be0f-8e3bf00147b4-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "bf4fe919-15fe-4478-be0f-8e3bf00147b4" (UID: "bf4fe919-15fe-4478-be0f-8e3bf00147b4"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 10:58:11 crc kubenswrapper[4782]: I0202 10:58:11.807740 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf4fe919-15fe-4478-be0f-8e3bf00147b4-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "bf4fe919-15fe-4478-be0f-8e3bf00147b4" (UID: "bf4fe919-15fe-4478-be0f-8e3bf00147b4"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:58:11 crc kubenswrapper[4782]: I0202 10:58:11.809230 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf4fe919-15fe-4478-be0f-8e3bf00147b4-kube-api-access-cpd9r" (OuterVolumeSpecName: "kube-api-access-cpd9r") pod "bf4fe919-15fe-4478-be0f-8e3bf00147b4" (UID: "bf4fe919-15fe-4478-be0f-8e3bf00147b4"). InnerVolumeSpecName "kube-api-access-cpd9r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:58:11 crc kubenswrapper[4782]: I0202 10:58:11.810800 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf4fe919-15fe-4478-be0f-8e3bf00147b4-scripts" (OuterVolumeSpecName: "scripts") pod "bf4fe919-15fe-4478-be0f-8e3bf00147b4" (UID: "bf4fe919-15fe-4478-be0f-8e3bf00147b4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:58:11 crc kubenswrapper[4782]: I0202 10:58:11.853589 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf4fe919-15fe-4478-be0f-8e3bf00147b4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bf4fe919-15fe-4478-be0f-8e3bf00147b4" (UID: "bf4fe919-15fe-4478-be0f-8e3bf00147b4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:58:11 crc kubenswrapper[4782]: I0202 10:58:11.884892 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf4fe919-15fe-4478-be0f-8e3bf00147b4-config-data" (OuterVolumeSpecName: "config-data") pod "bf4fe919-15fe-4478-be0f-8e3bf00147b4" (UID: "bf4fe919-15fe-4478-be0f-8e3bf00147b4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:58:11 crc kubenswrapper[4782]: I0202 10:58:11.905836 4782 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bf4fe919-15fe-4478-be0f-8e3bf00147b4-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 02 10:58:11 crc kubenswrapper[4782]: I0202 10:58:11.905878 4782 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf4fe919-15fe-4478-be0f-8e3bf00147b4-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 10:58:11 crc kubenswrapper[4782]: I0202 10:58:11.905891 4782 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/bf4fe919-15fe-4478-be0f-8e3bf00147b4-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 10:58:11 crc kubenswrapper[4782]: I0202 10:58:11.905902 4782 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf4fe919-15fe-4478-be0f-8e3bf00147b4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 10:58:11 crc kubenswrapper[4782]: I0202 10:58:11.905943 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cpd9r\" (UniqueName: \"kubernetes.io/projected/bf4fe919-15fe-4478-be0f-8e3bf00147b4-kube-api-access-cpd9r\") on node \"crc\" DevicePath \"\"" Feb 02 10:58:11 crc kubenswrapper[4782]: I0202 10:58:11.905957 4782 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bf4fe919-15fe-4478-be0f-8e3bf00147b4-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 10:58:11 crc kubenswrapper[4782]: I0202 10:58:11.992892 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5bbfd966d5-c6jc5" event={"ID":"141e9d68-e6ef-441d-aede-3bb1fdcc4d5f","Type":"ContainerStarted","Data":"c4b7a9fca7bbe65ad62776dcc2946b55ab5068890b3eed0e7ba1c4e7cd70b780"} Feb 02 10:58:11 crc kubenswrapper[4782]: I0202 10:58:11.992931 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5bbfd966d5-c6jc5" 
event={"ID":"141e9d68-e6ef-441d-aede-3bb1fdcc4d5f","Type":"ContainerStarted","Data":"691965a28ee6d33d5f274ea50fd745ad1d0d692a3a18694fed1901bca1389b85"} Feb 02 10:58:11 crc kubenswrapper[4782]: I0202 10:58:11.994720 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6b54d776c6-xrdvf" event={"ID":"ea0f5849-bbf6-4184-8b8c-8e11cd8da661","Type":"ContainerStarted","Data":"9ffe45871c20724a52164e96ffc54f08b31fbb9784845d325e9355ca338db887"} Feb 02 10:58:11 crc kubenswrapper[4782]: I0202 10:58:11.994757 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6b54d776c6-xrdvf" event={"ID":"ea0f5849-bbf6-4184-8b8c-8e11cd8da661","Type":"ContainerStarted","Data":"ded6bfd32cc21311a5b7d2538d7c1590f1501ff96c1a6faf86e998fabc099321"} Feb 02 10:58:11 crc kubenswrapper[4782]: I0202 10:58:11.998219 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-rvrqj" event={"ID":"bf4fe919-15fe-4478-be0f-8e3bf00147b4","Type":"ContainerDied","Data":"2b88f70f23d6438ad4880535e90d03f7ddcd1e6596512bcb845cbde82cf71a29"} Feb 02 10:58:11 crc kubenswrapper[4782]: I0202 10:58:11.998240 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2b88f70f23d6438ad4880535e90d03f7ddcd1e6596512bcb845cbde82cf71a29" Feb 02 10:58:11 crc kubenswrapper[4782]: I0202 10:58:11.998253 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-rvrqj" Feb 02 10:58:12 crc kubenswrapper[4782]: I0202 10:58:12.001805 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb684768f-q57sq" event={"ID":"b226dd37-b5b5-4514-9495-944db6e760ed","Type":"ContainerStarted","Data":"69e2e8d1b7e676b9d7aaaa32e112116126f5e856b2700909a615b248357c5001"} Feb 02 10:58:12 crc kubenswrapper[4782]: I0202 10:58:12.001851 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6bb684768f-q57sq" Feb 02 10:58:12 crc kubenswrapper[4782]: I0202 10:58:12.014422 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-5bbfd966d5-c6jc5" podStartSLOduration=3.078363129 podStartE2EDuration="5.014405148s" podCreationTimestamp="2026-02-02 10:58:07 +0000 UTC" firstStartedPulling="2026-02-02 10:58:09.000208383 +0000 UTC m=+1168.884401109" lastFinishedPulling="2026-02-02 10:58:10.936250412 +0000 UTC m=+1170.820443128" observedRunningTime="2026-02-02 10:58:12.007516871 +0000 UTC m=+1171.891709587" watchObservedRunningTime="2026-02-02 10:58:12.014405148 +0000 UTC m=+1171.898597864" Feb 02 10:58:12 crc kubenswrapper[4782]: I0202 10:58:12.037359 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6bb684768f-q57sq" podStartSLOduration=5.037339056 podStartE2EDuration="5.037339056s" podCreationTimestamp="2026-02-02 10:58:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:58:12.024154348 +0000 UTC m=+1171.908347074" watchObservedRunningTime="2026-02-02 10:58:12.037339056 +0000 UTC m=+1171.921531772" Feb 02 10:58:12 crc kubenswrapper[4782]: I0202 10:58:12.060885 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-6b54d776c6-xrdvf" podStartSLOduration=3.071554824 podStartE2EDuration="5.06086911s" podCreationTimestamp="2026-02-02 10:58:07 +0000 UTC" firstStartedPulling="2026-02-02 
10:58:08.949082057 +0000 UTC m=+1168.833274773" lastFinishedPulling="2026-02-02 10:58:10.938396343 +0000 UTC m=+1170.822589059" observedRunningTime="2026-02-02 10:58:12.054765635 +0000 UTC m=+1171.938958351" watchObservedRunningTime="2026-02-02 10:58:12.06086911 +0000 UTC m=+1171.945061826" Feb 02 10:58:12 crc kubenswrapper[4782]: I0202 10:58:12.098579 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-77c4d8f8d8-7qmjv"] Feb 02 10:58:12 crc kubenswrapper[4782]: W0202 10:58:12.109615 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod52b9ad9f_f95d_4839_9531_4f0f11ca86ff.slice/crio-faf5160538972d662629346cf24cce84d9c1ba214d63c00e9170592ec413eeae WatchSource:0}: Error finding container faf5160538972d662629346cf24cce84d9c1ba214d63c00e9170592ec413eeae: Status 404 returned error can't find the container with id faf5160538972d662629346cf24cce84d9c1ba214d63c00e9170592ec413eeae Feb 02 10:58:12 crc kubenswrapper[4782]: I0202 10:58:12.404777 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 02 10:58:12 crc kubenswrapper[4782]: E0202 10:58:12.405378 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf4fe919-15fe-4478-be0f-8e3bf00147b4" containerName="cinder-db-sync" Feb 02 10:58:12 crc kubenswrapper[4782]: I0202 10:58:12.405393 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf4fe919-15fe-4478-be0f-8e3bf00147b4" containerName="cinder-db-sync" Feb 02 10:58:12 crc kubenswrapper[4782]: I0202 10:58:12.405563 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf4fe919-15fe-4478-be0f-8e3bf00147b4" containerName="cinder-db-sync" Feb 02 10:58:12 crc kubenswrapper[4782]: I0202 10:58:12.406452 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 02 10:58:12 crc kubenswrapper[4782]: I0202 10:58:12.411407 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 02 10:58:12 crc kubenswrapper[4782]: I0202 10:58:12.411770 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 02 10:58:12 crc kubenswrapper[4782]: I0202 10:58:12.414514 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 02 10:58:12 crc kubenswrapper[4782]: I0202 10:58:12.415758 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-l47mf" Feb 02 10:58:12 crc kubenswrapper[4782]: I0202 10:58:12.415814 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 02 10:58:12 crc kubenswrapper[4782]: I0202 10:58:12.515609 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/85f31fbf-8fcd-4364-a0a0-f489b3cdca7f-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"85f31fbf-8fcd-4364-a0a0-f489b3cdca7f\") " pod="openstack/cinder-scheduler-0" Feb 02 10:58:12 crc kubenswrapper[4782]: I0202 10:58:12.515683 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-htwrb\" (UniqueName: \"kubernetes.io/projected/85f31fbf-8fcd-4364-a0a0-f489b3cdca7f-kube-api-access-htwrb\") pod \"cinder-scheduler-0\" (UID: \"85f31fbf-8fcd-4364-a0a0-f489b3cdca7f\") " pod="openstack/cinder-scheduler-0" Feb 02 10:58:12 crc kubenswrapper[4782]: I0202 10:58:12.515742 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85f31fbf-8fcd-4364-a0a0-f489b3cdca7f-config-data\") pod \"cinder-scheduler-0\" (UID: \"85f31fbf-8fcd-4364-a0a0-f489b3cdca7f\") " pod="openstack/cinder-scheduler-0" Feb 02 10:58:12 crc kubenswrapper[4782]: I0202 10:58:12.515761 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/85f31fbf-8fcd-4364-a0a0-f489b3cdca7f-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"85f31fbf-8fcd-4364-a0a0-f489b3cdca7f\") " pod="openstack/cinder-scheduler-0" Feb 02 10:58:12 crc kubenswrapper[4782]: I0202 10:58:12.515809 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85f31fbf-8fcd-4364-a0a0-f489b3cdca7f-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"85f31fbf-8fcd-4364-a0a0-f489b3cdca7f\") " pod="openstack/cinder-scheduler-0" Feb 02 10:58:12 crc kubenswrapper[4782]: I0202 10:58:12.515823 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/85f31fbf-8fcd-4364-a0a0-f489b3cdca7f-scripts\") pod \"cinder-scheduler-0\" (UID: \"85f31fbf-8fcd-4364-a0a0-f489b3cdca7f\") " pod="openstack/cinder-scheduler-0" Feb 02 10:58:12 crc kubenswrapper[4782]: I0202 10:58:12.547634 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bb684768f-q57sq"] Feb 02 10:58:12 crc kubenswrapper[4782]: I0202 10:58:12.620502 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/85f31fbf-8fcd-4364-a0a0-f489b3cdca7f-config-data\") pod \"cinder-scheduler-0\" (UID: \"85f31fbf-8fcd-4364-a0a0-f489b3cdca7f\") " pod="openstack/cinder-scheduler-0" Feb 02 10:58:12 crc kubenswrapper[4782]: I0202 10:58:12.620544 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/85f31fbf-8fcd-4364-a0a0-f489b3cdca7f-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"85f31fbf-8fcd-4364-a0a0-f489b3cdca7f\") " pod="openstack/cinder-scheduler-0" Feb 02 10:58:12 crc kubenswrapper[4782]: I0202 10:58:12.620597 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85f31fbf-8fcd-4364-a0a0-f489b3cdca7f-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"85f31fbf-8fcd-4364-a0a0-f489b3cdca7f\") " pod="openstack/cinder-scheduler-0" Feb 02 10:58:12 crc kubenswrapper[4782]: I0202 10:58:12.620614 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/85f31fbf-8fcd-4364-a0a0-f489b3cdca7f-scripts\") pod \"cinder-scheduler-0\" (UID: \"85f31fbf-8fcd-4364-a0a0-f489b3cdca7f\") " pod="openstack/cinder-scheduler-0" Feb 02 10:58:12 crc kubenswrapper[4782]: I0202 10:58:12.620689 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/85f31fbf-8fcd-4364-a0a0-f489b3cdca7f-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"85f31fbf-8fcd-4364-a0a0-f489b3cdca7f\") " pod="openstack/cinder-scheduler-0" Feb 02 10:58:12 crc kubenswrapper[4782]: I0202 10:58:12.620715 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-htwrb\" (UniqueName: \"kubernetes.io/projected/85f31fbf-8fcd-4364-a0a0-f489b3cdca7f-kube-api-access-htwrb\") pod \"cinder-scheduler-0\" (UID: \"85f31fbf-8fcd-4364-a0a0-f489b3cdca7f\") " pod="openstack/cinder-scheduler-0" Feb 02 10:58:12 crc kubenswrapper[4782]: I0202 10:58:12.621001 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/85f31fbf-8fcd-4364-a0a0-f489b3cdca7f-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"85f31fbf-8fcd-4364-a0a0-f489b3cdca7f\") " pod="openstack/cinder-scheduler-0" Feb 02 10:58:12 crc kubenswrapper[4782]: I0202 10:58:12.630916 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85f31fbf-8fcd-4364-a0a0-f489b3cdca7f-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"85f31fbf-8fcd-4364-a0a0-f489b3cdca7f\") " pod="openstack/cinder-scheduler-0" Feb 02 10:58:12 crc kubenswrapper[4782]: I0202 10:58:12.638350 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/85f31fbf-8fcd-4364-a0a0-f489b3cdca7f-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"85f31fbf-8fcd-4364-a0a0-f489b3cdca7f\") " pod="openstack/cinder-scheduler-0" Feb 02 10:58:12 crc kubenswrapper[4782]: I0202 10:58:12.648395 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85f31fbf-8fcd-4364-a0a0-f489b3cdca7f-config-data\") pod \"cinder-scheduler-0\" (UID: \"85f31fbf-8fcd-4364-a0a0-f489b3cdca7f\") " pod="openstack/cinder-scheduler-0" Feb 02 10:58:12 crc kubenswrapper[4782]: I0202 10:58:12.657043 4782 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/85f31fbf-8fcd-4364-a0a0-f489b3cdca7f-scripts\") pod \"cinder-scheduler-0\" (UID: \"85f31fbf-8fcd-4364-a0a0-f489b3cdca7f\") " pod="openstack/cinder-scheduler-0" Feb 02 10:58:12 crc kubenswrapper[4782]: I0202 10:58:12.668699 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6d97fcdd8f-lp4zt"] Feb 02 10:58:12 crc kubenswrapper[4782]: I0202 10:58:12.670484 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d97fcdd8f-lp4zt" Feb 02 10:58:12 crc kubenswrapper[4782]: I0202 10:58:12.694691 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-htwrb\" (UniqueName: \"kubernetes.io/projected/85f31fbf-8fcd-4364-a0a0-f489b3cdca7f-kube-api-access-htwrb\") pod \"cinder-scheduler-0\" (UID: \"85f31fbf-8fcd-4364-a0a0-f489b3cdca7f\") " pod="openstack/cinder-scheduler-0" Feb 02 10:58:12 crc kubenswrapper[4782]: I0202 10:58:12.703193 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6d97fcdd8f-lp4zt"] Feb 02 10:58:12 crc kubenswrapper[4782]: I0202 10:58:12.725882 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxbzb\" (UniqueName: \"kubernetes.io/projected/aeac5df4-fc17-4840-b777-4b20a71f603b-kube-api-access-cxbzb\") pod \"dnsmasq-dns-6d97fcdd8f-lp4zt\" (UID: \"aeac5df4-fc17-4840-b777-4b20a71f603b\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-lp4zt" Feb 02 10:58:12 crc kubenswrapper[4782]: I0202 10:58:12.725939 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aeac5df4-fc17-4840-b777-4b20a71f603b-config\") pod \"dnsmasq-dns-6d97fcdd8f-lp4zt\" (UID: \"aeac5df4-fc17-4840-b777-4b20a71f603b\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-lp4zt" Feb 02 10:58:12 crc kubenswrapper[4782]: I0202 10:58:12.725965 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aeac5df4-fc17-4840-b777-4b20a71f603b-dns-svc\") pod \"dnsmasq-dns-6d97fcdd8f-lp4zt\" (UID: \"aeac5df4-fc17-4840-b777-4b20a71f603b\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-lp4zt" Feb 02 10:58:12 crc kubenswrapper[4782]: I0202 10:58:12.726298 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/aeac5df4-fc17-4840-b777-4b20a71f603b-ovsdbserver-sb\") pod \"dnsmasq-dns-6d97fcdd8f-lp4zt\" (UID: \"aeac5df4-fc17-4840-b777-4b20a71f603b\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-lp4zt" Feb 02 10:58:12 crc kubenswrapper[4782]: I0202 10:58:12.726423 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/aeac5df4-fc17-4840-b777-4b20a71f603b-ovsdbserver-nb\") pod \"dnsmasq-dns-6d97fcdd8f-lp4zt\" (UID: \"aeac5df4-fc17-4840-b777-4b20a71f603b\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-lp4zt" Feb 02 10:58:12 crc kubenswrapper[4782]: I0202 10:58:12.733270 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 02 10:58:12 crc kubenswrapper[4782]: I0202 10:58:12.832606 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cxbzb\" (UniqueName: \"kubernetes.io/projected/aeac5df4-fc17-4840-b777-4b20a71f603b-kube-api-access-cxbzb\") pod \"dnsmasq-dns-6d97fcdd8f-lp4zt\" (UID: \"aeac5df4-fc17-4840-b777-4b20a71f603b\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-lp4zt" Feb 02 10:58:12 crc kubenswrapper[4782]: I0202 10:58:12.839512 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aeac5df4-fc17-4840-b777-4b20a71f603b-config\") pod \"dnsmasq-dns-6d97fcdd8f-lp4zt\" (UID: \"aeac5df4-fc17-4840-b777-4b20a71f603b\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-lp4zt" Feb 02 10:58:12 crc kubenswrapper[4782]: I0202 10:58:12.840103 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aeac5df4-fc17-4840-b777-4b20a71f603b-dns-svc\") pod \"dnsmasq-dns-6d97fcdd8f-lp4zt\" (UID: \"aeac5df4-fc17-4840-b777-4b20a71f603b\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-lp4zt" Feb 02 10:58:12 crc kubenswrapper[4782]: I0202 10:58:12.840354 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/aeac5df4-fc17-4840-b777-4b20a71f603b-ovsdbserver-sb\") pod \"dnsmasq-dns-6d97fcdd8f-lp4zt\" (UID: \"aeac5df4-fc17-4840-b777-4b20a71f603b\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-lp4zt" Feb 02 10:58:12 crc kubenswrapper[4782]: I0202 10:58:12.840651 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/aeac5df4-fc17-4840-b777-4b20a71f603b-ovsdbserver-nb\") pod \"dnsmasq-dns-6d97fcdd8f-lp4zt\" (UID: \"aeac5df4-fc17-4840-b777-4b20a71f603b\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-lp4zt" Feb 02 10:58:12 crc kubenswrapper[4782]: I0202 10:58:12.841862 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/aeac5df4-fc17-4840-b777-4b20a71f603b-ovsdbserver-nb\") pod \"dnsmasq-dns-6d97fcdd8f-lp4zt\" (UID: \"aeac5df4-fc17-4840-b777-4b20a71f603b\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-lp4zt" Feb 02 10:58:12 crc kubenswrapper[4782]: I0202 10:58:12.844761 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aeac5df4-fc17-4840-b777-4b20a71f603b-dns-svc\") pod \"dnsmasq-dns-6d97fcdd8f-lp4zt\" (UID: \"aeac5df4-fc17-4840-b777-4b20a71f603b\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-lp4zt" Feb 02 10:58:12 crc kubenswrapper[4782]: I0202 10:58:12.845337 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/aeac5df4-fc17-4840-b777-4b20a71f603b-ovsdbserver-sb\") pod \"dnsmasq-dns-6d97fcdd8f-lp4zt\" (UID: \"aeac5df4-fc17-4840-b777-4b20a71f603b\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-lp4zt" Feb 02 10:58:12 crc kubenswrapper[4782]: I0202 10:58:12.850258 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aeac5df4-fc17-4840-b777-4b20a71f603b-config\") pod \"dnsmasq-dns-6d97fcdd8f-lp4zt\" (UID: \"aeac5df4-fc17-4840-b777-4b20a71f603b\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-lp4zt" Feb 02 10:58:12 crc kubenswrapper[4782]: I0202 10:58:12.868339 4782 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 02 10:58:12 crc kubenswrapper[4782]: I0202 10:58:12.870197 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 02 10:58:12 crc kubenswrapper[4782]: I0202 10:58:12.874465 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cxbzb\" (UniqueName: \"kubernetes.io/projected/aeac5df4-fc17-4840-b777-4b20a71f603b-kube-api-access-cxbzb\") pod \"dnsmasq-dns-6d97fcdd8f-lp4zt\" (UID: \"aeac5df4-fc17-4840-b777-4b20a71f603b\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-lp4zt" Feb 02 10:58:12 crc kubenswrapper[4782]: I0202 10:58:12.877259 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 02 10:58:12 crc kubenswrapper[4782]: I0202 10:58:12.893182 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 02 10:58:12 crc kubenswrapper[4782]: I0202 10:58:12.942727 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/43699695-b676-4b62-8714-c01390804d91-logs\") pod \"cinder-api-0\" (UID: \"43699695-b676-4b62-8714-c01390804d91\") " pod="openstack/cinder-api-0" Feb 02 10:58:12 crc kubenswrapper[4782]: I0202 10:58:12.942835 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43699695-b676-4b62-8714-c01390804d91-config-data\") pod \"cinder-api-0\" (UID: \"43699695-b676-4b62-8714-c01390804d91\") " pod="openstack/cinder-api-0" Feb 02 10:58:12 crc kubenswrapper[4782]: I0202 10:58:12.942884 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qzgjd\" (UniqueName: \"kubernetes.io/projected/43699695-b676-4b62-8714-c01390804d91-kube-api-access-qzgjd\") pod \"cinder-api-0\" (UID: \"43699695-b676-4b62-8714-c01390804d91\") " pod="openstack/cinder-api-0" Feb 02 10:58:12 crc kubenswrapper[4782]: I0202 10:58:12.942965 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/43699695-b676-4b62-8714-c01390804d91-etc-machine-id\") pod \"cinder-api-0\" (UID: \"43699695-b676-4b62-8714-c01390804d91\") " pod="openstack/cinder-api-0" Feb 02 10:58:12 crc kubenswrapper[4782]: I0202 10:58:12.942989 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/43699695-b676-4b62-8714-c01390804d91-scripts\") pod \"cinder-api-0\" (UID: \"43699695-b676-4b62-8714-c01390804d91\") " pod="openstack/cinder-api-0" Feb 02 10:58:12 crc kubenswrapper[4782]: I0202 10:58:12.943028 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43699695-b676-4b62-8714-c01390804d91-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"43699695-b676-4b62-8714-c01390804d91\") " pod="openstack/cinder-api-0" Feb 02 10:58:12 crc kubenswrapper[4782]: I0202 10:58:12.943055 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/43699695-b676-4b62-8714-c01390804d91-config-data-custom\") pod \"cinder-api-0\" (UID: \"43699695-b676-4b62-8714-c01390804d91\") " 
pod="openstack/cinder-api-0" Feb 02 10:58:13 crc kubenswrapper[4782]: I0202 10:58:13.002842 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d97fcdd8f-lp4zt" Feb 02 10:58:13 crc kubenswrapper[4782]: I0202 10:58:13.024039 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-77c4d8f8d8-7qmjv" event={"ID":"52b9ad9f-f95d-4839-9531-4f0f11ca86ff","Type":"ContainerStarted","Data":"f288acbe921357fa3278c93e72e64296c1366f45e89f011ffdacc77b05974ad4"} Feb 02 10:58:13 crc kubenswrapper[4782]: I0202 10:58:13.024087 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-77c4d8f8d8-7qmjv" event={"ID":"52b9ad9f-f95d-4839-9531-4f0f11ca86ff","Type":"ContainerStarted","Data":"7b49f80f062f7887bfc3b6ca104586f8651572648e02a453ea0aa9c52e2f1126"} Feb 02 10:58:13 crc kubenswrapper[4782]: I0202 10:58:13.024100 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-77c4d8f8d8-7qmjv" event={"ID":"52b9ad9f-f95d-4839-9531-4f0f11ca86ff","Type":"ContainerStarted","Data":"faf5160538972d662629346cf24cce84d9c1ba214d63c00e9170592ec413eeae"} Feb 02 10:58:13 crc kubenswrapper[4782]: I0202 10:58:13.058123 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/43699695-b676-4b62-8714-c01390804d91-logs\") pod \"cinder-api-0\" (UID: \"43699695-b676-4b62-8714-c01390804d91\") " pod="openstack/cinder-api-0" Feb 02 10:58:13 crc kubenswrapper[4782]: I0202 10:58:13.058557 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43699695-b676-4b62-8714-c01390804d91-config-data\") pod \"cinder-api-0\" (UID: \"43699695-b676-4b62-8714-c01390804d91\") " pod="openstack/cinder-api-0" Feb 02 10:58:13 crc kubenswrapper[4782]: I0202 10:58:13.058631 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qzgjd\" (UniqueName: \"kubernetes.io/projected/43699695-b676-4b62-8714-c01390804d91-kube-api-access-qzgjd\") pod \"cinder-api-0\" (UID: \"43699695-b676-4b62-8714-c01390804d91\") " pod="openstack/cinder-api-0" Feb 02 10:58:13 crc kubenswrapper[4782]: I0202 10:58:13.058785 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/43699695-b676-4b62-8714-c01390804d91-etc-machine-id\") pod \"cinder-api-0\" (UID: \"43699695-b676-4b62-8714-c01390804d91\") " pod="openstack/cinder-api-0" Feb 02 10:58:13 crc kubenswrapper[4782]: I0202 10:58:13.058865 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/43699695-b676-4b62-8714-c01390804d91-scripts\") pod \"cinder-api-0\" (UID: \"43699695-b676-4b62-8714-c01390804d91\") " pod="openstack/cinder-api-0" Feb 02 10:58:13 crc kubenswrapper[4782]: I0202 10:58:13.059019 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43699695-b676-4b62-8714-c01390804d91-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"43699695-b676-4b62-8714-c01390804d91\") " pod="openstack/cinder-api-0" Feb 02 10:58:13 crc kubenswrapper[4782]: I0202 10:58:13.059051 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/43699695-b676-4b62-8714-c01390804d91-config-data-custom\") pod 
\"cinder-api-0\" (UID: \"43699695-b676-4b62-8714-c01390804d91\") " pod="openstack/cinder-api-0" Feb 02 10:58:13 crc kubenswrapper[4782]: I0202 10:58:13.063279 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/43699695-b676-4b62-8714-c01390804d91-logs\") pod \"cinder-api-0\" (UID: \"43699695-b676-4b62-8714-c01390804d91\") " pod="openstack/cinder-api-0" Feb 02 10:58:13 crc kubenswrapper[4782]: I0202 10:58:13.064976 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43699695-b676-4b62-8714-c01390804d91-config-data\") pod \"cinder-api-0\" (UID: \"43699695-b676-4b62-8714-c01390804d91\") " pod="openstack/cinder-api-0" Feb 02 10:58:13 crc kubenswrapper[4782]: I0202 10:58:13.069087 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/43699695-b676-4b62-8714-c01390804d91-config-data-custom\") pod \"cinder-api-0\" (UID: \"43699695-b676-4b62-8714-c01390804d91\") " pod="openstack/cinder-api-0" Feb 02 10:58:13 crc kubenswrapper[4782]: I0202 10:58:13.071187 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/43699695-b676-4b62-8714-c01390804d91-etc-machine-id\") pod \"cinder-api-0\" (UID: \"43699695-b676-4b62-8714-c01390804d91\") " pod="openstack/cinder-api-0" Feb 02 10:58:13 crc kubenswrapper[4782]: I0202 10:58:13.076605 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/43699695-b676-4b62-8714-c01390804d91-scripts\") pod \"cinder-api-0\" (UID: \"43699695-b676-4b62-8714-c01390804d91\") " pod="openstack/cinder-api-0" Feb 02 10:58:13 crc kubenswrapper[4782]: I0202 10:58:13.083632 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43699695-b676-4b62-8714-c01390804d91-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"43699695-b676-4b62-8714-c01390804d91\") " pod="openstack/cinder-api-0" Feb 02 10:58:13 crc kubenswrapper[4782]: I0202 10:58:13.105144 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qzgjd\" (UniqueName: \"kubernetes.io/projected/43699695-b676-4b62-8714-c01390804d91-kube-api-access-qzgjd\") pod \"cinder-api-0\" (UID: \"43699695-b676-4b62-8714-c01390804d91\") " pod="openstack/cinder-api-0" Feb 02 10:58:13 crc kubenswrapper[4782]: I0202 10:58:13.216061 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 02 10:58:13 crc kubenswrapper[4782]: I0202 10:58:13.488782 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 02 10:58:13 crc kubenswrapper[4782]: I0202 10:58:13.659704 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6d97fcdd8f-lp4zt"] Feb 02 10:58:14 crc kubenswrapper[4782]: I0202 10:58:14.031053 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d97fcdd8f-lp4zt" event={"ID":"aeac5df4-fc17-4840-b777-4b20a71f603b","Type":"ContainerStarted","Data":"8c21bb8034d9faf7eb546bc39d481d9fb7112330466d208d373ff1d4cfc5503c"} Feb 02 10:58:14 crc kubenswrapper[4782]: I0202 10:58:14.032580 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6bb684768f-q57sq" podUID="b226dd37-b5b5-4514-9495-944db6e760ed" containerName="dnsmasq-dns" containerID="cri-o://69e2e8d1b7e676b9d7aaaa32e112116126f5e856b2700909a615b248357c5001" gracePeriod=10 Feb 02 10:58:14 crc kubenswrapper[4782]: I0202 10:58:14.032942 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"85f31fbf-8fcd-4364-a0a0-f489b3cdca7f","Type":"ContainerStarted","Data":"5a7279bfcc6fbedda5247577044693b3e9c719e1402b5cf6df02c6d805661e2a"} Feb 02 10:58:14 crc kubenswrapper[4782]: I0202 10:58:14.032975 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-77c4d8f8d8-7qmjv" Feb 02 10:58:14 crc kubenswrapper[4782]: I0202 10:58:14.032987 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-77c4d8f8d8-7qmjv" Feb 02 10:58:14 crc kubenswrapper[4782]: I0202 10:58:14.081251 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-77c4d8f8d8-7qmjv" podStartSLOduration=3.081228565 podStartE2EDuration="3.081228565s" podCreationTimestamp="2026-02-02 10:58:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:58:14.054485729 +0000 UTC m=+1173.938678465" watchObservedRunningTime="2026-02-02 10:58:14.081228565 +0000 UTC m=+1173.965421281" Feb 02 10:58:14 crc kubenswrapper[4782]: I0202 10:58:14.097721 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 02 10:58:14 crc kubenswrapper[4782]: I0202 10:58:14.869915 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 02 10:58:15 crc kubenswrapper[4782]: I0202 10:58:15.050473 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"43699695-b676-4b62-8714-c01390804d91","Type":"ContainerStarted","Data":"ea5226b97ad240049c9f39d23e381c57be0f9553f067d978ea59153b623b3d90"} Feb 02 10:58:15 crc kubenswrapper[4782]: I0202 10:58:15.126959 4782 generic.go:334] "Generic (PLEG): container finished" podID="aeac5df4-fc17-4840-b777-4b20a71f603b" containerID="235f00f5818c6de4755dcefb6a2d4359499a9277fdd4c9df3ba1b496dc87e676" exitCode=0 Feb 02 10:58:15 crc kubenswrapper[4782]: I0202 10:58:15.127048 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d97fcdd8f-lp4zt" event={"ID":"aeac5df4-fc17-4840-b777-4b20a71f603b","Type":"ContainerDied","Data":"235f00f5818c6de4755dcefb6a2d4359499a9277fdd4c9df3ba1b496dc87e676"} Feb 02 10:58:15 crc kubenswrapper[4782]: I0202 10:58:15.196828 4782 generic.go:334] "Generic (PLEG): container 
finished" podID="b226dd37-b5b5-4514-9495-944db6e760ed" containerID="69e2e8d1b7e676b9d7aaaa32e112116126f5e856b2700909a615b248357c5001" exitCode=0 Feb 02 10:58:15 crc kubenswrapper[4782]: I0202 10:58:15.197885 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb684768f-q57sq" event={"ID":"b226dd37-b5b5-4514-9495-944db6e760ed","Type":"ContainerDied","Data":"69e2e8d1b7e676b9d7aaaa32e112116126f5e856b2700909a615b248357c5001"} Feb 02 10:58:15 crc kubenswrapper[4782]: I0202 10:58:15.272523 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bb684768f-q57sq" Feb 02 10:58:15 crc kubenswrapper[4782]: I0202 10:58:15.335342 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x24sn\" (UniqueName: \"kubernetes.io/projected/b226dd37-b5b5-4514-9495-944db6e760ed-kube-api-access-x24sn\") pod \"b226dd37-b5b5-4514-9495-944db6e760ed\" (UID: \"b226dd37-b5b5-4514-9495-944db6e760ed\") " Feb 02 10:58:15 crc kubenswrapper[4782]: I0202 10:58:15.335394 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b226dd37-b5b5-4514-9495-944db6e760ed-dns-svc\") pod \"b226dd37-b5b5-4514-9495-944db6e760ed\" (UID: \"b226dd37-b5b5-4514-9495-944db6e760ed\") " Feb 02 10:58:15 crc kubenswrapper[4782]: I0202 10:58:15.335427 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b226dd37-b5b5-4514-9495-944db6e760ed-config\") pod \"b226dd37-b5b5-4514-9495-944db6e760ed\" (UID: \"b226dd37-b5b5-4514-9495-944db6e760ed\") " Feb 02 10:58:15 crc kubenswrapper[4782]: I0202 10:58:15.335484 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b226dd37-b5b5-4514-9495-944db6e760ed-ovsdbserver-sb\") pod \"b226dd37-b5b5-4514-9495-944db6e760ed\" (UID: \"b226dd37-b5b5-4514-9495-944db6e760ed\") " Feb 02 10:58:15 crc kubenswrapper[4782]: I0202 10:58:15.335534 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b226dd37-b5b5-4514-9495-944db6e760ed-ovsdbserver-nb\") pod \"b226dd37-b5b5-4514-9495-944db6e760ed\" (UID: \"b226dd37-b5b5-4514-9495-944db6e760ed\") " Feb 02 10:58:15 crc kubenswrapper[4782]: I0202 10:58:15.360165 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b226dd37-b5b5-4514-9495-944db6e760ed-kube-api-access-x24sn" (OuterVolumeSpecName: "kube-api-access-x24sn") pod "b226dd37-b5b5-4514-9495-944db6e760ed" (UID: "b226dd37-b5b5-4514-9495-944db6e760ed"). InnerVolumeSpecName "kube-api-access-x24sn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:58:15 crc kubenswrapper[4782]: I0202 10:58:15.437071 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x24sn\" (UniqueName: \"kubernetes.io/projected/b226dd37-b5b5-4514-9495-944db6e760ed-kube-api-access-x24sn\") on node \"crc\" DevicePath \"\"" Feb 02 10:58:15 crc kubenswrapper[4782]: I0202 10:58:15.467439 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b226dd37-b5b5-4514-9495-944db6e760ed-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b226dd37-b5b5-4514-9495-944db6e760ed" (UID: "b226dd37-b5b5-4514-9495-944db6e760ed"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:58:15 crc kubenswrapper[4782]: I0202 10:58:15.480202 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b226dd37-b5b5-4514-9495-944db6e760ed-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "b226dd37-b5b5-4514-9495-944db6e760ed" (UID: "b226dd37-b5b5-4514-9495-944db6e760ed"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:58:15 crc kubenswrapper[4782]: I0202 10:58:15.483093 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b226dd37-b5b5-4514-9495-944db6e760ed-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "b226dd37-b5b5-4514-9495-944db6e760ed" (UID: "b226dd37-b5b5-4514-9495-944db6e760ed"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:58:15 crc kubenswrapper[4782]: I0202 10:58:15.517415 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b226dd37-b5b5-4514-9495-944db6e760ed-config" (OuterVolumeSpecName: "config") pod "b226dd37-b5b5-4514-9495-944db6e760ed" (UID: "b226dd37-b5b5-4514-9495-944db6e760ed"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:58:15 crc kubenswrapper[4782]: I0202 10:58:15.540855 4782 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b226dd37-b5b5-4514-9495-944db6e760ed-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 02 10:58:15 crc kubenswrapper[4782]: I0202 10:58:15.540895 4782 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b226dd37-b5b5-4514-9495-944db6e760ed-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 02 10:58:15 crc kubenswrapper[4782]: I0202 10:58:15.540907 4782 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b226dd37-b5b5-4514-9495-944db6e760ed-config\") on node \"crc\" DevicePath \"\"" Feb 02 10:58:15 crc kubenswrapper[4782]: I0202 10:58:15.540916 4782 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b226dd37-b5b5-4514-9495-944db6e760ed-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 02 10:58:16 crc kubenswrapper[4782]: I0202 10:58:16.221221 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d97fcdd8f-lp4zt" event={"ID":"aeac5df4-fc17-4840-b777-4b20a71f603b","Type":"ContainerStarted","Data":"e8f8698969d1d3fb94fd61cad2a7db600e14b7bfd8f48ebf089d1152818d17de"} Feb 02 10:58:16 crc kubenswrapper[4782]: I0202 10:58:16.221515 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6d97fcdd8f-lp4zt" Feb 02 10:58:16 crc kubenswrapper[4782]: I0202 10:58:16.232338 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb684768f-q57sq" event={"ID":"b226dd37-b5b5-4514-9495-944db6e760ed","Type":"ContainerDied","Data":"42049f171b77a573283352f44491fb48841eb34d9cc4039ea25a8c1b150ccf44"} Feb 02 10:58:16 crc kubenswrapper[4782]: I0202 10:58:16.232660 4782 scope.go:117] "RemoveContainer" containerID="69e2e8d1b7e676b9d7aaaa32e112116126f5e856b2700909a615b248357c5001" Feb 02 10:58:16 crc kubenswrapper[4782]: I0202 10:58:16.232774 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6bb684768f-q57sq" Feb 02 10:58:16 crc kubenswrapper[4782]: I0202 10:58:16.258694 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"85f31fbf-8fcd-4364-a0a0-f489b3cdca7f","Type":"ContainerStarted","Data":"d8a0b8246e525961c77b2290fcb94427f50f92ca53fb13fcaaac7f8ae8d09e97"} Feb 02 10:58:16 crc kubenswrapper[4782]: I0202 10:58:16.262358 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"43699695-b676-4b62-8714-c01390804d91","Type":"ContainerStarted","Data":"0f4c1572be12fa20100a7a05aac03b1f6cf4498487c7606c75d95169f2af98e9"} Feb 02 10:58:16 crc kubenswrapper[4782]: I0202 10:58:16.282997 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6d97fcdd8f-lp4zt" podStartSLOduration=4.28297369 podStartE2EDuration="4.28297369s" podCreationTimestamp="2026-02-02 10:58:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:58:16.2460163 +0000 UTC m=+1176.130209016" watchObservedRunningTime="2026-02-02 10:58:16.28297369 +0000 UTC m=+1176.167166406" Feb 02 10:58:16 crc kubenswrapper[4782]: I0202 10:58:16.283700 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bb684768f-q57sq"] Feb 02 10:58:16 crc kubenswrapper[4782]: I0202 10:58:16.297689 4782 scope.go:117] "RemoveContainer" containerID="fd4e9ce89e9962fa17cc57f028ec76e94e318eabc07d088e7fecd0989f0c912c" Feb 02 10:58:16 crc kubenswrapper[4782]: I0202 10:58:16.303356 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6bb684768f-q57sq"] Feb 02 10:58:16 crc kubenswrapper[4782]: I0202 10:58:16.838249 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b226dd37-b5b5-4514-9495-944db6e760ed" path="/var/lib/kubelet/pods/b226dd37-b5b5-4514-9495-944db6e760ed/volumes" Feb 02 10:58:17 crc kubenswrapper[4782]: I0202 10:58:17.272773 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"43699695-b676-4b62-8714-c01390804d91","Type":"ContainerStarted","Data":"810ae877f482ba2258408f26cb86d8ea022c57bbe6f027a6feffbbf2e63254bd"} Feb 02 10:58:17 crc kubenswrapper[4782]: I0202 10:58:17.272857 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Feb 02 10:58:17 crc kubenswrapper[4782]: I0202 10:58:17.272873 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="43699695-b676-4b62-8714-c01390804d91" containerName="cinder-api-log" containerID="cri-o://0f4c1572be12fa20100a7a05aac03b1f6cf4498487c7606c75d95169f2af98e9" gracePeriod=30 Feb 02 10:58:17 crc kubenswrapper[4782]: I0202 10:58:17.272914 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="43699695-b676-4b62-8714-c01390804d91" containerName="cinder-api" containerID="cri-o://810ae877f482ba2258408f26cb86d8ea022c57bbe6f027a6feffbbf2e63254bd" gracePeriod=30 Feb 02 10:58:17 crc kubenswrapper[4782]: I0202 10:58:17.282221 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"85f31fbf-8fcd-4364-a0a0-f489b3cdca7f","Type":"ContainerStarted","Data":"1005c6933c36048ea8d7266b59a453bf785f8fbd2b74ce215f2b2456d907b65a"} Feb 02 10:58:17 crc kubenswrapper[4782]: I0202 10:58:17.304429 4782 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=5.304411991 podStartE2EDuration="5.304411991s" podCreationTimestamp="2026-02-02 10:58:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:58:17.298583943 +0000 UTC m=+1177.182776659" watchObservedRunningTime="2026-02-02 10:58:17.304411991 +0000 UTC m=+1177.188604707" Feb 02 10:58:17 crc kubenswrapper[4782]: I0202 10:58:17.335896 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.276444872 podStartE2EDuration="5.335875132s" podCreationTimestamp="2026-02-02 10:58:12 +0000 UTC" firstStartedPulling="2026-02-02 10:58:13.520501481 +0000 UTC m=+1173.404694197" lastFinishedPulling="2026-02-02 10:58:14.579931741 +0000 UTC m=+1174.464124457" observedRunningTime="2026-02-02 10:58:17.327634106 +0000 UTC m=+1177.211826832" watchObservedRunningTime="2026-02-02 10:58:17.335875132 +0000 UTC m=+1177.220067848" Feb 02 10:58:17 crc kubenswrapper[4782]: E0202 10:58:17.525774 4782 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod43699695_b676_4b62_8714_c01390804d91.slice/crio-conmon-0f4c1572be12fa20100a7a05aac03b1f6cf4498487c7606c75d95169f2af98e9.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod43699695_b676_4b62_8714_c01390804d91.slice/crio-0f4c1572be12fa20100a7a05aac03b1f6cf4498487c7606c75d95169f2af98e9.scope\": RecentStats: unable to find data in memory cache]" Feb 02 10:58:17 crc kubenswrapper[4782]: I0202 10:58:17.734441 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.220081 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.291301 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-6c4497f454-mphzd" Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.294271 4782 generic.go:334] "Generic (PLEG): container finished" podID="43699695-b676-4b62-8714-c01390804d91" containerID="810ae877f482ba2258408f26cb86d8ea022c57bbe6f027a6feffbbf2e63254bd" exitCode=0 Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.294309 4782 generic.go:334] "Generic (PLEG): container finished" podID="43699695-b676-4b62-8714-c01390804d91" containerID="0f4c1572be12fa20100a7a05aac03b1f6cf4498487c7606c75d95169f2af98e9" exitCode=143 Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.294495 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"43699695-b676-4b62-8714-c01390804d91","Type":"ContainerDied","Data":"810ae877f482ba2258408f26cb86d8ea022c57bbe6f027a6feffbbf2e63254bd"} Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.294529 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"43699695-b676-4b62-8714-c01390804d91","Type":"ContainerDied","Data":"0f4c1572be12fa20100a7a05aac03b1f6cf4498487c7606c75d95169f2af98e9"} Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.294542 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"43699695-b676-4b62-8714-c01390804d91","Type":"ContainerDied","Data":"ea5226b97ad240049c9f39d23e381c57be0f9553f067d978ea59153b623b3d90"} Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.294557 4782 scope.go:117] "RemoveContainer" containerID="810ae877f482ba2258408f26cb86d8ea022c57bbe6f027a6feffbbf2e63254bd" Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.294793 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.332364 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/43699695-b676-4b62-8714-c01390804d91-logs\") pod \"43699695-b676-4b62-8714-c01390804d91\" (UID: \"43699695-b676-4b62-8714-c01390804d91\") " Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.332416 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/43699695-b676-4b62-8714-c01390804d91-scripts\") pod \"43699695-b676-4b62-8714-c01390804d91\" (UID: \"43699695-b676-4b62-8714-c01390804d91\") " Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.332588 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/43699695-b676-4b62-8714-c01390804d91-etc-machine-id\") pod \"43699695-b676-4b62-8714-c01390804d91\" (UID: \"43699695-b676-4b62-8714-c01390804d91\") " Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.332620 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43699695-b676-4b62-8714-c01390804d91-combined-ca-bundle\") pod \"43699695-b676-4b62-8714-c01390804d91\" (UID: \"43699695-b676-4b62-8714-c01390804d91\") " Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.332658 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43699695-b676-4b62-8714-c01390804d91-config-data\") pod \"43699695-b676-4b62-8714-c01390804d91\" (UID: \"43699695-b676-4b62-8714-c01390804d91\") " Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.332703 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qzgjd\" (UniqueName: \"kubernetes.io/projected/43699695-b676-4b62-8714-c01390804d91-kube-api-access-qzgjd\") pod \"43699695-b676-4b62-8714-c01390804d91\" (UID: \"43699695-b676-4b62-8714-c01390804d91\") " Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.332769 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/43699695-b676-4b62-8714-c01390804d91-config-data-custom\") pod \"43699695-b676-4b62-8714-c01390804d91\" (UID: \"43699695-b676-4b62-8714-c01390804d91\") " Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.335421 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/43699695-b676-4b62-8714-c01390804d91-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "43699695-b676-4b62-8714-c01390804d91" (UID: "43699695-b676-4b62-8714-c01390804d91"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.336854 4782 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/43699695-b676-4b62-8714-c01390804d91-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.352802 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43699695-b676-4b62-8714-c01390804d91-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "43699695-b676-4b62-8714-c01390804d91" (UID: "43699695-b676-4b62-8714-c01390804d91"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.355002 4782 scope.go:117] "RemoveContainer" containerID="0f4c1572be12fa20100a7a05aac03b1f6cf4498487c7606c75d95169f2af98e9" Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.356879 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/43699695-b676-4b62-8714-c01390804d91-logs" (OuterVolumeSpecName: "logs") pod "43699695-b676-4b62-8714-c01390804d91" (UID: "43699695-b676-4b62-8714-c01390804d91"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.383246 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43699695-b676-4b62-8714-c01390804d91-kube-api-access-qzgjd" (OuterVolumeSpecName: "kube-api-access-qzgjd") pod "43699695-b676-4b62-8714-c01390804d91" (UID: "43699695-b676-4b62-8714-c01390804d91"). InnerVolumeSpecName "kube-api-access-qzgjd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.408805 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43699695-b676-4b62-8714-c01390804d91-scripts" (OuterVolumeSpecName: "scripts") pod "43699695-b676-4b62-8714-c01390804d91" (UID: "43699695-b676-4b62-8714-c01390804d91"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.442921 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qzgjd\" (UniqueName: \"kubernetes.io/projected/43699695-b676-4b62-8714-c01390804d91-kube-api-access-qzgjd\") on node \"crc\" DevicePath \"\"" Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.442950 4782 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/43699695-b676-4b62-8714-c01390804d91-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.442960 4782 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/43699695-b676-4b62-8714-c01390804d91-logs\") on node \"crc\" DevicePath \"\"" Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.442970 4782 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/43699695-b676-4b62-8714-c01390804d91-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.451134 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43699695-b676-4b62-8714-c01390804d91-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "43699695-b676-4b62-8714-c01390804d91" (UID: "43699695-b676-4b62-8714-c01390804d91"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.484798 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43699695-b676-4b62-8714-c01390804d91-config-data" (OuterVolumeSpecName: "config-data") pod "43699695-b676-4b62-8714-c01390804d91" (UID: "43699695-b676-4b62-8714-c01390804d91"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.544615 4782 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43699695-b676-4b62-8714-c01390804d91-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.544681 4782 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43699695-b676-4b62-8714-c01390804d91-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.554814 4782 scope.go:117] "RemoveContainer" containerID="810ae877f482ba2258408f26cb86d8ea022c57bbe6f027a6feffbbf2e63254bd" Feb 02 10:58:18 crc kubenswrapper[4782]: E0202 10:58:18.555346 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"810ae877f482ba2258408f26cb86d8ea022c57bbe6f027a6feffbbf2e63254bd\": container with ID starting with 810ae877f482ba2258408f26cb86d8ea022c57bbe6f027a6feffbbf2e63254bd not found: ID does not exist" containerID="810ae877f482ba2258408f26cb86d8ea022c57bbe6f027a6feffbbf2e63254bd" Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.555380 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"810ae877f482ba2258408f26cb86d8ea022c57bbe6f027a6feffbbf2e63254bd"} err="failed to get container status \"810ae877f482ba2258408f26cb86d8ea022c57bbe6f027a6feffbbf2e63254bd\": rpc error: code = NotFound desc = could not find container \"810ae877f482ba2258408f26cb86d8ea022c57bbe6f027a6feffbbf2e63254bd\": container with ID starting with 810ae877f482ba2258408f26cb86d8ea022c57bbe6f027a6feffbbf2e63254bd not found: ID does not exist" Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.555406 4782 scope.go:117] "RemoveContainer" containerID="0f4c1572be12fa20100a7a05aac03b1f6cf4498487c7606c75d95169f2af98e9" Feb 02 10:58:18 crc kubenswrapper[4782]: E0202 10:58:18.555814 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0f4c1572be12fa20100a7a05aac03b1f6cf4498487c7606c75d95169f2af98e9\": container with ID starting with 0f4c1572be12fa20100a7a05aac03b1f6cf4498487c7606c75d95169f2af98e9 not found: ID does not exist" containerID="0f4c1572be12fa20100a7a05aac03b1f6cf4498487c7606c75d95169f2af98e9" Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.555839 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f4c1572be12fa20100a7a05aac03b1f6cf4498487c7606c75d95169f2af98e9"} err="failed to get container status \"0f4c1572be12fa20100a7a05aac03b1f6cf4498487c7606c75d95169f2af98e9\": rpc error: code = NotFound desc = could not find container \"0f4c1572be12fa20100a7a05aac03b1f6cf4498487c7606c75d95169f2af98e9\": container with ID starting with 0f4c1572be12fa20100a7a05aac03b1f6cf4498487c7606c75d95169f2af98e9 not found: ID does not exist" Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.555856 4782 scope.go:117] "RemoveContainer" containerID="810ae877f482ba2258408f26cb86d8ea022c57bbe6f027a6feffbbf2e63254bd" Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.556155 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"810ae877f482ba2258408f26cb86d8ea022c57bbe6f027a6feffbbf2e63254bd"} err="failed to get container status 
\"810ae877f482ba2258408f26cb86d8ea022c57bbe6f027a6feffbbf2e63254bd\": rpc error: code = NotFound desc = could not find container \"810ae877f482ba2258408f26cb86d8ea022c57bbe6f027a6feffbbf2e63254bd\": container with ID starting with 810ae877f482ba2258408f26cb86d8ea022c57bbe6f027a6feffbbf2e63254bd not found: ID does not exist" Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.556169 4782 scope.go:117] "RemoveContainer" containerID="0f4c1572be12fa20100a7a05aac03b1f6cf4498487c7606c75d95169f2af98e9" Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.556386 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f4c1572be12fa20100a7a05aac03b1f6cf4498487c7606c75d95169f2af98e9"} err="failed to get container status \"0f4c1572be12fa20100a7a05aac03b1f6cf4498487c7606c75d95169f2af98e9\": rpc error: code = NotFound desc = could not find container \"0f4c1572be12fa20100a7a05aac03b1f6cf4498487c7606c75d95169f2af98e9\": container with ID starting with 0f4c1572be12fa20100a7a05aac03b1f6cf4498487c7606c75d95169f2af98e9 not found: ID does not exist" Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.572964 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-644b87c8cc-7cfbr"] Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.573403 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-644b87c8cc-7cfbr" podUID="c1b76222-36df-45a6-ac9f-edb412c8a2ad" containerName="neutron-api" containerID="cri-o://786fa8a7409c75cc654771de8b93722936d3a222c6348896fdf1e92677c32d53" gracePeriod=30 Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.574024 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-644b87c8cc-7cfbr" podUID="c1b76222-36df-45a6-ac9f-edb412c8a2ad" containerName="neutron-httpd" containerID="cri-o://f08c19bb6d845c3aebb5e1bac413c2ca39e5ea569262e16c1b854d1f98b2646c" gracePeriod=30 Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.590032 4782 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-644b87c8cc-7cfbr" podUID="c1b76222-36df-45a6-ac9f-edb412c8a2ad" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.141:9696/\": read tcp 10.217.0.2:38578->10.217.0.141:9696: read: connection reset by peer" Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.651265 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-5bdf8f4745-82ddm"] Feb 02 10:58:18 crc kubenswrapper[4782]: E0202 10:58:18.651630 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b226dd37-b5b5-4514-9495-944db6e760ed" containerName="init" Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.651679 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="b226dd37-b5b5-4514-9495-944db6e760ed" containerName="init" Feb 02 10:58:18 crc kubenswrapper[4782]: E0202 10:58:18.651716 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43699695-b676-4b62-8714-c01390804d91" containerName="cinder-api-log" Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.651722 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="43699695-b676-4b62-8714-c01390804d91" containerName="cinder-api-log" Feb 02 10:58:18 crc kubenswrapper[4782]: E0202 10:58:18.651728 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43699695-b676-4b62-8714-c01390804d91" containerName="cinder-api" Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.651734 4782 
state_mem.go:107] "Deleted CPUSet assignment" podUID="43699695-b676-4b62-8714-c01390804d91" containerName="cinder-api" Feb 02 10:58:18 crc kubenswrapper[4782]: E0202 10:58:18.651741 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b226dd37-b5b5-4514-9495-944db6e760ed" containerName="dnsmasq-dns" Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.651749 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="b226dd37-b5b5-4514-9495-944db6e760ed" containerName="dnsmasq-dns" Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.651900 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="43699695-b676-4b62-8714-c01390804d91" containerName="cinder-api-log" Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.651913 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="43699695-b676-4b62-8714-c01390804d91" containerName="cinder-api" Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.651930 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="b226dd37-b5b5-4514-9495-944db6e760ed" containerName="dnsmasq-dns" Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.656313 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5bdf8f4745-82ddm" Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.658580 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.679766 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.681941 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5bdf8f4745-82ddm"] Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.708921 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.710937 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.725742 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.725991 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.726098 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.729812 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.750160 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab6192fa-a576-411f-8083-2d6bfa57c39f-combined-ca-bundle\") pod \"neutron-5bdf8f4745-82ddm\" (UID: \"ab6192fa-a576-411f-8083-2d6bfa57c39f\") " pod="openstack/neutron-5bdf8f4745-82ddm" Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.750225 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab6192fa-a576-411f-8083-2d6bfa57c39f-public-tls-certs\") pod \"neutron-5bdf8f4745-82ddm\" (UID: \"ab6192fa-a576-411f-8083-2d6bfa57c39f\") " pod="openstack/neutron-5bdf8f4745-82ddm" Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.750265 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab6192fa-a576-411f-8083-2d6bfa57c39f-ovndb-tls-certs\") pod \"neutron-5bdf8f4745-82ddm\" (UID: \"ab6192fa-a576-411f-8083-2d6bfa57c39f\") " pod="openstack/neutron-5bdf8f4745-82ddm" Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.750290 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ab6192fa-a576-411f-8083-2d6bfa57c39f-config\") pod \"neutron-5bdf8f4745-82ddm\" (UID: \"ab6192fa-a576-411f-8083-2d6bfa57c39f\") " pod="openstack/neutron-5bdf8f4745-82ddm" Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.750320 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/ab6192fa-a576-411f-8083-2d6bfa57c39f-httpd-config\") pod \"neutron-5bdf8f4745-82ddm\" (UID: \"ab6192fa-a576-411f-8083-2d6bfa57c39f\") " pod="openstack/neutron-5bdf8f4745-82ddm" Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.750340 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zp52w\" (UniqueName: \"kubernetes.io/projected/ab6192fa-a576-411f-8083-2d6bfa57c39f-kube-api-access-zp52w\") pod \"neutron-5bdf8f4745-82ddm\" (UID: \"ab6192fa-a576-411f-8083-2d6bfa57c39f\") " pod="openstack/neutron-5bdf8f4745-82ddm" Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.750390 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab6192fa-a576-411f-8083-2d6bfa57c39f-internal-tls-certs\") pod \"neutron-5bdf8f4745-82ddm\" (UID: \"ab6192fa-a576-411f-8083-2d6bfa57c39f\") " pod="openstack/neutron-5bdf8f4745-82ddm" Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 
10:58:18.843464 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43699695-b676-4b62-8714-c01390804d91" path="/var/lib/kubelet/pods/43699695-b676-4b62-8714-c01390804d91/volumes" Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.857572 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3d71c3db-1389-4568-bb5e-c87dc6a60ddd-etc-machine-id\") pod \"cinder-api-0\" (UID: \"3d71c3db-1389-4568-bb5e-c87dc6a60ddd\") " pod="openstack/cinder-api-0" Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.857632 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3d71c3db-1389-4568-bb5e-c87dc6a60ddd-config-data-custom\") pod \"cinder-api-0\" (UID: \"3d71c3db-1389-4568-bb5e-c87dc6a60ddd\") " pod="openstack/cinder-api-0" Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.857676 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3d71c3db-1389-4568-bb5e-c87dc6a60ddd-scripts\") pod \"cinder-api-0\" (UID: \"3d71c3db-1389-4568-bb5e-c87dc6a60ddd\") " pod="openstack/cinder-api-0" Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.857701 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pp7zq\" (UniqueName: \"kubernetes.io/projected/3d71c3db-1389-4568-bb5e-c87dc6a60ddd-kube-api-access-pp7zq\") pod \"cinder-api-0\" (UID: \"3d71c3db-1389-4568-bb5e-c87dc6a60ddd\") " pod="openstack/cinder-api-0" Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.857770 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d71c3db-1389-4568-bb5e-c87dc6a60ddd-public-tls-certs\") pod \"cinder-api-0\" (UID: \"3d71c3db-1389-4568-bb5e-c87dc6a60ddd\") " pod="openstack/cinder-api-0" Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.857812 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d71c3db-1389-4568-bb5e-c87dc6a60ddd-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"3d71c3db-1389-4568-bb5e-c87dc6a60ddd\") " pod="openstack/cinder-api-0" Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.857860 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab6192fa-a576-411f-8083-2d6bfa57c39f-combined-ca-bundle\") pod \"neutron-5bdf8f4745-82ddm\" (UID: \"ab6192fa-a576-411f-8083-2d6bfa57c39f\") " pod="openstack/neutron-5bdf8f4745-82ddm" Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.857902 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab6192fa-a576-411f-8083-2d6bfa57c39f-public-tls-certs\") pod \"neutron-5bdf8f4745-82ddm\" (UID: \"ab6192fa-a576-411f-8083-2d6bfa57c39f\") " pod="openstack/neutron-5bdf8f4745-82ddm" Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.857968 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3d71c3db-1389-4568-bb5e-c87dc6a60ddd-logs\") pod \"cinder-api-0\" (UID: 
\"3d71c3db-1389-4568-bb5e-c87dc6a60ddd\") " pod="openstack/cinder-api-0" Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.857999 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab6192fa-a576-411f-8083-2d6bfa57c39f-ovndb-tls-certs\") pod \"neutron-5bdf8f4745-82ddm\" (UID: \"ab6192fa-a576-411f-8083-2d6bfa57c39f\") " pod="openstack/neutron-5bdf8f4745-82ddm" Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.858038 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ab6192fa-a576-411f-8083-2d6bfa57c39f-config\") pod \"neutron-5bdf8f4745-82ddm\" (UID: \"ab6192fa-a576-411f-8083-2d6bfa57c39f\") " pod="openstack/neutron-5bdf8f4745-82ddm" Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.858106 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/ab6192fa-a576-411f-8083-2d6bfa57c39f-httpd-config\") pod \"neutron-5bdf8f4745-82ddm\" (UID: \"ab6192fa-a576-411f-8083-2d6bfa57c39f\") " pod="openstack/neutron-5bdf8f4745-82ddm" Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.858147 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zp52w\" (UniqueName: \"kubernetes.io/projected/ab6192fa-a576-411f-8083-2d6bfa57c39f-kube-api-access-zp52w\") pod \"neutron-5bdf8f4745-82ddm\" (UID: \"ab6192fa-a576-411f-8083-2d6bfa57c39f\") " pod="openstack/neutron-5bdf8f4745-82ddm" Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.858167 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d71c3db-1389-4568-bb5e-c87dc6a60ddd-config-data\") pod \"cinder-api-0\" (UID: \"3d71c3db-1389-4568-bb5e-c87dc6a60ddd\") " pod="openstack/cinder-api-0" Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.858185 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d71c3db-1389-4568-bb5e-c87dc6a60ddd-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"3d71c3db-1389-4568-bb5e-c87dc6a60ddd\") " pod="openstack/cinder-api-0" Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.858292 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab6192fa-a576-411f-8083-2d6bfa57c39f-internal-tls-certs\") pod \"neutron-5bdf8f4745-82ddm\" (UID: \"ab6192fa-a576-411f-8083-2d6bfa57c39f\") " pod="openstack/neutron-5bdf8f4745-82ddm" Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.870547 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab6192fa-a576-411f-8083-2d6bfa57c39f-ovndb-tls-certs\") pod \"neutron-5bdf8f4745-82ddm\" (UID: \"ab6192fa-a576-411f-8083-2d6bfa57c39f\") " pod="openstack/neutron-5bdf8f4745-82ddm" Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.870669 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab6192fa-a576-411f-8083-2d6bfa57c39f-internal-tls-certs\") pod \"neutron-5bdf8f4745-82ddm\" (UID: \"ab6192fa-a576-411f-8083-2d6bfa57c39f\") " pod="openstack/neutron-5bdf8f4745-82ddm" Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.870564 4782 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab6192fa-a576-411f-8083-2d6bfa57c39f-combined-ca-bundle\") pod \"neutron-5bdf8f4745-82ddm\" (UID: \"ab6192fa-a576-411f-8083-2d6bfa57c39f\") " pod="openstack/neutron-5bdf8f4745-82ddm" Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.875725 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab6192fa-a576-411f-8083-2d6bfa57c39f-public-tls-certs\") pod \"neutron-5bdf8f4745-82ddm\" (UID: \"ab6192fa-a576-411f-8083-2d6bfa57c39f\") " pod="openstack/neutron-5bdf8f4745-82ddm" Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.876490 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/ab6192fa-a576-411f-8083-2d6bfa57c39f-config\") pod \"neutron-5bdf8f4745-82ddm\" (UID: \"ab6192fa-a576-411f-8083-2d6bfa57c39f\") " pod="openstack/neutron-5bdf8f4745-82ddm" Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.883263 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/ab6192fa-a576-411f-8083-2d6bfa57c39f-httpd-config\") pod \"neutron-5bdf8f4745-82ddm\" (UID: \"ab6192fa-a576-411f-8083-2d6bfa57c39f\") " pod="openstack/neutron-5bdf8f4745-82ddm" Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.894177 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zp52w\" (UniqueName: \"kubernetes.io/projected/ab6192fa-a576-411f-8083-2d6bfa57c39f-kube-api-access-zp52w\") pod \"neutron-5bdf8f4745-82ddm\" (UID: \"ab6192fa-a576-411f-8083-2d6bfa57c39f\") " pod="openstack/neutron-5bdf8f4745-82ddm" Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.964152 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3d71c3db-1389-4568-bb5e-c87dc6a60ddd-etc-machine-id\") pod \"cinder-api-0\" (UID: \"3d71c3db-1389-4568-bb5e-c87dc6a60ddd\") " pod="openstack/cinder-api-0" Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.971948 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3d71c3db-1389-4568-bb5e-c87dc6a60ddd-config-data-custom\") pod \"cinder-api-0\" (UID: \"3d71c3db-1389-4568-bb5e-c87dc6a60ddd\") " pod="openstack/cinder-api-0" Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.972195 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3d71c3db-1389-4568-bb5e-c87dc6a60ddd-scripts\") pod \"cinder-api-0\" (UID: \"3d71c3db-1389-4568-bb5e-c87dc6a60ddd\") " pod="openstack/cinder-api-0" Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.972281 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pp7zq\" (UniqueName: \"kubernetes.io/projected/3d71c3db-1389-4568-bb5e-c87dc6a60ddd-kube-api-access-pp7zq\") pod \"cinder-api-0\" (UID: \"3d71c3db-1389-4568-bb5e-c87dc6a60ddd\") " pod="openstack/cinder-api-0" Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.972444 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d71c3db-1389-4568-bb5e-c87dc6a60ddd-public-tls-certs\") pod \"cinder-api-0\" (UID: \"3d71c3db-1389-4568-bb5e-c87dc6a60ddd\") " 
pod="openstack/cinder-api-0" Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.972537 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d71c3db-1389-4568-bb5e-c87dc6a60ddd-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"3d71c3db-1389-4568-bb5e-c87dc6a60ddd\") " pod="openstack/cinder-api-0" Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.972787 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3d71c3db-1389-4568-bb5e-c87dc6a60ddd-logs\") pod \"cinder-api-0\" (UID: \"3d71c3db-1389-4568-bb5e-c87dc6a60ddd\") " pod="openstack/cinder-api-0" Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.972966 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d71c3db-1389-4568-bb5e-c87dc6a60ddd-config-data\") pod \"cinder-api-0\" (UID: \"3d71c3db-1389-4568-bb5e-c87dc6a60ddd\") " pod="openstack/cinder-api-0" Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.973043 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d71c3db-1389-4568-bb5e-c87dc6a60ddd-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"3d71c3db-1389-4568-bb5e-c87dc6a60ddd\") " pod="openstack/cinder-api-0" Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.975936 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3d71c3db-1389-4568-bb5e-c87dc6a60ddd-logs\") pod \"cinder-api-0\" (UID: \"3d71c3db-1389-4568-bb5e-c87dc6a60ddd\") " pod="openstack/cinder-api-0" Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.968385 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3d71c3db-1389-4568-bb5e-c87dc6a60ddd-etc-machine-id\") pod \"cinder-api-0\" (UID: \"3d71c3db-1389-4568-bb5e-c87dc6a60ddd\") " pod="openstack/cinder-api-0" Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.985158 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d71c3db-1389-4568-bb5e-c87dc6a60ddd-public-tls-certs\") pod \"cinder-api-0\" (UID: \"3d71c3db-1389-4568-bb5e-c87dc6a60ddd\") " pod="openstack/cinder-api-0" Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.985882 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3d71c3db-1389-4568-bb5e-c87dc6a60ddd-config-data-custom\") pod \"cinder-api-0\" (UID: \"3d71c3db-1389-4568-bb5e-c87dc6a60ddd\") " pod="openstack/cinder-api-0" Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.990311 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d71c3db-1389-4568-bb5e-c87dc6a60ddd-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"3d71c3db-1389-4568-bb5e-c87dc6a60ddd\") " pod="openstack/cinder-api-0" Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 10:58:18.994041 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3d71c3db-1389-4568-bb5e-c87dc6a60ddd-scripts\") pod \"cinder-api-0\" (UID: \"3d71c3db-1389-4568-bb5e-c87dc6a60ddd\") " pod="openstack/cinder-api-0" Feb 02 10:58:18 crc kubenswrapper[4782]: I0202 
10:58:18.994886 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d71c3db-1389-4568-bb5e-c87dc6a60ddd-config-data\") pod \"cinder-api-0\" (UID: \"3d71c3db-1389-4568-bb5e-c87dc6a60ddd\") " pod="openstack/cinder-api-0" Feb 02 10:58:19 crc kubenswrapper[4782]: I0202 10:58:19.005332 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pp7zq\" (UniqueName: \"kubernetes.io/projected/3d71c3db-1389-4568-bb5e-c87dc6a60ddd-kube-api-access-pp7zq\") pod \"cinder-api-0\" (UID: \"3d71c3db-1389-4568-bb5e-c87dc6a60ddd\") " pod="openstack/cinder-api-0" Feb 02 10:58:19 crc kubenswrapper[4782]: I0202 10:58:19.005340 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d71c3db-1389-4568-bb5e-c87dc6a60ddd-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"3d71c3db-1389-4568-bb5e-c87dc6a60ddd\") " pod="openstack/cinder-api-0" Feb 02 10:58:19 crc kubenswrapper[4782]: I0202 10:58:19.043806 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5bdf8f4745-82ddm" Feb 02 10:58:19 crc kubenswrapper[4782]: I0202 10:58:19.070113 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 02 10:58:19 crc kubenswrapper[4782]: I0202 10:58:19.430427 4782 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="8eb720ee-de8d-42e4-b189-aa3d58478ab9" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 02 10:58:19 crc kubenswrapper[4782]: I0202 10:58:19.760705 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5bdf8f4745-82ddm"] Feb 02 10:58:19 crc kubenswrapper[4782]: I0202 10:58:19.774338 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 02 10:58:20 crc kubenswrapper[4782]: I0202 10:58:20.360249 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5bdf8f4745-82ddm" event={"ID":"ab6192fa-a576-411f-8083-2d6bfa57c39f","Type":"ContainerStarted","Data":"432b7d36382afa67bf37cdf1d06d416b975f874f74041da2a3bf4687b71fad8c"} Feb 02 10:58:20 crc kubenswrapper[4782]: I0202 10:58:20.360691 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5bdf8f4745-82ddm" event={"ID":"ab6192fa-a576-411f-8083-2d6bfa57c39f","Type":"ContainerStarted","Data":"0b2df805536c1ecf0b92b90cd45b184370e0405e5aefc8742b965dfc34956403"} Feb 02 10:58:20 crc kubenswrapper[4782]: I0202 10:58:20.366398 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"3d71c3db-1389-4568-bb5e-c87dc6a60ddd","Type":"ContainerStarted","Data":"41916449d799b7018510e13a599c8b6f77fe467308b23a2242ecfcac2a84e8a1"} Feb 02 10:58:20 crc kubenswrapper[4782]: I0202 10:58:20.381151 4782 generic.go:334] "Generic (PLEG): container finished" podID="c1b76222-36df-45a6-ac9f-edb412c8a2ad" containerID="f08c19bb6d845c3aebb5e1bac413c2ca39e5ea569262e16c1b854d1f98b2646c" exitCode=0 Feb 02 10:58:20 crc kubenswrapper[4782]: I0202 10:58:20.381194 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-644b87c8cc-7cfbr" event={"ID":"c1b76222-36df-45a6-ac9f-edb412c8a2ad","Type":"ContainerDied","Data":"f08c19bb6d845c3aebb5e1bac413c2ca39e5ea569262e16c1b854d1f98b2646c"} Feb 02 10:58:20 crc kubenswrapper[4782]: I0202 10:58:20.795133 4782 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openstack/neutron-644b87c8cc-7cfbr" podUID="c1b76222-36df-45a6-ac9f-edb412c8a2ad" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.141:9696/\": dial tcp 10.217.0.141:9696: connect: connection refused" Feb 02 10:58:21 crc kubenswrapper[4782]: I0202 10:58:21.397048 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"3d71c3db-1389-4568-bb5e-c87dc6a60ddd","Type":"ContainerStarted","Data":"13ab1e74453151aeec36a4544b2ac740ed5f600cb10df6c778f71bab286e1215"} Feb 02 10:58:21 crc kubenswrapper[4782]: I0202 10:58:21.404141 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5bdf8f4745-82ddm" event={"ID":"ab6192fa-a576-411f-8083-2d6bfa57c39f","Type":"ContainerStarted","Data":"d13bff30fb9899b308dc138a8349977cbbd535521034249f1c4d00bfd79578fd"} Feb 02 10:58:21 crc kubenswrapper[4782]: I0202 10:58:21.405160 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-5bdf8f4745-82ddm" Feb 02 10:58:21 crc kubenswrapper[4782]: I0202 10:58:21.431885 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-5bdf8f4745-82ddm" podStartSLOduration=3.431863499 podStartE2EDuration="3.431863499s" podCreationTimestamp="2026-02-02 10:58:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:58:21.430620553 +0000 UTC m=+1181.314813269" watchObservedRunningTime="2026-02-02 10:58:21.431863499 +0000 UTC m=+1181.316056215" Feb 02 10:58:21 crc kubenswrapper[4782]: I0202 10:58:21.663736 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5b7797d578-tmg69" Feb 02 10:58:22 crc kubenswrapper[4782]: I0202 10:58:22.031616 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5b7797d578-tmg69" Feb 02 10:58:22 crc kubenswrapper[4782]: I0202 10:58:22.479585 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"3d71c3db-1389-4568-bb5e-c87dc6a60ddd","Type":"ContainerStarted","Data":"869e66423515b44b6ac5aeb661ead8d7d8996a659be8e6c4dd033373573dbf55"} Feb 02 10:58:22 crc kubenswrapper[4782]: I0202 10:58:22.480382 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Feb 02 10:58:23 crc kubenswrapper[4782]: I0202 10:58:23.004876 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6d97fcdd8f-lp4zt" Feb 02 10:58:23 crc kubenswrapper[4782]: I0202 10:58:23.031469 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=5.031447412 podStartE2EDuration="5.031447412s" podCreationTimestamp="2026-02-02 10:58:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:58:22.524171921 +0000 UTC m=+1182.408364657" watchObservedRunningTime="2026-02-02 10:58:23.031447412 +0000 UTC m=+1182.915640128" Feb 02 10:58:23 crc kubenswrapper[4782]: I0202 10:58:23.078761 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7b946d459c-pbdmr"] Feb 02 10:58:23 crc kubenswrapper[4782]: I0202 10:58:23.078974 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7b946d459c-pbdmr" 
podUID="7f6a8ebb-a211-4505-b934-3048a67b2f47" containerName="dnsmasq-dns" containerID="cri-o://57ad28ba710e6e1893b164e98b4a9426687f44b060fefafca3e1d4c41edc3143" gracePeriod=10 Feb 02 10:58:23 crc kubenswrapper[4782]: I0202 10:58:23.117085 4782 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-7b946d459c-pbdmr" podUID="7f6a8ebb-a211-4505-b934-3048a67b2f47" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.139:5353: connect: connection refused" Feb 02 10:58:23 crc kubenswrapper[4782]: I0202 10:58:23.144055 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 02 10:58:23 crc kubenswrapper[4782]: I0202 10:58:23.305051 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 02 10:58:23 crc kubenswrapper[4782]: I0202 10:58:23.539504 4782 generic.go:334] "Generic (PLEG): container finished" podID="7f6a8ebb-a211-4505-b934-3048a67b2f47" containerID="57ad28ba710e6e1893b164e98b4a9426687f44b060fefafca3e1d4c41edc3143" exitCode=0 Feb 02 10:58:23 crc kubenswrapper[4782]: I0202 10:58:23.539564 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b946d459c-pbdmr" event={"ID":"7f6a8ebb-a211-4505-b934-3048a67b2f47","Type":"ContainerDied","Data":"57ad28ba710e6e1893b164e98b4a9426687f44b060fefafca3e1d4c41edc3143"} Feb 02 10:58:23 crc kubenswrapper[4782]: I0202 10:58:23.541423 4782 generic.go:334] "Generic (PLEG): container finished" podID="c1b76222-36df-45a6-ac9f-edb412c8a2ad" containerID="786fa8a7409c75cc654771de8b93722936d3a222c6348896fdf1e92677c32d53" exitCode=0 Feb 02 10:58:23 crc kubenswrapper[4782]: I0202 10:58:23.541599 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="85f31fbf-8fcd-4364-a0a0-f489b3cdca7f" containerName="cinder-scheduler" containerID="cri-o://d8a0b8246e525961c77b2290fcb94427f50f92ca53fb13fcaaac7f8ae8d09e97" gracePeriod=30 Feb 02 10:58:23 crc kubenswrapper[4782]: I0202 10:58:23.549101 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-644b87c8cc-7cfbr" event={"ID":"c1b76222-36df-45a6-ac9f-edb412c8a2ad","Type":"ContainerDied","Data":"786fa8a7409c75cc654771de8b93722936d3a222c6348896fdf1e92677c32d53"} Feb 02 10:58:23 crc kubenswrapper[4782]: I0202 10:58:23.550618 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="85f31fbf-8fcd-4364-a0a0-f489b3cdca7f" containerName="probe" containerID="cri-o://1005c6933c36048ea8d7266b59a453bf785f8fbd2b74ce215f2b2456d907b65a" gracePeriod=30 Feb 02 10:58:23 crc kubenswrapper[4782]: I0202 10:58:23.643043 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-644b87c8cc-7cfbr" Feb 02 10:58:23 crc kubenswrapper[4782]: I0202 10:58:23.661356 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c1b76222-36df-45a6-ac9f-edb412c8a2ad-config\") pod \"c1b76222-36df-45a6-ac9f-edb412c8a2ad\" (UID: \"c1b76222-36df-45a6-ac9f-edb412c8a2ad\") " Feb 02 10:58:23 crc kubenswrapper[4782]: I0202 10:58:23.661404 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/c1b76222-36df-45a6-ac9f-edb412c8a2ad-httpd-config\") pod \"c1b76222-36df-45a6-ac9f-edb412c8a2ad\" (UID: \"c1b76222-36df-45a6-ac9f-edb412c8a2ad\") " Feb 02 10:58:23 crc kubenswrapper[4782]: I0202 10:58:23.661517 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1b76222-36df-45a6-ac9f-edb412c8a2ad-ovndb-tls-certs\") pod \"c1b76222-36df-45a6-ac9f-edb412c8a2ad\" (UID: \"c1b76222-36df-45a6-ac9f-edb412c8a2ad\") " Feb 02 10:58:23 crc kubenswrapper[4782]: I0202 10:58:23.661600 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1b76222-36df-45a6-ac9f-edb412c8a2ad-internal-tls-certs\") pod \"c1b76222-36df-45a6-ac9f-edb412c8a2ad\" (UID: \"c1b76222-36df-45a6-ac9f-edb412c8a2ad\") " Feb 02 10:58:23 crc kubenswrapper[4782]: I0202 10:58:23.662748 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1b76222-36df-45a6-ac9f-edb412c8a2ad-public-tls-certs\") pod \"c1b76222-36df-45a6-ac9f-edb412c8a2ad\" (UID: \"c1b76222-36df-45a6-ac9f-edb412c8a2ad\") " Feb 02 10:58:23 crc kubenswrapper[4782]: I0202 10:58:23.662796 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1b76222-36df-45a6-ac9f-edb412c8a2ad-combined-ca-bundle\") pod \"c1b76222-36df-45a6-ac9f-edb412c8a2ad\" (UID: \"c1b76222-36df-45a6-ac9f-edb412c8a2ad\") " Feb 02 10:58:23 crc kubenswrapper[4782]: I0202 10:58:23.662875 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7d72m\" (UniqueName: \"kubernetes.io/projected/c1b76222-36df-45a6-ac9f-edb412c8a2ad-kube-api-access-7d72m\") pod \"c1b76222-36df-45a6-ac9f-edb412c8a2ad\" (UID: \"c1b76222-36df-45a6-ac9f-edb412c8a2ad\") " Feb 02 10:58:23 crc kubenswrapper[4782]: I0202 10:58:23.674431 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1b76222-36df-45a6-ac9f-edb412c8a2ad-kube-api-access-7d72m" (OuterVolumeSpecName: "kube-api-access-7d72m") pod "c1b76222-36df-45a6-ac9f-edb412c8a2ad" (UID: "c1b76222-36df-45a6-ac9f-edb412c8a2ad"). InnerVolumeSpecName "kube-api-access-7d72m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:58:23 crc kubenswrapper[4782]: I0202 10:58:23.683979 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1b76222-36df-45a6-ac9f-edb412c8a2ad-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "c1b76222-36df-45a6-ac9f-edb412c8a2ad" (UID: "c1b76222-36df-45a6-ac9f-edb412c8a2ad"). InnerVolumeSpecName "httpd-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:58:23 crc kubenswrapper[4782]: I0202 10:58:23.765118 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7d72m\" (UniqueName: \"kubernetes.io/projected/c1b76222-36df-45a6-ac9f-edb412c8a2ad-kube-api-access-7d72m\") on node \"crc\" DevicePath \"\"" Feb 02 10:58:23 crc kubenswrapper[4782]: I0202 10:58:23.765358 4782 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/c1b76222-36df-45a6-ac9f-edb412c8a2ad-httpd-config\") on node \"crc\" DevicePath \"\"" Feb 02 10:58:23 crc kubenswrapper[4782]: I0202 10:58:23.793019 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1b76222-36df-45a6-ac9f-edb412c8a2ad-config" (OuterVolumeSpecName: "config") pod "c1b76222-36df-45a6-ac9f-edb412c8a2ad" (UID: "c1b76222-36df-45a6-ac9f-edb412c8a2ad"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:58:23 crc kubenswrapper[4782]: I0202 10:58:23.797175 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1b76222-36df-45a6-ac9f-edb412c8a2ad-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c1b76222-36df-45a6-ac9f-edb412c8a2ad" (UID: "c1b76222-36df-45a6-ac9f-edb412c8a2ad"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:58:23 crc kubenswrapper[4782]: I0202 10:58:23.803216 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1b76222-36df-45a6-ac9f-edb412c8a2ad-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "c1b76222-36df-45a6-ac9f-edb412c8a2ad" (UID: "c1b76222-36df-45a6-ac9f-edb412c8a2ad"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:58:23 crc kubenswrapper[4782]: I0202 10:58:23.806932 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1b76222-36df-45a6-ac9f-edb412c8a2ad-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "c1b76222-36df-45a6-ac9f-edb412c8a2ad" (UID: "c1b76222-36df-45a6-ac9f-edb412c8a2ad"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:58:23 crc kubenswrapper[4782]: I0202 10:58:23.816838 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1b76222-36df-45a6-ac9f-edb412c8a2ad-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "c1b76222-36df-45a6-ac9f-edb412c8a2ad" (UID: "c1b76222-36df-45a6-ac9f-edb412c8a2ad"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:58:23 crc kubenswrapper[4782]: I0202 10:58:23.872300 4782 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1b76222-36df-45a6-ac9f-edb412c8a2ad-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 10:58:23 crc kubenswrapper[4782]: I0202 10:58:23.872377 4782 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1b76222-36df-45a6-ac9f-edb412c8a2ad-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 10:58:23 crc kubenswrapper[4782]: I0202 10:58:23.872392 4782 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1b76222-36df-45a6-ac9f-edb412c8a2ad-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 10:58:23 crc kubenswrapper[4782]: I0202 10:58:23.872404 4782 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1b76222-36df-45a6-ac9f-edb412c8a2ad-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 10:58:23 crc kubenswrapper[4782]: I0202 10:58:23.872417 4782 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/c1b76222-36df-45a6-ac9f-edb412c8a2ad-config\") on node \"crc\" DevicePath \"\"" Feb 02 10:58:23 crc kubenswrapper[4782]: I0202 10:58:23.996031 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7b946d459c-pbdmr" Feb 02 10:58:24 crc kubenswrapper[4782]: I0202 10:58:24.076142 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7f6a8ebb-a211-4505-b934-3048a67b2f47-dns-svc\") pod \"7f6a8ebb-a211-4505-b934-3048a67b2f47\" (UID: \"7f6a8ebb-a211-4505-b934-3048a67b2f47\") " Feb 02 10:58:24 crc kubenswrapper[4782]: I0202 10:58:24.076470 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f6a8ebb-a211-4505-b934-3048a67b2f47-config\") pod \"7f6a8ebb-a211-4505-b934-3048a67b2f47\" (UID: \"7f6a8ebb-a211-4505-b934-3048a67b2f47\") " Feb 02 10:58:24 crc kubenswrapper[4782]: I0202 10:58:24.076572 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7f6a8ebb-a211-4505-b934-3048a67b2f47-ovsdbserver-sb\") pod \"7f6a8ebb-a211-4505-b934-3048a67b2f47\" (UID: \"7f6a8ebb-a211-4505-b934-3048a67b2f47\") " Feb 02 10:58:24 crc kubenswrapper[4782]: I0202 10:58:24.076707 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7f6a8ebb-a211-4505-b934-3048a67b2f47-ovsdbserver-nb\") pod \"7f6a8ebb-a211-4505-b934-3048a67b2f47\" (UID: \"7f6a8ebb-a211-4505-b934-3048a67b2f47\") " Feb 02 10:58:24 crc kubenswrapper[4782]: I0202 10:58:24.076807 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rmddp\" (UniqueName: \"kubernetes.io/projected/7f6a8ebb-a211-4505-b934-3048a67b2f47-kube-api-access-rmddp\") pod \"7f6a8ebb-a211-4505-b934-3048a67b2f47\" (UID: \"7f6a8ebb-a211-4505-b934-3048a67b2f47\") " Feb 02 10:58:24 crc kubenswrapper[4782]: I0202 10:58:24.098701 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f6a8ebb-a211-4505-b934-3048a67b2f47-kube-api-access-rmddp" 
(OuterVolumeSpecName: "kube-api-access-rmddp") pod "7f6a8ebb-a211-4505-b934-3048a67b2f47" (UID: "7f6a8ebb-a211-4505-b934-3048a67b2f47"). InnerVolumeSpecName "kube-api-access-rmddp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:58:24 crc kubenswrapper[4782]: I0202 10:58:24.185834 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rmddp\" (UniqueName: \"kubernetes.io/projected/7f6a8ebb-a211-4505-b934-3048a67b2f47-kube-api-access-rmddp\") on node \"crc\" DevicePath \"\"" Feb 02 10:58:24 crc kubenswrapper[4782]: I0202 10:58:24.247368 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f6a8ebb-a211-4505-b934-3048a67b2f47-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "7f6a8ebb-a211-4505-b934-3048a67b2f47" (UID: "7f6a8ebb-a211-4505-b934-3048a67b2f47"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:58:24 crc kubenswrapper[4782]: I0202 10:58:24.266053 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f6a8ebb-a211-4505-b934-3048a67b2f47-config" (OuterVolumeSpecName: "config") pod "7f6a8ebb-a211-4505-b934-3048a67b2f47" (UID: "7f6a8ebb-a211-4505-b934-3048a67b2f47"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:58:24 crc kubenswrapper[4782]: I0202 10:58:24.277827 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f6a8ebb-a211-4505-b934-3048a67b2f47-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "7f6a8ebb-a211-4505-b934-3048a67b2f47" (UID: "7f6a8ebb-a211-4505-b934-3048a67b2f47"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:58:24 crc kubenswrapper[4782]: I0202 10:58:24.288626 4782 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f6a8ebb-a211-4505-b934-3048a67b2f47-config\") on node \"crc\" DevicePath \"\"" Feb 02 10:58:24 crc kubenswrapper[4782]: I0202 10:58:24.288777 4782 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7f6a8ebb-a211-4505-b934-3048a67b2f47-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 02 10:58:24 crc kubenswrapper[4782]: I0202 10:58:24.288788 4782 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7f6a8ebb-a211-4505-b934-3048a67b2f47-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 02 10:58:24 crc kubenswrapper[4782]: I0202 10:58:24.299366 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f6a8ebb-a211-4505-b934-3048a67b2f47-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "7f6a8ebb-a211-4505-b934-3048a67b2f47" (UID: "7f6a8ebb-a211-4505-b934-3048a67b2f47"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:58:24 crc kubenswrapper[4782]: I0202 10:58:24.390953 4782 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7f6a8ebb-a211-4505-b934-3048a67b2f47-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 02 10:58:24 crc kubenswrapper[4782]: I0202 10:58:24.519034 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-77c4d8f8d8-7qmjv" Feb 02 10:58:24 crc kubenswrapper[4782]: I0202 10:58:24.550230 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b946d459c-pbdmr" event={"ID":"7f6a8ebb-a211-4505-b934-3048a67b2f47","Type":"ContainerDied","Data":"d2d57fff99a40c3d971a276c962ad40364a2dc18610c2d3bd9d74bd06dd02f62"} Feb 02 10:58:24 crc kubenswrapper[4782]: I0202 10:58:24.550275 4782 scope.go:117] "RemoveContainer" containerID="57ad28ba710e6e1893b164e98b4a9426687f44b060fefafca3e1d4c41edc3143" Feb 02 10:58:24 crc kubenswrapper[4782]: I0202 10:58:24.550376 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7b946d459c-pbdmr" Feb 02 10:58:24 crc kubenswrapper[4782]: I0202 10:58:24.553602 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-644b87c8cc-7cfbr" event={"ID":"c1b76222-36df-45a6-ac9f-edb412c8a2ad","Type":"ContainerDied","Data":"8889fd515dbda34f10b34a65e145848adfe2d17e55c2e3acb24297eefee67df3"} Feb 02 10:58:24 crc kubenswrapper[4782]: I0202 10:58:24.553691 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-644b87c8cc-7cfbr" Feb 02 10:58:24 crc kubenswrapper[4782]: I0202 10:58:24.584568 4782 scope.go:117] "RemoveContainer" containerID="307a5c49cf6e90bbab2ae7599314e3e57ac09662374b82e043747eec646d2bdd" Feb 02 10:58:24 crc kubenswrapper[4782]: I0202 10:58:24.620664 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7b946d459c-pbdmr"] Feb 02 10:58:24 crc kubenswrapper[4782]: I0202 10:58:24.635283 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7b946d459c-pbdmr"] Feb 02 10:58:24 crc kubenswrapper[4782]: I0202 10:58:24.652261 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-644b87c8cc-7cfbr"] Feb 02 10:58:24 crc kubenswrapper[4782]: I0202 10:58:24.666747 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-644b87c8cc-7cfbr"] Feb 02 10:58:24 crc kubenswrapper[4782]: I0202 10:58:24.680482 4782 scope.go:117] "RemoveContainer" containerID="f08c19bb6d845c3aebb5e1bac413c2ca39e5ea569262e16c1b854d1f98b2646c" Feb 02 10:58:24 crc kubenswrapper[4782]: I0202 10:58:24.715514 4782 scope.go:117] "RemoveContainer" containerID="786fa8a7409c75cc654771de8b93722936d3a222c6348896fdf1e92677c32d53" Feb 02 10:58:24 crc kubenswrapper[4782]: I0202 10:58:24.830622 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f6a8ebb-a211-4505-b934-3048a67b2f47" path="/var/lib/kubelet/pods/7f6a8ebb-a211-4505-b934-3048a67b2f47/volumes" Feb 02 10:58:24 crc kubenswrapper[4782]: I0202 10:58:24.831520 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c1b76222-36df-45a6-ac9f-edb412c8a2ad" path="/var/lib/kubelet/pods/c1b76222-36df-45a6-ac9f-edb412c8a2ad/volumes" Feb 02 10:58:24 crc kubenswrapper[4782]: I0202 10:58:24.947790 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-77c4d8f8d8-7qmjv" Feb 02 10:58:25 
crc kubenswrapper[4782]: I0202 10:58:25.022020 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-5b7797d578-tmg69"] Feb 02 10:58:25 crc kubenswrapper[4782]: I0202 10:58:25.022285 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-5b7797d578-tmg69" podUID="faa1074a-6af5-41a7-bfe0-0dc771e9dbf0" containerName="barbican-api-log" containerID="cri-o://985a045376b8765a9f6e8767fddd038e288b28f20d40fab7634f3c8194dfd573" gracePeriod=30 Feb 02 10:58:25 crc kubenswrapper[4782]: I0202 10:58:25.022550 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-5b7797d578-tmg69" podUID="faa1074a-6af5-41a7-bfe0-0dc771e9dbf0" containerName="barbican-api" containerID="cri-o://de993ad71b2389fa8f527a4a099b49faf994e7a1a4f1e91b9ec465c30000ae3f" gracePeriod=30 Feb 02 10:58:25 crc kubenswrapper[4782]: I0202 10:58:25.382916 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-54577c875b-pcjgd" Feb 02 10:58:25 crc kubenswrapper[4782]: I0202 10:58:25.384735 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-54577c875b-pcjgd" Feb 02 10:58:25 crc kubenswrapper[4782]: I0202 10:58:25.578409 4782 generic.go:334] "Generic (PLEG): container finished" podID="85f31fbf-8fcd-4364-a0a0-f489b3cdca7f" containerID="1005c6933c36048ea8d7266b59a453bf785f8fbd2b74ce215f2b2456d907b65a" exitCode=0 Feb 02 10:58:25 crc kubenswrapper[4782]: I0202 10:58:25.578791 4782 generic.go:334] "Generic (PLEG): container finished" podID="85f31fbf-8fcd-4364-a0a0-f489b3cdca7f" containerID="d8a0b8246e525961c77b2290fcb94427f50f92ca53fb13fcaaac7f8ae8d09e97" exitCode=0 Feb 02 10:58:25 crc kubenswrapper[4782]: I0202 10:58:25.578471 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"85f31fbf-8fcd-4364-a0a0-f489b3cdca7f","Type":"ContainerDied","Data":"1005c6933c36048ea8d7266b59a453bf785f8fbd2b74ce215f2b2456d907b65a"} Feb 02 10:58:25 crc kubenswrapper[4782]: I0202 10:58:25.578891 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"85f31fbf-8fcd-4364-a0a0-f489b3cdca7f","Type":"ContainerDied","Data":"d8a0b8246e525961c77b2290fcb94427f50f92ca53fb13fcaaac7f8ae8d09e97"} Feb 02 10:58:25 crc kubenswrapper[4782]: I0202 10:58:25.583795 4782 generic.go:334] "Generic (PLEG): container finished" podID="faa1074a-6af5-41a7-bfe0-0dc771e9dbf0" containerID="985a045376b8765a9f6e8767fddd038e288b28f20d40fab7634f3c8194dfd573" exitCode=143 Feb 02 10:58:25 crc kubenswrapper[4782]: I0202 10:58:25.583835 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5b7797d578-tmg69" event={"ID":"faa1074a-6af5-41a7-bfe0-0dc771e9dbf0","Type":"ContainerDied","Data":"985a045376b8765a9f6e8767fddd038e288b28f20d40fab7634f3c8194dfd573"} Feb 02 10:58:25 crc kubenswrapper[4782]: I0202 10:58:25.719599 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-555cfb6c68-sntkc"] Feb 02 10:58:25 crc kubenswrapper[4782]: E0202 10:58:25.720007 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1b76222-36df-45a6-ac9f-edb412c8a2ad" containerName="neutron-api" Feb 02 10:58:25 crc kubenswrapper[4782]: I0202 10:58:25.720027 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1b76222-36df-45a6-ac9f-edb412c8a2ad" containerName="neutron-api" Feb 02 10:58:25 crc kubenswrapper[4782]: E0202 10:58:25.720060 4782 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f6a8ebb-a211-4505-b934-3048a67b2f47" containerName="init" Feb 02 10:58:25 crc kubenswrapper[4782]: I0202 10:58:25.720067 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f6a8ebb-a211-4505-b934-3048a67b2f47" containerName="init" Feb 02 10:58:25 crc kubenswrapper[4782]: E0202 10:58:25.720088 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1b76222-36df-45a6-ac9f-edb412c8a2ad" containerName="neutron-httpd" Feb 02 10:58:25 crc kubenswrapper[4782]: I0202 10:58:25.720096 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1b76222-36df-45a6-ac9f-edb412c8a2ad" containerName="neutron-httpd" Feb 02 10:58:25 crc kubenswrapper[4782]: E0202 10:58:25.720109 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f6a8ebb-a211-4505-b934-3048a67b2f47" containerName="dnsmasq-dns" Feb 02 10:58:25 crc kubenswrapper[4782]: I0202 10:58:25.720115 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f6a8ebb-a211-4505-b934-3048a67b2f47" containerName="dnsmasq-dns" Feb 02 10:58:25 crc kubenswrapper[4782]: I0202 10:58:25.720297 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f6a8ebb-a211-4505-b934-3048a67b2f47" containerName="dnsmasq-dns" Feb 02 10:58:25 crc kubenswrapper[4782]: I0202 10:58:25.720310 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1b76222-36df-45a6-ac9f-edb412c8a2ad" containerName="neutron-httpd" Feb 02 10:58:25 crc kubenswrapper[4782]: I0202 10:58:25.720324 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1b76222-36df-45a6-ac9f-edb412c8a2ad" containerName="neutron-api" Feb 02 10:58:25 crc kubenswrapper[4782]: I0202 10:58:25.721353 4782 util.go:30] "No sandbox for pod can be found. 
Feb 02 10:58:25 crc kubenswrapper[4782]: I0202 10:58:25.741848 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-555cfb6c68-sntkc"]
Feb 02 10:58:25 crc kubenswrapper[4782]: I0202 10:58:25.824572 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9040c71d-579d-4f4e-99cf-bb76289b9aa3-scripts\") pod \"placement-555cfb6c68-sntkc\" (UID: \"9040c71d-579d-4f4e-99cf-bb76289b9aa3\") " pod="openstack/placement-555cfb6c68-sntkc"
Feb 02 10:58:25 crc kubenswrapper[4782]: I0202 10:58:25.824625 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9040c71d-579d-4f4e-99cf-bb76289b9aa3-internal-tls-certs\") pod \"placement-555cfb6c68-sntkc\" (UID: \"9040c71d-579d-4f4e-99cf-bb76289b9aa3\") " pod="openstack/placement-555cfb6c68-sntkc"
Feb 02 10:58:25 crc kubenswrapper[4782]: I0202 10:58:25.824674 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9040c71d-579d-4f4e-99cf-bb76289b9aa3-config-data\") pod \"placement-555cfb6c68-sntkc\" (UID: \"9040c71d-579d-4f4e-99cf-bb76289b9aa3\") " pod="openstack/placement-555cfb6c68-sntkc"
Feb 02 10:58:25 crc kubenswrapper[4782]: I0202 10:58:25.824713 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9040c71d-579d-4f4e-99cf-bb76289b9aa3-logs\") pod \"placement-555cfb6c68-sntkc\" (UID: \"9040c71d-579d-4f4e-99cf-bb76289b9aa3\") " pod="openstack/placement-555cfb6c68-sntkc"
Feb 02 10:58:25 crc kubenswrapper[4782]: I0202 10:58:25.824736 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjtv9\" (UniqueName: \"kubernetes.io/projected/9040c71d-579d-4f4e-99cf-bb76289b9aa3-kube-api-access-hjtv9\") pod \"placement-555cfb6c68-sntkc\" (UID: \"9040c71d-579d-4f4e-99cf-bb76289b9aa3\") " pod="openstack/placement-555cfb6c68-sntkc"
Feb 02 10:58:25 crc kubenswrapper[4782]: I0202 10:58:25.824783 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9040c71d-579d-4f4e-99cf-bb76289b9aa3-combined-ca-bundle\") pod \"placement-555cfb6c68-sntkc\" (UID: \"9040c71d-579d-4f4e-99cf-bb76289b9aa3\") " pod="openstack/placement-555cfb6c68-sntkc"
Feb 02 10:58:25 crc kubenswrapper[4782]: I0202 10:58:25.824803 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9040c71d-579d-4f4e-99cf-bb76289b9aa3-public-tls-certs\") pod \"placement-555cfb6c68-sntkc\" (UID: \"9040c71d-579d-4f4e-99cf-bb76289b9aa3\") " pod="openstack/placement-555cfb6c68-sntkc"
Feb 02 10:58:25 crc kubenswrapper[4782]: I0202 10:58:25.872895 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Feb 02 10:58:25 crc kubenswrapper[4782]: I0202 10:58:25.926046 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hjtv9\" (UniqueName: \"kubernetes.io/projected/9040c71d-579d-4f4e-99cf-bb76289b9aa3-kube-api-access-hjtv9\") pod \"placement-555cfb6c68-sntkc\" (UID: \"9040c71d-579d-4f4e-99cf-bb76289b9aa3\") " pod="openstack/placement-555cfb6c68-sntkc"
Feb 02 10:58:25 crc kubenswrapper[4782]: I0202 10:58:25.926158 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9040c71d-579d-4f4e-99cf-bb76289b9aa3-combined-ca-bundle\") pod \"placement-555cfb6c68-sntkc\" (UID: \"9040c71d-579d-4f4e-99cf-bb76289b9aa3\") " pod="openstack/placement-555cfb6c68-sntkc"
Feb 02 10:58:25 crc kubenswrapper[4782]: I0202 10:58:25.926191 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9040c71d-579d-4f4e-99cf-bb76289b9aa3-public-tls-certs\") pod \"placement-555cfb6c68-sntkc\" (UID: \"9040c71d-579d-4f4e-99cf-bb76289b9aa3\") " pod="openstack/placement-555cfb6c68-sntkc"
Feb 02 10:58:25 crc kubenswrapper[4782]: I0202 10:58:25.926278 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9040c71d-579d-4f4e-99cf-bb76289b9aa3-scripts\") pod \"placement-555cfb6c68-sntkc\" (UID: \"9040c71d-579d-4f4e-99cf-bb76289b9aa3\") " pod="openstack/placement-555cfb6c68-sntkc"
Feb 02 10:58:25 crc kubenswrapper[4782]: I0202 10:58:25.926305 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9040c71d-579d-4f4e-99cf-bb76289b9aa3-internal-tls-certs\") pod \"placement-555cfb6c68-sntkc\" (UID: \"9040c71d-579d-4f4e-99cf-bb76289b9aa3\") " pod="openstack/placement-555cfb6c68-sntkc"
Feb 02 10:58:25 crc kubenswrapper[4782]: I0202 10:58:25.926334 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9040c71d-579d-4f4e-99cf-bb76289b9aa3-config-data\") pod \"placement-555cfb6c68-sntkc\" (UID: \"9040c71d-579d-4f4e-99cf-bb76289b9aa3\") " pod="openstack/placement-555cfb6c68-sntkc"
Feb 02 10:58:25 crc kubenswrapper[4782]: I0202 10:58:25.926382 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9040c71d-579d-4f4e-99cf-bb76289b9aa3-logs\") pod \"placement-555cfb6c68-sntkc\" (UID: \"9040c71d-579d-4f4e-99cf-bb76289b9aa3\") " pod="openstack/placement-555cfb6c68-sntkc"
Feb 02 10:58:25 crc kubenswrapper[4782]: I0202 10:58:25.931058 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9040c71d-579d-4f4e-99cf-bb76289b9aa3-logs\") pod \"placement-555cfb6c68-sntkc\" (UID: \"9040c71d-579d-4f4e-99cf-bb76289b9aa3\") " pod="openstack/placement-555cfb6c68-sntkc"
Feb 02 10:58:25 crc kubenswrapper[4782]: I0202 10:58:25.941582 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9040c71d-579d-4f4e-99cf-bb76289b9aa3-public-tls-certs\") pod \"placement-555cfb6c68-sntkc\" (UID: \"9040c71d-579d-4f4e-99cf-bb76289b9aa3\") " pod="openstack/placement-555cfb6c68-sntkc"
Feb 02 10:58:25 crc kubenswrapper[4782]: I0202 10:58:25.946197 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9040c71d-579d-4f4e-99cf-bb76289b9aa3-scripts\") pod \"placement-555cfb6c68-sntkc\" (UID: \"9040c71d-579d-4f4e-99cf-bb76289b9aa3\") " pod="openstack/placement-555cfb6c68-sntkc"
Feb 02 10:58:25 crc kubenswrapper[4782]: I0202 10:58:25.967522 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9040c71d-579d-4f4e-99cf-bb76289b9aa3-config-data\") pod \"placement-555cfb6c68-sntkc\" (UID: \"9040c71d-579d-4f4e-99cf-bb76289b9aa3\") " pod="openstack/placement-555cfb6c68-sntkc"
Feb 02 10:58:25 crc kubenswrapper[4782]: I0202 10:58:25.967534 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9040c71d-579d-4f4e-99cf-bb76289b9aa3-combined-ca-bundle\") pod \"placement-555cfb6c68-sntkc\" (UID: \"9040c71d-579d-4f4e-99cf-bb76289b9aa3\") " pod="openstack/placement-555cfb6c68-sntkc"
Feb 02 10:58:25 crc kubenswrapper[4782]: I0202 10:58:25.967947 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9040c71d-579d-4f4e-99cf-bb76289b9aa3-internal-tls-certs\") pod \"placement-555cfb6c68-sntkc\" (UID: \"9040c71d-579d-4f4e-99cf-bb76289b9aa3\") " pod="openstack/placement-555cfb6c68-sntkc"
Feb 02 10:58:25 crc kubenswrapper[4782]: I0202 10:58:25.981344 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hjtv9\" (UniqueName: \"kubernetes.io/projected/9040c71d-579d-4f4e-99cf-bb76289b9aa3-kube-api-access-hjtv9\") pod \"placement-555cfb6c68-sntkc\" (UID: \"9040c71d-579d-4f4e-99cf-bb76289b9aa3\") " pod="openstack/placement-555cfb6c68-sntkc"
Feb 02 10:58:26 crc kubenswrapper[4782]: I0202 10:58:26.028392 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/85f31fbf-8fcd-4364-a0a0-f489b3cdca7f-etc-machine-id\") pod \"85f31fbf-8fcd-4364-a0a0-f489b3cdca7f\" (UID: \"85f31fbf-8fcd-4364-a0a0-f489b3cdca7f\") "
Feb 02 10:58:26 crc kubenswrapper[4782]: I0202 10:58:26.028459 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85f31fbf-8fcd-4364-a0a0-f489b3cdca7f-combined-ca-bundle\") pod \"85f31fbf-8fcd-4364-a0a0-f489b3cdca7f\" (UID: \"85f31fbf-8fcd-4364-a0a0-f489b3cdca7f\") "
Feb 02 10:58:26 crc kubenswrapper[4782]: I0202 10:58:26.028535 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/85f31fbf-8fcd-4364-a0a0-f489b3cdca7f-config-data-custom\") pod \"85f31fbf-8fcd-4364-a0a0-f489b3cdca7f\" (UID: \"85f31fbf-8fcd-4364-a0a0-f489b3cdca7f\") "
Feb 02 10:58:26 crc kubenswrapper[4782]: I0202 10:58:26.028605 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htwrb\" (UniqueName: \"kubernetes.io/projected/85f31fbf-8fcd-4364-a0a0-f489b3cdca7f-kube-api-access-htwrb\") pod \"85f31fbf-8fcd-4364-a0a0-f489b3cdca7f\" (UID: \"85f31fbf-8fcd-4364-a0a0-f489b3cdca7f\") "
Feb 02 10:58:26 crc kubenswrapper[4782]: I0202 10:58:26.028700 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/85f31fbf-8fcd-4364-a0a0-f489b3cdca7f-scripts\") pod \"85f31fbf-8fcd-4364-a0a0-f489b3cdca7f\" (UID: \"85f31fbf-8fcd-4364-a0a0-f489b3cdca7f\") "
Feb 02 10:58:26 crc kubenswrapper[4782]: I0202 10:58:26.028721 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85f31fbf-8fcd-4364-a0a0-f489b3cdca7f-config-data\") pod \"85f31fbf-8fcd-4364-a0a0-f489b3cdca7f\" (UID: \"85f31fbf-8fcd-4364-a0a0-f489b3cdca7f\") "
Feb 02 10:58:26 crc kubenswrapper[4782]: I0202 10:58:26.036130 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/85f31fbf-8fcd-4364-a0a0-f489b3cdca7f-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "85f31fbf-8fcd-4364-a0a0-f489b3cdca7f" (UID: "85f31fbf-8fcd-4364-a0a0-f489b3cdca7f"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 02 10:58:26 crc kubenswrapper[4782]: I0202 10:58:26.038779 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85f31fbf-8fcd-4364-a0a0-f489b3cdca7f-scripts" (OuterVolumeSpecName: "scripts") pod "85f31fbf-8fcd-4364-a0a0-f489b3cdca7f" (UID: "85f31fbf-8fcd-4364-a0a0-f489b3cdca7f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 10:58:26 crc kubenswrapper[4782]: I0202 10:58:26.041030 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85f31fbf-8fcd-4364-a0a0-f489b3cdca7f-kube-api-access-htwrb" (OuterVolumeSpecName: "kube-api-access-htwrb") pod "85f31fbf-8fcd-4364-a0a0-f489b3cdca7f" (UID: "85f31fbf-8fcd-4364-a0a0-f489b3cdca7f"). InnerVolumeSpecName "kube-api-access-htwrb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 10:58:26 crc kubenswrapper[4782]: I0202 10:58:26.049186 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85f31fbf-8fcd-4364-a0a0-f489b3cdca7f-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "85f31fbf-8fcd-4364-a0a0-f489b3cdca7f" (UID: "85f31fbf-8fcd-4364-a0a0-f489b3cdca7f"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 10:58:26 crc kubenswrapper[4782]: I0202 10:58:26.063152 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-555cfb6c68-sntkc"
Feb 02 10:58:26 crc kubenswrapper[4782]: I0202 10:58:26.126122 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85f31fbf-8fcd-4364-a0a0-f489b3cdca7f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "85f31fbf-8fcd-4364-a0a0-f489b3cdca7f" (UID: "85f31fbf-8fcd-4364-a0a0-f489b3cdca7f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:58:26 crc kubenswrapper[4782]: I0202 10:58:26.134596 4782 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85f31fbf-8fcd-4364-a0a0-f489b3cdca7f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 10:58:26 crc kubenswrapper[4782]: I0202 10:58:26.134629 4782 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/85f31fbf-8fcd-4364-a0a0-f489b3cdca7f-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 02 10:58:26 crc kubenswrapper[4782]: I0202 10:58:26.134650 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htwrb\" (UniqueName: \"kubernetes.io/projected/85f31fbf-8fcd-4364-a0a0-f489b3cdca7f-kube-api-access-htwrb\") on node \"crc\" DevicePath \"\"" Feb 02 10:58:26 crc kubenswrapper[4782]: I0202 10:58:26.134659 4782 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/85f31fbf-8fcd-4364-a0a0-f489b3cdca7f-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 10:58:26 crc kubenswrapper[4782]: I0202 10:58:26.134668 4782 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/85f31fbf-8fcd-4364-a0a0-f489b3cdca7f-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 02 10:58:26 crc kubenswrapper[4782]: I0202 10:58:26.605770 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"85f31fbf-8fcd-4364-a0a0-f489b3cdca7f","Type":"ContainerDied","Data":"5a7279bfcc6fbedda5247577044693b3e9c719e1402b5cf6df02c6d805661e2a"} Feb 02 10:58:26 crc kubenswrapper[4782]: I0202 10:58:26.605809 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 02 10:58:26 crc kubenswrapper[4782]: I0202 10:58:26.605839 4782 scope.go:117] "RemoveContainer" containerID="1005c6933c36048ea8d7266b59a453bf785f8fbd2b74ce215f2b2456d907b65a" Feb 02 10:58:26 crc kubenswrapper[4782]: I0202 10:58:26.628761 4782 scope.go:117] "RemoveContainer" containerID="d8a0b8246e525961c77b2290fcb94427f50f92ca53fb13fcaaac7f8ae8d09e97" Feb 02 10:58:27 crc kubenswrapper[4782]: I0202 10:58:27.024172 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85f31fbf-8fcd-4364-a0a0-f489b3cdca7f-config-data" (OuterVolumeSpecName: "config-data") pod "85f31fbf-8fcd-4364-a0a0-f489b3cdca7f" (UID: "85f31fbf-8fcd-4364-a0a0-f489b3cdca7f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:58:27 crc kubenswrapper[4782]: I0202 10:58:27.050661 4782 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85f31fbf-8fcd-4364-a0a0-f489b3cdca7f-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 10:58:27 crc kubenswrapper[4782]: I0202 10:58:27.252033 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 02 10:58:27 crc kubenswrapper[4782]: I0202 10:58:27.266686 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 02 10:58:27 crc kubenswrapper[4782]: I0202 10:58:27.331779 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 02 10:58:27 crc kubenswrapper[4782]: E0202 10:58:27.332139 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85f31fbf-8fcd-4364-a0a0-f489b3cdca7f" containerName="probe" Feb 02 10:58:27 crc kubenswrapper[4782]: I0202 10:58:27.332150 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="85f31fbf-8fcd-4364-a0a0-f489b3cdca7f" containerName="probe" Feb 02 10:58:27 crc kubenswrapper[4782]: E0202 10:58:27.332166 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85f31fbf-8fcd-4364-a0a0-f489b3cdca7f" containerName="cinder-scheduler" Feb 02 10:58:27 crc kubenswrapper[4782]: I0202 10:58:27.332172 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="85f31fbf-8fcd-4364-a0a0-f489b3cdca7f" containerName="cinder-scheduler" Feb 02 10:58:27 crc kubenswrapper[4782]: I0202 10:58:27.332315 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="85f31fbf-8fcd-4364-a0a0-f489b3cdca7f" containerName="probe" Feb 02 10:58:27 crc kubenswrapper[4782]: I0202 10:58:27.332329 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="85f31fbf-8fcd-4364-a0a0-f489b3cdca7f" containerName="cinder-scheduler" Feb 02 10:58:27 crc kubenswrapper[4782]: I0202 10:58:27.333226 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 02 10:58:27 crc kubenswrapper[4782]: I0202 10:58:27.340394 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 02 10:58:27 crc kubenswrapper[4782]: I0202 10:58:27.342536 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 02 10:58:27 crc kubenswrapper[4782]: I0202 10:58:27.356902 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c35672ba-9e13-4e6d-945a-74b4cf3ee0ff-scripts\") pod \"cinder-scheduler-0\" (UID: \"c35672ba-9e13-4e6d-945a-74b4cf3ee0ff\") " pod="openstack/cinder-scheduler-0" Feb 02 10:58:27 crc kubenswrapper[4782]: I0202 10:58:27.356950 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c35672ba-9e13-4e6d-945a-74b4cf3ee0ff-config-data\") pod \"cinder-scheduler-0\" (UID: \"c35672ba-9e13-4e6d-945a-74b4cf3ee0ff\") " pod="openstack/cinder-scheduler-0" Feb 02 10:58:27 crc kubenswrapper[4782]: I0202 10:58:27.357004 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c35672ba-9e13-4e6d-945a-74b4cf3ee0ff-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"c35672ba-9e13-4e6d-945a-74b4cf3ee0ff\") " pod="openstack/cinder-scheduler-0" Feb 02 10:58:27 crc kubenswrapper[4782]: I0202 10:58:27.357029 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-llrg7\" (UniqueName: \"kubernetes.io/projected/c35672ba-9e13-4e6d-945a-74b4cf3ee0ff-kube-api-access-llrg7\") pod \"cinder-scheduler-0\" (UID: \"c35672ba-9e13-4e6d-945a-74b4cf3ee0ff\") " pod="openstack/cinder-scheduler-0" Feb 02 10:58:27 crc kubenswrapper[4782]: I0202 10:58:27.357055 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c35672ba-9e13-4e6d-945a-74b4cf3ee0ff-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"c35672ba-9e13-4e6d-945a-74b4cf3ee0ff\") " pod="openstack/cinder-scheduler-0" Feb 02 10:58:27 crc kubenswrapper[4782]: I0202 10:58:27.357119 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c35672ba-9e13-4e6d-945a-74b4cf3ee0ff-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"c35672ba-9e13-4e6d-945a-74b4cf3ee0ff\") " pod="openstack/cinder-scheduler-0" Feb 02 10:58:27 crc kubenswrapper[4782]: I0202 10:58:27.458176 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c35672ba-9e13-4e6d-945a-74b4cf3ee0ff-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"c35672ba-9e13-4e6d-945a-74b4cf3ee0ff\") " pod="openstack/cinder-scheduler-0" Feb 02 10:58:27 crc kubenswrapper[4782]: I0202 10:58:27.458228 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c35672ba-9e13-4e6d-945a-74b4cf3ee0ff-scripts\") pod \"cinder-scheduler-0\" (UID: \"c35672ba-9e13-4e6d-945a-74b4cf3ee0ff\") " pod="openstack/cinder-scheduler-0" Feb 02 10:58:27 crc kubenswrapper[4782]: I0202 10:58:27.458259 4782 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c35672ba-9e13-4e6d-945a-74b4cf3ee0ff-config-data\") pod \"cinder-scheduler-0\" (UID: \"c35672ba-9e13-4e6d-945a-74b4cf3ee0ff\") " pod="openstack/cinder-scheduler-0" Feb 02 10:58:27 crc kubenswrapper[4782]: I0202 10:58:27.458303 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c35672ba-9e13-4e6d-945a-74b4cf3ee0ff-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"c35672ba-9e13-4e6d-945a-74b4cf3ee0ff\") " pod="openstack/cinder-scheduler-0" Feb 02 10:58:27 crc kubenswrapper[4782]: I0202 10:58:27.458340 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-llrg7\" (UniqueName: \"kubernetes.io/projected/c35672ba-9e13-4e6d-945a-74b4cf3ee0ff-kube-api-access-llrg7\") pod \"cinder-scheduler-0\" (UID: \"c35672ba-9e13-4e6d-945a-74b4cf3ee0ff\") " pod="openstack/cinder-scheduler-0" Feb 02 10:58:27 crc kubenswrapper[4782]: I0202 10:58:27.458373 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c35672ba-9e13-4e6d-945a-74b4cf3ee0ff-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"c35672ba-9e13-4e6d-945a-74b4cf3ee0ff\") " pod="openstack/cinder-scheduler-0" Feb 02 10:58:27 crc kubenswrapper[4782]: I0202 10:58:27.460160 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c35672ba-9e13-4e6d-945a-74b4cf3ee0ff-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"c35672ba-9e13-4e6d-945a-74b4cf3ee0ff\") " pod="openstack/cinder-scheduler-0" Feb 02 10:58:27 crc kubenswrapper[4782]: I0202 10:58:27.467080 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c35672ba-9e13-4e6d-945a-74b4cf3ee0ff-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"c35672ba-9e13-4e6d-945a-74b4cf3ee0ff\") " pod="openstack/cinder-scheduler-0" Feb 02 10:58:27 crc kubenswrapper[4782]: I0202 10:58:27.471386 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c35672ba-9e13-4e6d-945a-74b4cf3ee0ff-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"c35672ba-9e13-4e6d-945a-74b4cf3ee0ff\") " pod="openstack/cinder-scheduler-0" Feb 02 10:58:27 crc kubenswrapper[4782]: I0202 10:58:27.471839 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c35672ba-9e13-4e6d-945a-74b4cf3ee0ff-scripts\") pod \"cinder-scheduler-0\" (UID: \"c35672ba-9e13-4e6d-945a-74b4cf3ee0ff\") " pod="openstack/cinder-scheduler-0" Feb 02 10:58:27 crc kubenswrapper[4782]: I0202 10:58:27.477386 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c35672ba-9e13-4e6d-945a-74b4cf3ee0ff-config-data\") pod \"cinder-scheduler-0\" (UID: \"c35672ba-9e13-4e6d-945a-74b4cf3ee0ff\") " pod="openstack/cinder-scheduler-0" Feb 02 10:58:27 crc kubenswrapper[4782]: I0202 10:58:27.482309 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-llrg7\" (UniqueName: \"kubernetes.io/projected/c35672ba-9e13-4e6d-945a-74b4cf3ee0ff-kube-api-access-llrg7\") pod \"cinder-scheduler-0\" (UID: \"c35672ba-9e13-4e6d-945a-74b4cf3ee0ff\") " 
pod="openstack/cinder-scheduler-0" Feb 02 10:58:27 crc kubenswrapper[4782]: I0202 10:58:27.532967 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-555cfb6c68-sntkc"] Feb 02 10:58:27 crc kubenswrapper[4782]: I0202 10:58:27.632279 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-555cfb6c68-sntkc" event={"ID":"9040c71d-579d-4f4e-99cf-bb76289b9aa3","Type":"ContainerStarted","Data":"a0be2b1a93ee53e9d262edc76469a71cac4773ff8420a23dfd775e046a7d0049"} Feb 02 10:58:27 crc kubenswrapper[4782]: I0202 10:58:27.672922 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 02 10:58:28 crc kubenswrapper[4782]: I0202 10:58:28.165955 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 02 10:58:28 crc kubenswrapper[4782]: I0202 10:58:28.429936 4782 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-5b7797d578-tmg69" podUID="faa1074a-6af5-41a7-bfe0-0dc771e9dbf0" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.147:9311/healthcheck\": dial tcp 10.217.0.147:9311: connect: connection refused" Feb 02 10:58:28 crc kubenswrapper[4782]: I0202 10:58:28.430190 4782 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-5b7797d578-tmg69" podUID="faa1074a-6af5-41a7-bfe0-0dc771e9dbf0" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.147:9311/healthcheck\": dial tcp 10.217.0.147:9311: connect: connection refused" Feb 02 10:58:28 crc kubenswrapper[4782]: I0202 10:58:28.652335 4782 generic.go:334] "Generic (PLEG): container finished" podID="faa1074a-6af5-41a7-bfe0-0dc771e9dbf0" containerID="de993ad71b2389fa8f527a4a099b49faf994e7a1a4f1e91b9ec465c30000ae3f" exitCode=0 Feb 02 10:58:28 crc kubenswrapper[4782]: I0202 10:58:28.652440 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5b7797d578-tmg69" event={"ID":"faa1074a-6af5-41a7-bfe0-0dc771e9dbf0","Type":"ContainerDied","Data":"de993ad71b2389fa8f527a4a099b49faf994e7a1a4f1e91b9ec465c30000ae3f"} Feb 02 10:58:28 crc kubenswrapper[4782]: I0202 10:58:28.693041 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-555cfb6c68-sntkc" event={"ID":"9040c71d-579d-4f4e-99cf-bb76289b9aa3","Type":"ContainerStarted","Data":"35761f96fdfa68040ac16f88eb7bb841866041d408860331272083c2682a2947"} Feb 02 10:58:28 crc kubenswrapper[4782]: I0202 10:58:28.693118 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-555cfb6c68-sntkc" event={"ID":"9040c71d-579d-4f4e-99cf-bb76289b9aa3","Type":"ContainerStarted","Data":"25edc0c259aa7a2af7d10f2cfb199e9dba5d3f9e464ed5556e8c20ca05526a89"} Feb 02 10:58:28 crc kubenswrapper[4782]: I0202 10:58:28.696139 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-555cfb6c68-sntkc" Feb 02 10:58:28 crc kubenswrapper[4782]: I0202 10:58:28.696205 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-555cfb6c68-sntkc" Feb 02 10:58:28 crc kubenswrapper[4782]: I0202 10:58:28.722429 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c35672ba-9e13-4e6d-945a-74b4cf3ee0ff","Type":"ContainerStarted","Data":"31cec05d225286ce8a6a15663554bcab5d2740787f3c678706d66aced7845935"} Feb 02 10:58:28 crc kubenswrapper[4782]: I0202 10:58:28.726486 4782 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="openstack/placement-555cfb6c68-sntkc" podStartSLOduration=3.7264637350000003 podStartE2EDuration="3.726463735s" podCreationTimestamp="2026-02-02 10:58:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:58:28.713946686 +0000 UTC m=+1188.598139392" watchObservedRunningTime="2026-02-02 10:58:28.726463735 +0000 UTC m=+1188.610656451" Feb 02 10:58:28 crc kubenswrapper[4782]: I0202 10:58:28.854737 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="85f31fbf-8fcd-4364-a0a0-f489b3cdca7f" path="/var/lib/kubelet/pods/85f31fbf-8fcd-4364-a0a0-f489b3cdca7f/volumes" Feb 02 10:58:28 crc kubenswrapper[4782]: I0202 10:58:28.919160 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5b7797d578-tmg69" Feb 02 10:58:28 crc kubenswrapper[4782]: I0202 10:58:28.995670 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/faa1074a-6af5-41a7-bfe0-0dc771e9dbf0-logs\") pod \"faa1074a-6af5-41a7-bfe0-0dc771e9dbf0\" (UID: \"faa1074a-6af5-41a7-bfe0-0dc771e9dbf0\") " Feb 02 10:58:28 crc kubenswrapper[4782]: I0202 10:58:28.995911 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/faa1074a-6af5-41a7-bfe0-0dc771e9dbf0-config-data\") pod \"faa1074a-6af5-41a7-bfe0-0dc771e9dbf0\" (UID: \"faa1074a-6af5-41a7-bfe0-0dc771e9dbf0\") " Feb 02 10:58:28 crc kubenswrapper[4782]: I0202 10:58:28.995984 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/faa1074a-6af5-41a7-bfe0-0dc771e9dbf0-config-data-custom\") pod \"faa1074a-6af5-41a7-bfe0-0dc771e9dbf0\" (UID: \"faa1074a-6af5-41a7-bfe0-0dc771e9dbf0\") " Feb 02 10:58:28 crc kubenswrapper[4782]: I0202 10:58:28.996038 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xkkzp\" (UniqueName: \"kubernetes.io/projected/faa1074a-6af5-41a7-bfe0-0dc771e9dbf0-kube-api-access-xkkzp\") pod \"faa1074a-6af5-41a7-bfe0-0dc771e9dbf0\" (UID: \"faa1074a-6af5-41a7-bfe0-0dc771e9dbf0\") " Feb 02 10:58:28 crc kubenswrapper[4782]: I0202 10:58:28.996095 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/faa1074a-6af5-41a7-bfe0-0dc771e9dbf0-combined-ca-bundle\") pod \"faa1074a-6af5-41a7-bfe0-0dc771e9dbf0\" (UID: \"faa1074a-6af5-41a7-bfe0-0dc771e9dbf0\") " Feb 02 10:58:28 crc kubenswrapper[4782]: I0202 10:58:28.996185 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/faa1074a-6af5-41a7-bfe0-0dc771e9dbf0-logs" (OuterVolumeSpecName: "logs") pod "faa1074a-6af5-41a7-bfe0-0dc771e9dbf0" (UID: "faa1074a-6af5-41a7-bfe0-0dc771e9dbf0"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:58:28 crc kubenswrapper[4782]: I0202 10:58:28.996476 4782 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/faa1074a-6af5-41a7-bfe0-0dc771e9dbf0-logs\") on node \"crc\" DevicePath \"\"" Feb 02 10:58:29 crc kubenswrapper[4782]: I0202 10:58:29.016928 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/faa1074a-6af5-41a7-bfe0-0dc771e9dbf0-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "faa1074a-6af5-41a7-bfe0-0dc771e9dbf0" (UID: "faa1074a-6af5-41a7-bfe0-0dc771e9dbf0"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:58:29 crc kubenswrapper[4782]: I0202 10:58:29.023406 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/faa1074a-6af5-41a7-bfe0-0dc771e9dbf0-kube-api-access-xkkzp" (OuterVolumeSpecName: "kube-api-access-xkkzp") pod "faa1074a-6af5-41a7-bfe0-0dc771e9dbf0" (UID: "faa1074a-6af5-41a7-bfe0-0dc771e9dbf0"). InnerVolumeSpecName "kube-api-access-xkkzp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:58:29 crc kubenswrapper[4782]: I0202 10:58:29.038910 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/faa1074a-6af5-41a7-bfe0-0dc771e9dbf0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "faa1074a-6af5-41a7-bfe0-0dc771e9dbf0" (UID: "faa1074a-6af5-41a7-bfe0-0dc771e9dbf0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:58:29 crc kubenswrapper[4782]: I0202 10:58:29.068826 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/faa1074a-6af5-41a7-bfe0-0dc771e9dbf0-config-data" (OuterVolumeSpecName: "config-data") pod "faa1074a-6af5-41a7-bfe0-0dc771e9dbf0" (UID: "faa1074a-6af5-41a7-bfe0-0dc771e9dbf0"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:58:29 crc kubenswrapper[4782]: I0202 10:58:29.099631 4782 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/faa1074a-6af5-41a7-bfe0-0dc771e9dbf0-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 10:58:29 crc kubenswrapper[4782]: I0202 10:58:29.099685 4782 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/faa1074a-6af5-41a7-bfe0-0dc771e9dbf0-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 02 10:58:29 crc kubenswrapper[4782]: I0202 10:58:29.099700 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xkkzp\" (UniqueName: \"kubernetes.io/projected/faa1074a-6af5-41a7-bfe0-0dc771e9dbf0-kube-api-access-xkkzp\") on node \"crc\" DevicePath \"\"" Feb 02 10:58:29 crc kubenswrapper[4782]: I0202 10:58:29.099709 4782 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/faa1074a-6af5-41a7-bfe0-0dc771e9dbf0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 10:58:29 crc kubenswrapper[4782]: I0202 10:58:29.734653 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c35672ba-9e13-4e6d-945a-74b4cf3ee0ff","Type":"ContainerStarted","Data":"b2f6e262f9c7877d00519b4042ce035463f728ae3abc96e71e36bb44e8a6796e"} Feb 02 10:58:29 crc kubenswrapper[4782]: I0202 10:58:29.734694 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c35672ba-9e13-4e6d-945a-74b4cf3ee0ff","Type":"ContainerStarted","Data":"29b316a5f1a17523790f899734faffe6f0ffdfd126e485f5490c426ba69f458f"} Feb 02 10:58:29 crc kubenswrapper[4782]: I0202 10:58:29.739591 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5b7797d578-tmg69" event={"ID":"faa1074a-6af5-41a7-bfe0-0dc771e9dbf0","Type":"ContainerDied","Data":"486044541d5c070266883b9d8c5a598bb41438b6bc2f68afedb4cd643ff3c9ee"} Feb 02 10:58:29 crc kubenswrapper[4782]: I0202 10:58:29.739660 4782 scope.go:117] "RemoveContainer" containerID="de993ad71b2389fa8f527a4a099b49faf994e7a1a4f1e91b9ec465c30000ae3f" Feb 02 10:58:29 crc kubenswrapper[4782]: I0202 10:58:29.739859 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-5b7797d578-tmg69" Feb 02 10:58:29 crc kubenswrapper[4782]: I0202 10:58:29.761608 4782 scope.go:117] "RemoveContainer" containerID="985a045376b8765a9f6e8767fddd038e288b28f20d40fab7634f3c8194dfd573" Feb 02 10:58:29 crc kubenswrapper[4782]: I0202 10:58:29.771937 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=2.771919654 podStartE2EDuration="2.771919654s" podCreationTimestamp="2026-02-02 10:58:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:58:29.769634029 +0000 UTC m=+1189.653826735" watchObservedRunningTime="2026-02-02 10:58:29.771919654 +0000 UTC m=+1189.656112370" Feb 02 10:58:29 crc kubenswrapper[4782]: I0202 10:58:29.793164 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-5b7797d578-tmg69"] Feb 02 10:58:29 crc kubenswrapper[4782]: I0202 10:58:29.799190 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-5b7797d578-tmg69"] Feb 02 10:58:30 crc kubenswrapper[4782]: I0202 10:58:30.342851 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-79d66b847-whsks" Feb 02 10:58:30 crc kubenswrapper[4782]: I0202 10:58:30.832385 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="faa1074a-6af5-41a7-bfe0-0dc771e9dbf0" path="/var/lib/kubelet/pods/faa1074a-6af5-41a7-bfe0-0dc771e9dbf0/volumes" Feb 02 10:58:31 crc kubenswrapper[4782]: I0202 10:58:31.434712 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Feb 02 10:58:32 crc kubenswrapper[4782]: I0202 10:58:32.673814 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 02 10:58:35 crc kubenswrapper[4782]: I0202 10:58:35.087195 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Feb 02 10:58:35 crc kubenswrapper[4782]: E0202 10:58:35.087920 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="faa1074a-6af5-41a7-bfe0-0dc771e9dbf0" containerName="barbican-api-log" Feb 02 10:58:35 crc kubenswrapper[4782]: I0202 10:58:35.087941 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="faa1074a-6af5-41a7-bfe0-0dc771e9dbf0" containerName="barbican-api-log" Feb 02 10:58:35 crc kubenswrapper[4782]: E0202 10:58:35.087958 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="faa1074a-6af5-41a7-bfe0-0dc771e9dbf0" containerName="barbican-api" Feb 02 10:58:35 crc kubenswrapper[4782]: I0202 10:58:35.087967 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="faa1074a-6af5-41a7-bfe0-0dc771e9dbf0" containerName="barbican-api" Feb 02 10:58:35 crc kubenswrapper[4782]: I0202 10:58:35.088173 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="faa1074a-6af5-41a7-bfe0-0dc771e9dbf0" containerName="barbican-api-log" Feb 02 10:58:35 crc kubenswrapper[4782]: I0202 10:58:35.088187 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="faa1074a-6af5-41a7-bfe0-0dc771e9dbf0" containerName="barbican-api" Feb 02 10:58:35 crc kubenswrapper[4782]: I0202 10:58:35.088874 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 02 10:58:35 crc kubenswrapper[4782]: I0202 10:58:35.097147 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 02 10:58:35 crc kubenswrapper[4782]: I0202 10:58:35.097574 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Feb 02 10:58:35 crc kubenswrapper[4782]: I0202 10:58:35.098215 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Feb 02 10:58:35 crc kubenswrapper[4782]: I0202 10:58:35.109788 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-92ndr" Feb 02 10:58:35 crc kubenswrapper[4782]: I0202 10:58:35.204819 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/7ed19b68-33c0-45b1-acbc-b6e9def4e565-openstack-config\") pod \"openstackclient\" (UID: \"7ed19b68-33c0-45b1-acbc-b6e9def4e565\") " pod="openstack/openstackclient" Feb 02 10:58:35 crc kubenswrapper[4782]: I0202 10:58:35.204907 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rzhcr\" (UniqueName: \"kubernetes.io/projected/7ed19b68-33c0-45b1-acbc-b6e9def4e565-kube-api-access-rzhcr\") pod \"openstackclient\" (UID: \"7ed19b68-33c0-45b1-acbc-b6e9def4e565\") " pod="openstack/openstackclient" Feb 02 10:58:35 crc kubenswrapper[4782]: I0202 10:58:35.204938 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ed19b68-33c0-45b1-acbc-b6e9def4e565-combined-ca-bundle\") pod \"openstackclient\" (UID: \"7ed19b68-33c0-45b1-acbc-b6e9def4e565\") " pod="openstack/openstackclient" Feb 02 10:58:35 crc kubenswrapper[4782]: I0202 10:58:35.205127 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/7ed19b68-33c0-45b1-acbc-b6e9def4e565-openstack-config-secret\") pod \"openstackclient\" (UID: \"7ed19b68-33c0-45b1-acbc-b6e9def4e565\") " pod="openstack/openstackclient" Feb 02 10:58:35 crc kubenswrapper[4782]: I0202 10:58:35.306303 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/7ed19b68-33c0-45b1-acbc-b6e9def4e565-openstack-config-secret\") pod \"openstackclient\" (UID: \"7ed19b68-33c0-45b1-acbc-b6e9def4e565\") " pod="openstack/openstackclient" Feb 02 10:58:35 crc kubenswrapper[4782]: I0202 10:58:35.306419 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/7ed19b68-33c0-45b1-acbc-b6e9def4e565-openstack-config\") pod \"openstackclient\" (UID: \"7ed19b68-33c0-45b1-acbc-b6e9def4e565\") " pod="openstack/openstackclient" Feb 02 10:58:35 crc kubenswrapper[4782]: I0202 10:58:35.306462 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rzhcr\" (UniqueName: \"kubernetes.io/projected/7ed19b68-33c0-45b1-acbc-b6e9def4e565-kube-api-access-rzhcr\") pod \"openstackclient\" (UID: \"7ed19b68-33c0-45b1-acbc-b6e9def4e565\") " pod="openstack/openstackclient" Feb 02 10:58:35 crc kubenswrapper[4782]: I0202 10:58:35.306484 4782 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ed19b68-33c0-45b1-acbc-b6e9def4e565-combined-ca-bundle\") pod \"openstackclient\" (UID: \"7ed19b68-33c0-45b1-acbc-b6e9def4e565\") " pod="openstack/openstackclient" Feb 02 10:58:35 crc kubenswrapper[4782]: I0202 10:58:35.307556 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/7ed19b68-33c0-45b1-acbc-b6e9def4e565-openstack-config\") pod \"openstackclient\" (UID: \"7ed19b68-33c0-45b1-acbc-b6e9def4e565\") " pod="openstack/openstackclient" Feb 02 10:58:35 crc kubenswrapper[4782]: I0202 10:58:35.312391 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/7ed19b68-33c0-45b1-acbc-b6e9def4e565-openstack-config-secret\") pod \"openstackclient\" (UID: \"7ed19b68-33c0-45b1-acbc-b6e9def4e565\") " pod="openstack/openstackclient" Feb 02 10:58:35 crc kubenswrapper[4782]: I0202 10:58:35.327813 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ed19b68-33c0-45b1-acbc-b6e9def4e565-combined-ca-bundle\") pod \"openstackclient\" (UID: \"7ed19b68-33c0-45b1-acbc-b6e9def4e565\") " pod="openstack/openstackclient" Feb 02 10:58:35 crc kubenswrapper[4782]: I0202 10:58:35.332671 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rzhcr\" (UniqueName: \"kubernetes.io/projected/7ed19b68-33c0-45b1-acbc-b6e9def4e565-kube-api-access-rzhcr\") pod \"openstackclient\" (UID: \"7ed19b68-33c0-45b1-acbc-b6e9def4e565\") " pod="openstack/openstackclient" Feb 02 10:58:35 crc kubenswrapper[4782]: I0202 10:58:35.441999 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 02 10:58:35 crc kubenswrapper[4782]: I0202 10:58:35.977070 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 02 10:58:36 crc kubenswrapper[4782]: W0202 10:58:36.004230 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7ed19b68_33c0_45b1_acbc_b6e9def4e565.slice/crio-93b50e5f515807e4b2907f79c15349731001332d43c756b0eb3e503b4a50a135 WatchSource:0}: Error finding container 93b50e5f515807e4b2907f79c15349731001332d43c756b0eb3e503b4a50a135: Status 404 returned error can't find the container with id 93b50e5f515807e4b2907f79c15349731001332d43c756b0eb3e503b4a50a135 Feb 02 10:58:36 crc kubenswrapper[4782]: I0202 10:58:36.817950 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"7ed19b68-33c0-45b1-acbc-b6e9def4e565","Type":"ContainerStarted","Data":"93b50e5f515807e4b2907f79c15349731001332d43c756b0eb3e503b4a50a135"} Feb 02 10:58:37 crc kubenswrapper[4782]: I0202 10:58:37.381121 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 02 10:58:37 crc kubenswrapper[4782]: I0202 10:58:37.549505 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8eb720ee-de8d-42e4-b189-aa3d58478ab9-config-data\") pod \"8eb720ee-de8d-42e4-b189-aa3d58478ab9\" (UID: \"8eb720ee-de8d-42e4-b189-aa3d58478ab9\") " Feb 02 10:58:37 crc kubenswrapper[4782]: I0202 10:58:37.549595 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8eb720ee-de8d-42e4-b189-aa3d58478ab9-combined-ca-bundle\") pod \"8eb720ee-de8d-42e4-b189-aa3d58478ab9\" (UID: \"8eb720ee-de8d-42e4-b189-aa3d58478ab9\") " Feb 02 10:58:37 crc kubenswrapper[4782]: I0202 10:58:37.549927 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8eb720ee-de8d-42e4-b189-aa3d58478ab9-log-httpd\") pod \"8eb720ee-de8d-42e4-b189-aa3d58478ab9\" (UID: \"8eb720ee-de8d-42e4-b189-aa3d58478ab9\") " Feb 02 10:58:37 crc kubenswrapper[4782]: I0202 10:58:37.550008 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8eb720ee-de8d-42e4-b189-aa3d58478ab9-run-httpd\") pod \"8eb720ee-de8d-42e4-b189-aa3d58478ab9\" (UID: \"8eb720ee-de8d-42e4-b189-aa3d58478ab9\") " Feb 02 10:58:37 crc kubenswrapper[4782]: I0202 10:58:37.550061 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g5tht\" (UniqueName: \"kubernetes.io/projected/8eb720ee-de8d-42e4-b189-aa3d58478ab9-kube-api-access-g5tht\") pod \"8eb720ee-de8d-42e4-b189-aa3d58478ab9\" (UID: \"8eb720ee-de8d-42e4-b189-aa3d58478ab9\") " Feb 02 10:58:37 crc kubenswrapper[4782]: I0202 10:58:37.550220 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8eb720ee-de8d-42e4-b189-aa3d58478ab9-sg-core-conf-yaml\") pod \"8eb720ee-de8d-42e4-b189-aa3d58478ab9\" (UID: \"8eb720ee-de8d-42e4-b189-aa3d58478ab9\") " Feb 02 10:58:37 crc kubenswrapper[4782]: I0202 10:58:37.550251 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8eb720ee-de8d-42e4-b189-aa3d58478ab9-scripts\") pod \"8eb720ee-de8d-42e4-b189-aa3d58478ab9\" (UID: \"8eb720ee-de8d-42e4-b189-aa3d58478ab9\") " Feb 02 10:58:37 crc kubenswrapper[4782]: I0202 10:58:37.550444 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8eb720ee-de8d-42e4-b189-aa3d58478ab9-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "8eb720ee-de8d-42e4-b189-aa3d58478ab9" (UID: "8eb720ee-de8d-42e4-b189-aa3d58478ab9"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:58:37 crc kubenswrapper[4782]: I0202 10:58:37.550733 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8eb720ee-de8d-42e4-b189-aa3d58478ab9-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "8eb720ee-de8d-42e4-b189-aa3d58478ab9" (UID: "8eb720ee-de8d-42e4-b189-aa3d58478ab9"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:58:37 crc kubenswrapper[4782]: I0202 10:58:37.550978 4782 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8eb720ee-de8d-42e4-b189-aa3d58478ab9-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 02 10:58:37 crc kubenswrapper[4782]: I0202 10:58:37.550990 4782 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8eb720ee-de8d-42e4-b189-aa3d58478ab9-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 02 10:58:37 crc kubenswrapper[4782]: I0202 10:58:37.565853 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8eb720ee-de8d-42e4-b189-aa3d58478ab9-kube-api-access-g5tht" (OuterVolumeSpecName: "kube-api-access-g5tht") pod "8eb720ee-de8d-42e4-b189-aa3d58478ab9" (UID: "8eb720ee-de8d-42e4-b189-aa3d58478ab9"). InnerVolumeSpecName "kube-api-access-g5tht". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:58:37 crc kubenswrapper[4782]: I0202 10:58:37.586181 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8eb720ee-de8d-42e4-b189-aa3d58478ab9-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "8eb720ee-de8d-42e4-b189-aa3d58478ab9" (UID: "8eb720ee-de8d-42e4-b189-aa3d58478ab9"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:58:37 crc kubenswrapper[4782]: I0202 10:58:37.591924 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8eb720ee-de8d-42e4-b189-aa3d58478ab9-scripts" (OuterVolumeSpecName: "scripts") pod "8eb720ee-de8d-42e4-b189-aa3d58478ab9" (UID: "8eb720ee-de8d-42e4-b189-aa3d58478ab9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:58:37 crc kubenswrapper[4782]: I0202 10:58:37.652594 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g5tht\" (UniqueName: \"kubernetes.io/projected/8eb720ee-de8d-42e4-b189-aa3d58478ab9-kube-api-access-g5tht\") on node \"crc\" DevicePath \"\"" Feb 02 10:58:37 crc kubenswrapper[4782]: I0202 10:58:37.652654 4782 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8eb720ee-de8d-42e4-b189-aa3d58478ab9-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 02 10:58:37 crc kubenswrapper[4782]: I0202 10:58:37.652668 4782 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8eb720ee-de8d-42e4-b189-aa3d58478ab9-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 10:58:37 crc kubenswrapper[4782]: I0202 10:58:37.672115 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8eb720ee-de8d-42e4-b189-aa3d58478ab9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8eb720ee-de8d-42e4-b189-aa3d58478ab9" (UID: "8eb720ee-de8d-42e4-b189-aa3d58478ab9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:58:37 crc kubenswrapper[4782]: I0202 10:58:37.722500 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8eb720ee-de8d-42e4-b189-aa3d58478ab9-config-data" (OuterVolumeSpecName: "config-data") pod "8eb720ee-de8d-42e4-b189-aa3d58478ab9" (UID: "8eb720ee-de8d-42e4-b189-aa3d58478ab9"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:58:37 crc kubenswrapper[4782]: I0202 10:58:37.754243 4782 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8eb720ee-de8d-42e4-b189-aa3d58478ab9-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 10:58:37 crc kubenswrapper[4782]: I0202 10:58:37.754278 4782 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8eb720ee-de8d-42e4-b189-aa3d58478ab9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 10:58:37 crc kubenswrapper[4782]: I0202 10:58:37.869345 4782 generic.go:334] "Generic (PLEG): container finished" podID="8eb720ee-de8d-42e4-b189-aa3d58478ab9" containerID="140a927fa6b2c1e23687d54be409e1628753f55ba914f147e7bf8b40aeda5b96" exitCode=137 Feb 02 10:58:37 crc kubenswrapper[4782]: I0202 10:58:37.869632 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8eb720ee-de8d-42e4-b189-aa3d58478ab9","Type":"ContainerDied","Data":"140a927fa6b2c1e23687d54be409e1628753f55ba914f147e7bf8b40aeda5b96"} Feb 02 10:58:37 crc kubenswrapper[4782]: I0202 10:58:37.869690 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8eb720ee-de8d-42e4-b189-aa3d58478ab9","Type":"ContainerDied","Data":"995e3d21dc8fc39f728f7ea640cf5b2814a34afafd0aba1f79572a1482443e61"} Feb 02 10:58:37 crc kubenswrapper[4782]: I0202 10:58:37.869741 4782 scope.go:117] "RemoveContainer" containerID="140a927fa6b2c1e23687d54be409e1628753f55ba914f147e7bf8b40aeda5b96" Feb 02 10:58:37 crc kubenswrapper[4782]: I0202 10:58:37.869924 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 10:58:37 crc kubenswrapper[4782]: I0202 10:58:37.915618 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 02 10:58:37 crc kubenswrapper[4782]: I0202 10:58:37.929393 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 02 10:58:37 crc kubenswrapper[4782]: I0202 10:58:37.933313 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 02 10:58:37 crc kubenswrapper[4782]: I0202 10:58:37.934056 4782 scope.go:117] "RemoveContainer" containerID="204f6396c819d71a327699ccfeca1a155dda1d800805c4fde5bc58682ccb702f" Feb 02 10:58:37 crc kubenswrapper[4782]: I0202 10:58:37.949727 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 02 10:58:37 crc kubenswrapper[4782]: E0202 10:58:37.950193 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8eb720ee-de8d-42e4-b189-aa3d58478ab9" containerName="ceilometer-central-agent" Feb 02 10:58:37 crc kubenswrapper[4782]: I0202 10:58:37.950213 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="8eb720ee-de8d-42e4-b189-aa3d58478ab9" containerName="ceilometer-central-agent" Feb 02 10:58:37 crc kubenswrapper[4782]: E0202 10:58:37.950227 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8eb720ee-de8d-42e4-b189-aa3d58478ab9" containerName="sg-core" Feb 02 10:58:37 crc kubenswrapper[4782]: I0202 10:58:37.950233 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="8eb720ee-de8d-42e4-b189-aa3d58478ab9" containerName="sg-core" Feb 02 10:58:37 crc kubenswrapper[4782]: E0202 10:58:37.950243 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8eb720ee-de8d-42e4-b189-aa3d58478ab9" 
containerName="ceilometer-notification-agent" Feb 02 10:58:37 crc kubenswrapper[4782]: I0202 10:58:37.950250 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="8eb720ee-de8d-42e4-b189-aa3d58478ab9" containerName="ceilometer-notification-agent" Feb 02 10:58:37 crc kubenswrapper[4782]: E0202 10:58:37.950259 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8eb720ee-de8d-42e4-b189-aa3d58478ab9" containerName="proxy-httpd" Feb 02 10:58:37 crc kubenswrapper[4782]: I0202 10:58:37.950264 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="8eb720ee-de8d-42e4-b189-aa3d58478ab9" containerName="proxy-httpd" Feb 02 10:58:37 crc kubenswrapper[4782]: I0202 10:58:37.950443 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="8eb720ee-de8d-42e4-b189-aa3d58478ab9" containerName="ceilometer-central-agent" Feb 02 10:58:37 crc kubenswrapper[4782]: I0202 10:58:37.950460 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="8eb720ee-de8d-42e4-b189-aa3d58478ab9" containerName="sg-core" Feb 02 10:58:37 crc kubenswrapper[4782]: I0202 10:58:37.950471 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="8eb720ee-de8d-42e4-b189-aa3d58478ab9" containerName="ceilometer-notification-agent" Feb 02 10:58:37 crc kubenswrapper[4782]: I0202 10:58:37.950484 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="8eb720ee-de8d-42e4-b189-aa3d58478ab9" containerName="proxy-httpd" Feb 02 10:58:37 crc kubenswrapper[4782]: I0202 10:58:37.952143 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 10:58:37 crc kubenswrapper[4782]: I0202 10:58:37.955068 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 02 10:58:37 crc kubenswrapper[4782]: I0202 10:58:37.955260 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 02 10:58:37 crc kubenswrapper[4782]: I0202 10:58:37.967136 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce5deca8-5a47-4769-9518-5cb398a7cf5c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ce5deca8-5a47-4769-9518-5cb398a7cf5c\") " pod="openstack/ceilometer-0" Feb 02 10:58:37 crc kubenswrapper[4782]: I0202 10:58:37.967178 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce5deca8-5a47-4769-9518-5cb398a7cf5c-config-data\") pod \"ceilometer-0\" (UID: \"ce5deca8-5a47-4769-9518-5cb398a7cf5c\") " pod="openstack/ceilometer-0" Feb 02 10:58:37 crc kubenswrapper[4782]: I0202 10:58:37.968510 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ce5deca8-5a47-4769-9518-5cb398a7cf5c-log-httpd\") pod \"ceilometer-0\" (UID: \"ce5deca8-5a47-4769-9518-5cb398a7cf5c\") " pod="openstack/ceilometer-0" Feb 02 10:58:37 crc kubenswrapper[4782]: I0202 10:58:37.968562 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ce5deca8-5a47-4769-9518-5cb398a7cf5c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ce5deca8-5a47-4769-9518-5cb398a7cf5c\") " pod="openstack/ceilometer-0" Feb 02 10:58:37 crc kubenswrapper[4782]: I0202 10:58:37.968672 4782 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ce5deca8-5a47-4769-9518-5cb398a7cf5c-run-httpd\") pod \"ceilometer-0\" (UID: \"ce5deca8-5a47-4769-9518-5cb398a7cf5c\") " pod="openstack/ceilometer-0" Feb 02 10:58:37 crc kubenswrapper[4782]: I0202 10:58:37.968716 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ce5deca8-5a47-4769-9518-5cb398a7cf5c-scripts\") pod \"ceilometer-0\" (UID: \"ce5deca8-5a47-4769-9518-5cb398a7cf5c\") " pod="openstack/ceilometer-0" Feb 02 10:58:37 crc kubenswrapper[4782]: I0202 10:58:37.969116 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89hm9\" (UniqueName: \"kubernetes.io/projected/ce5deca8-5a47-4769-9518-5cb398a7cf5c-kube-api-access-89hm9\") pod \"ceilometer-0\" (UID: \"ce5deca8-5a47-4769-9518-5cb398a7cf5c\") " pod="openstack/ceilometer-0" Feb 02 10:58:37 crc kubenswrapper[4782]: I0202 10:58:37.983537 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 02 10:58:38 crc kubenswrapper[4782]: I0202 10:58:38.027832 4782 scope.go:117] "RemoveContainer" containerID="64ab0cbbeed3f64299d16361c7ebfd14f8590d54efef7b63fa8c440f0b029ef9" Feb 02 10:58:38 crc kubenswrapper[4782]: I0202 10:58:38.070324 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ce5deca8-5a47-4769-9518-5cb398a7cf5c-log-httpd\") pod \"ceilometer-0\" (UID: \"ce5deca8-5a47-4769-9518-5cb398a7cf5c\") " pod="openstack/ceilometer-0" Feb 02 10:58:38 crc kubenswrapper[4782]: I0202 10:58:38.070370 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ce5deca8-5a47-4769-9518-5cb398a7cf5c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ce5deca8-5a47-4769-9518-5cb398a7cf5c\") " pod="openstack/ceilometer-0" Feb 02 10:58:38 crc kubenswrapper[4782]: I0202 10:58:38.070404 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ce5deca8-5a47-4769-9518-5cb398a7cf5c-run-httpd\") pod \"ceilometer-0\" (UID: \"ce5deca8-5a47-4769-9518-5cb398a7cf5c\") " pod="openstack/ceilometer-0" Feb 02 10:58:38 crc kubenswrapper[4782]: I0202 10:58:38.070430 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ce5deca8-5a47-4769-9518-5cb398a7cf5c-scripts\") pod \"ceilometer-0\" (UID: \"ce5deca8-5a47-4769-9518-5cb398a7cf5c\") " pod="openstack/ceilometer-0" Feb 02 10:58:38 crc kubenswrapper[4782]: I0202 10:58:38.070480 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-89hm9\" (UniqueName: \"kubernetes.io/projected/ce5deca8-5a47-4769-9518-5cb398a7cf5c-kube-api-access-89hm9\") pod \"ceilometer-0\" (UID: \"ce5deca8-5a47-4769-9518-5cb398a7cf5c\") " pod="openstack/ceilometer-0" Feb 02 10:58:38 crc kubenswrapper[4782]: I0202 10:58:38.070510 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce5deca8-5a47-4769-9518-5cb398a7cf5c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ce5deca8-5a47-4769-9518-5cb398a7cf5c\") " pod="openstack/ceilometer-0" Feb 02 10:58:38 crc kubenswrapper[4782]: I0202 10:58:38.070526 4782 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce5deca8-5a47-4769-9518-5cb398a7cf5c-config-data\") pod \"ceilometer-0\" (UID: \"ce5deca8-5a47-4769-9518-5cb398a7cf5c\") " pod="openstack/ceilometer-0" Feb 02 10:58:38 crc kubenswrapper[4782]: I0202 10:58:38.071287 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ce5deca8-5a47-4769-9518-5cb398a7cf5c-run-httpd\") pod \"ceilometer-0\" (UID: \"ce5deca8-5a47-4769-9518-5cb398a7cf5c\") " pod="openstack/ceilometer-0" Feb 02 10:58:38 crc kubenswrapper[4782]: I0202 10:58:38.071600 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ce5deca8-5a47-4769-9518-5cb398a7cf5c-log-httpd\") pod \"ceilometer-0\" (UID: \"ce5deca8-5a47-4769-9518-5cb398a7cf5c\") " pod="openstack/ceilometer-0" Feb 02 10:58:38 crc kubenswrapper[4782]: I0202 10:58:38.075094 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce5deca8-5a47-4769-9518-5cb398a7cf5c-config-data\") pod \"ceilometer-0\" (UID: \"ce5deca8-5a47-4769-9518-5cb398a7cf5c\") " pod="openstack/ceilometer-0" Feb 02 10:58:38 crc kubenswrapper[4782]: I0202 10:58:38.080605 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ce5deca8-5a47-4769-9518-5cb398a7cf5c-scripts\") pod \"ceilometer-0\" (UID: \"ce5deca8-5a47-4769-9518-5cb398a7cf5c\") " pod="openstack/ceilometer-0" Feb 02 10:58:38 crc kubenswrapper[4782]: I0202 10:58:38.083852 4782 scope.go:117] "RemoveContainer" containerID="cff050bea02cab179349ed9f4910ee4f8ce16895bfa3f74bcd1eb0342c469f08" Feb 02 10:58:38 crc kubenswrapper[4782]: I0202 10:58:38.083908 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce5deca8-5a47-4769-9518-5cb398a7cf5c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ce5deca8-5a47-4769-9518-5cb398a7cf5c\") " pod="openstack/ceilometer-0" Feb 02 10:58:38 crc kubenswrapper[4782]: I0202 10:58:38.087238 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ce5deca8-5a47-4769-9518-5cb398a7cf5c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ce5deca8-5a47-4769-9518-5cb398a7cf5c\") " pod="openstack/ceilometer-0" Feb 02 10:58:38 crc kubenswrapper[4782]: I0202 10:58:38.121609 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-89hm9\" (UniqueName: \"kubernetes.io/projected/ce5deca8-5a47-4769-9518-5cb398a7cf5c-kube-api-access-89hm9\") pod \"ceilometer-0\" (UID: \"ce5deca8-5a47-4769-9518-5cb398a7cf5c\") " pod="openstack/ceilometer-0" Feb 02 10:58:38 crc kubenswrapper[4782]: I0202 10:58:38.180140 4782 scope.go:117] "RemoveContainer" containerID="140a927fa6b2c1e23687d54be409e1628753f55ba914f147e7bf8b40aeda5b96" Feb 02 10:58:38 crc kubenswrapper[4782]: E0202 10:58:38.180584 4782 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8eb720ee_de8d_42e4_b189_aa3d58478ab9.slice/crio-995e3d21dc8fc39f728f7ea640cf5b2814a34afafd0aba1f79572a1482443e61\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8eb720ee_de8d_42e4_b189_aa3d58478ab9.slice\": RecentStats: unable to find data in memory cache]" Feb 02 10:58:38 crc kubenswrapper[4782]: E0202 10:58:38.183465 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"140a927fa6b2c1e23687d54be409e1628753f55ba914f147e7bf8b40aeda5b96\": container with ID starting with 140a927fa6b2c1e23687d54be409e1628753f55ba914f147e7bf8b40aeda5b96 not found: ID does not exist" containerID="140a927fa6b2c1e23687d54be409e1628753f55ba914f147e7bf8b40aeda5b96" Feb 02 10:58:38 crc kubenswrapper[4782]: I0202 10:58:38.183514 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"140a927fa6b2c1e23687d54be409e1628753f55ba914f147e7bf8b40aeda5b96"} err="failed to get container status \"140a927fa6b2c1e23687d54be409e1628753f55ba914f147e7bf8b40aeda5b96\": rpc error: code = NotFound desc = could not find container \"140a927fa6b2c1e23687d54be409e1628753f55ba914f147e7bf8b40aeda5b96\": container with ID starting with 140a927fa6b2c1e23687d54be409e1628753f55ba914f147e7bf8b40aeda5b96 not found: ID does not exist" Feb 02 10:58:38 crc kubenswrapper[4782]: I0202 10:58:38.183545 4782 scope.go:117] "RemoveContainer" containerID="204f6396c819d71a327699ccfeca1a155dda1d800805c4fde5bc58682ccb702f" Feb 02 10:58:38 crc kubenswrapper[4782]: E0202 10:58:38.206108 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"204f6396c819d71a327699ccfeca1a155dda1d800805c4fde5bc58682ccb702f\": container with ID starting with 204f6396c819d71a327699ccfeca1a155dda1d800805c4fde5bc58682ccb702f not found: ID does not exist" containerID="204f6396c819d71a327699ccfeca1a155dda1d800805c4fde5bc58682ccb702f" Feb 02 10:58:38 crc kubenswrapper[4782]: I0202 10:58:38.206175 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"204f6396c819d71a327699ccfeca1a155dda1d800805c4fde5bc58682ccb702f"} err="failed to get container status \"204f6396c819d71a327699ccfeca1a155dda1d800805c4fde5bc58682ccb702f\": rpc error: code = NotFound desc = could not find container \"204f6396c819d71a327699ccfeca1a155dda1d800805c4fde5bc58682ccb702f\": container with ID starting with 204f6396c819d71a327699ccfeca1a155dda1d800805c4fde5bc58682ccb702f not found: ID does not exist" Feb 02 10:58:38 crc kubenswrapper[4782]: I0202 10:58:38.206202 4782 scope.go:117] "RemoveContainer" containerID="64ab0cbbeed3f64299d16361c7ebfd14f8590d54efef7b63fa8c440f0b029ef9" Feb 02 10:58:38 crc kubenswrapper[4782]: E0202 10:58:38.232947 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"64ab0cbbeed3f64299d16361c7ebfd14f8590d54efef7b63fa8c440f0b029ef9\": container with ID starting with 64ab0cbbeed3f64299d16361c7ebfd14f8590d54efef7b63fa8c440f0b029ef9 not found: ID does not exist" containerID="64ab0cbbeed3f64299d16361c7ebfd14f8590d54efef7b63fa8c440f0b029ef9" Feb 02 10:58:38 crc kubenswrapper[4782]: I0202 10:58:38.233009 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"64ab0cbbeed3f64299d16361c7ebfd14f8590d54efef7b63fa8c440f0b029ef9"} err="failed to get container status \"64ab0cbbeed3f64299d16361c7ebfd14f8590d54efef7b63fa8c440f0b029ef9\": rpc error: code = NotFound desc = could not find container 
\"64ab0cbbeed3f64299d16361c7ebfd14f8590d54efef7b63fa8c440f0b029ef9\": container with ID starting with 64ab0cbbeed3f64299d16361c7ebfd14f8590d54efef7b63fa8c440f0b029ef9 not found: ID does not exist" Feb 02 10:58:38 crc kubenswrapper[4782]: I0202 10:58:38.233037 4782 scope.go:117] "RemoveContainer" containerID="cff050bea02cab179349ed9f4910ee4f8ce16895bfa3f74bcd1eb0342c469f08" Feb 02 10:58:38 crc kubenswrapper[4782]: E0202 10:58:38.233385 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cff050bea02cab179349ed9f4910ee4f8ce16895bfa3f74bcd1eb0342c469f08\": container with ID starting with cff050bea02cab179349ed9f4910ee4f8ce16895bfa3f74bcd1eb0342c469f08 not found: ID does not exist" containerID="cff050bea02cab179349ed9f4910ee4f8ce16895bfa3f74bcd1eb0342c469f08" Feb 02 10:58:38 crc kubenswrapper[4782]: I0202 10:58:38.233404 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cff050bea02cab179349ed9f4910ee4f8ce16895bfa3f74bcd1eb0342c469f08"} err="failed to get container status \"cff050bea02cab179349ed9f4910ee4f8ce16895bfa3f74bcd1eb0342c469f08\": rpc error: code = NotFound desc = could not find container \"cff050bea02cab179349ed9f4910ee4f8ce16895bfa3f74bcd1eb0342c469f08\": container with ID starting with cff050bea02cab179349ed9f4910ee4f8ce16895bfa3f74bcd1eb0342c469f08 not found: ID does not exist" Feb 02 10:58:38 crc kubenswrapper[4782]: I0202 10:58:38.328438 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 10:58:38 crc kubenswrapper[4782]: I0202 10:58:38.830795 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8eb720ee-de8d-42e4-b189-aa3d58478ab9" path="/var/lib/kubelet/pods/8eb720ee-de8d-42e4-b189-aa3d58478ab9/volumes" Feb 02 10:58:38 crc kubenswrapper[4782]: I0202 10:58:38.867885 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 02 10:58:38 crc kubenswrapper[4782]: W0202 10:58:38.877818 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podce5deca8_5a47_4769_9518_5cb398a7cf5c.slice/crio-4b9e3570e4603fe01210598f959523b2d00ac48728c953fa2f230b5e90152b83 WatchSource:0}: Error finding container 4b9e3570e4603fe01210598f959523b2d00ac48728c953fa2f230b5e90152b83: Status 404 returned error can't find the container with id 4b9e3570e4603fe01210598f959523b2d00ac48728c953fa2f230b5e90152b83 Feb 02 10:58:39 crc kubenswrapper[4782]: I0202 10:58:39.893254 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ce5deca8-5a47-4769-9518-5cb398a7cf5c","Type":"ContainerStarted","Data":"4b9e3570e4603fe01210598f959523b2d00ac48728c953fa2f230b5e90152b83"} Feb 02 10:58:40 crc kubenswrapper[4782]: I0202 10:58:40.907129 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ce5deca8-5a47-4769-9518-5cb398a7cf5c","Type":"ContainerStarted","Data":"84ad17b4a6850cdf253af67fe5f1311e581399106959c397abb3583346389101"} Feb 02 10:58:46 crc kubenswrapper[4782]: I0202 10:58:46.496213 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 02 10:58:47 crc kubenswrapper[4782]: I0202 10:58:47.974001 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" 
event={"ID":"7ed19b68-33c0-45b1-acbc-b6e9def4e565","Type":"ContainerStarted","Data":"881ba63078bc6f671dfef70e53653228b002023c4e2b67b4803da5f486d5d5c2"} Feb 02 10:58:47 crc kubenswrapper[4782]: I0202 10:58:47.977753 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ce5deca8-5a47-4769-9518-5cb398a7cf5c","Type":"ContainerStarted","Data":"b01aa0979ebe3da1ada19bd5f3faeaeb4dcb6479051123506e2fa0d8ca35ceb6"} Feb 02 10:58:47 crc kubenswrapper[4782]: I0202 10:58:47.997945 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=1.949726002 podStartE2EDuration="12.997930209s" podCreationTimestamp="2026-02-02 10:58:35 +0000 UTC" firstStartedPulling="2026-02-02 10:58:36.009721756 +0000 UTC m=+1195.893914472" lastFinishedPulling="2026-02-02 10:58:47.057925963 +0000 UTC m=+1206.942118679" observedRunningTime="2026-02-02 10:58:47.995913382 +0000 UTC m=+1207.880106098" watchObservedRunningTime="2026-02-02 10:58:47.997930209 +0000 UTC m=+1207.882122925" Feb 02 10:58:48 crc kubenswrapper[4782]: I0202 10:58:48.987661 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ce5deca8-5a47-4769-9518-5cb398a7cf5c","Type":"ContainerStarted","Data":"04057ff60e5e1f3323e182a26b20a3193665c89f3705db726b599dceb9bc3003"} Feb 02 10:58:49 crc kubenswrapper[4782]: I0202 10:58:49.062921 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-5bdf8f4745-82ddm" Feb 02 10:58:49 crc kubenswrapper[4782]: I0202 10:58:49.148285 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6c4497f454-mphzd"] Feb 02 10:58:49 crc kubenswrapper[4782]: I0202 10:58:49.148567 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6c4497f454-mphzd" podUID="64a58e87-7403-40ee-804f-3ddd256a166a" containerName="neutron-api" containerID="cri-o://d5712d2140fc769d95cc498077c5b51dd674fa5d0e2d88428c1fd7cfbc99eafe" gracePeriod=30 Feb 02 10:58:49 crc kubenswrapper[4782]: I0202 10:58:49.149090 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6c4497f454-mphzd" podUID="64a58e87-7403-40ee-804f-3ddd256a166a" containerName="neutron-httpd" containerID="cri-o://0986fa20c95b296f1e3d0bb4136c8a84c1e716858b0306720e5061305da8efda" gracePeriod=30 Feb 02 10:58:49 crc kubenswrapper[4782]: I0202 10:58:49.995936 4782 generic.go:334] "Generic (PLEG): container finished" podID="64a58e87-7403-40ee-804f-3ddd256a166a" containerID="0986fa20c95b296f1e3d0bb4136c8a84c1e716858b0306720e5061305da8efda" exitCode=0 Feb 02 10:58:49 crc kubenswrapper[4782]: I0202 10:58:49.996001 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6c4497f454-mphzd" event={"ID":"64a58e87-7403-40ee-804f-3ddd256a166a","Type":"ContainerDied","Data":"0986fa20c95b296f1e3d0bb4136c8a84c1e716858b0306720e5061305da8efda"} Feb 02 10:58:49 crc kubenswrapper[4782]: I0202 10:58:49.999341 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ce5deca8-5a47-4769-9518-5cb398a7cf5c","Type":"ContainerStarted","Data":"533f36a54e23edb06784e7156799b29a70b4f783a402104a42333feba241f8a1"} Feb 02 10:58:49 crc kubenswrapper[4782]: I0202 10:58:49.999575 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ce5deca8-5a47-4769-9518-5cb398a7cf5c" containerName="proxy-httpd" 
containerID="cri-o://533f36a54e23edb06784e7156799b29a70b4f783a402104a42333feba241f8a1" gracePeriod=30 Feb 02 10:58:49 crc kubenswrapper[4782]: I0202 10:58:49.999605 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ce5deca8-5a47-4769-9518-5cb398a7cf5c" containerName="sg-core" containerID="cri-o://04057ff60e5e1f3323e182a26b20a3193665c89f3705db726b599dceb9bc3003" gracePeriod=30 Feb 02 10:58:49 crc kubenswrapper[4782]: I0202 10:58:49.999488 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ce5deca8-5a47-4769-9518-5cb398a7cf5c" containerName="ceilometer-central-agent" containerID="cri-o://84ad17b4a6850cdf253af67fe5f1311e581399106959c397abb3583346389101" gracePeriod=30 Feb 02 10:58:49 crc kubenswrapper[4782]: I0202 10:58:49.999581 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ce5deca8-5a47-4769-9518-5cb398a7cf5c" containerName="ceilometer-notification-agent" containerID="cri-o://b01aa0979ebe3da1ada19bd5f3faeaeb4dcb6479051123506e2fa0d8ca35ceb6" gracePeriod=30 Feb 02 10:58:50 crc kubenswrapper[4782]: I0202 10:58:50.000161 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 02 10:58:50 crc kubenswrapper[4782]: I0202 10:58:50.039383 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.178028948 podStartE2EDuration="13.039359528s" podCreationTimestamp="2026-02-02 10:58:37 +0000 UTC" firstStartedPulling="2026-02-02 10:58:38.880443708 +0000 UTC m=+1198.764636424" lastFinishedPulling="2026-02-02 10:58:49.741774288 +0000 UTC m=+1209.625967004" observedRunningTime="2026-02-02 10:58:50.028219829 +0000 UTC m=+1209.912412545" watchObservedRunningTime="2026-02-02 10:58:50.039359528 +0000 UTC m=+1209.923552244" Feb 02 10:58:51 crc kubenswrapper[4782]: I0202 10:58:51.014179 4782 generic.go:334] "Generic (PLEG): container finished" podID="ce5deca8-5a47-4769-9518-5cb398a7cf5c" containerID="04057ff60e5e1f3323e182a26b20a3193665c89f3705db726b599dceb9bc3003" exitCode=2 Feb 02 10:58:51 crc kubenswrapper[4782]: I0202 10:58:51.014812 4782 generic.go:334] "Generic (PLEG): container finished" podID="ce5deca8-5a47-4769-9518-5cb398a7cf5c" containerID="b01aa0979ebe3da1ada19bd5f3faeaeb4dcb6479051123506e2fa0d8ca35ceb6" exitCode=0 Feb 02 10:58:51 crc kubenswrapper[4782]: I0202 10:58:51.014877 4782 generic.go:334] "Generic (PLEG): container finished" podID="ce5deca8-5a47-4769-9518-5cb398a7cf5c" containerID="84ad17b4a6850cdf253af67fe5f1311e581399106959c397abb3583346389101" exitCode=0 Feb 02 10:58:51 crc kubenswrapper[4782]: I0202 10:58:51.014213 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ce5deca8-5a47-4769-9518-5cb398a7cf5c","Type":"ContainerDied","Data":"04057ff60e5e1f3323e182a26b20a3193665c89f3705db726b599dceb9bc3003"} Feb 02 10:58:51 crc kubenswrapper[4782]: I0202 10:58:51.014997 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ce5deca8-5a47-4769-9518-5cb398a7cf5c","Type":"ContainerDied","Data":"b01aa0979ebe3da1ada19bd5f3faeaeb4dcb6479051123506e2fa0d8ca35ceb6"} Feb 02 10:58:51 crc kubenswrapper[4782]: I0202 10:58:51.015057 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"ce5deca8-5a47-4769-9518-5cb398a7cf5c","Type":"ContainerDied","Data":"84ad17b4a6850cdf253af67fe5f1311e581399106959c397abb3583346389101"} Feb 02 10:58:51 crc kubenswrapper[4782]: I0202 10:58:51.664884 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-j8z8n"] Feb 02 10:58:51 crc kubenswrapper[4782]: I0202 10:58:51.666080 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-j8z8n" Feb 02 10:58:51 crc kubenswrapper[4782]: I0202 10:58:51.682315 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-j8z8n"] Feb 02 10:58:51 crc kubenswrapper[4782]: I0202 10:58:51.759691 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-964hl"] Feb 02 10:58:51 crc kubenswrapper[4782]: I0202 10:58:51.761210 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-964hl" Feb 02 10:58:51 crc kubenswrapper[4782]: I0202 10:58:51.782246 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-964hl"] Feb 02 10:58:51 crc kubenswrapper[4782]: I0202 10:58:51.790720 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-627f-account-create-update-h6hdk"] Feb 02 10:58:51 crc kubenswrapper[4782]: I0202 10:58:51.792049 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-627f-account-create-update-h6hdk" Feb 02 10:58:51 crc kubenswrapper[4782]: I0202 10:58:51.806829 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8b55df6c-8971-415a-a934-0ec48a149b81-operator-scripts\") pod \"nova-api-db-create-j8z8n\" (UID: \"8b55df6c-8971-415a-a934-0ec48a149b81\") " pod="openstack/nova-api-db-create-j8z8n" Feb 02 10:58:51 crc kubenswrapper[4782]: I0202 10:58:51.806916 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cr2lc\" (UniqueName: \"kubernetes.io/projected/8b55df6c-8971-415a-a934-0ec48a149b81-kube-api-access-cr2lc\") pod \"nova-api-db-create-j8z8n\" (UID: \"8b55df6c-8971-415a-a934-0ec48a149b81\") " pod="openstack/nova-api-db-create-j8z8n" Feb 02 10:58:51 crc kubenswrapper[4782]: I0202 10:58:51.807179 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Feb 02 10:58:51 crc kubenswrapper[4782]: I0202 10:58:51.842481 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-627f-account-create-update-h6hdk"] Feb 02 10:58:51 crc kubenswrapper[4782]: I0202 10:58:51.880043 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-jnw6j"] Feb 02 10:58:51 crc kubenswrapper[4782]: I0202 10:58:51.883099 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-jnw6j" Feb 02 10:58:51 crc kubenswrapper[4782]: I0202 10:58:51.890752 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-jnw6j"] Feb 02 10:58:51 crc kubenswrapper[4782]: I0202 10:58:51.909077 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6a9a0fe2-4862-47e1-91d0-553d95235f39-operator-scripts\") pod \"nova-cell0-db-create-964hl\" (UID: \"6a9a0fe2-4862-47e1-91d0-553d95235f39\") " pod="openstack/nova-cell0-db-create-964hl" Feb 02 10:58:51 crc kubenswrapper[4782]: I0202 10:58:51.909142 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8b55df6c-8971-415a-a934-0ec48a149b81-operator-scripts\") pod \"nova-api-db-create-j8z8n\" (UID: \"8b55df6c-8971-415a-a934-0ec48a149b81\") " pod="openstack/nova-api-db-create-j8z8n" Feb 02 10:58:51 crc kubenswrapper[4782]: I0202 10:58:51.909174 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a9b75d8c-9435-483f-8e95-97690314cfb5-operator-scripts\") pod \"nova-api-627f-account-create-update-h6hdk\" (UID: \"a9b75d8c-9435-483f-8e95-97690314cfb5\") " pod="openstack/nova-api-627f-account-create-update-h6hdk" Feb 02 10:58:51 crc kubenswrapper[4782]: I0202 10:58:51.909203 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cr2lc\" (UniqueName: \"kubernetes.io/projected/8b55df6c-8971-415a-a934-0ec48a149b81-kube-api-access-cr2lc\") pod \"nova-api-db-create-j8z8n\" (UID: \"8b55df6c-8971-415a-a934-0ec48a149b81\") " pod="openstack/nova-api-db-create-j8z8n" Feb 02 10:58:51 crc kubenswrapper[4782]: I0202 10:58:51.909234 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxnv4\" (UniqueName: \"kubernetes.io/projected/a9b75d8c-9435-483f-8e95-97690314cfb5-kube-api-access-mxnv4\") pod \"nova-api-627f-account-create-update-h6hdk\" (UID: \"a9b75d8c-9435-483f-8e95-97690314cfb5\") " pod="openstack/nova-api-627f-account-create-update-h6hdk" Feb 02 10:58:51 crc kubenswrapper[4782]: I0202 10:58:51.909391 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4pbvk\" (UniqueName: \"kubernetes.io/projected/6a9a0fe2-4862-47e1-91d0-553d95235f39-kube-api-access-4pbvk\") pod \"nova-cell0-db-create-964hl\" (UID: \"6a9a0fe2-4862-47e1-91d0-553d95235f39\") " pod="openstack/nova-cell0-db-create-964hl" Feb 02 10:58:51 crc kubenswrapper[4782]: I0202 10:58:51.911521 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8b55df6c-8971-415a-a934-0ec48a149b81-operator-scripts\") pod \"nova-api-db-create-j8z8n\" (UID: \"8b55df6c-8971-415a-a934-0ec48a149b81\") " pod="openstack/nova-api-db-create-j8z8n" Feb 02 10:58:51 crc kubenswrapper[4782]: I0202 10:58:51.937703 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cr2lc\" (UniqueName: \"kubernetes.io/projected/8b55df6c-8971-415a-a934-0ec48a149b81-kube-api-access-cr2lc\") pod \"nova-api-db-create-j8z8n\" (UID: \"8b55df6c-8971-415a-a934-0ec48a149b81\") " pod="openstack/nova-api-db-create-j8z8n" Feb 02 10:58:51 crc kubenswrapper[4782]: I0202 10:58:51.979930 4782 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-9147-account-create-update-qcs9t"] Feb 02 10:58:51 crc kubenswrapper[4782]: I0202 10:58:51.981300 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-9147-account-create-update-qcs9t" Feb 02 10:58:51 crc kubenswrapper[4782]: I0202 10:58:51.983928 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Feb 02 10:58:51 crc kubenswrapper[4782]: I0202 10:58:51.986308 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-j8z8n" Feb 02 10:58:51 crc kubenswrapper[4782]: I0202 10:58:51.993650 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-9147-account-create-update-qcs9t"] Feb 02 10:58:52 crc kubenswrapper[4782]: I0202 10:58:52.012139 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c6lxs\" (UniqueName: \"kubernetes.io/projected/c5eccd3e-f895-4c2f-a1e5-c337a89d2439-kube-api-access-c6lxs\") pod \"nova-cell1-db-create-jnw6j\" (UID: \"c5eccd3e-f895-4c2f-a1e5-c337a89d2439\") " pod="openstack/nova-cell1-db-create-jnw6j" Feb 02 10:58:52 crc kubenswrapper[4782]: I0202 10:58:52.012231 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4pbvk\" (UniqueName: \"kubernetes.io/projected/6a9a0fe2-4862-47e1-91d0-553d95235f39-kube-api-access-4pbvk\") pod \"nova-cell0-db-create-964hl\" (UID: \"6a9a0fe2-4862-47e1-91d0-553d95235f39\") " pod="openstack/nova-cell0-db-create-964hl" Feb 02 10:58:52 crc kubenswrapper[4782]: I0202 10:58:52.012285 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6a9a0fe2-4862-47e1-91d0-553d95235f39-operator-scripts\") pod \"nova-cell0-db-create-964hl\" (UID: \"6a9a0fe2-4862-47e1-91d0-553d95235f39\") " pod="openstack/nova-cell0-db-create-964hl" Feb 02 10:58:52 crc kubenswrapper[4782]: I0202 10:58:52.012324 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a9b75d8c-9435-483f-8e95-97690314cfb5-operator-scripts\") pod \"nova-api-627f-account-create-update-h6hdk\" (UID: \"a9b75d8c-9435-483f-8e95-97690314cfb5\") " pod="openstack/nova-api-627f-account-create-update-h6hdk" Feb 02 10:58:52 crc kubenswrapper[4782]: I0202 10:58:52.012354 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c5eccd3e-f895-4c2f-a1e5-c337a89d2439-operator-scripts\") pod \"nova-cell1-db-create-jnw6j\" (UID: \"c5eccd3e-f895-4c2f-a1e5-c337a89d2439\") " pod="openstack/nova-cell1-db-create-jnw6j" Feb 02 10:58:52 crc kubenswrapper[4782]: I0202 10:58:52.012383 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mxnv4\" (UniqueName: \"kubernetes.io/projected/a9b75d8c-9435-483f-8e95-97690314cfb5-kube-api-access-mxnv4\") pod \"nova-api-627f-account-create-update-h6hdk\" (UID: \"a9b75d8c-9435-483f-8e95-97690314cfb5\") " pod="openstack/nova-api-627f-account-create-update-h6hdk" Feb 02 10:58:52 crc kubenswrapper[4782]: I0202 10:58:52.013335 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6a9a0fe2-4862-47e1-91d0-553d95235f39-operator-scripts\") pod 
\"nova-cell0-db-create-964hl\" (UID: \"6a9a0fe2-4862-47e1-91d0-553d95235f39\") " pod="openstack/nova-cell0-db-create-964hl" Feb 02 10:58:52 crc kubenswrapper[4782]: I0202 10:58:52.013438 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a9b75d8c-9435-483f-8e95-97690314cfb5-operator-scripts\") pod \"nova-api-627f-account-create-update-h6hdk\" (UID: \"a9b75d8c-9435-483f-8e95-97690314cfb5\") " pod="openstack/nova-api-627f-account-create-update-h6hdk" Feb 02 10:58:52 crc kubenswrapper[4782]: I0202 10:58:52.031801 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mxnv4\" (UniqueName: \"kubernetes.io/projected/a9b75d8c-9435-483f-8e95-97690314cfb5-kube-api-access-mxnv4\") pod \"nova-api-627f-account-create-update-h6hdk\" (UID: \"a9b75d8c-9435-483f-8e95-97690314cfb5\") " pod="openstack/nova-api-627f-account-create-update-h6hdk" Feb 02 10:58:52 crc kubenswrapper[4782]: I0202 10:58:52.060160 4782 generic.go:334] "Generic (PLEG): container finished" podID="64a58e87-7403-40ee-804f-3ddd256a166a" containerID="d5712d2140fc769d95cc498077c5b51dd674fa5d0e2d88428c1fd7cfbc99eafe" exitCode=0 Feb 02 10:58:52 crc kubenswrapper[4782]: I0202 10:58:52.060228 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6c4497f454-mphzd" event={"ID":"64a58e87-7403-40ee-804f-3ddd256a166a","Type":"ContainerDied","Data":"d5712d2140fc769d95cc498077c5b51dd674fa5d0e2d88428c1fd7cfbc99eafe"} Feb 02 10:58:52 crc kubenswrapper[4782]: I0202 10:58:52.064429 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4pbvk\" (UniqueName: \"kubernetes.io/projected/6a9a0fe2-4862-47e1-91d0-553d95235f39-kube-api-access-4pbvk\") pod \"nova-cell0-db-create-964hl\" (UID: \"6a9a0fe2-4862-47e1-91d0-553d95235f39\") " pod="openstack/nova-cell0-db-create-964hl" Feb 02 10:58:52 crc kubenswrapper[4782]: I0202 10:58:52.082010 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-964hl" Feb 02 10:58:52 crc kubenswrapper[4782]: I0202 10:58:52.110967 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-627f-account-create-update-h6hdk" Feb 02 10:58:52 crc kubenswrapper[4782]: I0202 10:58:52.114494 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0abc6f3c-1f7d-4f48-8beb-205307984cdc-operator-scripts\") pod \"nova-cell0-9147-account-create-update-qcs9t\" (UID: \"0abc6f3c-1f7d-4f48-8beb-205307984cdc\") " pod="openstack/nova-cell0-9147-account-create-update-qcs9t" Feb 02 10:58:52 crc kubenswrapper[4782]: I0202 10:58:52.114767 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvppj\" (UniqueName: \"kubernetes.io/projected/0abc6f3c-1f7d-4f48-8beb-205307984cdc-kube-api-access-tvppj\") pod \"nova-cell0-9147-account-create-update-qcs9t\" (UID: \"0abc6f3c-1f7d-4f48-8beb-205307984cdc\") " pod="openstack/nova-cell0-9147-account-create-update-qcs9t" Feb 02 10:58:52 crc kubenswrapper[4782]: I0202 10:58:52.114917 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c6lxs\" (UniqueName: \"kubernetes.io/projected/c5eccd3e-f895-4c2f-a1e5-c337a89d2439-kube-api-access-c6lxs\") pod \"nova-cell1-db-create-jnw6j\" (UID: \"c5eccd3e-f895-4c2f-a1e5-c337a89d2439\") " pod="openstack/nova-cell1-db-create-jnw6j" Feb 02 10:58:52 crc kubenswrapper[4782]: I0202 10:58:52.115322 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c5eccd3e-f895-4c2f-a1e5-c337a89d2439-operator-scripts\") pod \"nova-cell1-db-create-jnw6j\" (UID: \"c5eccd3e-f895-4c2f-a1e5-c337a89d2439\") " pod="openstack/nova-cell1-db-create-jnw6j" Feb 02 10:58:52 crc kubenswrapper[4782]: I0202 10:58:52.116713 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c5eccd3e-f895-4c2f-a1e5-c337a89d2439-operator-scripts\") pod \"nova-cell1-db-create-jnw6j\" (UID: \"c5eccd3e-f895-4c2f-a1e5-c337a89d2439\") " pod="openstack/nova-cell1-db-create-jnw6j" Feb 02 10:58:52 crc kubenswrapper[4782]: I0202 10:58:52.138631 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c6lxs\" (UniqueName: \"kubernetes.io/projected/c5eccd3e-f895-4c2f-a1e5-c337a89d2439-kube-api-access-c6lxs\") pod \"nova-cell1-db-create-jnw6j\" (UID: \"c5eccd3e-f895-4c2f-a1e5-c337a89d2439\") " pod="openstack/nova-cell1-db-create-jnw6j" Feb 02 10:58:52 crc kubenswrapper[4782]: I0202 10:58:52.172143 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-3e7e-account-create-update-n4kct"] Feb 02 10:58:52 crc kubenswrapper[4782]: I0202 10:58:52.173258 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-3e7e-account-create-update-n4kct" Feb 02 10:58:52 crc kubenswrapper[4782]: I0202 10:58:52.176560 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Feb 02 10:58:52 crc kubenswrapper[4782]: I0202 10:58:52.208448 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-jnw6j" Feb 02 10:58:52 crc kubenswrapper[4782]: I0202 10:58:52.234216 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0abc6f3c-1f7d-4f48-8beb-205307984cdc-operator-scripts\") pod \"nova-cell0-9147-account-create-update-qcs9t\" (UID: \"0abc6f3c-1f7d-4f48-8beb-205307984cdc\") " pod="openstack/nova-cell0-9147-account-create-update-qcs9t" Feb 02 10:58:52 crc kubenswrapper[4782]: I0202 10:58:52.234279 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tvppj\" (UniqueName: \"kubernetes.io/projected/0abc6f3c-1f7d-4f48-8beb-205307984cdc-kube-api-access-tvppj\") pod \"nova-cell0-9147-account-create-update-qcs9t\" (UID: \"0abc6f3c-1f7d-4f48-8beb-205307984cdc\") " pod="openstack/nova-cell0-9147-account-create-update-qcs9t" Feb 02 10:58:52 crc kubenswrapper[4782]: I0202 10:58:52.243602 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0abc6f3c-1f7d-4f48-8beb-205307984cdc-operator-scripts\") pod \"nova-cell0-9147-account-create-update-qcs9t\" (UID: \"0abc6f3c-1f7d-4f48-8beb-205307984cdc\") " pod="openstack/nova-cell0-9147-account-create-update-qcs9t" Feb 02 10:58:52 crc kubenswrapper[4782]: I0202 10:58:52.254725 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-3e7e-account-create-update-n4kct"] Feb 02 10:58:52 crc kubenswrapper[4782]: I0202 10:58:52.280291 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tvppj\" (UniqueName: \"kubernetes.io/projected/0abc6f3c-1f7d-4f48-8beb-205307984cdc-kube-api-access-tvppj\") pod \"nova-cell0-9147-account-create-update-qcs9t\" (UID: \"0abc6f3c-1f7d-4f48-8beb-205307984cdc\") " pod="openstack/nova-cell0-9147-account-create-update-qcs9t" Feb 02 10:58:52 crc kubenswrapper[4782]: I0202 10:58:52.338218 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/07bbffca-46a4-4693-ae3f-011a5ee0e317-operator-scripts\") pod \"nova-cell1-3e7e-account-create-update-n4kct\" (UID: \"07bbffca-46a4-4693-ae3f-011a5ee0e317\") " pod="openstack/nova-cell1-3e7e-account-create-update-n4kct" Feb 02 10:58:52 crc kubenswrapper[4782]: I0202 10:58:52.338301 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmz48\" (UniqueName: \"kubernetes.io/projected/07bbffca-46a4-4693-ae3f-011a5ee0e317-kube-api-access-xmz48\") pod \"nova-cell1-3e7e-account-create-update-n4kct\" (UID: \"07bbffca-46a4-4693-ae3f-011a5ee0e317\") " pod="openstack/nova-cell1-3e7e-account-create-update-n4kct" Feb 02 10:58:52 crc kubenswrapper[4782]: I0202 10:58:52.443469 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/07bbffca-46a4-4693-ae3f-011a5ee0e317-operator-scripts\") pod \"nova-cell1-3e7e-account-create-update-n4kct\" (UID: \"07bbffca-46a4-4693-ae3f-011a5ee0e317\") " pod="openstack/nova-cell1-3e7e-account-create-update-n4kct" Feb 02 10:58:52 crc kubenswrapper[4782]: I0202 10:58:52.443559 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xmz48\" (UniqueName: \"kubernetes.io/projected/07bbffca-46a4-4693-ae3f-011a5ee0e317-kube-api-access-xmz48\") pod 
\"nova-cell1-3e7e-account-create-update-n4kct\" (UID: \"07bbffca-46a4-4693-ae3f-011a5ee0e317\") " pod="openstack/nova-cell1-3e7e-account-create-update-n4kct" Feb 02 10:58:52 crc kubenswrapper[4782]: I0202 10:58:52.444610 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/07bbffca-46a4-4693-ae3f-011a5ee0e317-operator-scripts\") pod \"nova-cell1-3e7e-account-create-update-n4kct\" (UID: \"07bbffca-46a4-4693-ae3f-011a5ee0e317\") " pod="openstack/nova-cell1-3e7e-account-create-update-n4kct" Feb 02 10:58:52 crc kubenswrapper[4782]: I0202 10:58:52.464844 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xmz48\" (UniqueName: \"kubernetes.io/projected/07bbffca-46a4-4693-ae3f-011a5ee0e317-kube-api-access-xmz48\") pod \"nova-cell1-3e7e-account-create-update-n4kct\" (UID: \"07bbffca-46a4-4693-ae3f-011a5ee0e317\") " pod="openstack/nova-cell1-3e7e-account-create-update-n4kct" Feb 02 10:58:52 crc kubenswrapper[4782]: I0202 10:58:52.566576 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-9147-account-create-update-qcs9t" Feb 02 10:58:52 crc kubenswrapper[4782]: I0202 10:58:52.574995 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6c4497f454-mphzd" Feb 02 10:58:52 crc kubenswrapper[4782]: I0202 10:58:52.599441 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-3e7e-account-create-update-n4kct" Feb 02 10:58:52 crc kubenswrapper[4782]: I0202 10:58:52.647881 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/64a58e87-7403-40ee-804f-3ddd256a166a-ovndb-tls-certs\") pod \"64a58e87-7403-40ee-804f-3ddd256a166a\" (UID: \"64a58e87-7403-40ee-804f-3ddd256a166a\") " Feb 02 10:58:52 crc kubenswrapper[4782]: I0202 10:58:52.647955 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/64a58e87-7403-40ee-804f-3ddd256a166a-config\") pod \"64a58e87-7403-40ee-804f-3ddd256a166a\" (UID: \"64a58e87-7403-40ee-804f-3ddd256a166a\") " Feb 02 10:58:52 crc kubenswrapper[4782]: I0202 10:58:52.648020 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/64a58e87-7403-40ee-804f-3ddd256a166a-httpd-config\") pod \"64a58e87-7403-40ee-804f-3ddd256a166a\" (UID: \"64a58e87-7403-40ee-804f-3ddd256a166a\") " Feb 02 10:58:52 crc kubenswrapper[4782]: I0202 10:58:52.648142 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64a58e87-7403-40ee-804f-3ddd256a166a-combined-ca-bundle\") pod \"64a58e87-7403-40ee-804f-3ddd256a166a\" (UID: \"64a58e87-7403-40ee-804f-3ddd256a166a\") " Feb 02 10:58:52 crc kubenswrapper[4782]: I0202 10:58:52.648187 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nj6hh\" (UniqueName: \"kubernetes.io/projected/64a58e87-7403-40ee-804f-3ddd256a166a-kube-api-access-nj6hh\") pod \"64a58e87-7403-40ee-804f-3ddd256a166a\" (UID: \"64a58e87-7403-40ee-804f-3ddd256a166a\") " Feb 02 10:58:52 crc kubenswrapper[4782]: I0202 10:58:52.658165 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/64a58e87-7403-40ee-804f-3ddd256a166a-kube-api-access-nj6hh" (OuterVolumeSpecName: "kube-api-access-nj6hh") pod "64a58e87-7403-40ee-804f-3ddd256a166a" (UID: "64a58e87-7403-40ee-804f-3ddd256a166a"). InnerVolumeSpecName "kube-api-access-nj6hh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:58:52 crc kubenswrapper[4782]: I0202 10:58:52.664270 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64a58e87-7403-40ee-804f-3ddd256a166a-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "64a58e87-7403-40ee-804f-3ddd256a166a" (UID: "64a58e87-7403-40ee-804f-3ddd256a166a"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:58:52 crc kubenswrapper[4782]: I0202 10:58:52.714726 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64a58e87-7403-40ee-804f-3ddd256a166a-config" (OuterVolumeSpecName: "config") pod "64a58e87-7403-40ee-804f-3ddd256a166a" (UID: "64a58e87-7403-40ee-804f-3ddd256a166a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:58:52 crc kubenswrapper[4782]: I0202 10:58:52.734524 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64a58e87-7403-40ee-804f-3ddd256a166a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "64a58e87-7403-40ee-804f-3ddd256a166a" (UID: "64a58e87-7403-40ee-804f-3ddd256a166a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:58:52 crc kubenswrapper[4782]: I0202 10:58:52.753529 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64a58e87-7403-40ee-804f-3ddd256a166a-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "64a58e87-7403-40ee-804f-3ddd256a166a" (UID: "64a58e87-7403-40ee-804f-3ddd256a166a"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:58:52 crc kubenswrapper[4782]: I0202 10:58:52.759103 4782 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/64a58e87-7403-40ee-804f-3ddd256a166a-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 10:58:52 crc kubenswrapper[4782]: I0202 10:58:52.759157 4782 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/64a58e87-7403-40ee-804f-3ddd256a166a-config\") on node \"crc\" DevicePath \"\"" Feb 02 10:58:52 crc kubenswrapper[4782]: I0202 10:58:52.759168 4782 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/64a58e87-7403-40ee-804f-3ddd256a166a-httpd-config\") on node \"crc\" DevicePath \"\"" Feb 02 10:58:52 crc kubenswrapper[4782]: I0202 10:58:52.759177 4782 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64a58e87-7403-40ee-804f-3ddd256a166a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 10:58:52 crc kubenswrapper[4782]: I0202 10:58:52.759186 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nj6hh\" (UniqueName: \"kubernetes.io/projected/64a58e87-7403-40ee-804f-3ddd256a166a-kube-api-access-nj6hh\") on node \"crc\" DevicePath \"\"" Feb 02 10:58:52 crc kubenswrapper[4782]: I0202 10:58:52.791597 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-j8z8n"] Feb 02 10:58:52 crc kubenswrapper[4782]: W0202 10:58:52.815946 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8b55df6c_8971_415a_a934_0ec48a149b81.slice/crio-8eb924e1d61694d7578b5162919749cffbf5bdee957855b42828911aa5341f31 WatchSource:0}: Error finding container 8eb924e1d61694d7578b5162919749cffbf5bdee957855b42828911aa5341f31: Status 404 returned error can't find the container with id 8eb924e1d61694d7578b5162919749cffbf5bdee957855b42828911aa5341f31 Feb 02 10:58:52 crc kubenswrapper[4782]: I0202 10:58:52.897476 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-627f-account-create-update-h6hdk"] Feb 02 10:58:52 crc kubenswrapper[4782]: I0202 10:58:52.917861 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-jnw6j"] Feb 02 10:58:52 crc kubenswrapper[4782]: I0202 10:58:52.935828 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-964hl"] Feb 02 10:58:52 crc kubenswrapper[4782]: I0202 10:58:52.957208 4782 patch_prober.go:28] interesting pod/machine-config-daemon-bhdgk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 10:58:52 crc kubenswrapper[4782]: I0202 10:58:52.957243 4782 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 10:58:53 crc kubenswrapper[4782]: I0202 10:58:53.082395 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6c4497f454-mphzd" 
event={"ID":"64a58e87-7403-40ee-804f-3ddd256a166a","Type":"ContainerDied","Data":"320ca372c7bfc8f61ec1c100d757a206e5a44d87197850166ccabc894748efbf"} Feb 02 10:58:53 crc kubenswrapper[4782]: I0202 10:58:53.082735 4782 scope.go:117] "RemoveContainer" containerID="0986fa20c95b296f1e3d0bb4136c8a84c1e716858b0306720e5061305da8efda" Feb 02 10:58:53 crc kubenswrapper[4782]: I0202 10:58:53.082853 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6c4497f454-mphzd" Feb 02 10:58:53 crc kubenswrapper[4782]: I0202 10:58:53.092983 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-964hl" event={"ID":"6a9a0fe2-4862-47e1-91d0-553d95235f39","Type":"ContainerStarted","Data":"3c54228f50eb200a684b5560277db9357aea7a42d6d5f73377c7a90387ccfc5b"} Feb 02 10:58:53 crc kubenswrapper[4782]: I0202 10:58:53.096197 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-627f-account-create-update-h6hdk" event={"ID":"a9b75d8c-9435-483f-8e95-97690314cfb5","Type":"ContainerStarted","Data":"11b0ee4f356c606379fc2116b6e57a4beff8c051011c9cabd92f2c5770470a9b"} Feb 02 10:58:53 crc kubenswrapper[4782]: I0202 10:58:53.114542 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-jnw6j" event={"ID":"c5eccd3e-f895-4c2f-a1e5-c337a89d2439","Type":"ContainerStarted","Data":"2b78e6443e5dee55482e6b131b07bfb875d511a978bb0f266a573d8cf761933e"} Feb 02 10:58:53 crc kubenswrapper[4782]: I0202 10:58:53.115966 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-j8z8n" event={"ID":"8b55df6c-8971-415a-a934-0ec48a149b81","Type":"ContainerStarted","Data":"8eb924e1d61694d7578b5162919749cffbf5bdee957855b42828911aa5341f31"} Feb 02 10:58:53 crc kubenswrapper[4782]: I0202 10:58:53.156798 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6c4497f454-mphzd"] Feb 02 10:58:53 crc kubenswrapper[4782]: I0202 10:58:53.170723 4782 scope.go:117] "RemoveContainer" containerID="d5712d2140fc769d95cc498077c5b51dd674fa5d0e2d88428c1fd7cfbc99eafe" Feb 02 10:58:53 crc kubenswrapper[4782]: I0202 10:58:53.185405 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-6c4497f454-mphzd"] Feb 02 10:58:53 crc kubenswrapper[4782]: I0202 10:58:53.224305 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-9147-account-create-update-qcs9t"] Feb 02 10:58:53 crc kubenswrapper[4782]: W0202 10:58:53.225341 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0abc6f3c_1f7d_4f48_8beb_205307984cdc.slice/crio-6639261a2c919bfdd00795d8e79e67bfc4eb5005a72ee2ddb48aeefee9db27fe WatchSource:0}: Error finding container 6639261a2c919bfdd00795d8e79e67bfc4eb5005a72ee2ddb48aeefee9db27fe: Status 404 returned error can't find the container with id 6639261a2c919bfdd00795d8e79e67bfc4eb5005a72ee2ddb48aeefee9db27fe Feb 02 10:58:53 crc kubenswrapper[4782]: I0202 10:58:53.246789 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-3e7e-account-create-update-n4kct"] Feb 02 10:58:54 crc kubenswrapper[4782]: I0202 10:58:54.148920 4782 generic.go:334] "Generic (PLEG): container finished" podID="6a9a0fe2-4862-47e1-91d0-553d95235f39" containerID="ec0e250135ad643a0376384574da7a7800b3dd64125badf008a16dd100e20d1b" exitCode=0 Feb 02 10:58:54 crc kubenswrapper[4782]: I0202 10:58:54.149117 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell0-db-create-964hl" event={"ID":"6a9a0fe2-4862-47e1-91d0-553d95235f39","Type":"ContainerDied","Data":"ec0e250135ad643a0376384574da7a7800b3dd64125badf008a16dd100e20d1b"} Feb 02 10:58:54 crc kubenswrapper[4782]: I0202 10:58:54.156056 4782 generic.go:334] "Generic (PLEG): container finished" podID="a9b75d8c-9435-483f-8e95-97690314cfb5" containerID="0e49e7a8577a45cc63b62f6e59ab36faf5118b4383cec84de5d8b281d39fd041" exitCode=0 Feb 02 10:58:54 crc kubenswrapper[4782]: I0202 10:58:54.156130 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-627f-account-create-update-h6hdk" event={"ID":"a9b75d8c-9435-483f-8e95-97690314cfb5","Type":"ContainerDied","Data":"0e49e7a8577a45cc63b62f6e59ab36faf5118b4383cec84de5d8b281d39fd041"} Feb 02 10:58:54 crc kubenswrapper[4782]: I0202 10:58:54.186045 4782 generic.go:334] "Generic (PLEG): container finished" podID="07bbffca-46a4-4693-ae3f-011a5ee0e317" containerID="7930f426b6752d1ae7cd1f189cd59a47f3d0e3b099f35ef79f6b78e86ed5ab0d" exitCode=0 Feb 02 10:58:54 crc kubenswrapper[4782]: I0202 10:58:54.186145 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-3e7e-account-create-update-n4kct" event={"ID":"07bbffca-46a4-4693-ae3f-011a5ee0e317","Type":"ContainerDied","Data":"7930f426b6752d1ae7cd1f189cd59a47f3d0e3b099f35ef79f6b78e86ed5ab0d"} Feb 02 10:58:54 crc kubenswrapper[4782]: I0202 10:58:54.186172 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-3e7e-account-create-update-n4kct" event={"ID":"07bbffca-46a4-4693-ae3f-011a5ee0e317","Type":"ContainerStarted","Data":"3299d9c6ed3caafaaffaf3c18e551c63a3c2d8756d2d26fa5022af14102dc560"} Feb 02 10:58:54 crc kubenswrapper[4782]: I0202 10:58:54.195692 4782 generic.go:334] "Generic (PLEG): container finished" podID="c5eccd3e-f895-4c2f-a1e5-c337a89d2439" containerID="23b00d976eb1b671e56fad532c67af4f3f0fb48695bf8d78a64de1654d16975f" exitCode=0 Feb 02 10:58:54 crc kubenswrapper[4782]: I0202 10:58:54.195756 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-jnw6j" event={"ID":"c5eccd3e-f895-4c2f-a1e5-c337a89d2439","Type":"ContainerDied","Data":"23b00d976eb1b671e56fad532c67af4f3f0fb48695bf8d78a64de1654d16975f"} Feb 02 10:58:54 crc kubenswrapper[4782]: I0202 10:58:54.214576 4782 generic.go:334] "Generic (PLEG): container finished" podID="8b55df6c-8971-415a-a934-0ec48a149b81" containerID="938ebd6bbe46ccc6431b3d92e3b6f8803ade372fd58b9fe07b9f065675fc25c4" exitCode=0 Feb 02 10:58:54 crc kubenswrapper[4782]: I0202 10:58:54.214686 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-j8z8n" event={"ID":"8b55df6c-8971-415a-a934-0ec48a149b81","Type":"ContainerDied","Data":"938ebd6bbe46ccc6431b3d92e3b6f8803ade372fd58b9fe07b9f065675fc25c4"} Feb 02 10:58:54 crc kubenswrapper[4782]: I0202 10:58:54.220025 4782 generic.go:334] "Generic (PLEG): container finished" podID="0abc6f3c-1f7d-4f48-8beb-205307984cdc" containerID="59303b0d2ccdb82f829b283e200498f7a3b29c09b53180da767f3025ed87821c" exitCode=0 Feb 02 10:58:54 crc kubenswrapper[4782]: I0202 10:58:54.220159 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-9147-account-create-update-qcs9t" event={"ID":"0abc6f3c-1f7d-4f48-8beb-205307984cdc","Type":"ContainerDied","Data":"59303b0d2ccdb82f829b283e200498f7a3b29c09b53180da767f3025ed87821c"} Feb 02 10:58:54 crc kubenswrapper[4782]: I0202 10:58:54.220361 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell0-9147-account-create-update-qcs9t" event={"ID":"0abc6f3c-1f7d-4f48-8beb-205307984cdc","Type":"ContainerStarted","Data":"6639261a2c919bfdd00795d8e79e67bfc4eb5005a72ee2ddb48aeefee9db27fe"} Feb 02 10:58:54 crc kubenswrapper[4782]: I0202 10:58:54.831579 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="64a58e87-7403-40ee-804f-3ddd256a166a" path="/var/lib/kubelet/pods/64a58e87-7403-40ee-804f-3ddd256a166a/volumes" Feb 02 10:58:55 crc kubenswrapper[4782]: I0202 10:58:55.662856 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-9147-account-create-update-qcs9t" Feb 02 10:58:55 crc kubenswrapper[4782]: I0202 10:58:55.731252 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0abc6f3c-1f7d-4f48-8beb-205307984cdc-operator-scripts\") pod \"0abc6f3c-1f7d-4f48-8beb-205307984cdc\" (UID: \"0abc6f3c-1f7d-4f48-8beb-205307984cdc\") " Feb 02 10:58:55 crc kubenswrapper[4782]: I0202 10:58:55.731350 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tvppj\" (UniqueName: \"kubernetes.io/projected/0abc6f3c-1f7d-4f48-8beb-205307984cdc-kube-api-access-tvppj\") pod \"0abc6f3c-1f7d-4f48-8beb-205307984cdc\" (UID: \"0abc6f3c-1f7d-4f48-8beb-205307984cdc\") " Feb 02 10:58:55 crc kubenswrapper[4782]: I0202 10:58:55.733047 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0abc6f3c-1f7d-4f48-8beb-205307984cdc-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0abc6f3c-1f7d-4f48-8beb-205307984cdc" (UID: "0abc6f3c-1f7d-4f48-8beb-205307984cdc"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:58:55 crc kubenswrapper[4782]: I0202 10:58:55.774002 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0abc6f3c-1f7d-4f48-8beb-205307984cdc-kube-api-access-tvppj" (OuterVolumeSpecName: "kube-api-access-tvppj") pod "0abc6f3c-1f7d-4f48-8beb-205307984cdc" (UID: "0abc6f3c-1f7d-4f48-8beb-205307984cdc"). InnerVolumeSpecName "kube-api-access-tvppj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:58:55 crc kubenswrapper[4782]: I0202 10:58:55.835759 4782 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0abc6f3c-1f7d-4f48-8beb-205307984cdc-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 10:58:55 crc kubenswrapper[4782]: I0202 10:58:55.835781 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tvppj\" (UniqueName: \"kubernetes.io/projected/0abc6f3c-1f7d-4f48-8beb-205307984cdc-kube-api-access-tvppj\") on node \"crc\" DevicePath \"\"" Feb 02 10:58:55 crc kubenswrapper[4782]: I0202 10:58:55.899440 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-j8z8n" Feb 02 10:58:55 crc kubenswrapper[4782]: I0202 10:58:55.908848 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-3e7e-account-create-update-n4kct" Feb 02 10:58:55 crc kubenswrapper[4782]: I0202 10:58:55.931577 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-964hl" Feb 02 10:58:55 crc kubenswrapper[4782]: I0202 10:58:55.963310 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-jnw6j" Feb 02 10:58:55 crc kubenswrapper[4782]: I0202 10:58:55.975400 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-627f-account-create-update-h6hdk" Feb 02 10:58:56 crc kubenswrapper[4782]: I0202 10:58:56.038032 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c5eccd3e-f895-4c2f-a1e5-c337a89d2439-operator-scripts\") pod \"c5eccd3e-f895-4c2f-a1e5-c337a89d2439\" (UID: \"c5eccd3e-f895-4c2f-a1e5-c337a89d2439\") " Feb 02 10:58:56 crc kubenswrapper[4782]: I0202 10:58:56.038421 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xmz48\" (UniqueName: \"kubernetes.io/projected/07bbffca-46a4-4693-ae3f-011a5ee0e317-kube-api-access-xmz48\") pod \"07bbffca-46a4-4693-ae3f-011a5ee0e317\" (UID: \"07bbffca-46a4-4693-ae3f-011a5ee0e317\") " Feb 02 10:58:56 crc kubenswrapper[4782]: I0202 10:58:56.038468 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mxnv4\" (UniqueName: \"kubernetes.io/projected/a9b75d8c-9435-483f-8e95-97690314cfb5-kube-api-access-mxnv4\") pod \"a9b75d8c-9435-483f-8e95-97690314cfb5\" (UID: \"a9b75d8c-9435-483f-8e95-97690314cfb5\") " Feb 02 10:58:56 crc kubenswrapper[4782]: I0202 10:58:56.038553 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/07bbffca-46a4-4693-ae3f-011a5ee0e317-operator-scripts\") pod \"07bbffca-46a4-4693-ae3f-011a5ee0e317\" (UID: \"07bbffca-46a4-4693-ae3f-011a5ee0e317\") " Feb 02 10:58:56 crc kubenswrapper[4782]: I0202 10:58:56.038737 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6a9a0fe2-4862-47e1-91d0-553d95235f39-operator-scripts\") pod \"6a9a0fe2-4862-47e1-91d0-553d95235f39\" (UID: \"6a9a0fe2-4862-47e1-91d0-553d95235f39\") " Feb 02 10:58:56 crc kubenswrapper[4782]: I0202 10:58:56.038790 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c6lxs\" (UniqueName: \"kubernetes.io/projected/c5eccd3e-f895-4c2f-a1e5-c337a89d2439-kube-api-access-c6lxs\") pod \"c5eccd3e-f895-4c2f-a1e5-c337a89d2439\" (UID: \"c5eccd3e-f895-4c2f-a1e5-c337a89d2439\") " Feb 02 10:58:56 crc kubenswrapper[4782]: I0202 10:58:56.038842 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cr2lc\" (UniqueName: \"kubernetes.io/projected/8b55df6c-8971-415a-a934-0ec48a149b81-kube-api-access-cr2lc\") pod \"8b55df6c-8971-415a-a934-0ec48a149b81\" (UID: \"8b55df6c-8971-415a-a934-0ec48a149b81\") " Feb 02 10:58:56 crc kubenswrapper[4782]: I0202 10:58:56.038890 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a9b75d8c-9435-483f-8e95-97690314cfb5-operator-scripts\") pod \"a9b75d8c-9435-483f-8e95-97690314cfb5\" (UID: \"a9b75d8c-9435-483f-8e95-97690314cfb5\") " Feb 02 10:58:56 crc kubenswrapper[4782]: I0202 10:58:56.038884 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/c5eccd3e-f895-4c2f-a1e5-c337a89d2439-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c5eccd3e-f895-4c2f-a1e5-c337a89d2439" (UID: "c5eccd3e-f895-4c2f-a1e5-c337a89d2439"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:58:56 crc kubenswrapper[4782]: I0202 10:58:56.038917 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8b55df6c-8971-415a-a934-0ec48a149b81-operator-scripts\") pod \"8b55df6c-8971-415a-a934-0ec48a149b81\" (UID: \"8b55df6c-8971-415a-a934-0ec48a149b81\") " Feb 02 10:58:56 crc kubenswrapper[4782]: I0202 10:58:56.038940 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4pbvk\" (UniqueName: \"kubernetes.io/projected/6a9a0fe2-4862-47e1-91d0-553d95235f39-kube-api-access-4pbvk\") pod \"6a9a0fe2-4862-47e1-91d0-553d95235f39\" (UID: \"6a9a0fe2-4862-47e1-91d0-553d95235f39\") " Feb 02 10:58:56 crc kubenswrapper[4782]: I0202 10:58:56.039225 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a9a0fe2-4862-47e1-91d0-553d95235f39-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6a9a0fe2-4862-47e1-91d0-553d95235f39" (UID: "6a9a0fe2-4862-47e1-91d0-553d95235f39"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:58:56 crc kubenswrapper[4782]: I0202 10:58:56.039419 4782 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6a9a0fe2-4862-47e1-91d0-553d95235f39-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 10:58:56 crc kubenswrapper[4782]: I0202 10:58:56.039435 4782 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c5eccd3e-f895-4c2f-a1e5-c337a89d2439-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 10:58:56 crc kubenswrapper[4782]: I0202 10:58:56.040072 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/07bbffca-46a4-4693-ae3f-011a5ee0e317-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "07bbffca-46a4-4693-ae3f-011a5ee0e317" (UID: "07bbffca-46a4-4693-ae3f-011a5ee0e317"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:58:56 crc kubenswrapper[4782]: I0202 10:58:56.041657 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9b75d8c-9435-483f-8e95-97690314cfb5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a9b75d8c-9435-483f-8e95-97690314cfb5" (UID: "a9b75d8c-9435-483f-8e95-97690314cfb5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:58:56 crc kubenswrapper[4782]: I0202 10:58:56.043070 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b55df6c-8971-415a-a934-0ec48a149b81-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8b55df6c-8971-415a-a934-0ec48a149b81" (UID: "8b55df6c-8971-415a-a934-0ec48a149b81"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:58:56 crc kubenswrapper[4782]: I0202 10:58:56.043487 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b55df6c-8971-415a-a934-0ec48a149b81-kube-api-access-cr2lc" (OuterVolumeSpecName: "kube-api-access-cr2lc") pod "8b55df6c-8971-415a-a934-0ec48a149b81" (UID: "8b55df6c-8971-415a-a934-0ec48a149b81"). InnerVolumeSpecName "kube-api-access-cr2lc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:58:56 crc kubenswrapper[4782]: I0202 10:58:56.044379 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9b75d8c-9435-483f-8e95-97690314cfb5-kube-api-access-mxnv4" (OuterVolumeSpecName: "kube-api-access-mxnv4") pod "a9b75d8c-9435-483f-8e95-97690314cfb5" (UID: "a9b75d8c-9435-483f-8e95-97690314cfb5"). InnerVolumeSpecName "kube-api-access-mxnv4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:58:56 crc kubenswrapper[4782]: I0202 10:58:56.044745 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a9a0fe2-4862-47e1-91d0-553d95235f39-kube-api-access-4pbvk" (OuterVolumeSpecName: "kube-api-access-4pbvk") pod "6a9a0fe2-4862-47e1-91d0-553d95235f39" (UID: "6a9a0fe2-4862-47e1-91d0-553d95235f39"). InnerVolumeSpecName "kube-api-access-4pbvk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:58:56 crc kubenswrapper[4782]: I0202 10:58:56.045141 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07bbffca-46a4-4693-ae3f-011a5ee0e317-kube-api-access-xmz48" (OuterVolumeSpecName: "kube-api-access-xmz48") pod "07bbffca-46a4-4693-ae3f-011a5ee0e317" (UID: "07bbffca-46a4-4693-ae3f-011a5ee0e317"). InnerVolumeSpecName "kube-api-access-xmz48". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:58:56 crc kubenswrapper[4782]: I0202 10:58:56.053042 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5eccd3e-f895-4c2f-a1e5-c337a89d2439-kube-api-access-c6lxs" (OuterVolumeSpecName: "kube-api-access-c6lxs") pod "c5eccd3e-f895-4c2f-a1e5-c337a89d2439" (UID: "c5eccd3e-f895-4c2f-a1e5-c337a89d2439"). InnerVolumeSpecName "kube-api-access-c6lxs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:58:56 crc kubenswrapper[4782]: I0202 10:58:56.140460 4782 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/07bbffca-46a4-4693-ae3f-011a5ee0e317-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 10:58:56 crc kubenswrapper[4782]: I0202 10:58:56.140686 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c6lxs\" (UniqueName: \"kubernetes.io/projected/c5eccd3e-f895-4c2f-a1e5-c337a89d2439-kube-api-access-c6lxs\") on node \"crc\" DevicePath \"\"" Feb 02 10:58:56 crc kubenswrapper[4782]: I0202 10:58:56.140750 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cr2lc\" (UniqueName: \"kubernetes.io/projected/8b55df6c-8971-415a-a934-0ec48a149b81-kube-api-access-cr2lc\") on node \"crc\" DevicePath \"\"" Feb 02 10:58:56 crc kubenswrapper[4782]: I0202 10:58:56.140806 4782 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a9b75d8c-9435-483f-8e95-97690314cfb5-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 10:58:56 crc kubenswrapper[4782]: I0202 10:58:56.140885 4782 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8b55df6c-8971-415a-a934-0ec48a149b81-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 10:58:56 crc kubenswrapper[4782]: I0202 10:58:56.140937 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4pbvk\" (UniqueName: \"kubernetes.io/projected/6a9a0fe2-4862-47e1-91d0-553d95235f39-kube-api-access-4pbvk\") on node \"crc\" DevicePath \"\"" Feb 02 10:58:56 crc kubenswrapper[4782]: I0202 10:58:56.140998 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xmz48\" (UniqueName: \"kubernetes.io/projected/07bbffca-46a4-4693-ae3f-011a5ee0e317-kube-api-access-xmz48\") on node \"crc\" DevicePath \"\"" Feb 02 10:58:56 crc kubenswrapper[4782]: I0202 10:58:56.141052 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mxnv4\" (UniqueName: \"kubernetes.io/projected/a9b75d8c-9435-483f-8e95-97690314cfb5-kube-api-access-mxnv4\") on node \"crc\" DevicePath \"\"" Feb 02 10:58:56 crc kubenswrapper[4782]: I0202 10:58:56.238134 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-jnw6j" event={"ID":"c5eccd3e-f895-4c2f-a1e5-c337a89d2439","Type":"ContainerDied","Data":"2b78e6443e5dee55482e6b131b07bfb875d511a978bb0f266a573d8cf761933e"} Feb 02 10:58:56 crc kubenswrapper[4782]: I0202 10:58:56.238185 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-jnw6j" Feb 02 10:58:56 crc kubenswrapper[4782]: I0202 10:58:56.238206 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2b78e6443e5dee55482e6b131b07bfb875d511a978bb0f266a573d8cf761933e" Feb 02 10:58:56 crc kubenswrapper[4782]: I0202 10:58:56.240704 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-j8z8n" event={"ID":"8b55df6c-8971-415a-a934-0ec48a149b81","Type":"ContainerDied","Data":"8eb924e1d61694d7578b5162919749cffbf5bdee957855b42828911aa5341f31"} Feb 02 10:58:56 crc kubenswrapper[4782]: I0202 10:58:56.240756 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-j8z8n" Feb 02 10:58:56 crc kubenswrapper[4782]: I0202 10:58:56.240765 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8eb924e1d61694d7578b5162919749cffbf5bdee957855b42828911aa5341f31" Feb 02 10:58:56 crc kubenswrapper[4782]: I0202 10:58:56.243956 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-9147-account-create-update-qcs9t" event={"ID":"0abc6f3c-1f7d-4f48-8beb-205307984cdc","Type":"ContainerDied","Data":"6639261a2c919bfdd00795d8e79e67bfc4eb5005a72ee2ddb48aeefee9db27fe"} Feb 02 10:58:56 crc kubenswrapper[4782]: I0202 10:58:56.244096 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6639261a2c919bfdd00795d8e79e67bfc4eb5005a72ee2ddb48aeefee9db27fe" Feb 02 10:58:56 crc kubenswrapper[4782]: I0202 10:58:56.244006 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-9147-account-create-update-qcs9t" Feb 02 10:58:56 crc kubenswrapper[4782]: I0202 10:58:56.246308 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-964hl" event={"ID":"6a9a0fe2-4862-47e1-91d0-553d95235f39","Type":"ContainerDied","Data":"3c54228f50eb200a684b5560277db9357aea7a42d6d5f73377c7a90387ccfc5b"} Feb 02 10:58:56 crc kubenswrapper[4782]: I0202 10:58:56.246353 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3c54228f50eb200a684b5560277db9357aea7a42d6d5f73377c7a90387ccfc5b" Feb 02 10:58:56 crc kubenswrapper[4782]: I0202 10:58:56.246404 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-964hl" Feb 02 10:58:56 crc kubenswrapper[4782]: I0202 10:58:56.256721 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-627f-account-create-update-h6hdk" Feb 02 10:58:56 crc kubenswrapper[4782]: I0202 10:58:56.256741 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-627f-account-create-update-h6hdk" event={"ID":"a9b75d8c-9435-483f-8e95-97690314cfb5","Type":"ContainerDied","Data":"11b0ee4f356c606379fc2116b6e57a4beff8c051011c9cabd92f2c5770470a9b"} Feb 02 10:58:56 crc kubenswrapper[4782]: I0202 10:58:56.257328 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="11b0ee4f356c606379fc2116b6e57a4beff8c051011c9cabd92f2c5770470a9b" Feb 02 10:58:56 crc kubenswrapper[4782]: I0202 10:58:56.258652 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-3e7e-account-create-update-n4kct" event={"ID":"07bbffca-46a4-4693-ae3f-011a5ee0e317","Type":"ContainerDied","Data":"3299d9c6ed3caafaaffaf3c18e551c63a3c2d8756d2d26fa5022af14102dc560"} Feb 02 10:58:56 crc kubenswrapper[4782]: I0202 10:58:56.258684 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3299d9c6ed3caafaaffaf3c18e551c63a3c2d8756d2d26fa5022af14102dc560" Feb 02 10:58:56 crc kubenswrapper[4782]: I0202 10:58:56.258743 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-3e7e-account-create-update-n4kct" Feb 02 10:58:57 crc kubenswrapper[4782]: I0202 10:58:57.307188 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-555cfb6c68-sntkc" Feb 02 10:58:57 crc kubenswrapper[4782]: I0202 10:58:57.333345 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-555cfb6c68-sntkc" Feb 02 10:58:57 crc kubenswrapper[4782]: I0202 10:58:57.432026 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-54577c875b-pcjgd"] Feb 02 10:58:57 crc kubenswrapper[4782]: I0202 10:58:57.432264 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-54577c875b-pcjgd" podUID="060c1eb2-7773-4122-8725-bf421f0feaac" containerName="placement-log" containerID="cri-o://ec97c894219369704f6d577f2813b7df3479ffdca3180ed17dd5502cd5bb558d" gracePeriod=30 Feb 02 10:58:57 crc kubenswrapper[4782]: I0202 10:58:57.432738 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-54577c875b-pcjgd" podUID="060c1eb2-7773-4122-8725-bf421f0feaac" containerName="placement-api" containerID="cri-o://b6a520c74e05f199aa469fca465c59c45232f5f041f26f9b6fdd8b54d56904e5" gracePeriod=30 Feb 02 10:58:58 crc kubenswrapper[4782]: I0202 10:58:58.275713 4782 generic.go:334] "Generic (PLEG): container finished" podID="060c1eb2-7773-4122-8725-bf421f0feaac" containerID="ec97c894219369704f6d577f2813b7df3479ffdca3180ed17dd5502cd5bb558d" exitCode=143 Feb 02 10:58:58 crc kubenswrapper[4782]: I0202 10:58:58.275806 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-54577c875b-pcjgd" event={"ID":"060c1eb2-7773-4122-8725-bf421f0feaac","Type":"ContainerDied","Data":"ec97c894219369704f6d577f2813b7df3479ffdca3180ed17dd5502cd5bb558d"} Feb 02 10:59:01 crc kubenswrapper[4782]: I0202 10:59:01.036895 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-54577c875b-pcjgd" Feb 02 10:59:01 crc kubenswrapper[4782]: I0202 10:59:01.146690 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/060c1eb2-7773-4122-8725-bf421f0feaac-config-data\") pod \"060c1eb2-7773-4122-8725-bf421f0feaac\" (UID: \"060c1eb2-7773-4122-8725-bf421f0feaac\") " Feb 02 10:59:01 crc kubenswrapper[4782]: I0202 10:59:01.146827 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/060c1eb2-7773-4122-8725-bf421f0feaac-internal-tls-certs\") pod \"060c1eb2-7773-4122-8725-bf421f0feaac\" (UID: \"060c1eb2-7773-4122-8725-bf421f0feaac\") " Feb 02 10:59:01 crc kubenswrapper[4782]: I0202 10:59:01.146872 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/060c1eb2-7773-4122-8725-bf421f0feaac-public-tls-certs\") pod \"060c1eb2-7773-4122-8725-bf421f0feaac\" (UID: \"060c1eb2-7773-4122-8725-bf421f0feaac\") " Feb 02 10:59:01 crc kubenswrapper[4782]: I0202 10:59:01.146940 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/060c1eb2-7773-4122-8725-bf421f0feaac-scripts\") pod \"060c1eb2-7773-4122-8725-bf421f0feaac\" (UID: \"060c1eb2-7773-4122-8725-bf421f0feaac\") " Feb 02 10:59:01 crc kubenswrapper[4782]: I0202 10:59:01.146998 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/060c1eb2-7773-4122-8725-bf421f0feaac-logs\") pod \"060c1eb2-7773-4122-8725-bf421f0feaac\" (UID: \"060c1eb2-7773-4122-8725-bf421f0feaac\") " Feb 02 10:59:01 crc kubenswrapper[4782]: I0202 10:59:01.147078 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p6nnm\" (UniqueName: \"kubernetes.io/projected/060c1eb2-7773-4122-8725-bf421f0feaac-kube-api-access-p6nnm\") pod \"060c1eb2-7773-4122-8725-bf421f0feaac\" (UID: \"060c1eb2-7773-4122-8725-bf421f0feaac\") " Feb 02 10:59:01 crc kubenswrapper[4782]: I0202 10:59:01.147138 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/060c1eb2-7773-4122-8725-bf421f0feaac-combined-ca-bundle\") pod \"060c1eb2-7773-4122-8725-bf421f0feaac\" (UID: \"060c1eb2-7773-4122-8725-bf421f0feaac\") " Feb 02 10:59:01 crc kubenswrapper[4782]: I0202 10:59:01.148711 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/060c1eb2-7773-4122-8725-bf421f0feaac-logs" (OuterVolumeSpecName: "logs") pod "060c1eb2-7773-4122-8725-bf421f0feaac" (UID: "060c1eb2-7773-4122-8725-bf421f0feaac"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:59:01 crc kubenswrapper[4782]: I0202 10:59:01.155833 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/060c1eb2-7773-4122-8725-bf421f0feaac-kube-api-access-p6nnm" (OuterVolumeSpecName: "kube-api-access-p6nnm") pod "060c1eb2-7773-4122-8725-bf421f0feaac" (UID: "060c1eb2-7773-4122-8725-bf421f0feaac"). InnerVolumeSpecName "kube-api-access-p6nnm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:59:01 crc kubenswrapper[4782]: I0202 10:59:01.155964 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/060c1eb2-7773-4122-8725-bf421f0feaac-scripts" (OuterVolumeSpecName: "scripts") pod "060c1eb2-7773-4122-8725-bf421f0feaac" (UID: "060c1eb2-7773-4122-8725-bf421f0feaac"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:59:01 crc kubenswrapper[4782]: I0202 10:59:01.204164 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/060c1eb2-7773-4122-8725-bf421f0feaac-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "060c1eb2-7773-4122-8725-bf421f0feaac" (UID: "060c1eb2-7773-4122-8725-bf421f0feaac"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:59:01 crc kubenswrapper[4782]: I0202 10:59:01.223078 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/060c1eb2-7773-4122-8725-bf421f0feaac-config-data" (OuterVolumeSpecName: "config-data") pod "060c1eb2-7773-4122-8725-bf421f0feaac" (UID: "060c1eb2-7773-4122-8725-bf421f0feaac"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:59:01 crc kubenswrapper[4782]: I0202 10:59:01.249459 4782 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/060c1eb2-7773-4122-8725-bf421f0feaac-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 10:59:01 crc kubenswrapper[4782]: I0202 10:59:01.249756 4782 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/060c1eb2-7773-4122-8725-bf421f0feaac-logs\") on node \"crc\" DevicePath \"\"" Feb 02 10:59:01 crc kubenswrapper[4782]: I0202 10:59:01.249828 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p6nnm\" (UniqueName: \"kubernetes.io/projected/060c1eb2-7773-4122-8725-bf421f0feaac-kube-api-access-p6nnm\") on node \"crc\" DevicePath \"\"" Feb 02 10:59:01 crc kubenswrapper[4782]: I0202 10:59:01.249897 4782 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/060c1eb2-7773-4122-8725-bf421f0feaac-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 10:59:01 crc kubenswrapper[4782]: I0202 10:59:01.249963 4782 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/060c1eb2-7773-4122-8725-bf421f0feaac-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 10:59:01 crc kubenswrapper[4782]: I0202 10:59:01.253488 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/060c1eb2-7773-4122-8725-bf421f0feaac-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "060c1eb2-7773-4122-8725-bf421f0feaac" (UID: "060c1eb2-7773-4122-8725-bf421f0feaac"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:59:01 crc kubenswrapper[4782]: I0202 10:59:01.259747 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/060c1eb2-7773-4122-8725-bf421f0feaac-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "060c1eb2-7773-4122-8725-bf421f0feaac" (UID: "060c1eb2-7773-4122-8725-bf421f0feaac"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:59:01 crc kubenswrapper[4782]: I0202 10:59:01.311221 4782 generic.go:334] "Generic (PLEG): container finished" podID="060c1eb2-7773-4122-8725-bf421f0feaac" containerID="b6a520c74e05f199aa469fca465c59c45232f5f041f26f9b6fdd8b54d56904e5" exitCode=0 Feb 02 10:59:01 crc kubenswrapper[4782]: I0202 10:59:01.311280 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-54577c875b-pcjgd" event={"ID":"060c1eb2-7773-4122-8725-bf421f0feaac","Type":"ContainerDied","Data":"b6a520c74e05f199aa469fca465c59c45232f5f041f26f9b6fdd8b54d56904e5"} Feb 02 10:59:01 crc kubenswrapper[4782]: I0202 10:59:01.311322 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-54577c875b-pcjgd" event={"ID":"060c1eb2-7773-4122-8725-bf421f0feaac","Type":"ContainerDied","Data":"cc82b2ae4f32dd0c9b66e6fd35aca7b63dafb5ecb405dc4f5284a0fab8cac1a7"} Feb 02 10:59:01 crc kubenswrapper[4782]: I0202 10:59:01.311363 4782 scope.go:117] "RemoveContainer" containerID="b6a520c74e05f199aa469fca465c59c45232f5f041f26f9b6fdd8b54d56904e5" Feb 02 10:59:01 crc kubenswrapper[4782]: I0202 10:59:01.311592 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-54577c875b-pcjgd" Feb 02 10:59:01 crc kubenswrapper[4782]: I0202 10:59:01.339328 4782 scope.go:117] "RemoveContainer" containerID="ec97c894219369704f6d577f2813b7df3479ffdca3180ed17dd5502cd5bb558d" Feb 02 10:59:01 crc kubenswrapper[4782]: I0202 10:59:01.352861 4782 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/060c1eb2-7773-4122-8725-bf421f0feaac-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 10:59:01 crc kubenswrapper[4782]: I0202 10:59:01.352916 4782 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/060c1eb2-7773-4122-8725-bf421f0feaac-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 10:59:01 crc kubenswrapper[4782]: I0202 10:59:01.362513 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-54577c875b-pcjgd"] Feb 02 10:59:01 crc kubenswrapper[4782]: I0202 10:59:01.372099 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-54577c875b-pcjgd"] Feb 02 10:59:01 crc kubenswrapper[4782]: I0202 10:59:01.373810 4782 scope.go:117] "RemoveContainer" containerID="b6a520c74e05f199aa469fca465c59c45232f5f041f26f9b6fdd8b54d56904e5" Feb 02 10:59:01 crc kubenswrapper[4782]: E0202 10:59:01.374372 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b6a520c74e05f199aa469fca465c59c45232f5f041f26f9b6fdd8b54d56904e5\": container with ID starting with b6a520c74e05f199aa469fca465c59c45232f5f041f26f9b6fdd8b54d56904e5 not found: ID does not exist" containerID="b6a520c74e05f199aa469fca465c59c45232f5f041f26f9b6fdd8b54d56904e5" Feb 02 10:59:01 crc kubenswrapper[4782]: I0202 10:59:01.374836 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b6a520c74e05f199aa469fca465c59c45232f5f041f26f9b6fdd8b54d56904e5"} err="failed to get container status \"b6a520c74e05f199aa469fca465c59c45232f5f041f26f9b6fdd8b54d56904e5\": rpc error: code = NotFound desc = could not find container \"b6a520c74e05f199aa469fca465c59c45232f5f041f26f9b6fdd8b54d56904e5\": container with ID starting with b6a520c74e05f199aa469fca465c59c45232f5f041f26f9b6fdd8b54d56904e5 not 
found: ID does not exist" Feb 02 10:59:01 crc kubenswrapper[4782]: I0202 10:59:01.374865 4782 scope.go:117] "RemoveContainer" containerID="ec97c894219369704f6d577f2813b7df3479ffdca3180ed17dd5502cd5bb558d" Feb 02 10:59:01 crc kubenswrapper[4782]: E0202 10:59:01.375271 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ec97c894219369704f6d577f2813b7df3479ffdca3180ed17dd5502cd5bb558d\": container with ID starting with ec97c894219369704f6d577f2813b7df3479ffdca3180ed17dd5502cd5bb558d not found: ID does not exist" containerID="ec97c894219369704f6d577f2813b7df3479ffdca3180ed17dd5502cd5bb558d" Feb 02 10:59:01 crc kubenswrapper[4782]: I0202 10:59:01.375343 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec97c894219369704f6d577f2813b7df3479ffdca3180ed17dd5502cd5bb558d"} err="failed to get container status \"ec97c894219369704f6d577f2813b7df3479ffdca3180ed17dd5502cd5bb558d\": rpc error: code = NotFound desc = could not find container \"ec97c894219369704f6d577f2813b7df3479ffdca3180ed17dd5502cd5bb558d\": container with ID starting with ec97c894219369704f6d577f2813b7df3479ffdca3180ed17dd5502cd5bb558d not found: ID does not exist" Feb 02 10:59:02 crc kubenswrapper[4782]: I0202 10:59:02.158808 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-lcdcm"] Feb 02 10:59:02 crc kubenswrapper[4782]: E0202 10:59:02.159218 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9b75d8c-9435-483f-8e95-97690314cfb5" containerName="mariadb-account-create-update" Feb 02 10:59:02 crc kubenswrapper[4782]: I0202 10:59:02.159235 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9b75d8c-9435-483f-8e95-97690314cfb5" containerName="mariadb-account-create-update" Feb 02 10:59:02 crc kubenswrapper[4782]: E0202 10:59:02.159252 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="060c1eb2-7773-4122-8725-bf421f0feaac" containerName="placement-api" Feb 02 10:59:02 crc kubenswrapper[4782]: I0202 10:59:02.159261 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="060c1eb2-7773-4122-8725-bf421f0feaac" containerName="placement-api" Feb 02 10:59:02 crc kubenswrapper[4782]: E0202 10:59:02.159274 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a9a0fe2-4862-47e1-91d0-553d95235f39" containerName="mariadb-database-create" Feb 02 10:59:02 crc kubenswrapper[4782]: I0202 10:59:02.159281 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a9a0fe2-4862-47e1-91d0-553d95235f39" containerName="mariadb-database-create" Feb 02 10:59:02 crc kubenswrapper[4782]: E0202 10:59:02.159301 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="060c1eb2-7773-4122-8725-bf421f0feaac" containerName="placement-log" Feb 02 10:59:02 crc kubenswrapper[4782]: I0202 10:59:02.159308 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="060c1eb2-7773-4122-8725-bf421f0feaac" containerName="placement-log" Feb 02 10:59:02 crc kubenswrapper[4782]: E0202 10:59:02.159321 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5eccd3e-f895-4c2f-a1e5-c337a89d2439" containerName="mariadb-database-create" Feb 02 10:59:02 crc kubenswrapper[4782]: I0202 10:59:02.159328 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5eccd3e-f895-4c2f-a1e5-c337a89d2439" containerName="mariadb-database-create" Feb 02 10:59:02 crc kubenswrapper[4782]: E0202 10:59:02.159338 4782 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="64a58e87-7403-40ee-804f-3ddd256a166a" containerName="neutron-api" Feb 02 10:59:02 crc kubenswrapper[4782]: I0202 10:59:02.159345 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="64a58e87-7403-40ee-804f-3ddd256a166a" containerName="neutron-api" Feb 02 10:59:02 crc kubenswrapper[4782]: E0202 10:59:02.159353 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b55df6c-8971-415a-a934-0ec48a149b81" containerName="mariadb-database-create" Feb 02 10:59:02 crc kubenswrapper[4782]: I0202 10:59:02.159359 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b55df6c-8971-415a-a934-0ec48a149b81" containerName="mariadb-database-create" Feb 02 10:59:02 crc kubenswrapper[4782]: E0202 10:59:02.159369 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0abc6f3c-1f7d-4f48-8beb-205307984cdc" containerName="mariadb-account-create-update" Feb 02 10:59:02 crc kubenswrapper[4782]: I0202 10:59:02.159376 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="0abc6f3c-1f7d-4f48-8beb-205307984cdc" containerName="mariadb-account-create-update" Feb 02 10:59:02 crc kubenswrapper[4782]: E0202 10:59:02.159390 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07bbffca-46a4-4693-ae3f-011a5ee0e317" containerName="mariadb-account-create-update" Feb 02 10:59:02 crc kubenswrapper[4782]: I0202 10:59:02.159397 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="07bbffca-46a4-4693-ae3f-011a5ee0e317" containerName="mariadb-account-create-update" Feb 02 10:59:02 crc kubenswrapper[4782]: E0202 10:59:02.159418 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64a58e87-7403-40ee-804f-3ddd256a166a" containerName="neutron-httpd" Feb 02 10:59:02 crc kubenswrapper[4782]: I0202 10:59:02.159425 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="64a58e87-7403-40ee-804f-3ddd256a166a" containerName="neutron-httpd" Feb 02 10:59:02 crc kubenswrapper[4782]: I0202 10:59:02.159616 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a9a0fe2-4862-47e1-91d0-553d95235f39" containerName="mariadb-database-create" Feb 02 10:59:02 crc kubenswrapper[4782]: I0202 10:59:02.159684 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b55df6c-8971-415a-a934-0ec48a149b81" containerName="mariadb-database-create" Feb 02 10:59:02 crc kubenswrapper[4782]: I0202 10:59:02.159704 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9b75d8c-9435-483f-8e95-97690314cfb5" containerName="mariadb-account-create-update" Feb 02 10:59:02 crc kubenswrapper[4782]: I0202 10:59:02.159721 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="64a58e87-7403-40ee-804f-3ddd256a166a" containerName="neutron-httpd" Feb 02 10:59:02 crc kubenswrapper[4782]: I0202 10:59:02.159738 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="64a58e87-7403-40ee-804f-3ddd256a166a" containerName="neutron-api" Feb 02 10:59:02 crc kubenswrapper[4782]: I0202 10:59:02.159755 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5eccd3e-f895-4c2f-a1e5-c337a89d2439" containerName="mariadb-database-create" Feb 02 10:59:02 crc kubenswrapper[4782]: I0202 10:59:02.159765 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="060c1eb2-7773-4122-8725-bf421f0feaac" containerName="placement-log" Feb 02 10:59:02 crc kubenswrapper[4782]: I0202 10:59:02.159779 4782 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="0abc6f3c-1f7d-4f48-8beb-205307984cdc" containerName="mariadb-account-create-update" Feb 02 10:59:02 crc kubenswrapper[4782]: I0202 10:59:02.159787 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="060c1eb2-7773-4122-8725-bf421f0feaac" containerName="placement-api" Feb 02 10:59:02 crc kubenswrapper[4782]: I0202 10:59:02.159810 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="07bbffca-46a4-4693-ae3f-011a5ee0e317" containerName="mariadb-account-create-update" Feb 02 10:59:02 crc kubenswrapper[4782]: I0202 10:59:02.160500 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-lcdcm" Feb 02 10:59:02 crc kubenswrapper[4782]: I0202 10:59:02.162235 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Feb 02 10:59:02 crc kubenswrapper[4782]: I0202 10:59:02.162748 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-nhbbk" Feb 02 10:59:02 crc kubenswrapper[4782]: I0202 10:59:02.162937 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 02 10:59:02 crc kubenswrapper[4782]: I0202 10:59:02.174480 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-lcdcm"] Feb 02 10:59:02 crc kubenswrapper[4782]: I0202 10:59:02.269198 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0b52751-0177-4fa7-8d87-fca1cab9a096-config-data\") pod \"nova-cell0-conductor-db-sync-lcdcm\" (UID: \"f0b52751-0177-4fa7-8d87-fca1cab9a096\") " pod="openstack/nova-cell0-conductor-db-sync-lcdcm" Feb 02 10:59:02 crc kubenswrapper[4782]: I0202 10:59:02.269250 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0b52751-0177-4fa7-8d87-fca1cab9a096-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-lcdcm\" (UID: \"f0b52751-0177-4fa7-8d87-fca1cab9a096\") " pod="openstack/nova-cell0-conductor-db-sync-lcdcm" Feb 02 10:59:02 crc kubenswrapper[4782]: I0202 10:59:02.269421 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8wsr\" (UniqueName: \"kubernetes.io/projected/f0b52751-0177-4fa7-8d87-fca1cab9a096-kube-api-access-k8wsr\") pod \"nova-cell0-conductor-db-sync-lcdcm\" (UID: \"f0b52751-0177-4fa7-8d87-fca1cab9a096\") " pod="openstack/nova-cell0-conductor-db-sync-lcdcm" Feb 02 10:59:02 crc kubenswrapper[4782]: I0202 10:59:02.269470 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f0b52751-0177-4fa7-8d87-fca1cab9a096-scripts\") pod \"nova-cell0-conductor-db-sync-lcdcm\" (UID: \"f0b52751-0177-4fa7-8d87-fca1cab9a096\") " pod="openstack/nova-cell0-conductor-db-sync-lcdcm" Feb 02 10:59:02 crc kubenswrapper[4782]: I0202 10:59:02.372205 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0b52751-0177-4fa7-8d87-fca1cab9a096-config-data\") pod \"nova-cell0-conductor-db-sync-lcdcm\" (UID: \"f0b52751-0177-4fa7-8d87-fca1cab9a096\") " pod="openstack/nova-cell0-conductor-db-sync-lcdcm" Feb 02 10:59:02 crc kubenswrapper[4782]: I0202 10:59:02.372283 4782 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0b52751-0177-4fa7-8d87-fca1cab9a096-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-lcdcm\" (UID: \"f0b52751-0177-4fa7-8d87-fca1cab9a096\") " pod="openstack/nova-cell0-conductor-db-sync-lcdcm" Feb 02 10:59:02 crc kubenswrapper[4782]: I0202 10:59:02.372782 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k8wsr\" (UniqueName: \"kubernetes.io/projected/f0b52751-0177-4fa7-8d87-fca1cab9a096-kube-api-access-k8wsr\") pod \"nova-cell0-conductor-db-sync-lcdcm\" (UID: \"f0b52751-0177-4fa7-8d87-fca1cab9a096\") " pod="openstack/nova-cell0-conductor-db-sync-lcdcm" Feb 02 10:59:02 crc kubenswrapper[4782]: I0202 10:59:02.373067 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f0b52751-0177-4fa7-8d87-fca1cab9a096-scripts\") pod \"nova-cell0-conductor-db-sync-lcdcm\" (UID: \"f0b52751-0177-4fa7-8d87-fca1cab9a096\") " pod="openstack/nova-cell0-conductor-db-sync-lcdcm" Feb 02 10:59:02 crc kubenswrapper[4782]: I0202 10:59:02.376814 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0b52751-0177-4fa7-8d87-fca1cab9a096-config-data\") pod \"nova-cell0-conductor-db-sync-lcdcm\" (UID: \"f0b52751-0177-4fa7-8d87-fca1cab9a096\") " pod="openstack/nova-cell0-conductor-db-sync-lcdcm" Feb 02 10:59:02 crc kubenswrapper[4782]: I0202 10:59:02.377333 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f0b52751-0177-4fa7-8d87-fca1cab9a096-scripts\") pod \"nova-cell0-conductor-db-sync-lcdcm\" (UID: \"f0b52751-0177-4fa7-8d87-fca1cab9a096\") " pod="openstack/nova-cell0-conductor-db-sync-lcdcm" Feb 02 10:59:02 crc kubenswrapper[4782]: I0202 10:59:02.377442 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0b52751-0177-4fa7-8d87-fca1cab9a096-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-lcdcm\" (UID: \"f0b52751-0177-4fa7-8d87-fca1cab9a096\") " pod="openstack/nova-cell0-conductor-db-sync-lcdcm" Feb 02 10:59:02 crc kubenswrapper[4782]: I0202 10:59:02.399383 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k8wsr\" (UniqueName: \"kubernetes.io/projected/f0b52751-0177-4fa7-8d87-fca1cab9a096-kube-api-access-k8wsr\") pod \"nova-cell0-conductor-db-sync-lcdcm\" (UID: \"f0b52751-0177-4fa7-8d87-fca1cab9a096\") " pod="openstack/nova-cell0-conductor-db-sync-lcdcm" Feb 02 10:59:02 crc kubenswrapper[4782]: I0202 10:59:02.479783 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-lcdcm" Feb 02 10:59:02 crc kubenswrapper[4782]: I0202 10:59:02.832273 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="060c1eb2-7773-4122-8725-bf421f0feaac" path="/var/lib/kubelet/pods/060c1eb2-7773-4122-8725-bf421f0feaac/volumes" Feb 02 10:59:03 crc kubenswrapper[4782]: I0202 10:59:03.009068 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-lcdcm"] Feb 02 10:59:03 crc kubenswrapper[4782]: I0202 10:59:03.344020 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-lcdcm" event={"ID":"f0b52751-0177-4fa7-8d87-fca1cab9a096","Type":"ContainerStarted","Data":"9ccdb7941abab6279a97836bdd10fb3890d96e1f11f67732e40927619634d91c"} Feb 02 10:59:08 crc kubenswrapper[4782]: I0202 10:59:08.333235 4782 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="ce5deca8-5a47-4769-9518-5cb398a7cf5c" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 02 10:59:10 crc kubenswrapper[4782]: I0202 10:59:10.406244 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-lcdcm" event={"ID":"f0b52751-0177-4fa7-8d87-fca1cab9a096","Type":"ContainerStarted","Data":"c9a22a15fdf9c10f8fa6ebae4f0ac6052d277f6b5e54ac311112c99327d4ce45"} Feb 02 10:59:10 crc kubenswrapper[4782]: I0202 10:59:10.425941 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-lcdcm" podStartSLOduration=1.33637636 podStartE2EDuration="8.425918088s" podCreationTimestamp="2026-02-02 10:59:02 +0000 UTC" firstStartedPulling="2026-02-02 10:59:03.009470639 +0000 UTC m=+1222.893663355" lastFinishedPulling="2026-02-02 10:59:10.099012327 +0000 UTC m=+1229.983205083" observedRunningTime="2026-02-02 10:59:10.423229811 +0000 UTC m=+1230.307422527" watchObservedRunningTime="2026-02-02 10:59:10.425918088 +0000 UTC m=+1230.310110794" Feb 02 10:59:20 crc kubenswrapper[4782]: I0202 10:59:20.409175 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 10:59:20 crc kubenswrapper[4782]: I0202 10:59:20.493015 4782 generic.go:334] "Generic (PLEG): container finished" podID="ce5deca8-5a47-4769-9518-5cb398a7cf5c" containerID="533f36a54e23edb06784e7156799b29a70b4f783a402104a42333feba241f8a1" exitCode=137 Feb 02 10:59:20 crc kubenswrapper[4782]: I0202 10:59:20.493061 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ce5deca8-5a47-4769-9518-5cb398a7cf5c","Type":"ContainerDied","Data":"533f36a54e23edb06784e7156799b29a70b4f783a402104a42333feba241f8a1"} Feb 02 10:59:20 crc kubenswrapper[4782]: I0202 10:59:20.493093 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ce5deca8-5a47-4769-9518-5cb398a7cf5c","Type":"ContainerDied","Data":"4b9e3570e4603fe01210598f959523b2d00ac48728c953fa2f230b5e90152b83"} Feb 02 10:59:20 crc kubenswrapper[4782]: I0202 10:59:20.493117 4782 scope.go:117] "RemoveContainer" containerID="533f36a54e23edb06784e7156799b29a70b4f783a402104a42333feba241f8a1" Feb 02 10:59:20 crc kubenswrapper[4782]: I0202 10:59:20.493302 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 02 10:59:20 crc kubenswrapper[4782]: I0202 10:59:20.499494 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ce5deca8-5a47-4769-9518-5cb398a7cf5c-log-httpd\") pod \"ce5deca8-5a47-4769-9518-5cb398a7cf5c\" (UID: \"ce5deca8-5a47-4769-9518-5cb398a7cf5c\") " Feb 02 10:59:20 crc kubenswrapper[4782]: I0202 10:59:20.499560 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ce5deca8-5a47-4769-9518-5cb398a7cf5c-sg-core-conf-yaml\") pod \"ce5deca8-5a47-4769-9518-5cb398a7cf5c\" (UID: \"ce5deca8-5a47-4769-9518-5cb398a7cf5c\") " Feb 02 10:59:20 crc kubenswrapper[4782]: I0202 10:59:20.499616 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce5deca8-5a47-4769-9518-5cb398a7cf5c-combined-ca-bundle\") pod \"ce5deca8-5a47-4769-9518-5cb398a7cf5c\" (UID: \"ce5deca8-5a47-4769-9518-5cb398a7cf5c\") " Feb 02 10:59:20 crc kubenswrapper[4782]: I0202 10:59:20.499737 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ce5deca8-5a47-4769-9518-5cb398a7cf5c-scripts\") pod \"ce5deca8-5a47-4769-9518-5cb398a7cf5c\" (UID: \"ce5deca8-5a47-4769-9518-5cb398a7cf5c\") " Feb 02 10:59:20 crc kubenswrapper[4782]: I0202 10:59:20.499785 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce5deca8-5a47-4769-9518-5cb398a7cf5c-config-data\") pod \"ce5deca8-5a47-4769-9518-5cb398a7cf5c\" (UID: \"ce5deca8-5a47-4769-9518-5cb398a7cf5c\") " Feb 02 10:59:20 crc kubenswrapper[4782]: I0202 10:59:20.499834 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-89hm9\" (UniqueName: \"kubernetes.io/projected/ce5deca8-5a47-4769-9518-5cb398a7cf5c-kube-api-access-89hm9\") pod \"ce5deca8-5a47-4769-9518-5cb398a7cf5c\" (UID: \"ce5deca8-5a47-4769-9518-5cb398a7cf5c\") " Feb 02 10:59:20 crc kubenswrapper[4782]: I0202 10:59:20.499937 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ce5deca8-5a47-4769-9518-5cb398a7cf5c-run-httpd\") pod \"ce5deca8-5a47-4769-9518-5cb398a7cf5c\" (UID: \"ce5deca8-5a47-4769-9518-5cb398a7cf5c\") " Feb 02 10:59:20 crc kubenswrapper[4782]: I0202 10:59:20.500797 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ce5deca8-5a47-4769-9518-5cb398a7cf5c-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "ce5deca8-5a47-4769-9518-5cb398a7cf5c" (UID: "ce5deca8-5a47-4769-9518-5cb398a7cf5c"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:59:20 crc kubenswrapper[4782]: I0202 10:59:20.501096 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ce5deca8-5a47-4769-9518-5cb398a7cf5c-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "ce5deca8-5a47-4769-9518-5cb398a7cf5c" (UID: "ce5deca8-5a47-4769-9518-5cb398a7cf5c"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:59:20 crc kubenswrapper[4782]: I0202 10:59:20.522073 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce5deca8-5a47-4769-9518-5cb398a7cf5c-scripts" (OuterVolumeSpecName: "scripts") pod "ce5deca8-5a47-4769-9518-5cb398a7cf5c" (UID: "ce5deca8-5a47-4769-9518-5cb398a7cf5c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:59:20 crc kubenswrapper[4782]: I0202 10:59:20.531921 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce5deca8-5a47-4769-9518-5cb398a7cf5c-kube-api-access-89hm9" (OuterVolumeSpecName: "kube-api-access-89hm9") pod "ce5deca8-5a47-4769-9518-5cb398a7cf5c" (UID: "ce5deca8-5a47-4769-9518-5cb398a7cf5c"). InnerVolumeSpecName "kube-api-access-89hm9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:59:20 crc kubenswrapper[4782]: I0202 10:59:20.534101 4782 scope.go:117] "RemoveContainer" containerID="04057ff60e5e1f3323e182a26b20a3193665c89f3705db726b599dceb9bc3003" Feb 02 10:59:20 crc kubenswrapper[4782]: I0202 10:59:20.539849 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce5deca8-5a47-4769-9518-5cb398a7cf5c-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "ce5deca8-5a47-4769-9518-5cb398a7cf5c" (UID: "ce5deca8-5a47-4769-9518-5cb398a7cf5c"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:59:20 crc kubenswrapper[4782]: I0202 10:59:20.602238 4782 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ce5deca8-5a47-4769-9518-5cb398a7cf5c-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 10:59:20 crc kubenswrapper[4782]: I0202 10:59:20.602274 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-89hm9\" (UniqueName: \"kubernetes.io/projected/ce5deca8-5a47-4769-9518-5cb398a7cf5c-kube-api-access-89hm9\") on node \"crc\" DevicePath \"\"" Feb 02 10:59:20 crc kubenswrapper[4782]: I0202 10:59:20.602286 4782 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ce5deca8-5a47-4769-9518-5cb398a7cf5c-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 02 10:59:20 crc kubenswrapper[4782]: I0202 10:59:20.602298 4782 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ce5deca8-5a47-4769-9518-5cb398a7cf5c-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 02 10:59:20 crc kubenswrapper[4782]: I0202 10:59:20.602306 4782 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ce5deca8-5a47-4769-9518-5cb398a7cf5c-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 02 10:59:20 crc kubenswrapper[4782]: I0202 10:59:20.602343 4782 scope.go:117] "RemoveContainer" containerID="b01aa0979ebe3da1ada19bd5f3faeaeb4dcb6479051123506e2fa0d8ca35ceb6" Feb 02 10:59:20 crc kubenswrapper[4782]: I0202 10:59:20.611066 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce5deca8-5a47-4769-9518-5cb398a7cf5c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ce5deca8-5a47-4769-9518-5cb398a7cf5c" (UID: "ce5deca8-5a47-4769-9518-5cb398a7cf5c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:59:20 crc kubenswrapper[4782]: I0202 10:59:20.623447 4782 scope.go:117] "RemoveContainer" containerID="84ad17b4a6850cdf253af67fe5f1311e581399106959c397abb3583346389101" Feb 02 10:59:20 crc kubenswrapper[4782]: I0202 10:59:20.639463 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce5deca8-5a47-4769-9518-5cb398a7cf5c-config-data" (OuterVolumeSpecName: "config-data") pod "ce5deca8-5a47-4769-9518-5cb398a7cf5c" (UID: "ce5deca8-5a47-4769-9518-5cb398a7cf5c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:59:20 crc kubenswrapper[4782]: I0202 10:59:20.646848 4782 scope.go:117] "RemoveContainer" containerID="533f36a54e23edb06784e7156799b29a70b4f783a402104a42333feba241f8a1" Feb 02 10:59:20 crc kubenswrapper[4782]: E0202 10:59:20.647380 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"533f36a54e23edb06784e7156799b29a70b4f783a402104a42333feba241f8a1\": container with ID starting with 533f36a54e23edb06784e7156799b29a70b4f783a402104a42333feba241f8a1 not found: ID does not exist" containerID="533f36a54e23edb06784e7156799b29a70b4f783a402104a42333feba241f8a1" Feb 02 10:59:20 crc kubenswrapper[4782]: I0202 10:59:20.647511 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"533f36a54e23edb06784e7156799b29a70b4f783a402104a42333feba241f8a1"} err="failed to get container status \"533f36a54e23edb06784e7156799b29a70b4f783a402104a42333feba241f8a1\": rpc error: code = NotFound desc = could not find container \"533f36a54e23edb06784e7156799b29a70b4f783a402104a42333feba241f8a1\": container with ID starting with 533f36a54e23edb06784e7156799b29a70b4f783a402104a42333feba241f8a1 not found: ID does not exist" Feb 02 10:59:20 crc kubenswrapper[4782]: I0202 10:59:20.647619 4782 scope.go:117] "RemoveContainer" containerID="04057ff60e5e1f3323e182a26b20a3193665c89f3705db726b599dceb9bc3003" Feb 02 10:59:20 crc kubenswrapper[4782]: E0202 10:59:20.648284 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"04057ff60e5e1f3323e182a26b20a3193665c89f3705db726b599dceb9bc3003\": container with ID starting with 04057ff60e5e1f3323e182a26b20a3193665c89f3705db726b599dceb9bc3003 not found: ID does not exist" containerID="04057ff60e5e1f3323e182a26b20a3193665c89f3705db726b599dceb9bc3003" Feb 02 10:59:20 crc kubenswrapper[4782]: I0202 10:59:20.648314 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"04057ff60e5e1f3323e182a26b20a3193665c89f3705db726b599dceb9bc3003"} err="failed to get container status \"04057ff60e5e1f3323e182a26b20a3193665c89f3705db726b599dceb9bc3003\": rpc error: code = NotFound desc = could not find container \"04057ff60e5e1f3323e182a26b20a3193665c89f3705db726b599dceb9bc3003\": container with ID starting with 04057ff60e5e1f3323e182a26b20a3193665c89f3705db726b599dceb9bc3003 not found: ID does not exist" Feb 02 10:59:20 crc kubenswrapper[4782]: I0202 10:59:20.648335 4782 scope.go:117] "RemoveContainer" containerID="b01aa0979ebe3da1ada19bd5f3faeaeb4dcb6479051123506e2fa0d8ca35ceb6" Feb 02 10:59:20 crc kubenswrapper[4782]: E0202 10:59:20.648878 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"b01aa0979ebe3da1ada19bd5f3faeaeb4dcb6479051123506e2fa0d8ca35ceb6\": container with ID starting with b01aa0979ebe3da1ada19bd5f3faeaeb4dcb6479051123506e2fa0d8ca35ceb6 not found: ID does not exist" containerID="b01aa0979ebe3da1ada19bd5f3faeaeb4dcb6479051123506e2fa0d8ca35ceb6" Feb 02 10:59:20 crc kubenswrapper[4782]: I0202 10:59:20.648987 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b01aa0979ebe3da1ada19bd5f3faeaeb4dcb6479051123506e2fa0d8ca35ceb6"} err="failed to get container status \"b01aa0979ebe3da1ada19bd5f3faeaeb4dcb6479051123506e2fa0d8ca35ceb6\": rpc error: code = NotFound desc = could not find container \"b01aa0979ebe3da1ada19bd5f3faeaeb4dcb6479051123506e2fa0d8ca35ceb6\": container with ID starting with b01aa0979ebe3da1ada19bd5f3faeaeb4dcb6479051123506e2fa0d8ca35ceb6 not found: ID does not exist" Feb 02 10:59:20 crc kubenswrapper[4782]: I0202 10:59:20.649056 4782 scope.go:117] "RemoveContainer" containerID="84ad17b4a6850cdf253af67fe5f1311e581399106959c397abb3583346389101" Feb 02 10:59:20 crc kubenswrapper[4782]: E0202 10:59:20.649379 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"84ad17b4a6850cdf253af67fe5f1311e581399106959c397abb3583346389101\": container with ID starting with 84ad17b4a6850cdf253af67fe5f1311e581399106959c397abb3583346389101 not found: ID does not exist" containerID="84ad17b4a6850cdf253af67fe5f1311e581399106959c397abb3583346389101" Feb 02 10:59:20 crc kubenswrapper[4782]: I0202 10:59:20.649404 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84ad17b4a6850cdf253af67fe5f1311e581399106959c397abb3583346389101"} err="failed to get container status \"84ad17b4a6850cdf253af67fe5f1311e581399106959c397abb3583346389101\": rpc error: code = NotFound desc = could not find container \"84ad17b4a6850cdf253af67fe5f1311e581399106959c397abb3583346389101\": container with ID starting with 84ad17b4a6850cdf253af67fe5f1311e581399106959c397abb3583346389101 not found: ID does not exist" Feb 02 10:59:20 crc kubenswrapper[4782]: I0202 10:59:20.745483 4782 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce5deca8-5a47-4769-9518-5cb398a7cf5c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 10:59:20 crc kubenswrapper[4782]: I0202 10:59:20.745515 4782 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce5deca8-5a47-4769-9518-5cb398a7cf5c-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 10:59:20 crc kubenswrapper[4782]: I0202 10:59:20.843443 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 02 10:59:20 crc kubenswrapper[4782]: I0202 10:59:20.863738 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 02 10:59:20 crc kubenswrapper[4782]: I0202 10:59:20.873538 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 02 10:59:20 crc kubenswrapper[4782]: E0202 10:59:20.874000 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce5deca8-5a47-4769-9518-5cb398a7cf5c" containerName="ceilometer-notification-agent" Feb 02 10:59:20 crc kubenswrapper[4782]: I0202 10:59:20.874025 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce5deca8-5a47-4769-9518-5cb398a7cf5c" containerName="ceilometer-notification-agent" Feb 02 10:59:20 crc kubenswrapper[4782]: E0202 
10:59:20.874053 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce5deca8-5a47-4769-9518-5cb398a7cf5c" containerName="proxy-httpd" Feb 02 10:59:20 crc kubenswrapper[4782]: I0202 10:59:20.874063 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce5deca8-5a47-4769-9518-5cb398a7cf5c" containerName="proxy-httpd" Feb 02 10:59:20 crc kubenswrapper[4782]: E0202 10:59:20.874081 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce5deca8-5a47-4769-9518-5cb398a7cf5c" containerName="ceilometer-central-agent" Feb 02 10:59:20 crc kubenswrapper[4782]: I0202 10:59:20.874089 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce5deca8-5a47-4769-9518-5cb398a7cf5c" containerName="ceilometer-central-agent" Feb 02 10:59:20 crc kubenswrapper[4782]: E0202 10:59:20.874108 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce5deca8-5a47-4769-9518-5cb398a7cf5c" containerName="sg-core" Feb 02 10:59:20 crc kubenswrapper[4782]: I0202 10:59:20.874115 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce5deca8-5a47-4769-9518-5cb398a7cf5c" containerName="sg-core" Feb 02 10:59:20 crc kubenswrapper[4782]: I0202 10:59:20.874502 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce5deca8-5a47-4769-9518-5cb398a7cf5c" containerName="ceilometer-central-agent" Feb 02 10:59:20 crc kubenswrapper[4782]: I0202 10:59:20.874542 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce5deca8-5a47-4769-9518-5cb398a7cf5c" containerName="ceilometer-notification-agent" Feb 02 10:59:20 crc kubenswrapper[4782]: I0202 10:59:20.874559 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce5deca8-5a47-4769-9518-5cb398a7cf5c" containerName="sg-core" Feb 02 10:59:20 crc kubenswrapper[4782]: I0202 10:59:20.874577 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce5deca8-5a47-4769-9518-5cb398a7cf5c" containerName="proxy-httpd" Feb 02 10:59:20 crc kubenswrapper[4782]: I0202 10:59:20.879240 4782 util.go:30] "No sandbox for pod can be found. 
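
[Annotation] Between the SyncLoop REMOVE and ADD events, the cpu_manager and memory_manager purge per-container state recorded under the old pod UID ("RemoveStaleState: removing container", "Deleted CPUSet assignment") so the replacement ceilometer-0 starts from a clean slate. A toy version of that bookkeeping, assuming state is simply a map keyed by (podUID, containerName):

package main

import "fmt"

type key struct{ podUID, container string }

// manager mimics the cpu_manager's checkpointed state:
// one CPUSet-like value per (podUID, containerName).
type manager struct{ assignments map[key]string }

// removeStaleState drops every assignment whose pod UID is no
// longer active, as the cpu_manager.go:410 lines above record.
func (m *manager) removeStaleState(active map[string]bool) {
	for k := range m.assignments {
		if !active[k.podUID] {
			fmt.Printf("RemoveStaleState: removing container podUID=%q containerName=%q\n",
				k.podUID, k.container)
			delete(m.assignments, k)
		}
	}
}

func main() {
	old := "ce5deca8-5a47-4769-9518-5cb398a7cf5c"
	m := &manager{assignments: map[key]string{
		{old, "ceilometer-central-agent"}:      "0-3",
		{old, "ceilometer-notification-agent"}: "0-3",
		{old, "sg-core"}:                       "0-3",
		{old, "proxy-httpd"}:                   "0-3",
	}}
	// The old UID is gone; only the replacement pod is active.
	m.removeStaleState(map[string]bool{"28015087-432c-4906-8c57-406f5bf4371b": true})
	fmt.Println("remaining assignments:", len(m.assignments))
}
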
Need to start a new one" pod="openstack/ceilometer-0" Feb 02 10:59:20 crc kubenswrapper[4782]: I0202 10:59:20.885354 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 02 10:59:20 crc kubenswrapper[4782]: I0202 10:59:20.888836 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 02 10:59:20 crc kubenswrapper[4782]: I0202 10:59:20.889262 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 02 10:59:21 crc kubenswrapper[4782]: I0202 10:59:21.050994 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/28015087-432c-4906-8c57-406f5bf4371b-run-httpd\") pod \"ceilometer-0\" (UID: \"28015087-432c-4906-8c57-406f5bf4371b\") " pod="openstack/ceilometer-0" Feb 02 10:59:21 crc kubenswrapper[4782]: I0202 10:59:21.051032 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4cksn\" (UniqueName: \"kubernetes.io/projected/28015087-432c-4906-8c57-406f5bf4371b-kube-api-access-4cksn\") pod \"ceilometer-0\" (UID: \"28015087-432c-4906-8c57-406f5bf4371b\") " pod="openstack/ceilometer-0" Feb 02 10:59:21 crc kubenswrapper[4782]: I0202 10:59:21.051315 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/28015087-432c-4906-8c57-406f5bf4371b-scripts\") pod \"ceilometer-0\" (UID: \"28015087-432c-4906-8c57-406f5bf4371b\") " pod="openstack/ceilometer-0" Feb 02 10:59:21 crc kubenswrapper[4782]: I0202 10:59:21.051375 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28015087-432c-4906-8c57-406f5bf4371b-config-data\") pod \"ceilometer-0\" (UID: \"28015087-432c-4906-8c57-406f5bf4371b\") " pod="openstack/ceilometer-0" Feb 02 10:59:21 crc kubenswrapper[4782]: I0202 10:59:21.051403 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28015087-432c-4906-8c57-406f5bf4371b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"28015087-432c-4906-8c57-406f5bf4371b\") " pod="openstack/ceilometer-0" Feb 02 10:59:21 crc kubenswrapper[4782]: I0202 10:59:21.051612 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/28015087-432c-4906-8c57-406f5bf4371b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"28015087-432c-4906-8c57-406f5bf4371b\") " pod="openstack/ceilometer-0" Feb 02 10:59:21 crc kubenswrapper[4782]: I0202 10:59:21.051681 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/28015087-432c-4906-8c57-406f5bf4371b-log-httpd\") pod \"ceilometer-0\" (UID: \"28015087-432c-4906-8c57-406f5bf4371b\") " pod="openstack/ceilometer-0" Feb 02 10:59:21 crc kubenswrapper[4782]: I0202 10:59:21.152986 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/28015087-432c-4906-8c57-406f5bf4371b-run-httpd\") pod \"ceilometer-0\" (UID: \"28015087-432c-4906-8c57-406f5bf4371b\") " pod="openstack/ceilometer-0" Feb 02 10:59:21 crc kubenswrapper[4782]: I0202 10:59:21.153026 4782 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4cksn\" (UniqueName: \"kubernetes.io/projected/28015087-432c-4906-8c57-406f5bf4371b-kube-api-access-4cksn\") pod \"ceilometer-0\" (UID: \"28015087-432c-4906-8c57-406f5bf4371b\") " pod="openstack/ceilometer-0" Feb 02 10:59:21 crc kubenswrapper[4782]: I0202 10:59:21.153098 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/28015087-432c-4906-8c57-406f5bf4371b-scripts\") pod \"ceilometer-0\" (UID: \"28015087-432c-4906-8c57-406f5bf4371b\") " pod="openstack/ceilometer-0" Feb 02 10:59:21 crc kubenswrapper[4782]: I0202 10:59:21.153131 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28015087-432c-4906-8c57-406f5bf4371b-config-data\") pod \"ceilometer-0\" (UID: \"28015087-432c-4906-8c57-406f5bf4371b\") " pod="openstack/ceilometer-0" Feb 02 10:59:21 crc kubenswrapper[4782]: I0202 10:59:21.153151 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28015087-432c-4906-8c57-406f5bf4371b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"28015087-432c-4906-8c57-406f5bf4371b\") " pod="openstack/ceilometer-0" Feb 02 10:59:21 crc kubenswrapper[4782]: I0202 10:59:21.153198 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/28015087-432c-4906-8c57-406f5bf4371b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"28015087-432c-4906-8c57-406f5bf4371b\") " pod="openstack/ceilometer-0" Feb 02 10:59:21 crc kubenswrapper[4782]: I0202 10:59:21.153217 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/28015087-432c-4906-8c57-406f5bf4371b-log-httpd\") pod \"ceilometer-0\" (UID: \"28015087-432c-4906-8c57-406f5bf4371b\") " pod="openstack/ceilometer-0" Feb 02 10:59:21 crc kubenswrapper[4782]: I0202 10:59:21.153867 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/28015087-432c-4906-8c57-406f5bf4371b-log-httpd\") pod \"ceilometer-0\" (UID: \"28015087-432c-4906-8c57-406f5bf4371b\") " pod="openstack/ceilometer-0" Feb 02 10:59:21 crc kubenswrapper[4782]: I0202 10:59:21.153958 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/28015087-432c-4906-8c57-406f5bf4371b-run-httpd\") pod \"ceilometer-0\" (UID: \"28015087-432c-4906-8c57-406f5bf4371b\") " pod="openstack/ceilometer-0" Feb 02 10:59:21 crc kubenswrapper[4782]: I0202 10:59:21.160375 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28015087-432c-4906-8c57-406f5bf4371b-config-data\") pod \"ceilometer-0\" (UID: \"28015087-432c-4906-8c57-406f5bf4371b\") " pod="openstack/ceilometer-0" Feb 02 10:59:21 crc kubenswrapper[4782]: I0202 10:59:21.161694 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28015087-432c-4906-8c57-406f5bf4371b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"28015087-432c-4906-8c57-406f5bf4371b\") " pod="openstack/ceilometer-0" Feb 02 10:59:21 crc kubenswrapper[4782]: I0202 10:59:21.163692 4782 operation_generator.go:637] "MountVolume.SetUp 
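
[Annotation] Worth noting in the mount sequence above: the two empty-dir volumes (log-httpd, run-httpd) report MountVolume.SetUp succeeded within a millisecond of starting, while the secret-backed volumes (config-data, combined-ca-bundle, sg-core-conf-yaml, scripts) take several milliseconds longer, since they materialize API data as files. A simplified sketch of the two SetUp paths; the paths and payloads are illustrative, and the real plugins also handle ownership, SELinux labels, and atomic writes:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// setUpEmptyDir mirrors the empty-dir plugin's SetUp: just
// create the per-pod directory. Nothing to fetch, so it is the
// first volume to report "MountVolume.SetUp succeeded".
func setUpEmptyDir(base, vol string) error {
	return os.MkdirAll(filepath.Join(base, vol), 0o755)
}

// setUpSecret sketches the secret plugin: write each key of an
// already-synced Secret as a file under the volume dir.
func setUpSecret(base, vol string, data map[string][]byte) error {
	dir := filepath.Join(base, vol)
	if err := os.MkdirAll(dir, 0o755); err != nil {
		return err
	}
	for name, b := range data {
		if err := os.WriteFile(filepath.Join(dir, name), b, 0o600); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	base, _ := os.MkdirTemp("", "volumes-")
	defer os.RemoveAll(base)
	if err := setUpEmptyDir(base, "run-httpd"); err != nil {
		panic(err)
	}
	if err := setUpSecret(base, "scripts", map[string][]byte{
		"common.sh": []byte("#!/bin/sh\n"), // illustrative payload
	}); err != nil {
		panic(err)
	}
	fmt.Println("MountVolume.SetUp succeeded for run-httpd and scripts under", base)
}
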
succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/28015087-432c-4906-8c57-406f5bf4371b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"28015087-432c-4906-8c57-406f5bf4371b\") " pod="openstack/ceilometer-0" Feb 02 10:59:21 crc kubenswrapper[4782]: I0202 10:59:21.167265 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/28015087-432c-4906-8c57-406f5bf4371b-scripts\") pod \"ceilometer-0\" (UID: \"28015087-432c-4906-8c57-406f5bf4371b\") " pod="openstack/ceilometer-0" Feb 02 10:59:21 crc kubenswrapper[4782]: I0202 10:59:21.175876 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4cksn\" (UniqueName: \"kubernetes.io/projected/28015087-432c-4906-8c57-406f5bf4371b-kube-api-access-4cksn\") pod \"ceilometer-0\" (UID: \"28015087-432c-4906-8c57-406f5bf4371b\") " pod="openstack/ceilometer-0" Feb 02 10:59:21 crc kubenswrapper[4782]: I0202 10:59:21.205780 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 10:59:21 crc kubenswrapper[4782]: I0202 10:59:21.671013 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 02 10:59:21 crc kubenswrapper[4782]: W0202 10:59:21.677884 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod28015087_432c_4906_8c57_406f5bf4371b.slice/crio-130ac9425f514ac20eca02616de022afd5f7d855996320a48e38657ba530c248 WatchSource:0}: Error finding container 130ac9425f514ac20eca02616de022afd5f7d855996320a48e38657ba530c248: Status 404 returned error can't find the container with id 130ac9425f514ac20eca02616de022afd5f7d855996320a48e38657ba530c248 Feb 02 10:59:22 crc kubenswrapper[4782]: I0202 10:59:22.517762 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"28015087-432c-4906-8c57-406f5bf4371b","Type":"ContainerStarted","Data":"0eab3e1922a169100d96f6fb597b0d5d6e2c417157ee64d6c06b97cc49cfa3ff"} Feb 02 10:59:22 crc kubenswrapper[4782]: I0202 10:59:22.518224 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"28015087-432c-4906-8c57-406f5bf4371b","Type":"ContainerStarted","Data":"130ac9425f514ac20eca02616de022afd5f7d855996320a48e38657ba530c248"} Feb 02 10:59:22 crc kubenswrapper[4782]: I0202 10:59:22.836099 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce5deca8-5a47-4769-9518-5cb398a7cf5c" path="/var/lib/kubelet/pods/ce5deca8-5a47-4769-9518-5cb398a7cf5c/volumes" Feb 02 10:59:22 crc kubenswrapper[4782]: I0202 10:59:22.951201 4782 patch_prober.go:28] interesting pod/machine-config-daemon-bhdgk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 10:59:22 crc kubenswrapper[4782]: I0202 10:59:22.951528 4782 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 10:59:23 crc kubenswrapper[4782]: I0202 10:59:23.527916 4782 generic.go:334] "Generic (PLEG): container finished" podID="f0b52751-0177-4fa7-8d87-fca1cab9a096" 
containerID="c9a22a15fdf9c10f8fa6ebae4f0ac6052d277f6b5e54ac311112c99327d4ce45" exitCode=0 Feb 02 10:59:23 crc kubenswrapper[4782]: I0202 10:59:23.528017 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-lcdcm" event={"ID":"f0b52751-0177-4fa7-8d87-fca1cab9a096","Type":"ContainerDied","Data":"c9a22a15fdf9c10f8fa6ebae4f0ac6052d277f6b5e54ac311112c99327d4ce45"} Feb 02 10:59:23 crc kubenswrapper[4782]: I0202 10:59:23.533758 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"28015087-432c-4906-8c57-406f5bf4371b","Type":"ContainerStarted","Data":"c21c382369813b902b8d01bb5ca3b76271ac75826e3f8a48f08a8227ee3e7c71"} Feb 02 10:59:24 crc kubenswrapper[4782]: I0202 10:59:24.544957 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"28015087-432c-4906-8c57-406f5bf4371b","Type":"ContainerStarted","Data":"548d7f70bba91005ffd4c7ff8fe65d2cd5bbe9ed5956e0f90c12cb922dec83e9"} Feb 02 10:59:24 crc kubenswrapper[4782]: I0202 10:59:24.954599 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-lcdcm" Feb 02 10:59:25 crc kubenswrapper[4782]: I0202 10:59:25.036443 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k8wsr\" (UniqueName: \"kubernetes.io/projected/f0b52751-0177-4fa7-8d87-fca1cab9a096-kube-api-access-k8wsr\") pod \"f0b52751-0177-4fa7-8d87-fca1cab9a096\" (UID: \"f0b52751-0177-4fa7-8d87-fca1cab9a096\") " Feb 02 10:59:25 crc kubenswrapper[4782]: I0202 10:59:25.036722 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0b52751-0177-4fa7-8d87-fca1cab9a096-config-data\") pod \"f0b52751-0177-4fa7-8d87-fca1cab9a096\" (UID: \"f0b52751-0177-4fa7-8d87-fca1cab9a096\") " Feb 02 10:59:25 crc kubenswrapper[4782]: I0202 10:59:25.036778 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0b52751-0177-4fa7-8d87-fca1cab9a096-combined-ca-bundle\") pod \"f0b52751-0177-4fa7-8d87-fca1cab9a096\" (UID: \"f0b52751-0177-4fa7-8d87-fca1cab9a096\") " Feb 02 10:59:25 crc kubenswrapper[4782]: I0202 10:59:25.036899 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f0b52751-0177-4fa7-8d87-fca1cab9a096-scripts\") pod \"f0b52751-0177-4fa7-8d87-fca1cab9a096\" (UID: \"f0b52751-0177-4fa7-8d87-fca1cab9a096\") " Feb 02 10:59:25 crc kubenswrapper[4782]: I0202 10:59:25.046809 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0b52751-0177-4fa7-8d87-fca1cab9a096-scripts" (OuterVolumeSpecName: "scripts") pod "f0b52751-0177-4fa7-8d87-fca1cab9a096" (UID: "f0b52751-0177-4fa7-8d87-fca1cab9a096"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:59:25 crc kubenswrapper[4782]: I0202 10:59:25.046854 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0b52751-0177-4fa7-8d87-fca1cab9a096-kube-api-access-k8wsr" (OuterVolumeSpecName: "kube-api-access-k8wsr") pod "f0b52751-0177-4fa7-8d87-fca1cab9a096" (UID: "f0b52751-0177-4fa7-8d87-fca1cab9a096"). InnerVolumeSpecName "kube-api-access-k8wsr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:59:25 crc kubenswrapper[4782]: I0202 10:59:25.072019 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0b52751-0177-4fa7-8d87-fca1cab9a096-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f0b52751-0177-4fa7-8d87-fca1cab9a096" (UID: "f0b52751-0177-4fa7-8d87-fca1cab9a096"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:59:25 crc kubenswrapper[4782]: I0202 10:59:25.072158 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0b52751-0177-4fa7-8d87-fca1cab9a096-config-data" (OuterVolumeSpecName: "config-data") pod "f0b52751-0177-4fa7-8d87-fca1cab9a096" (UID: "f0b52751-0177-4fa7-8d87-fca1cab9a096"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:59:25 crc kubenswrapper[4782]: I0202 10:59:25.139323 4782 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0b52751-0177-4fa7-8d87-fca1cab9a096-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 10:59:25 crc kubenswrapper[4782]: I0202 10:59:25.139591 4782 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f0b52751-0177-4fa7-8d87-fca1cab9a096-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 10:59:25 crc kubenswrapper[4782]: I0202 10:59:25.139751 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k8wsr\" (UniqueName: \"kubernetes.io/projected/f0b52751-0177-4fa7-8d87-fca1cab9a096-kube-api-access-k8wsr\") on node \"crc\" DevicePath \"\"" Feb 02 10:59:25 crc kubenswrapper[4782]: I0202 10:59:25.139857 4782 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0b52751-0177-4fa7-8d87-fca1cab9a096-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 10:59:25 crc kubenswrapper[4782]: I0202 10:59:25.570802 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-lcdcm" event={"ID":"f0b52751-0177-4fa7-8d87-fca1cab9a096","Type":"ContainerDied","Data":"9ccdb7941abab6279a97836bdd10fb3890d96e1f11f67732e40927619634d91c"} Feb 02 10:59:25 crc kubenswrapper[4782]: I0202 10:59:25.570844 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9ccdb7941abab6279a97836bdd10fb3890d96e1f11f67732e40927619634d91c" Feb 02 10:59:25 crc kubenswrapper[4782]: I0202 10:59:25.570882 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-lcdcm" Feb 02 10:59:25 crc kubenswrapper[4782]: I0202 10:59:25.663100 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 02 10:59:25 crc kubenswrapper[4782]: E0202 10:59:25.663540 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0b52751-0177-4fa7-8d87-fca1cab9a096" containerName="nova-cell0-conductor-db-sync" Feb 02 10:59:25 crc kubenswrapper[4782]: I0202 10:59:25.663565 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0b52751-0177-4fa7-8d87-fca1cab9a096" containerName="nova-cell0-conductor-db-sync" Feb 02 10:59:25 crc kubenswrapper[4782]: I0202 10:59:25.663812 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0b52751-0177-4fa7-8d87-fca1cab9a096" containerName="nova-cell0-conductor-db-sync" Feb 02 10:59:25 crc kubenswrapper[4782]: I0202 10:59:25.664475 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 02 10:59:25 crc kubenswrapper[4782]: I0202 10:59:25.666253 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-nhbbk" Feb 02 10:59:25 crc kubenswrapper[4782]: I0202 10:59:25.667519 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 02 10:59:25 crc kubenswrapper[4782]: I0202 10:59:25.678332 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 02 10:59:25 crc kubenswrapper[4782]: I0202 10:59:25.751489 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea60fa1f-5751-4f93-8726-ce0c4be54577-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"ea60fa1f-5751-4f93-8726-ce0c4be54577\") " pod="openstack/nova-cell0-conductor-0" Feb 02 10:59:25 crc kubenswrapper[4782]: I0202 10:59:25.752935 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgp2s\" (UniqueName: \"kubernetes.io/projected/ea60fa1f-5751-4f93-8726-ce0c4be54577-kube-api-access-zgp2s\") pod \"nova-cell0-conductor-0\" (UID: \"ea60fa1f-5751-4f93-8726-ce0c4be54577\") " pod="openstack/nova-cell0-conductor-0" Feb 02 10:59:25 crc kubenswrapper[4782]: I0202 10:59:25.753164 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea60fa1f-5751-4f93-8726-ce0c4be54577-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"ea60fa1f-5751-4f93-8726-ce0c4be54577\") " pod="openstack/nova-cell0-conductor-0" Feb 02 10:59:25 crc kubenswrapper[4782]: I0202 10:59:25.854979 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zgp2s\" (UniqueName: \"kubernetes.io/projected/ea60fa1f-5751-4f93-8726-ce0c4be54577-kube-api-access-zgp2s\") pod \"nova-cell0-conductor-0\" (UID: \"ea60fa1f-5751-4f93-8726-ce0c4be54577\") " pod="openstack/nova-cell0-conductor-0" Feb 02 10:59:25 crc kubenswrapper[4782]: I0202 10:59:25.855091 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea60fa1f-5751-4f93-8726-ce0c4be54577-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"ea60fa1f-5751-4f93-8726-ce0c4be54577\") " pod="openstack/nova-cell0-conductor-0" Feb 02 10:59:25 crc kubenswrapper[4782]: 
I0202 10:59:25.855135 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea60fa1f-5751-4f93-8726-ce0c4be54577-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"ea60fa1f-5751-4f93-8726-ce0c4be54577\") " pod="openstack/nova-cell0-conductor-0" Feb 02 10:59:25 crc kubenswrapper[4782]: I0202 10:59:25.860774 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea60fa1f-5751-4f93-8726-ce0c4be54577-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"ea60fa1f-5751-4f93-8726-ce0c4be54577\") " pod="openstack/nova-cell0-conductor-0" Feb 02 10:59:25 crc kubenswrapper[4782]: I0202 10:59:25.865464 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea60fa1f-5751-4f93-8726-ce0c4be54577-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"ea60fa1f-5751-4f93-8726-ce0c4be54577\") " pod="openstack/nova-cell0-conductor-0" Feb 02 10:59:25 crc kubenswrapper[4782]: I0202 10:59:25.873475 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zgp2s\" (UniqueName: \"kubernetes.io/projected/ea60fa1f-5751-4f93-8726-ce0c4be54577-kube-api-access-zgp2s\") pod \"nova-cell0-conductor-0\" (UID: \"ea60fa1f-5751-4f93-8726-ce0c4be54577\") " pod="openstack/nova-cell0-conductor-0" Feb 02 10:59:25 crc kubenswrapper[4782]: I0202 10:59:25.989224 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 02 10:59:26 crc kubenswrapper[4782]: I0202 10:59:26.487529 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 02 10:59:26 crc kubenswrapper[4782]: I0202 10:59:26.586080 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"ea60fa1f-5751-4f93-8726-ce0c4be54577","Type":"ContainerStarted","Data":"eae176bd25849e23d16488dbd8df7e5b07b132ca266dc4acf10a7eb4d7f78d50"} Feb 02 10:59:26 crc kubenswrapper[4782]: I0202 10:59:26.591157 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"28015087-432c-4906-8c57-406f5bf4371b","Type":"ContainerStarted","Data":"f8e5c120275d7db87897ef6e18aba32d130395e2a8cbe47997aa7cbceb7c4b98"} Feb 02 10:59:26 crc kubenswrapper[4782]: I0202 10:59:26.591556 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 02 10:59:26 crc kubenswrapper[4782]: I0202 10:59:26.614601 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.53184408 podStartE2EDuration="6.614580396s" podCreationTimestamp="2026-02-02 10:59:20 +0000 UTC" firstStartedPulling="2026-02-02 10:59:21.680499168 +0000 UTC m=+1241.564691894" lastFinishedPulling="2026-02-02 10:59:25.763235494 +0000 UTC m=+1245.647428210" observedRunningTime="2026-02-02 10:59:26.609641004 +0000 UTC m=+1246.493833740" watchObservedRunningTime="2026-02-02 10:59:26.614580396 +0000 UTC m=+1246.498773112" Feb 02 10:59:27 crc kubenswrapper[4782]: I0202 10:59:27.600972 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"ea60fa1f-5751-4f93-8726-ce0c4be54577","Type":"ContainerStarted","Data":"24403be4fefe81ba978fb152610f0aa6f4ed6fbd4dd7f5010c25f8a6bc48717f"} Feb 02 10:59:27 crc kubenswrapper[4782]: I0202 10:59:27.601445 4782 
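
[Annotation] The pod_startup_latency_tracker lines are worth decoding. For ceilometer-0, podStartE2EDuration (6.614580396s) is watchObservedRunningTime minus podCreationTimestamp, while podStartSLOduration (2.53184408s) additionally subtracts the image-pull window, lastFinishedPulling minus firstStartedPulling, about 4.0827s. For nova-cell0-conductor-0 both pull timestamps are the zero time, so SLO and E2E agree at 2.627285695s. The arithmetic, reproduced directly from the timestamps in the log:

package main

import (
	"fmt"
	"time"
)

func mustParse(s string) time.Time {
	t, err := time.Parse(time.RFC3339Nano, s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	// Timestamps copied from the ceilometer-0 tracker line above.
	created := mustParse("2026-02-02T10:59:20Z")
	firstPull := mustParse("2026-02-02T10:59:21.680499168Z")
	lastPull := mustParse("2026-02-02T10:59:25.763235494Z")
	running := mustParse("2026-02-02T10:59:26.614580396Z")

	e2e := running.Sub(created)          // podStartE2EDuration
	slo := e2e - lastPull.Sub(firstPull) // pull window excluded

	fmt.Println("podStartE2EDuration:", e2e) // 6.614580396s
	// Prints ~2.53184407s; the logged 2.53184408s differs only
	// by float rounding in the tracker's output.
	fmt.Println("podStartSLOduration:", slo)
}
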
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Feb 02 10:59:27 crc kubenswrapper[4782]: I0202 10:59:27.627304 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.627285695 podStartE2EDuration="2.627285695s" podCreationTimestamp="2026-02-02 10:59:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:59:27.619366918 +0000 UTC m=+1247.503559634" watchObservedRunningTime="2026-02-02 10:59:27.627285695 +0000 UTC m=+1247.511478401" Feb 02 10:59:36 crc kubenswrapper[4782]: I0202 10:59:36.018290 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Feb 02 10:59:36 crc kubenswrapper[4782]: I0202 10:59:36.498584 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-5wtv6"] Feb 02 10:59:36 crc kubenswrapper[4782]: I0202 10:59:36.499906 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-5wtv6" Feb 02 10:59:36 crc kubenswrapper[4782]: I0202 10:59:36.502188 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Feb 02 10:59:36 crc kubenswrapper[4782]: I0202 10:59:36.502369 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Feb 02 10:59:36 crc kubenswrapper[4782]: I0202 10:59:36.516065 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-5wtv6"] Feb 02 10:59:36 crc kubenswrapper[4782]: I0202 10:59:36.582900 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/baa0ea9b-5d59-4094-a259-2f841d40db2c-config-data\") pod \"nova-cell0-cell-mapping-5wtv6\" (UID: \"baa0ea9b-5d59-4094-a259-2f841d40db2c\") " pod="openstack/nova-cell0-cell-mapping-5wtv6" Feb 02 10:59:36 crc kubenswrapper[4782]: I0202 10:59:36.583001 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/baa0ea9b-5d59-4094-a259-2f841d40db2c-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-5wtv6\" (UID: \"baa0ea9b-5d59-4094-a259-2f841d40db2c\") " pod="openstack/nova-cell0-cell-mapping-5wtv6" Feb 02 10:59:36 crc kubenswrapper[4782]: I0202 10:59:36.583031 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tp6t8\" (UniqueName: \"kubernetes.io/projected/baa0ea9b-5d59-4094-a259-2f841d40db2c-kube-api-access-tp6t8\") pod \"nova-cell0-cell-mapping-5wtv6\" (UID: \"baa0ea9b-5d59-4094-a259-2f841d40db2c\") " pod="openstack/nova-cell0-cell-mapping-5wtv6" Feb 02 10:59:36 crc kubenswrapper[4782]: I0202 10:59:36.583162 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/baa0ea9b-5d59-4094-a259-2f841d40db2c-scripts\") pod \"nova-cell0-cell-mapping-5wtv6\" (UID: \"baa0ea9b-5d59-4094-a259-2f841d40db2c\") " pod="openstack/nova-cell0-cell-mapping-5wtv6" Feb 02 10:59:36 crc kubenswrapper[4782]: I0202 10:59:36.685482 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/baa0ea9b-5d59-4094-a259-2f841d40db2c-scripts\") pod \"nova-cell0-cell-mapping-5wtv6\" (UID: \"baa0ea9b-5d59-4094-a259-2f841d40db2c\") " pod="openstack/nova-cell0-cell-mapping-5wtv6" Feb 02 10:59:36 crc kubenswrapper[4782]: I0202 10:59:36.685636 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/baa0ea9b-5d59-4094-a259-2f841d40db2c-config-data\") pod \"nova-cell0-cell-mapping-5wtv6\" (UID: \"baa0ea9b-5d59-4094-a259-2f841d40db2c\") " pod="openstack/nova-cell0-cell-mapping-5wtv6" Feb 02 10:59:36 crc kubenswrapper[4782]: I0202 10:59:36.685698 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/baa0ea9b-5d59-4094-a259-2f841d40db2c-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-5wtv6\" (UID: \"baa0ea9b-5d59-4094-a259-2f841d40db2c\") " pod="openstack/nova-cell0-cell-mapping-5wtv6" Feb 02 10:59:36 crc kubenswrapper[4782]: I0202 10:59:36.685725 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tp6t8\" (UniqueName: \"kubernetes.io/projected/baa0ea9b-5d59-4094-a259-2f841d40db2c-kube-api-access-tp6t8\") pod \"nova-cell0-cell-mapping-5wtv6\" (UID: \"baa0ea9b-5d59-4094-a259-2f841d40db2c\") " pod="openstack/nova-cell0-cell-mapping-5wtv6" Feb 02 10:59:36 crc kubenswrapper[4782]: I0202 10:59:36.708559 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/baa0ea9b-5d59-4094-a259-2f841d40db2c-config-data\") pod \"nova-cell0-cell-mapping-5wtv6\" (UID: \"baa0ea9b-5d59-4094-a259-2f841d40db2c\") " pod="openstack/nova-cell0-cell-mapping-5wtv6" Feb 02 10:59:36 crc kubenswrapper[4782]: I0202 10:59:36.715194 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tp6t8\" (UniqueName: \"kubernetes.io/projected/baa0ea9b-5d59-4094-a259-2f841d40db2c-kube-api-access-tp6t8\") pod \"nova-cell0-cell-mapping-5wtv6\" (UID: \"baa0ea9b-5d59-4094-a259-2f841d40db2c\") " pod="openstack/nova-cell0-cell-mapping-5wtv6" Feb 02 10:59:36 crc kubenswrapper[4782]: I0202 10:59:36.727571 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/baa0ea9b-5d59-4094-a259-2f841d40db2c-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-5wtv6\" (UID: \"baa0ea9b-5d59-4094-a259-2f841d40db2c\") " pod="openstack/nova-cell0-cell-mapping-5wtv6" Feb 02 10:59:36 crc kubenswrapper[4782]: I0202 10:59:36.728046 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/baa0ea9b-5d59-4094-a259-2f841d40db2c-scripts\") pod \"nova-cell0-cell-mapping-5wtv6\" (UID: \"baa0ea9b-5d59-4094-a259-2f841d40db2c\") " pod="openstack/nova-cell0-cell-mapping-5wtv6" Feb 02 10:59:36 crc kubenswrapper[4782]: I0202 10:59:36.733795 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 02 10:59:36 crc kubenswrapper[4782]: I0202 10:59:36.735007 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 02 10:59:36 crc kubenswrapper[4782]: I0202 10:59:36.741084 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 02 10:59:36 crc kubenswrapper[4782]: I0202 10:59:36.797852 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 02 10:59:36 crc kubenswrapper[4782]: I0202 10:59:36.821389 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-5wtv6" Feb 02 10:59:36 crc kubenswrapper[4782]: I0202 10:59:36.887743 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 02 10:59:36 crc kubenswrapper[4782]: I0202 10:59:36.889521 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 02 10:59:36 crc kubenswrapper[4782]: I0202 10:59:36.894625 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvf5g\" (UniqueName: \"kubernetes.io/projected/e271d5c6-aeb4-4181-8712-3c80349c7900-kube-api-access-hvf5g\") pod \"nova-scheduler-0\" (UID: \"e271d5c6-aeb4-4181-8712-3c80349c7900\") " pod="openstack/nova-scheduler-0" Feb 02 10:59:36 crc kubenswrapper[4782]: I0202 10:59:36.902808 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e271d5c6-aeb4-4181-8712-3c80349c7900-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"e271d5c6-aeb4-4181-8712-3c80349c7900\") " pod="openstack/nova-scheduler-0" Feb 02 10:59:36 crc kubenswrapper[4782]: I0202 10:59:36.902997 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e271d5c6-aeb4-4181-8712-3c80349c7900-config-data\") pod \"nova-scheduler-0\" (UID: \"e271d5c6-aeb4-4181-8712-3c80349c7900\") " pod="openstack/nova-scheduler-0" Feb 02 10:59:36 crc kubenswrapper[4782]: I0202 10:59:36.899190 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 02 10:59:36 crc kubenswrapper[4782]: I0202 10:59:36.918725 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 02 10:59:36 crc kubenswrapper[4782]: I0202 10:59:36.947747 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 02 10:59:36 crc kubenswrapper[4782]: I0202 10:59:36.949514 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 02 10:59:36 crc kubenswrapper[4782]: I0202 10:59:36.958638 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 02 10:59:37 crc kubenswrapper[4782]: I0202 10:59:37.005179 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e271d5c6-aeb4-4181-8712-3c80349c7900-config-data\") pod \"nova-scheduler-0\" (UID: \"e271d5c6-aeb4-4181-8712-3c80349c7900\") " pod="openstack/nova-scheduler-0" Feb 02 10:59:37 crc kubenswrapper[4782]: I0202 10:59:37.005843 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hvf5g\" (UniqueName: \"kubernetes.io/projected/e271d5c6-aeb4-4181-8712-3c80349c7900-kube-api-access-hvf5g\") pod \"nova-scheduler-0\" (UID: \"e271d5c6-aeb4-4181-8712-3c80349c7900\") " pod="openstack/nova-scheduler-0" Feb 02 10:59:37 crc kubenswrapper[4782]: I0202 10:59:37.006162 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-blg7j\" (UniqueName: \"kubernetes.io/projected/f9890573-cca4-4bd8-8c38-4d4e8bff9dc9-kube-api-access-blg7j\") pod \"nova-api-0\" (UID: \"f9890573-cca4-4bd8-8c38-4d4e8bff9dc9\") " pod="openstack/nova-api-0" Feb 02 10:59:37 crc kubenswrapper[4782]: I0202 10:59:37.006289 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e271d5c6-aeb4-4181-8712-3c80349c7900-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"e271d5c6-aeb4-4181-8712-3c80349c7900\") " pod="openstack/nova-scheduler-0" Feb 02 10:59:37 crc kubenswrapper[4782]: I0202 10:59:37.006312 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9890573-cca4-4bd8-8c38-4d4e8bff9dc9-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f9890573-cca4-4bd8-8c38-4d4e8bff9dc9\") " pod="openstack/nova-api-0" Feb 02 10:59:37 crc kubenswrapper[4782]: I0202 10:59:37.006356 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f9890573-cca4-4bd8-8c38-4d4e8bff9dc9-logs\") pod \"nova-api-0\" (UID: \"f9890573-cca4-4bd8-8c38-4d4e8bff9dc9\") " pod="openstack/nova-api-0" Feb 02 10:59:37 crc kubenswrapper[4782]: I0202 10:59:37.006401 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9890573-cca4-4bd8-8c38-4d4e8bff9dc9-config-data\") pod \"nova-api-0\" (UID: \"f9890573-cca4-4bd8-8c38-4d4e8bff9dc9\") " pod="openstack/nova-api-0" Feb 02 10:59:37 crc kubenswrapper[4782]: I0202 10:59:37.035843 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e271d5c6-aeb4-4181-8712-3c80349c7900-config-data\") pod \"nova-scheduler-0\" (UID: \"e271d5c6-aeb4-4181-8712-3c80349c7900\") " pod="openstack/nova-scheduler-0" Feb 02 10:59:37 crc kubenswrapper[4782]: I0202 10:59:37.042361 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e271d5c6-aeb4-4181-8712-3c80349c7900-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"e271d5c6-aeb4-4181-8712-3c80349c7900\") " pod="openstack/nova-scheduler-0" Feb 02 10:59:37 crc 
kubenswrapper[4782]: I0202 10:59:37.059419 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 10:59:37 crc kubenswrapper[4782]: I0202 10:59:37.065314 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hvf5g\" (UniqueName: \"kubernetes.io/projected/e271d5c6-aeb4-4181-8712-3c80349c7900-kube-api-access-hvf5g\") pod \"nova-scheduler-0\" (UID: \"e271d5c6-aeb4-4181-8712-3c80349c7900\") " pod="openstack/nova-scheduler-0" Feb 02 10:59:37 crc kubenswrapper[4782]: I0202 10:59:37.098751 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 02 10:59:37 crc kubenswrapper[4782]: I0202 10:59:37.100487 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 02 10:59:37 crc kubenswrapper[4782]: I0202 10:59:37.106080 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 02 10:59:37 crc kubenswrapper[4782]: I0202 10:59:37.119468 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 02 10:59:37 crc kubenswrapper[4782]: I0202 10:59:37.119712 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/113b7e86-63fb-403b-a297-14a38039065c-logs\") pod \"nova-metadata-0\" (UID: \"113b7e86-63fb-403b-a297-14a38039065c\") " pod="openstack/nova-metadata-0" Feb 02 10:59:37 crc kubenswrapper[4782]: I0202 10:59:37.119943 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f9890573-cca4-4bd8-8c38-4d4e8bff9dc9-logs\") pod \"nova-api-0\" (UID: \"f9890573-cca4-4bd8-8c38-4d4e8bff9dc9\") " pod="openstack/nova-api-0" Feb 02 10:59:37 crc kubenswrapper[4782]: I0202 10:59:37.120041 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/113b7e86-63fb-403b-a297-14a38039065c-config-data\") pod \"nova-metadata-0\" (UID: \"113b7e86-63fb-403b-a297-14a38039065c\") " pod="openstack/nova-metadata-0" Feb 02 10:59:37 crc kubenswrapper[4782]: I0202 10:59:37.120090 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9890573-cca4-4bd8-8c38-4d4e8bff9dc9-config-data\") pod \"nova-api-0\" (UID: \"f9890573-cca4-4bd8-8c38-4d4e8bff9dc9\") " pod="openstack/nova-api-0" Feb 02 10:59:37 crc kubenswrapper[4782]: I0202 10:59:37.120146 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ntjpf\" (UniqueName: \"kubernetes.io/projected/113b7e86-63fb-403b-a297-14a38039065c-kube-api-access-ntjpf\") pod \"nova-metadata-0\" (UID: \"113b7e86-63fb-403b-a297-14a38039065c\") " pod="openstack/nova-metadata-0" Feb 02 10:59:37 crc kubenswrapper[4782]: I0202 10:59:37.120307 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/113b7e86-63fb-403b-a297-14a38039065c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"113b7e86-63fb-403b-a297-14a38039065c\") " pod="openstack/nova-metadata-0" Feb 02 10:59:37 crc kubenswrapper[4782]: I0202 10:59:37.120341 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-blg7j\" (UniqueName: 
\"kubernetes.io/projected/f9890573-cca4-4bd8-8c38-4d4e8bff9dc9-kube-api-access-blg7j\") pod \"nova-api-0\" (UID: \"f9890573-cca4-4bd8-8c38-4d4e8bff9dc9\") " pod="openstack/nova-api-0" Feb 02 10:59:37 crc kubenswrapper[4782]: I0202 10:59:37.121808 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9890573-cca4-4bd8-8c38-4d4e8bff9dc9-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f9890573-cca4-4bd8-8c38-4d4e8bff9dc9\") " pod="openstack/nova-api-0" Feb 02 10:59:37 crc kubenswrapper[4782]: I0202 10:59:37.122929 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f9890573-cca4-4bd8-8c38-4d4e8bff9dc9-logs\") pod \"nova-api-0\" (UID: \"f9890573-cca4-4bd8-8c38-4d4e8bff9dc9\") " pod="openstack/nova-api-0" Feb 02 10:59:37 crc kubenswrapper[4782]: I0202 10:59:37.135874 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9890573-cca4-4bd8-8c38-4d4e8bff9dc9-config-data\") pod \"nova-api-0\" (UID: \"f9890573-cca4-4bd8-8c38-4d4e8bff9dc9\") " pod="openstack/nova-api-0" Feb 02 10:59:37 crc kubenswrapper[4782]: I0202 10:59:37.144837 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9890573-cca4-4bd8-8c38-4d4e8bff9dc9-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f9890573-cca4-4bd8-8c38-4d4e8bff9dc9\") " pod="openstack/nova-api-0" Feb 02 10:59:37 crc kubenswrapper[4782]: I0202 10:59:37.158393 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 02 10:59:37 crc kubenswrapper[4782]: I0202 10:59:37.173739 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-blg7j\" (UniqueName: \"kubernetes.io/projected/f9890573-cca4-4bd8-8c38-4d4e8bff9dc9-kube-api-access-blg7j\") pod \"nova-api-0\" (UID: \"f9890573-cca4-4bd8-8c38-4d4e8bff9dc9\") " pod="openstack/nova-api-0" Feb 02 10:59:37 crc kubenswrapper[4782]: I0202 10:59:37.233899 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/113b7e86-63fb-403b-a297-14a38039065c-logs\") pod \"nova-metadata-0\" (UID: \"113b7e86-63fb-403b-a297-14a38039065c\") " pod="openstack/nova-metadata-0" Feb 02 10:59:37 crc kubenswrapper[4782]: I0202 10:59:37.233955 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/113b7e86-63fb-403b-a297-14a38039065c-config-data\") pod \"nova-metadata-0\" (UID: \"113b7e86-63fb-403b-a297-14a38039065c\") " pod="openstack/nova-metadata-0" Feb 02 10:59:37 crc kubenswrapper[4782]: I0202 10:59:37.233978 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/042e7186-1c2e-4a12-b06e-4f99a5d78083-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"042e7186-1c2e-4a12-b06e-4f99a5d78083\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 10:59:37 crc kubenswrapper[4782]: I0202 10:59:37.233997 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/042e7186-1c2e-4a12-b06e-4f99a5d78083-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"042e7186-1c2e-4a12-b06e-4f99a5d78083\") " 
pod="openstack/nova-cell1-novncproxy-0" Feb 02 10:59:37 crc kubenswrapper[4782]: I0202 10:59:37.234029 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cm6xq\" (UniqueName: \"kubernetes.io/projected/042e7186-1c2e-4a12-b06e-4f99a5d78083-kube-api-access-cm6xq\") pod \"nova-cell1-novncproxy-0\" (UID: \"042e7186-1c2e-4a12-b06e-4f99a5d78083\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 10:59:37 crc kubenswrapper[4782]: I0202 10:59:37.234049 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ntjpf\" (UniqueName: \"kubernetes.io/projected/113b7e86-63fb-403b-a297-14a38039065c-kube-api-access-ntjpf\") pod \"nova-metadata-0\" (UID: \"113b7e86-63fb-403b-a297-14a38039065c\") " pod="openstack/nova-metadata-0" Feb 02 10:59:37 crc kubenswrapper[4782]: I0202 10:59:37.234101 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/113b7e86-63fb-403b-a297-14a38039065c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"113b7e86-63fb-403b-a297-14a38039065c\") " pod="openstack/nova-metadata-0" Feb 02 10:59:37 crc kubenswrapper[4782]: I0202 10:59:37.238005 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/113b7e86-63fb-403b-a297-14a38039065c-logs\") pod \"nova-metadata-0\" (UID: \"113b7e86-63fb-403b-a297-14a38039065c\") " pod="openstack/nova-metadata-0" Feb 02 10:59:37 crc kubenswrapper[4782]: I0202 10:59:37.240711 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-566b5b7845-ztd4g"] Feb 02 10:59:37 crc kubenswrapper[4782]: I0202 10:59:37.246315 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-566b5b7845-ztd4g" Feb 02 10:59:37 crc kubenswrapper[4782]: I0202 10:59:37.253858 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/113b7e86-63fb-403b-a297-14a38039065c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"113b7e86-63fb-403b-a297-14a38039065c\") " pod="openstack/nova-metadata-0" Feb 02 10:59:37 crc kubenswrapper[4782]: I0202 10:59:37.258995 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/113b7e86-63fb-403b-a297-14a38039065c-config-data\") pod \"nova-metadata-0\" (UID: \"113b7e86-63fb-403b-a297-14a38039065c\") " pod="openstack/nova-metadata-0" Feb 02 10:59:37 crc kubenswrapper[4782]: I0202 10:59:37.286975 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-566b5b7845-ztd4g"] Feb 02 10:59:37 crc kubenswrapper[4782]: I0202 10:59:37.288573 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ntjpf\" (UniqueName: \"kubernetes.io/projected/113b7e86-63fb-403b-a297-14a38039065c-kube-api-access-ntjpf\") pod \"nova-metadata-0\" (UID: \"113b7e86-63fb-403b-a297-14a38039065c\") " pod="openstack/nova-metadata-0" Feb 02 10:59:37 crc kubenswrapper[4782]: I0202 10:59:37.335548 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/042e7186-1c2e-4a12-b06e-4f99a5d78083-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"042e7186-1c2e-4a12-b06e-4f99a5d78083\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 10:59:37 crc kubenswrapper[4782]: I0202 10:59:37.335598 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/042e7186-1c2e-4a12-b06e-4f99a5d78083-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"042e7186-1c2e-4a12-b06e-4f99a5d78083\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 10:59:37 crc kubenswrapper[4782]: I0202 10:59:37.335620 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/639e44fb-7faa-4907-b02e-8c985f846925-dns-svc\") pod \"dnsmasq-dns-566b5b7845-ztd4g\" (UID: \"639e44fb-7faa-4907-b02e-8c985f846925\") " pod="openstack/dnsmasq-dns-566b5b7845-ztd4g" Feb 02 10:59:37 crc kubenswrapper[4782]: I0202 10:59:37.335659 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cm6xq\" (UniqueName: \"kubernetes.io/projected/042e7186-1c2e-4a12-b06e-4f99a5d78083-kube-api-access-cm6xq\") pod \"nova-cell1-novncproxy-0\" (UID: \"042e7186-1c2e-4a12-b06e-4f99a5d78083\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 10:59:37 crc kubenswrapper[4782]: I0202 10:59:37.335678 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/639e44fb-7faa-4907-b02e-8c985f846925-ovsdbserver-sb\") pod \"dnsmasq-dns-566b5b7845-ztd4g\" (UID: \"639e44fb-7faa-4907-b02e-8c985f846925\") " pod="openstack/dnsmasq-dns-566b5b7845-ztd4g" Feb 02 10:59:37 crc kubenswrapper[4782]: I0202 10:59:37.335707 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rmlz\" (UniqueName: \"kubernetes.io/projected/639e44fb-7faa-4907-b02e-8c985f846925-kube-api-access-5rmlz\") pod 
\"dnsmasq-dns-566b5b7845-ztd4g\" (UID: \"639e44fb-7faa-4907-b02e-8c985f846925\") " pod="openstack/dnsmasq-dns-566b5b7845-ztd4g" Feb 02 10:59:37 crc kubenswrapper[4782]: I0202 10:59:37.335736 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/639e44fb-7faa-4907-b02e-8c985f846925-config\") pod \"dnsmasq-dns-566b5b7845-ztd4g\" (UID: \"639e44fb-7faa-4907-b02e-8c985f846925\") " pod="openstack/dnsmasq-dns-566b5b7845-ztd4g" Feb 02 10:59:37 crc kubenswrapper[4782]: I0202 10:59:37.335756 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/639e44fb-7faa-4907-b02e-8c985f846925-ovsdbserver-nb\") pod \"dnsmasq-dns-566b5b7845-ztd4g\" (UID: \"639e44fb-7faa-4907-b02e-8c985f846925\") " pod="openstack/dnsmasq-dns-566b5b7845-ztd4g" Feb 02 10:59:37 crc kubenswrapper[4782]: I0202 10:59:37.340144 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/042e7186-1c2e-4a12-b06e-4f99a5d78083-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"042e7186-1c2e-4a12-b06e-4f99a5d78083\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 10:59:37 crc kubenswrapper[4782]: I0202 10:59:37.341407 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/042e7186-1c2e-4a12-b06e-4f99a5d78083-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"042e7186-1c2e-4a12-b06e-4f99a5d78083\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 10:59:37 crc kubenswrapper[4782]: I0202 10:59:37.364850 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cm6xq\" (UniqueName: \"kubernetes.io/projected/042e7186-1c2e-4a12-b06e-4f99a5d78083-kube-api-access-cm6xq\") pod \"nova-cell1-novncproxy-0\" (UID: \"042e7186-1c2e-4a12-b06e-4f99a5d78083\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 10:59:37 crc kubenswrapper[4782]: I0202 10:59:37.367179 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 02 10:59:37 crc kubenswrapper[4782]: I0202 10:59:37.408119 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 02 10:59:37 crc kubenswrapper[4782]: I0202 10:59:37.451126 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 02 10:59:37 crc kubenswrapper[4782]: I0202 10:59:37.462403 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5rmlz\" (UniqueName: \"kubernetes.io/projected/639e44fb-7faa-4907-b02e-8c985f846925-kube-api-access-5rmlz\") pod \"dnsmasq-dns-566b5b7845-ztd4g\" (UID: \"639e44fb-7faa-4907-b02e-8c985f846925\") " pod="openstack/dnsmasq-dns-566b5b7845-ztd4g" Feb 02 10:59:37 crc kubenswrapper[4782]: I0202 10:59:37.462534 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/639e44fb-7faa-4907-b02e-8c985f846925-config\") pod \"dnsmasq-dns-566b5b7845-ztd4g\" (UID: \"639e44fb-7faa-4907-b02e-8c985f846925\") " pod="openstack/dnsmasq-dns-566b5b7845-ztd4g" Feb 02 10:59:37 crc kubenswrapper[4782]: I0202 10:59:37.462589 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/639e44fb-7faa-4907-b02e-8c985f846925-ovsdbserver-nb\") pod \"dnsmasq-dns-566b5b7845-ztd4g\" (UID: \"639e44fb-7faa-4907-b02e-8c985f846925\") " pod="openstack/dnsmasq-dns-566b5b7845-ztd4g" Feb 02 10:59:37 crc kubenswrapper[4782]: I0202 10:59:37.462948 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/639e44fb-7faa-4907-b02e-8c985f846925-dns-svc\") pod \"dnsmasq-dns-566b5b7845-ztd4g\" (UID: \"639e44fb-7faa-4907-b02e-8c985f846925\") " pod="openstack/dnsmasq-dns-566b5b7845-ztd4g" Feb 02 10:59:37 crc kubenswrapper[4782]: I0202 10:59:37.463004 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/639e44fb-7faa-4907-b02e-8c985f846925-ovsdbserver-sb\") pod \"dnsmasq-dns-566b5b7845-ztd4g\" (UID: \"639e44fb-7faa-4907-b02e-8c985f846925\") " pod="openstack/dnsmasq-dns-566b5b7845-ztd4g" Feb 02 10:59:37 crc kubenswrapper[4782]: I0202 10:59:37.464180 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/639e44fb-7faa-4907-b02e-8c985f846925-ovsdbserver-sb\") pod \"dnsmasq-dns-566b5b7845-ztd4g\" (UID: \"639e44fb-7faa-4907-b02e-8c985f846925\") " pod="openstack/dnsmasq-dns-566b5b7845-ztd4g" Feb 02 10:59:37 crc kubenswrapper[4782]: I0202 10:59:37.464208 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/639e44fb-7faa-4907-b02e-8c985f846925-config\") pod \"dnsmasq-dns-566b5b7845-ztd4g\" (UID: \"639e44fb-7faa-4907-b02e-8c985f846925\") " pod="openstack/dnsmasq-dns-566b5b7845-ztd4g" Feb 02 10:59:37 crc kubenswrapper[4782]: I0202 10:59:37.465936 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/639e44fb-7faa-4907-b02e-8c985f846925-dns-svc\") pod \"dnsmasq-dns-566b5b7845-ztd4g\" (UID: \"639e44fb-7faa-4907-b02e-8c985f846925\") " pod="openstack/dnsmasq-dns-566b5b7845-ztd4g" Feb 02 10:59:37 crc kubenswrapper[4782]: I0202 10:59:37.466374 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/639e44fb-7faa-4907-b02e-8c985f846925-ovsdbserver-nb\") pod \"dnsmasq-dns-566b5b7845-ztd4g\" (UID: \"639e44fb-7faa-4907-b02e-8c985f846925\") " pod="openstack/dnsmasq-dns-566b5b7845-ztd4g" Feb 02 10:59:37 crc kubenswrapper[4782]: I0202 10:59:37.488473 
4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5rmlz\" (UniqueName: \"kubernetes.io/projected/639e44fb-7faa-4907-b02e-8c985f846925-kube-api-access-5rmlz\") pod \"dnsmasq-dns-566b5b7845-ztd4g\" (UID: \"639e44fb-7faa-4907-b02e-8c985f846925\") " pod="openstack/dnsmasq-dns-566b5b7845-ztd4g" Feb 02 10:59:37 crc kubenswrapper[4782]: I0202 10:59:37.589834 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-566b5b7845-ztd4g" Feb 02 10:59:37 crc kubenswrapper[4782]: I0202 10:59:37.652226 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-5wtv6"] Feb 02 10:59:37 crc kubenswrapper[4782]: I0202 10:59:37.954041 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 02 10:59:37 crc kubenswrapper[4782]: I0202 10:59:37.990131 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 02 10:59:38 crc kubenswrapper[4782]: I0202 10:59:38.113295 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-fb5lz"] Feb 02 10:59:38 crc kubenswrapper[4782]: I0202 10:59:38.119284 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-fb5lz" Feb 02 10:59:38 crc kubenswrapper[4782]: I0202 10:59:38.128544 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 02 10:59:38 crc kubenswrapper[4782]: I0202 10:59:38.128848 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Feb 02 10:59:38 crc kubenswrapper[4782]: I0202 10:59:38.135471 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-fb5lz"] Feb 02 10:59:38 crc kubenswrapper[4782]: I0202 10:59:38.187966 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d87918f-7c3d-4932-a4bd-18a2cf9fc199-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-fb5lz\" (UID: \"5d87918f-7c3d-4932-a4bd-18a2cf9fc199\") " pod="openstack/nova-cell1-conductor-db-sync-fb5lz" Feb 02 10:59:38 crc kubenswrapper[4782]: I0202 10:59:38.188024 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d87918f-7c3d-4932-a4bd-18a2cf9fc199-config-data\") pod \"nova-cell1-conductor-db-sync-fb5lz\" (UID: \"5d87918f-7c3d-4932-a4bd-18a2cf9fc199\") " pod="openstack/nova-cell1-conductor-db-sync-fb5lz" Feb 02 10:59:38 crc kubenswrapper[4782]: I0202 10:59:38.188069 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5d87918f-7c3d-4932-a4bd-18a2cf9fc199-scripts\") pod \"nova-cell1-conductor-db-sync-fb5lz\" (UID: \"5d87918f-7c3d-4932-a4bd-18a2cf9fc199\") " pod="openstack/nova-cell1-conductor-db-sync-fb5lz" Feb 02 10:59:38 crc kubenswrapper[4782]: I0202 10:59:38.188514 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mc64\" (UniqueName: \"kubernetes.io/projected/5d87918f-7c3d-4932-a4bd-18a2cf9fc199-kube-api-access-9mc64\") pod \"nova-cell1-conductor-db-sync-fb5lz\" (UID: \"5d87918f-7c3d-4932-a4bd-18a2cf9fc199\") " pod="openstack/nova-cell1-conductor-db-sync-fb5lz" Feb 02 10:59:38 crc 
kubenswrapper[4782]: I0202 10:59:38.283807 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 10:59:38 crc kubenswrapper[4782]: I0202 10:59:38.290382 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5d87918f-7c3d-4932-a4bd-18a2cf9fc199-scripts\") pod \"nova-cell1-conductor-db-sync-fb5lz\" (UID: \"5d87918f-7c3d-4932-a4bd-18a2cf9fc199\") " pod="openstack/nova-cell1-conductor-db-sync-fb5lz" Feb 02 10:59:38 crc kubenswrapper[4782]: I0202 10:59:38.290669 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9mc64\" (UniqueName: \"kubernetes.io/projected/5d87918f-7c3d-4932-a4bd-18a2cf9fc199-kube-api-access-9mc64\") pod \"nova-cell1-conductor-db-sync-fb5lz\" (UID: \"5d87918f-7c3d-4932-a4bd-18a2cf9fc199\") " pod="openstack/nova-cell1-conductor-db-sync-fb5lz" Feb 02 10:59:38 crc kubenswrapper[4782]: I0202 10:59:38.290713 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d87918f-7c3d-4932-a4bd-18a2cf9fc199-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-fb5lz\" (UID: \"5d87918f-7c3d-4932-a4bd-18a2cf9fc199\") " pod="openstack/nova-cell1-conductor-db-sync-fb5lz" Feb 02 10:59:38 crc kubenswrapper[4782]: I0202 10:59:38.290772 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d87918f-7c3d-4932-a4bd-18a2cf9fc199-config-data\") pod \"nova-cell1-conductor-db-sync-fb5lz\" (UID: \"5d87918f-7c3d-4932-a4bd-18a2cf9fc199\") " pod="openstack/nova-cell1-conductor-db-sync-fb5lz" Feb 02 10:59:38 crc kubenswrapper[4782]: I0202 10:59:38.296718 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5d87918f-7c3d-4932-a4bd-18a2cf9fc199-scripts\") pod \"nova-cell1-conductor-db-sync-fb5lz\" (UID: \"5d87918f-7c3d-4932-a4bd-18a2cf9fc199\") " pod="openstack/nova-cell1-conductor-db-sync-fb5lz" Feb 02 10:59:38 crc kubenswrapper[4782]: I0202 10:59:38.299025 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d87918f-7c3d-4932-a4bd-18a2cf9fc199-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-fb5lz\" (UID: \"5d87918f-7c3d-4932-a4bd-18a2cf9fc199\") " pod="openstack/nova-cell1-conductor-db-sync-fb5lz" Feb 02 10:59:38 crc kubenswrapper[4782]: I0202 10:59:38.301196 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d87918f-7c3d-4932-a4bd-18a2cf9fc199-config-data\") pod \"nova-cell1-conductor-db-sync-fb5lz\" (UID: \"5d87918f-7c3d-4932-a4bd-18a2cf9fc199\") " pod="openstack/nova-cell1-conductor-db-sync-fb5lz" Feb 02 10:59:38 crc kubenswrapper[4782]: I0202 10:59:38.312848 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9mc64\" (UniqueName: \"kubernetes.io/projected/5d87918f-7c3d-4932-a4bd-18a2cf9fc199-kube-api-access-9mc64\") pod \"nova-cell1-conductor-db-sync-fb5lz\" (UID: \"5d87918f-7c3d-4932-a4bd-18a2cf9fc199\") " pod="openstack/nova-cell1-conductor-db-sync-fb5lz" Feb 02 10:59:38 crc kubenswrapper[4782]: I0202 10:59:38.408785 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-566b5b7845-ztd4g"] Feb 02 10:59:38 crc kubenswrapper[4782]: I0202 10:59:38.427393 4782 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 02 10:59:38 crc kubenswrapper[4782]: I0202 10:59:38.449724 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-fb5lz" Feb 02 10:59:38 crc kubenswrapper[4782]: I0202 10:59:38.789787 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f9890573-cca4-4bd8-8c38-4d4e8bff9dc9","Type":"ContainerStarted","Data":"259595f6180ca19619e9584218a7595adeeee90339f24e0dc184cf1c1a9dd391"} Feb 02 10:59:38 crc kubenswrapper[4782]: I0202 10:59:38.795156 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"113b7e86-63fb-403b-a297-14a38039065c","Type":"ContainerStarted","Data":"0204128d8c9334e74821e1d19a2b03e4db6f2b1212375a7ff2103159f638b687"} Feb 02 10:59:38 crc kubenswrapper[4782]: I0202 10:59:38.795196 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-566b5b7845-ztd4g" event={"ID":"639e44fb-7faa-4907-b02e-8c985f846925","Type":"ContainerStarted","Data":"c2eed060c399f072bbf74377e28b3c99e19fd6ac4fff9760114980a82bb5c7d6"} Feb 02 10:59:38 crc kubenswrapper[4782]: I0202 10:59:38.795213 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-566b5b7845-ztd4g" event={"ID":"639e44fb-7faa-4907-b02e-8c985f846925","Type":"ContainerStarted","Data":"40e8da6bacf81b0807d18f0e00cf0e73a4f50618c9435b4a018769c28384c37e"} Feb 02 10:59:38 crc kubenswrapper[4782]: I0202 10:59:38.797302 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"042e7186-1c2e-4a12-b06e-4f99a5d78083","Type":"ContainerStarted","Data":"b7dc98ebc02a669721b0d5719711fa47b0524703f51ae30735f586363eb204e3"} Feb 02 10:59:38 crc kubenswrapper[4782]: I0202 10:59:38.810222 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"e271d5c6-aeb4-4181-8712-3c80349c7900","Type":"ContainerStarted","Data":"6984fb97283161100b3f2ea9d0020d436f7eec946eff63636e874e1d070b241f"} Feb 02 10:59:38 crc kubenswrapper[4782]: I0202 10:59:38.853392 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-5wtv6" event={"ID":"baa0ea9b-5d59-4094-a259-2f841d40db2c","Type":"ContainerStarted","Data":"730902e09b299cdd00a01ece9539dce44aec0c2aaecd122d8a4c41d8be4117fb"} Feb 02 10:59:38 crc kubenswrapper[4782]: I0202 10:59:38.853454 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-5wtv6" event={"ID":"baa0ea9b-5d59-4094-a259-2f841d40db2c","Type":"ContainerStarted","Data":"7dccd6a446de27cf17be3c6fed64dab86cfe71404c05a7dac2e3ac65627c370a"} Feb 02 10:59:38 crc kubenswrapper[4782]: I0202 10:59:38.891457 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-5wtv6" podStartSLOduration=2.891436883 podStartE2EDuration="2.891436883s" podCreationTimestamp="2026-02-02 10:59:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:59:38.885355909 +0000 UTC m=+1258.769548625" watchObservedRunningTime="2026-02-02 10:59:38.891436883 +0000 UTC m=+1258.775629599" Feb 02 10:59:39 crc kubenswrapper[4782]: I0202 10:59:39.061018 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-fb5lz"] Feb 02 10:59:39 crc kubenswrapper[4782]: I0202 10:59:39.868204 4782 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-fb5lz" event={"ID":"5d87918f-7c3d-4932-a4bd-18a2cf9fc199","Type":"ContainerStarted","Data":"8185fbc7b3d30cf6bb76bc01518fb63e05726e26ac97fb50e13e8ad1440798ce"} Feb 02 10:59:39 crc kubenswrapper[4782]: I0202 10:59:39.868264 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-fb5lz" event={"ID":"5d87918f-7c3d-4932-a4bd-18a2cf9fc199","Type":"ContainerStarted","Data":"abce54fb83bfbed2484a3ffad62e6093bcdfe61f5c810c843b4fd77e933662cd"} Feb 02 10:59:39 crc kubenswrapper[4782]: I0202 10:59:39.875484 4782 generic.go:334] "Generic (PLEG): container finished" podID="639e44fb-7faa-4907-b02e-8c985f846925" containerID="c2eed060c399f072bbf74377e28b3c99e19fd6ac4fff9760114980a82bb5c7d6" exitCode=0 Feb 02 10:59:39 crc kubenswrapper[4782]: I0202 10:59:39.876230 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-566b5b7845-ztd4g" event={"ID":"639e44fb-7faa-4907-b02e-8c985f846925","Type":"ContainerDied","Data":"c2eed060c399f072bbf74377e28b3c99e19fd6ac4fff9760114980a82bb5c7d6"} Feb 02 10:59:39 crc kubenswrapper[4782]: I0202 10:59:39.898112 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-fb5lz" podStartSLOduration=1.89809129 podStartE2EDuration="1.89809129s" podCreationTimestamp="2026-02-02 10:59:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:59:39.893009344 +0000 UTC m=+1259.777202060" watchObservedRunningTime="2026-02-02 10:59:39.89809129 +0000 UTC m=+1259.782284006" Feb 02 10:59:40 crc kubenswrapper[4782]: I0202 10:59:40.477305 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 10:59:40 crc kubenswrapper[4782]: I0202 10:59:40.513048 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 02 10:59:42 crc kubenswrapper[4782]: I0202 10:59:42.920843 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-566b5b7845-ztd4g" event={"ID":"639e44fb-7faa-4907-b02e-8c985f846925","Type":"ContainerStarted","Data":"9e659836c7d1179abcc6a3d9248bc937fe2b60cc342a9cd47e1803c7cc55a544"} Feb 02 10:59:42 crc kubenswrapper[4782]: I0202 10:59:42.921503 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-566b5b7845-ztd4g" Feb 02 10:59:42 crc kubenswrapper[4782]: I0202 10:59:42.923473 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"042e7186-1c2e-4a12-b06e-4f99a5d78083","Type":"ContainerStarted","Data":"098943f45717b7bac12d9fb61d93eea860022b3432e343c574730a5e13f6b7a9"} Feb 02 10:59:42 crc kubenswrapper[4782]: I0202 10:59:42.923621 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="042e7186-1c2e-4a12-b06e-4f99a5d78083" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://098943f45717b7bac12d9fb61d93eea860022b3432e343c574730a5e13f6b7a9" gracePeriod=30 Feb 02 10:59:42 crc kubenswrapper[4782]: I0202 10:59:42.926434 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"e271d5c6-aeb4-4181-8712-3c80349c7900","Type":"ContainerStarted","Data":"d3184b24b5105f0e162b86501b46a067a24ac88e5fac376e51713b411638a1c3"} Feb 02 10:59:42 crc kubenswrapper[4782]: I0202 10:59:42.930020 
4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f9890573-cca4-4bd8-8c38-4d4e8bff9dc9","Type":"ContainerStarted","Data":"2a94a2d8a17e25a35687317e70c9699ed67417079989757b00a53f6eefa2e744"} Feb 02 10:59:42 crc kubenswrapper[4782]: I0202 10:59:42.930054 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f9890573-cca4-4bd8-8c38-4d4e8bff9dc9","Type":"ContainerStarted","Data":"8b34fc0928af6681374be91b55f13deb814067009586c26b07a1ad636317c066"} Feb 02 10:59:42 crc kubenswrapper[4782]: I0202 10:59:42.932086 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"113b7e86-63fb-403b-a297-14a38039065c","Type":"ContainerStarted","Data":"0af64db948e20404eefd909f7e9473ce78050410ecbf863ae128ad301224bdef"} Feb 02 10:59:42 crc kubenswrapper[4782]: I0202 10:59:42.932118 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"113b7e86-63fb-403b-a297-14a38039065c","Type":"ContainerStarted","Data":"fbd2829c52f031567eaa963db4d618e76598edccf7c104e11d2b21333a466879"} Feb 02 10:59:42 crc kubenswrapper[4782]: I0202 10:59:42.932240 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="113b7e86-63fb-403b-a297-14a38039065c" containerName="nova-metadata-log" containerID="cri-o://fbd2829c52f031567eaa963db4d618e76598edccf7c104e11d2b21333a466879" gracePeriod=30 Feb 02 10:59:42 crc kubenswrapper[4782]: I0202 10:59:42.932524 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="113b7e86-63fb-403b-a297-14a38039065c" containerName="nova-metadata-metadata" containerID="cri-o://0af64db948e20404eefd909f7e9473ce78050410ecbf863ae128ad301224bdef" gracePeriod=30 Feb 02 10:59:42 crc kubenswrapper[4782]: I0202 10:59:42.954674 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-566b5b7845-ztd4g" podStartSLOduration=5.954631819 podStartE2EDuration="5.954631819s" podCreationTimestamp="2026-02-02 10:59:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:59:42.946095714 +0000 UTC m=+1262.830288430" watchObservedRunningTime="2026-02-02 10:59:42.954631819 +0000 UTC m=+1262.838824535" Feb 02 10:59:42 crc kubenswrapper[4782]: I0202 10:59:42.977744 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.486599406 podStartE2EDuration="6.977724001s" podCreationTimestamp="2026-02-02 10:59:36 +0000 UTC" firstStartedPulling="2026-02-02 10:59:38.292700228 +0000 UTC m=+1258.176892944" lastFinishedPulling="2026-02-02 10:59:41.783824823 +0000 UTC m=+1261.668017539" observedRunningTime="2026-02-02 10:59:42.970941377 +0000 UTC m=+1262.855134083" watchObservedRunningTime="2026-02-02 10:59:42.977724001 +0000 UTC m=+1262.861916717" Feb 02 10:59:43 crc kubenswrapper[4782]: I0202 10:59:43.023031 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.274196304 podStartE2EDuration="7.02300953s" podCreationTimestamp="2026-02-02 10:59:36 +0000 UTC" firstStartedPulling="2026-02-02 10:59:38.034151362 +0000 UTC m=+1257.918344078" lastFinishedPulling="2026-02-02 10:59:41.782964588 +0000 UTC m=+1261.667157304" observedRunningTime="2026-02-02 10:59:43.003933643 +0000 UTC m=+1262.888126359" 
watchObservedRunningTime="2026-02-02 10:59:43.02300953 +0000 UTC m=+1262.907202246" Feb 02 10:59:43 crc kubenswrapper[4782]: I0202 10:59:43.028986 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.6753411099999997 podStartE2EDuration="7.028966051s" podCreationTimestamp="2026-02-02 10:59:36 +0000 UTC" firstStartedPulling="2026-02-02 10:59:38.432248041 +0000 UTC m=+1258.316440747" lastFinishedPulling="2026-02-02 10:59:41.785872972 +0000 UTC m=+1261.670065688" observedRunningTime="2026-02-02 10:59:43.019839689 +0000 UTC m=+1262.904032405" watchObservedRunningTime="2026-02-02 10:59:43.028966051 +0000 UTC m=+1262.913158767" Feb 02 10:59:43 crc kubenswrapper[4782]: I0202 10:59:43.108913 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.298192733 podStartE2EDuration="7.108888764s" podCreationTimestamp="2026-02-02 10:59:36 +0000 UTC" firstStartedPulling="2026-02-02 10:59:37.967089279 +0000 UTC m=+1257.851281995" lastFinishedPulling="2026-02-02 10:59:41.77778531 +0000 UTC m=+1261.661978026" observedRunningTime="2026-02-02 10:59:43.093700648 +0000 UTC m=+1262.977893354" watchObservedRunningTime="2026-02-02 10:59:43.108888764 +0000 UTC m=+1262.993081480" Feb 02 10:59:43 crc kubenswrapper[4782]: I0202 10:59:43.878844 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 02 10:59:43 crc kubenswrapper[4782]: I0202 10:59:43.902964 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/113b7e86-63fb-403b-a297-14a38039065c-config-data\") pod \"113b7e86-63fb-403b-a297-14a38039065c\" (UID: \"113b7e86-63fb-403b-a297-14a38039065c\") " Feb 02 10:59:43 crc kubenswrapper[4782]: I0202 10:59:43.903010 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/113b7e86-63fb-403b-a297-14a38039065c-combined-ca-bundle\") pod \"113b7e86-63fb-403b-a297-14a38039065c\" (UID: \"113b7e86-63fb-403b-a297-14a38039065c\") " Feb 02 10:59:43 crc kubenswrapper[4782]: I0202 10:59:43.903077 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/113b7e86-63fb-403b-a297-14a38039065c-logs\") pod \"113b7e86-63fb-403b-a297-14a38039065c\" (UID: \"113b7e86-63fb-403b-a297-14a38039065c\") " Feb 02 10:59:43 crc kubenswrapper[4782]: I0202 10:59:43.903195 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ntjpf\" (UniqueName: \"kubernetes.io/projected/113b7e86-63fb-403b-a297-14a38039065c-kube-api-access-ntjpf\") pod \"113b7e86-63fb-403b-a297-14a38039065c\" (UID: \"113b7e86-63fb-403b-a297-14a38039065c\") " Feb 02 10:59:43 crc kubenswrapper[4782]: I0202 10:59:43.903717 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/113b7e86-63fb-403b-a297-14a38039065c-logs" (OuterVolumeSpecName: "logs") pod "113b7e86-63fb-403b-a297-14a38039065c" (UID: "113b7e86-63fb-403b-a297-14a38039065c"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:59:43 crc kubenswrapper[4782]: I0202 10:59:43.904700 4782 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/113b7e86-63fb-403b-a297-14a38039065c-logs\") on node \"crc\" DevicePath \"\"" Feb 02 10:59:43 crc kubenswrapper[4782]: I0202 10:59:43.918291 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/113b7e86-63fb-403b-a297-14a38039065c-kube-api-access-ntjpf" (OuterVolumeSpecName: "kube-api-access-ntjpf") pod "113b7e86-63fb-403b-a297-14a38039065c" (UID: "113b7e86-63fb-403b-a297-14a38039065c"). InnerVolumeSpecName "kube-api-access-ntjpf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:59:43 crc kubenswrapper[4782]: I0202 10:59:43.947581 4782 generic.go:334] "Generic (PLEG): container finished" podID="113b7e86-63fb-403b-a297-14a38039065c" containerID="0af64db948e20404eefd909f7e9473ce78050410ecbf863ae128ad301224bdef" exitCode=0 Feb 02 10:59:43 crc kubenswrapper[4782]: I0202 10:59:43.948608 4782 generic.go:334] "Generic (PLEG): container finished" podID="113b7e86-63fb-403b-a297-14a38039065c" containerID="fbd2829c52f031567eaa963db4d618e76598edccf7c104e11d2b21333a466879" exitCode=143 Feb 02 10:59:43 crc kubenswrapper[4782]: I0202 10:59:43.948329 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 02 10:59:43 crc kubenswrapper[4782]: I0202 10:59:43.948167 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"113b7e86-63fb-403b-a297-14a38039065c","Type":"ContainerDied","Data":"0af64db948e20404eefd909f7e9473ce78050410ecbf863ae128ad301224bdef"} Feb 02 10:59:43 crc kubenswrapper[4782]: I0202 10:59:43.950077 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"113b7e86-63fb-403b-a297-14a38039065c","Type":"ContainerDied","Data":"fbd2829c52f031567eaa963db4d618e76598edccf7c104e11d2b21333a466879"} Feb 02 10:59:43 crc kubenswrapper[4782]: I0202 10:59:43.950101 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"113b7e86-63fb-403b-a297-14a38039065c","Type":"ContainerDied","Data":"0204128d8c9334e74821e1d19a2b03e4db6f2b1212375a7ff2103159f638b687"} Feb 02 10:59:43 crc kubenswrapper[4782]: I0202 10:59:43.950132 4782 scope.go:117] "RemoveContainer" containerID="0af64db948e20404eefd909f7e9473ce78050410ecbf863ae128ad301224bdef" Feb 02 10:59:43 crc kubenswrapper[4782]: I0202 10:59:43.968837 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/113b7e86-63fb-403b-a297-14a38039065c-config-data" (OuterVolumeSpecName: "config-data") pod "113b7e86-63fb-403b-a297-14a38039065c" (UID: "113b7e86-63fb-403b-a297-14a38039065c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:59:44 crc kubenswrapper[4782]: I0202 10:59:44.011232 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ntjpf\" (UniqueName: \"kubernetes.io/projected/113b7e86-63fb-403b-a297-14a38039065c-kube-api-access-ntjpf\") on node \"crc\" DevicePath \"\"" Feb 02 10:59:44 crc kubenswrapper[4782]: I0202 10:59:44.011271 4782 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/113b7e86-63fb-403b-a297-14a38039065c-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 10:59:44 crc kubenswrapper[4782]: I0202 10:59:44.029755 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/113b7e86-63fb-403b-a297-14a38039065c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "113b7e86-63fb-403b-a297-14a38039065c" (UID: "113b7e86-63fb-403b-a297-14a38039065c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:59:44 crc kubenswrapper[4782]: I0202 10:59:44.090220 4782 scope.go:117] "RemoveContainer" containerID="fbd2829c52f031567eaa963db4d618e76598edccf7c104e11d2b21333a466879" Feb 02 10:59:44 crc kubenswrapper[4782]: I0202 10:59:44.112904 4782 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/113b7e86-63fb-403b-a297-14a38039065c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 10:59:44 crc kubenswrapper[4782]: I0202 10:59:44.113590 4782 scope.go:117] "RemoveContainer" containerID="0af64db948e20404eefd909f7e9473ce78050410ecbf863ae128ad301224bdef" Feb 02 10:59:44 crc kubenswrapper[4782]: E0202 10:59:44.114074 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0af64db948e20404eefd909f7e9473ce78050410ecbf863ae128ad301224bdef\": container with ID starting with 0af64db948e20404eefd909f7e9473ce78050410ecbf863ae128ad301224bdef not found: ID does not exist" containerID="0af64db948e20404eefd909f7e9473ce78050410ecbf863ae128ad301224bdef" Feb 02 10:59:44 crc kubenswrapper[4782]: I0202 10:59:44.114117 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0af64db948e20404eefd909f7e9473ce78050410ecbf863ae128ad301224bdef"} err="failed to get container status \"0af64db948e20404eefd909f7e9473ce78050410ecbf863ae128ad301224bdef\": rpc error: code = NotFound desc = could not find container \"0af64db948e20404eefd909f7e9473ce78050410ecbf863ae128ad301224bdef\": container with ID starting with 0af64db948e20404eefd909f7e9473ce78050410ecbf863ae128ad301224bdef not found: ID does not exist" Feb 02 10:59:44 crc kubenswrapper[4782]: I0202 10:59:44.114142 4782 scope.go:117] "RemoveContainer" containerID="fbd2829c52f031567eaa963db4d618e76598edccf7c104e11d2b21333a466879" Feb 02 10:59:44 crc kubenswrapper[4782]: E0202 10:59:44.114422 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fbd2829c52f031567eaa963db4d618e76598edccf7c104e11d2b21333a466879\": container with ID starting with fbd2829c52f031567eaa963db4d618e76598edccf7c104e11d2b21333a466879 not found: ID does not exist" containerID="fbd2829c52f031567eaa963db4d618e76598edccf7c104e11d2b21333a466879" Feb 02 10:59:44 crc kubenswrapper[4782]: I0202 10:59:44.114443 4782 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"fbd2829c52f031567eaa963db4d618e76598edccf7c104e11d2b21333a466879"} err="failed to get container status \"fbd2829c52f031567eaa963db4d618e76598edccf7c104e11d2b21333a466879\": rpc error: code = NotFound desc = could not find container \"fbd2829c52f031567eaa963db4d618e76598edccf7c104e11d2b21333a466879\": container with ID starting with fbd2829c52f031567eaa963db4d618e76598edccf7c104e11d2b21333a466879 not found: ID does not exist" Feb 02 10:59:44 crc kubenswrapper[4782]: I0202 10:59:44.114455 4782 scope.go:117] "RemoveContainer" containerID="0af64db948e20404eefd909f7e9473ce78050410ecbf863ae128ad301224bdef" Feb 02 10:59:44 crc kubenswrapper[4782]: I0202 10:59:44.114837 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0af64db948e20404eefd909f7e9473ce78050410ecbf863ae128ad301224bdef"} err="failed to get container status \"0af64db948e20404eefd909f7e9473ce78050410ecbf863ae128ad301224bdef\": rpc error: code = NotFound desc = could not find container \"0af64db948e20404eefd909f7e9473ce78050410ecbf863ae128ad301224bdef\": container with ID starting with 0af64db948e20404eefd909f7e9473ce78050410ecbf863ae128ad301224bdef not found: ID does not exist" Feb 02 10:59:44 crc kubenswrapper[4782]: I0202 10:59:44.114858 4782 scope.go:117] "RemoveContainer" containerID="fbd2829c52f031567eaa963db4d618e76598edccf7c104e11d2b21333a466879" Feb 02 10:59:44 crc kubenswrapper[4782]: I0202 10:59:44.115126 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fbd2829c52f031567eaa963db4d618e76598edccf7c104e11d2b21333a466879"} err="failed to get container status \"fbd2829c52f031567eaa963db4d618e76598edccf7c104e11d2b21333a466879\": rpc error: code = NotFound desc = could not find container \"fbd2829c52f031567eaa963db4d618e76598edccf7c104e11d2b21333a466879\": container with ID starting with fbd2829c52f031567eaa963db4d618e76598edccf7c104e11d2b21333a466879 not found: ID does not exist" Feb 02 10:59:44 crc kubenswrapper[4782]: I0202 10:59:44.281781 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 10:59:44 crc kubenswrapper[4782]: I0202 10:59:44.291606 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 10:59:44 crc kubenswrapper[4782]: I0202 10:59:44.314827 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 02 10:59:44 crc kubenswrapper[4782]: E0202 10:59:44.315325 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="113b7e86-63fb-403b-a297-14a38039065c" containerName="nova-metadata-metadata" Feb 02 10:59:44 crc kubenswrapper[4782]: I0202 10:59:44.315359 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="113b7e86-63fb-403b-a297-14a38039065c" containerName="nova-metadata-metadata" Feb 02 10:59:44 crc kubenswrapper[4782]: E0202 10:59:44.315379 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="113b7e86-63fb-403b-a297-14a38039065c" containerName="nova-metadata-log" Feb 02 10:59:44 crc kubenswrapper[4782]: I0202 10:59:44.315385 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="113b7e86-63fb-403b-a297-14a38039065c" containerName="nova-metadata-log" Feb 02 10:59:44 crc kubenswrapper[4782]: I0202 10:59:44.315824 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="113b7e86-63fb-403b-a297-14a38039065c" containerName="nova-metadata-log" Feb 02 10:59:44 crc kubenswrapper[4782]: I0202 10:59:44.315850 4782 
memory_manager.go:354] "RemoveStaleState removing state" podUID="113b7e86-63fb-403b-a297-14a38039065c" containerName="nova-metadata-metadata" Feb 02 10:59:44 crc kubenswrapper[4782]: I0202 10:59:44.317849 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 02 10:59:44 crc kubenswrapper[4782]: I0202 10:59:44.320910 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 02 10:59:44 crc kubenswrapper[4782]: I0202 10:59:44.321027 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 02 10:59:44 crc kubenswrapper[4782]: I0202 10:59:44.388167 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 10:59:44 crc kubenswrapper[4782]: I0202 10:59:44.419338 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1bada6f8-1a58-4afa-bbc5-7ad40ca5987a-logs\") pod \"nova-metadata-0\" (UID: \"1bada6f8-1a58-4afa-bbc5-7ad40ca5987a\") " pod="openstack/nova-metadata-0" Feb 02 10:59:44 crc kubenswrapper[4782]: I0202 10:59:44.419383 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2nrw5\" (UniqueName: \"kubernetes.io/projected/1bada6f8-1a58-4afa-bbc5-7ad40ca5987a-kube-api-access-2nrw5\") pod \"nova-metadata-0\" (UID: \"1bada6f8-1a58-4afa-bbc5-7ad40ca5987a\") " pod="openstack/nova-metadata-0" Feb 02 10:59:44 crc kubenswrapper[4782]: I0202 10:59:44.419516 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1bada6f8-1a58-4afa-bbc5-7ad40ca5987a-config-data\") pod \"nova-metadata-0\" (UID: \"1bada6f8-1a58-4afa-bbc5-7ad40ca5987a\") " pod="openstack/nova-metadata-0" Feb 02 10:59:44 crc kubenswrapper[4782]: I0202 10:59:44.419579 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/1bada6f8-1a58-4afa-bbc5-7ad40ca5987a-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"1bada6f8-1a58-4afa-bbc5-7ad40ca5987a\") " pod="openstack/nova-metadata-0" Feb 02 10:59:44 crc kubenswrapper[4782]: I0202 10:59:44.419768 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bada6f8-1a58-4afa-bbc5-7ad40ca5987a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"1bada6f8-1a58-4afa-bbc5-7ad40ca5987a\") " pod="openstack/nova-metadata-0" Feb 02 10:59:44 crc kubenswrapper[4782]: I0202 10:59:44.521557 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bada6f8-1a58-4afa-bbc5-7ad40ca5987a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"1bada6f8-1a58-4afa-bbc5-7ad40ca5987a\") " pod="openstack/nova-metadata-0" Feb 02 10:59:44 crc kubenswrapper[4782]: I0202 10:59:44.521688 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1bada6f8-1a58-4afa-bbc5-7ad40ca5987a-logs\") pod \"nova-metadata-0\" (UID: \"1bada6f8-1a58-4afa-bbc5-7ad40ca5987a\") " pod="openstack/nova-metadata-0" Feb 02 10:59:44 crc kubenswrapper[4782]: I0202 10:59:44.521722 4782 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-2nrw5\" (UniqueName: \"kubernetes.io/projected/1bada6f8-1a58-4afa-bbc5-7ad40ca5987a-kube-api-access-2nrw5\") pod \"nova-metadata-0\" (UID: \"1bada6f8-1a58-4afa-bbc5-7ad40ca5987a\") " pod="openstack/nova-metadata-0" Feb 02 10:59:44 crc kubenswrapper[4782]: I0202 10:59:44.521814 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1bada6f8-1a58-4afa-bbc5-7ad40ca5987a-config-data\") pod \"nova-metadata-0\" (UID: \"1bada6f8-1a58-4afa-bbc5-7ad40ca5987a\") " pod="openstack/nova-metadata-0" Feb 02 10:59:44 crc kubenswrapper[4782]: I0202 10:59:44.521844 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/1bada6f8-1a58-4afa-bbc5-7ad40ca5987a-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"1bada6f8-1a58-4afa-bbc5-7ad40ca5987a\") " pod="openstack/nova-metadata-0" Feb 02 10:59:44 crc kubenswrapper[4782]: I0202 10:59:44.522754 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1bada6f8-1a58-4afa-bbc5-7ad40ca5987a-logs\") pod \"nova-metadata-0\" (UID: \"1bada6f8-1a58-4afa-bbc5-7ad40ca5987a\") " pod="openstack/nova-metadata-0" Feb 02 10:59:44 crc kubenswrapper[4782]: I0202 10:59:44.526217 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bada6f8-1a58-4afa-bbc5-7ad40ca5987a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"1bada6f8-1a58-4afa-bbc5-7ad40ca5987a\") " pod="openstack/nova-metadata-0" Feb 02 10:59:44 crc kubenswrapper[4782]: I0202 10:59:44.526406 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/1bada6f8-1a58-4afa-bbc5-7ad40ca5987a-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"1bada6f8-1a58-4afa-bbc5-7ad40ca5987a\") " pod="openstack/nova-metadata-0" Feb 02 10:59:44 crc kubenswrapper[4782]: I0202 10:59:44.527111 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1bada6f8-1a58-4afa-bbc5-7ad40ca5987a-config-data\") pod \"nova-metadata-0\" (UID: \"1bada6f8-1a58-4afa-bbc5-7ad40ca5987a\") " pod="openstack/nova-metadata-0" Feb 02 10:59:44 crc kubenswrapper[4782]: I0202 10:59:44.545146 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2nrw5\" (UniqueName: \"kubernetes.io/projected/1bada6f8-1a58-4afa-bbc5-7ad40ca5987a-kube-api-access-2nrw5\") pod \"nova-metadata-0\" (UID: \"1bada6f8-1a58-4afa-bbc5-7ad40ca5987a\") " pod="openstack/nova-metadata-0" Feb 02 10:59:44 crc kubenswrapper[4782]: I0202 10:59:44.639445 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 02 10:59:44 crc kubenswrapper[4782]: I0202 10:59:44.833598 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="113b7e86-63fb-403b-a297-14a38039065c" path="/var/lib/kubelet/pods/113b7e86-63fb-403b-a297-14a38039065c/volumes" Feb 02 10:59:45 crc kubenswrapper[4782]: I0202 10:59:45.204068 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 10:59:45 crc kubenswrapper[4782]: I0202 10:59:45.975422 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1bada6f8-1a58-4afa-bbc5-7ad40ca5987a","Type":"ContainerStarted","Data":"9b8ddb9ce5a9ed6520d8b0488975e1088031145ea5915cef03d7a33718914191"} Feb 02 10:59:45 crc kubenswrapper[4782]: I0202 10:59:45.975796 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1bada6f8-1a58-4afa-bbc5-7ad40ca5987a","Type":"ContainerStarted","Data":"5d4700db537e0031b1df399b735a9bf00fe44c49223324b4f87ba2a5ddaf6394"} Feb 02 10:59:45 crc kubenswrapper[4782]: I0202 10:59:45.975808 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1bada6f8-1a58-4afa-bbc5-7ad40ca5987a","Type":"ContainerStarted","Data":"ea6a6c08bd3087f1ba9f590b152caf3e7fec76c7a3cd69736de0e388760f5410"} Feb 02 10:59:46 crc kubenswrapper[4782]: I0202 10:59:46.008669 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.008638425 podStartE2EDuration="2.008638425s" podCreationTimestamp="2026-02-02 10:59:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:59:45.997961909 +0000 UTC m=+1265.882154615" watchObservedRunningTime="2026-02-02 10:59:46.008638425 +0000 UTC m=+1265.892831141" Feb 02 10:59:47 crc kubenswrapper[4782]: I0202 10:59:47.159249 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 02 10:59:47 crc kubenswrapper[4782]: I0202 10:59:47.159320 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 02 10:59:47 crc kubenswrapper[4782]: I0202 10:59:47.195960 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 02 10:59:47 crc kubenswrapper[4782]: I0202 10:59:47.368301 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 02 10:59:47 crc kubenswrapper[4782]: I0202 10:59:47.368362 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 02 10:59:47 crc kubenswrapper[4782]: I0202 10:59:47.453121 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 02 10:59:47 crc kubenswrapper[4782]: I0202 10:59:47.590789 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-566b5b7845-ztd4g" Feb 02 10:59:47 crc kubenswrapper[4782]: I0202 10:59:47.662113 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6d97fcdd8f-lp4zt"] Feb 02 10:59:47 crc kubenswrapper[4782]: I0202 10:59:47.662379 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6d97fcdd8f-lp4zt" podUID="aeac5df4-fc17-4840-b777-4b20a71f603b" containerName="dnsmasq-dns" 
containerID="cri-o://e8f8698969d1d3fb94fd61cad2a7db600e14b7bfd8f48ebf089d1152818d17de" gracePeriod=10 Feb 02 10:59:48 crc kubenswrapper[4782]: I0202 10:59:48.015098 4782 generic.go:334] "Generic (PLEG): container finished" podID="baa0ea9b-5d59-4094-a259-2f841d40db2c" containerID="730902e09b299cdd00a01ece9539dce44aec0c2aaecd122d8a4c41d8be4117fb" exitCode=0 Feb 02 10:59:48 crc kubenswrapper[4782]: I0202 10:59:48.015210 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-5wtv6" event={"ID":"baa0ea9b-5d59-4094-a259-2f841d40db2c","Type":"ContainerDied","Data":"730902e09b299cdd00a01ece9539dce44aec0c2aaecd122d8a4c41d8be4117fb"} Feb 02 10:59:48 crc kubenswrapper[4782]: I0202 10:59:48.022186 4782 generic.go:334] "Generic (PLEG): container finished" podID="aeac5df4-fc17-4840-b777-4b20a71f603b" containerID="e8f8698969d1d3fb94fd61cad2a7db600e14b7bfd8f48ebf089d1152818d17de" exitCode=0 Feb 02 10:59:48 crc kubenswrapper[4782]: I0202 10:59:48.023282 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d97fcdd8f-lp4zt" event={"ID":"aeac5df4-fc17-4840-b777-4b20a71f603b","Type":"ContainerDied","Data":"e8f8698969d1d3fb94fd61cad2a7db600e14b7bfd8f48ebf089d1152818d17de"} Feb 02 10:59:48 crc kubenswrapper[4782]: I0202 10:59:48.082038 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 02 10:59:48 crc kubenswrapper[4782]: I0202 10:59:48.275290 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d97fcdd8f-lp4zt" Feb 02 10:59:48 crc kubenswrapper[4782]: I0202 10:59:48.444497 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/aeac5df4-fc17-4840-b777-4b20a71f603b-ovsdbserver-sb\") pod \"aeac5df4-fc17-4840-b777-4b20a71f603b\" (UID: \"aeac5df4-fc17-4840-b777-4b20a71f603b\") " Feb 02 10:59:48 crc kubenswrapper[4782]: I0202 10:59:48.444604 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/aeac5df4-fc17-4840-b777-4b20a71f603b-ovsdbserver-nb\") pod \"aeac5df4-fc17-4840-b777-4b20a71f603b\" (UID: \"aeac5df4-fc17-4840-b777-4b20a71f603b\") " Feb 02 10:59:48 crc kubenswrapper[4782]: I0202 10:59:48.444718 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cxbzb\" (UniqueName: \"kubernetes.io/projected/aeac5df4-fc17-4840-b777-4b20a71f603b-kube-api-access-cxbzb\") pod \"aeac5df4-fc17-4840-b777-4b20a71f603b\" (UID: \"aeac5df4-fc17-4840-b777-4b20a71f603b\") " Feb 02 10:59:48 crc kubenswrapper[4782]: I0202 10:59:48.444768 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aeac5df4-fc17-4840-b777-4b20a71f603b-dns-svc\") pod \"aeac5df4-fc17-4840-b777-4b20a71f603b\" (UID: \"aeac5df4-fc17-4840-b777-4b20a71f603b\") " Feb 02 10:59:48 crc kubenswrapper[4782]: I0202 10:59:48.444797 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aeac5df4-fc17-4840-b777-4b20a71f603b-config\") pod \"aeac5df4-fc17-4840-b777-4b20a71f603b\" (UID: \"aeac5df4-fc17-4840-b777-4b20a71f603b\") " Feb 02 10:59:48 crc kubenswrapper[4782]: I0202 10:59:48.453778 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="f9890573-cca4-4bd8-8c38-4d4e8bff9dc9" 
containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.169:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 02 10:59:48 crc kubenswrapper[4782]: I0202 10:59:48.454111 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="f9890573-cca4-4bd8-8c38-4d4e8bff9dc9" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.169:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 02 10:59:48 crc kubenswrapper[4782]: I0202 10:59:48.477901 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aeac5df4-fc17-4840-b777-4b20a71f603b-kube-api-access-cxbzb" (OuterVolumeSpecName: "kube-api-access-cxbzb") pod "aeac5df4-fc17-4840-b777-4b20a71f603b" (UID: "aeac5df4-fc17-4840-b777-4b20a71f603b"). InnerVolumeSpecName "kube-api-access-cxbzb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:59:48 crc kubenswrapper[4782]: I0202 10:59:48.519273 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aeac5df4-fc17-4840-b777-4b20a71f603b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "aeac5df4-fc17-4840-b777-4b20a71f603b" (UID: "aeac5df4-fc17-4840-b777-4b20a71f603b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:59:48 crc kubenswrapper[4782]: I0202 10:59:48.520139 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aeac5df4-fc17-4840-b777-4b20a71f603b-config" (OuterVolumeSpecName: "config") pod "aeac5df4-fc17-4840-b777-4b20a71f603b" (UID: "aeac5df4-fc17-4840-b777-4b20a71f603b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:59:48 crc kubenswrapper[4782]: I0202 10:59:48.542117 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aeac5df4-fc17-4840-b777-4b20a71f603b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "aeac5df4-fc17-4840-b777-4b20a71f603b" (UID: "aeac5df4-fc17-4840-b777-4b20a71f603b"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:59:48 crc kubenswrapper[4782]: I0202 10:59:48.548844 4782 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/aeac5df4-fc17-4840-b777-4b20a71f603b-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 02 10:59:48 crc kubenswrapper[4782]: I0202 10:59:48.548913 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cxbzb\" (UniqueName: \"kubernetes.io/projected/aeac5df4-fc17-4840-b777-4b20a71f603b-kube-api-access-cxbzb\") on node \"crc\" DevicePath \"\"" Feb 02 10:59:48 crc kubenswrapper[4782]: I0202 10:59:48.548926 4782 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aeac5df4-fc17-4840-b777-4b20a71f603b-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 02 10:59:48 crc kubenswrapper[4782]: I0202 10:59:48.548937 4782 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aeac5df4-fc17-4840-b777-4b20a71f603b-config\") on node \"crc\" DevicePath \"\"" Feb 02 10:59:48 crc kubenswrapper[4782]: I0202 10:59:48.556158 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aeac5df4-fc17-4840-b777-4b20a71f603b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "aeac5df4-fc17-4840-b777-4b20a71f603b" (UID: "aeac5df4-fc17-4840-b777-4b20a71f603b"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 10:59:48 crc kubenswrapper[4782]: I0202 10:59:48.650893 4782 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/aeac5df4-fc17-4840-b777-4b20a71f603b-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 02 10:59:49 crc kubenswrapper[4782]: I0202 10:59:49.041101 4782 generic.go:334] "Generic (PLEG): container finished" podID="5d87918f-7c3d-4932-a4bd-18a2cf9fc199" containerID="8185fbc7b3d30cf6bb76bc01518fb63e05726e26ac97fb50e13e8ad1440798ce" exitCode=0 Feb 02 10:59:49 crc kubenswrapper[4782]: I0202 10:59:49.041193 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-fb5lz" event={"ID":"5d87918f-7c3d-4932-a4bd-18a2cf9fc199","Type":"ContainerDied","Data":"8185fbc7b3d30cf6bb76bc01518fb63e05726e26ac97fb50e13e8ad1440798ce"} Feb 02 10:59:49 crc kubenswrapper[4782]: I0202 10:59:49.047701 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6d97fcdd8f-lp4zt" Feb 02 10:59:49 crc kubenswrapper[4782]: I0202 10:59:49.055357 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d97fcdd8f-lp4zt" event={"ID":"aeac5df4-fc17-4840-b777-4b20a71f603b","Type":"ContainerDied","Data":"8c21bb8034d9faf7eb546bc39d481d9fb7112330466d208d373ff1d4cfc5503c"} Feb 02 10:59:49 crc kubenswrapper[4782]: I0202 10:59:49.055419 4782 scope.go:117] "RemoveContainer" containerID="e8f8698969d1d3fb94fd61cad2a7db600e14b7bfd8f48ebf089d1152818d17de" Feb 02 10:59:49 crc kubenswrapper[4782]: I0202 10:59:49.084699 4782 scope.go:117] "RemoveContainer" containerID="235f00f5818c6de4755dcefb6a2d4359499a9277fdd4c9df3ba1b496dc87e676" Feb 02 10:59:49 crc kubenswrapper[4782]: I0202 10:59:49.117152 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6d97fcdd8f-lp4zt"] Feb 02 10:59:49 crc kubenswrapper[4782]: I0202 10:59:49.134419 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6d97fcdd8f-lp4zt"] Feb 02 10:59:49 crc kubenswrapper[4782]: I0202 10:59:49.450151 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-5wtv6" Feb 02 10:59:49 crc kubenswrapper[4782]: I0202 10:59:49.591324 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/baa0ea9b-5d59-4094-a259-2f841d40db2c-combined-ca-bundle\") pod \"baa0ea9b-5d59-4094-a259-2f841d40db2c\" (UID: \"baa0ea9b-5d59-4094-a259-2f841d40db2c\") " Feb 02 10:59:49 crc kubenswrapper[4782]: I0202 10:59:49.591567 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/baa0ea9b-5d59-4094-a259-2f841d40db2c-config-data\") pod \"baa0ea9b-5d59-4094-a259-2f841d40db2c\" (UID: \"baa0ea9b-5d59-4094-a259-2f841d40db2c\") " Feb 02 10:59:49 crc kubenswrapper[4782]: I0202 10:59:49.591816 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/baa0ea9b-5d59-4094-a259-2f841d40db2c-scripts\") pod \"baa0ea9b-5d59-4094-a259-2f841d40db2c\" (UID: \"baa0ea9b-5d59-4094-a259-2f841d40db2c\") " Feb 02 10:59:49 crc kubenswrapper[4782]: I0202 10:59:49.591999 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tp6t8\" (UniqueName: \"kubernetes.io/projected/baa0ea9b-5d59-4094-a259-2f841d40db2c-kube-api-access-tp6t8\") pod \"baa0ea9b-5d59-4094-a259-2f841d40db2c\" (UID: \"baa0ea9b-5d59-4094-a259-2f841d40db2c\") " Feb 02 10:59:49 crc kubenswrapper[4782]: I0202 10:59:49.598711 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/baa0ea9b-5d59-4094-a259-2f841d40db2c-scripts" (OuterVolumeSpecName: "scripts") pod "baa0ea9b-5d59-4094-a259-2f841d40db2c" (UID: "baa0ea9b-5d59-4094-a259-2f841d40db2c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:59:49 crc kubenswrapper[4782]: I0202 10:59:49.598846 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/baa0ea9b-5d59-4094-a259-2f841d40db2c-kube-api-access-tp6t8" (OuterVolumeSpecName: "kube-api-access-tp6t8") pod "baa0ea9b-5d59-4094-a259-2f841d40db2c" (UID: "baa0ea9b-5d59-4094-a259-2f841d40db2c"). InnerVolumeSpecName "kube-api-access-tp6t8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:59:49 crc kubenswrapper[4782]: I0202 10:59:49.628564 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/baa0ea9b-5d59-4094-a259-2f841d40db2c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "baa0ea9b-5d59-4094-a259-2f841d40db2c" (UID: "baa0ea9b-5d59-4094-a259-2f841d40db2c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:59:49 crc kubenswrapper[4782]: I0202 10:59:49.633134 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/baa0ea9b-5d59-4094-a259-2f841d40db2c-config-data" (OuterVolumeSpecName: "config-data") pod "baa0ea9b-5d59-4094-a259-2f841d40db2c" (UID: "baa0ea9b-5d59-4094-a259-2f841d40db2c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:59:49 crc kubenswrapper[4782]: I0202 10:59:49.640421 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 02 10:59:49 crc kubenswrapper[4782]: I0202 10:59:49.643464 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 02 10:59:49 crc kubenswrapper[4782]: I0202 10:59:49.696072 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tp6t8\" (UniqueName: \"kubernetes.io/projected/baa0ea9b-5d59-4094-a259-2f841d40db2c-kube-api-access-tp6t8\") on node \"crc\" DevicePath \"\"" Feb 02 10:59:49 crc kubenswrapper[4782]: I0202 10:59:49.696388 4782 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/baa0ea9b-5d59-4094-a259-2f841d40db2c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 10:59:49 crc kubenswrapper[4782]: I0202 10:59:49.696474 4782 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/baa0ea9b-5d59-4094-a259-2f841d40db2c-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 10:59:49 crc kubenswrapper[4782]: I0202 10:59:49.696544 4782 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/baa0ea9b-5d59-4094-a259-2f841d40db2c-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 10:59:50 crc kubenswrapper[4782]: I0202 10:59:50.056836 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-5wtv6" Feb 02 10:59:50 crc kubenswrapper[4782]: I0202 10:59:50.057016 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-5wtv6" event={"ID":"baa0ea9b-5d59-4094-a259-2f841d40db2c","Type":"ContainerDied","Data":"7dccd6a446de27cf17be3c6fed64dab86cfe71404c05a7dac2e3ac65627c370a"} Feb 02 10:59:50 crc kubenswrapper[4782]: I0202 10:59:50.057541 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7dccd6a446de27cf17be3c6fed64dab86cfe71404c05a7dac2e3ac65627c370a" Feb 02 10:59:50 crc kubenswrapper[4782]: I0202 10:59:50.260821 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 02 10:59:50 crc kubenswrapper[4782]: I0202 10:59:50.261073 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="f9890573-cca4-4bd8-8c38-4d4e8bff9dc9" containerName="nova-api-log" containerID="cri-o://8b34fc0928af6681374be91b55f13deb814067009586c26b07a1ad636317c066" gracePeriod=30 Feb 02 10:59:50 crc kubenswrapper[4782]: I0202 10:59:50.261475 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="f9890573-cca4-4bd8-8c38-4d4e8bff9dc9" containerName="nova-api-api" containerID="cri-o://2a94a2d8a17e25a35687317e70c9699ed67417079989757b00a53f6eefa2e744" gracePeriod=30 Feb 02 10:59:50 crc kubenswrapper[4782]: I0202 10:59:50.262368 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 02 10:59:50 crc kubenswrapper[4782]: I0202 10:59:50.262548 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="e271d5c6-aeb4-4181-8712-3c80349c7900" containerName="nova-scheduler-scheduler" containerID="cri-o://d3184b24b5105f0e162b86501b46a067a24ac88e5fac376e51713b411638a1c3" gracePeriod=30 Feb 02 10:59:50 crc kubenswrapper[4782]: I0202 10:59:50.306375 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 10:59:50 crc kubenswrapper[4782]: I0202 10:59:50.448591 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-fb5lz" Feb 02 10:59:50 crc kubenswrapper[4782]: I0202 10:59:50.610693 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5d87918f-7c3d-4932-a4bd-18a2cf9fc199-scripts\") pod \"5d87918f-7c3d-4932-a4bd-18a2cf9fc199\" (UID: \"5d87918f-7c3d-4932-a4bd-18a2cf9fc199\") " Feb 02 10:59:50 crc kubenswrapper[4782]: I0202 10:59:50.610766 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d87918f-7c3d-4932-a4bd-18a2cf9fc199-config-data\") pod \"5d87918f-7c3d-4932-a4bd-18a2cf9fc199\" (UID: \"5d87918f-7c3d-4932-a4bd-18a2cf9fc199\") " Feb 02 10:59:50 crc kubenswrapper[4782]: I0202 10:59:50.610866 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9mc64\" (UniqueName: \"kubernetes.io/projected/5d87918f-7c3d-4932-a4bd-18a2cf9fc199-kube-api-access-9mc64\") pod \"5d87918f-7c3d-4932-a4bd-18a2cf9fc199\" (UID: \"5d87918f-7c3d-4932-a4bd-18a2cf9fc199\") " Feb 02 10:59:50 crc kubenswrapper[4782]: I0202 10:59:50.610940 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d87918f-7c3d-4932-a4bd-18a2cf9fc199-combined-ca-bundle\") pod \"5d87918f-7c3d-4932-a4bd-18a2cf9fc199\" (UID: \"5d87918f-7c3d-4932-a4bd-18a2cf9fc199\") " Feb 02 10:59:50 crc kubenswrapper[4782]: I0202 10:59:50.619875 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d87918f-7c3d-4932-a4bd-18a2cf9fc199-kube-api-access-9mc64" (OuterVolumeSpecName: "kube-api-access-9mc64") pod "5d87918f-7c3d-4932-a4bd-18a2cf9fc199" (UID: "5d87918f-7c3d-4932-a4bd-18a2cf9fc199"). InnerVolumeSpecName "kube-api-access-9mc64". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:59:50 crc kubenswrapper[4782]: I0202 10:59:50.633749 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d87918f-7c3d-4932-a4bd-18a2cf9fc199-scripts" (OuterVolumeSpecName: "scripts") pod "5d87918f-7c3d-4932-a4bd-18a2cf9fc199" (UID: "5d87918f-7c3d-4932-a4bd-18a2cf9fc199"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:59:50 crc kubenswrapper[4782]: I0202 10:59:50.647589 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d87918f-7c3d-4932-a4bd-18a2cf9fc199-config-data" (OuterVolumeSpecName: "config-data") pod "5d87918f-7c3d-4932-a4bd-18a2cf9fc199" (UID: "5d87918f-7c3d-4932-a4bd-18a2cf9fc199"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:59:50 crc kubenswrapper[4782]: I0202 10:59:50.663104 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d87918f-7c3d-4932-a4bd-18a2cf9fc199-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5d87918f-7c3d-4932-a4bd-18a2cf9fc199" (UID: "5d87918f-7c3d-4932-a4bd-18a2cf9fc199"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:59:50 crc kubenswrapper[4782]: I0202 10:59:50.712925 4782 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5d87918f-7c3d-4932-a4bd-18a2cf9fc199-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 10:59:50 crc kubenswrapper[4782]: I0202 10:59:50.712970 4782 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d87918f-7c3d-4932-a4bd-18a2cf9fc199-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 10:59:50 crc kubenswrapper[4782]: I0202 10:59:50.712984 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9mc64\" (UniqueName: \"kubernetes.io/projected/5d87918f-7c3d-4932-a4bd-18a2cf9fc199-kube-api-access-9mc64\") on node \"crc\" DevicePath \"\"" Feb 02 10:59:50 crc kubenswrapper[4782]: I0202 10:59:50.712997 4782 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d87918f-7c3d-4932-a4bd-18a2cf9fc199-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 10:59:50 crc kubenswrapper[4782]: I0202 10:59:50.834728 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aeac5df4-fc17-4840-b777-4b20a71f603b" path="/var/lib/kubelet/pods/aeac5df4-fc17-4840-b777-4b20a71f603b/volumes" Feb 02 10:59:51 crc kubenswrapper[4782]: I0202 10:59:51.067969 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-fb5lz" event={"ID":"5d87918f-7c3d-4932-a4bd-18a2cf9fc199","Type":"ContainerDied","Data":"abce54fb83bfbed2484a3ffad62e6093bcdfe61f5c810c843b4fd77e933662cd"} Feb 02 10:59:51 crc kubenswrapper[4782]: I0202 10:59:51.068008 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="abce54fb83bfbed2484a3ffad62e6093bcdfe61f5c810c843b4fd77e933662cd" Feb 02 10:59:51 crc kubenswrapper[4782]: I0202 10:59:51.068078 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-fb5lz" Feb 02 10:59:51 crc kubenswrapper[4782]: I0202 10:59:51.070718 4782 generic.go:334] "Generic (PLEG): container finished" podID="f9890573-cca4-4bd8-8c38-4d4e8bff9dc9" containerID="8b34fc0928af6681374be91b55f13deb814067009586c26b07a1ad636317c066" exitCode=143 Feb 02 10:59:51 crc kubenswrapper[4782]: I0202 10:59:51.070815 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f9890573-cca4-4bd8-8c38-4d4e8bff9dc9","Type":"ContainerDied","Data":"8b34fc0928af6681374be91b55f13deb814067009586c26b07a1ad636317c066"} Feb 02 10:59:51 crc kubenswrapper[4782]: I0202 10:59:51.070901 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="1bada6f8-1a58-4afa-bbc5-7ad40ca5987a" containerName="nova-metadata-log" containerID="cri-o://5d4700db537e0031b1df399b735a9bf00fe44c49223324b4f87ba2a5ddaf6394" gracePeriod=30 Feb 02 10:59:51 crc kubenswrapper[4782]: I0202 10:59:51.071102 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="1bada6f8-1a58-4afa-bbc5-7ad40ca5987a" containerName="nova-metadata-metadata" containerID="cri-o://9b8ddb9ce5a9ed6520d8b0488975e1088031145ea5915cef03d7a33718914191" gracePeriod=30 Feb 02 10:59:51 crc kubenswrapper[4782]: I0202 10:59:51.158996 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 02 10:59:51 crc kubenswrapper[4782]: E0202 10:59:51.159334 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aeac5df4-fc17-4840-b777-4b20a71f603b" containerName="dnsmasq-dns" Feb 02 10:59:51 crc kubenswrapper[4782]: I0202 10:59:51.159349 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="aeac5df4-fc17-4840-b777-4b20a71f603b" containerName="dnsmasq-dns" Feb 02 10:59:51 crc kubenswrapper[4782]: E0202 10:59:51.159360 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aeac5df4-fc17-4840-b777-4b20a71f603b" containerName="init" Feb 02 10:59:51 crc kubenswrapper[4782]: I0202 10:59:51.159367 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="aeac5df4-fc17-4840-b777-4b20a71f603b" containerName="init" Feb 02 10:59:51 crc kubenswrapper[4782]: E0202 10:59:51.159384 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="baa0ea9b-5d59-4094-a259-2f841d40db2c" containerName="nova-manage" Feb 02 10:59:51 crc kubenswrapper[4782]: I0202 10:59:51.159390 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="baa0ea9b-5d59-4094-a259-2f841d40db2c" containerName="nova-manage" Feb 02 10:59:51 crc kubenswrapper[4782]: E0202 10:59:51.159401 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d87918f-7c3d-4932-a4bd-18a2cf9fc199" containerName="nova-cell1-conductor-db-sync" Feb 02 10:59:51 crc kubenswrapper[4782]: I0202 10:59:51.159407 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d87918f-7c3d-4932-a4bd-18a2cf9fc199" containerName="nova-cell1-conductor-db-sync" Feb 02 10:59:51 crc kubenswrapper[4782]: I0202 10:59:51.159584 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="baa0ea9b-5d59-4094-a259-2f841d40db2c" containerName="nova-manage" Feb 02 10:59:51 crc kubenswrapper[4782]: I0202 10:59:51.159605 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="aeac5df4-fc17-4840-b777-4b20a71f603b" containerName="dnsmasq-dns" Feb 02 10:59:51 crc kubenswrapper[4782]: I0202 10:59:51.159616 4782 
memory_manager.go:354] "RemoveStaleState removing state" podUID="5d87918f-7c3d-4932-a4bd-18a2cf9fc199" containerName="nova-cell1-conductor-db-sync" Feb 02 10:59:51 crc kubenswrapper[4782]: I0202 10:59:51.160122 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 02 10:59:51 crc kubenswrapper[4782]: I0202 10:59:51.164292 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 02 10:59:51 crc kubenswrapper[4782]: I0202 10:59:51.176183 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 02 10:59:51 crc kubenswrapper[4782]: I0202 10:59:51.221195 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 02 10:59:51 crc kubenswrapper[4782]: I0202 10:59:51.323174 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8598880-0557-414a-bbb1-b5d0cdce0738-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"c8598880-0557-414a-bbb1-b5d0cdce0738\") " pod="openstack/nova-cell1-conductor-0" Feb 02 10:59:51 crc kubenswrapper[4782]: I0202 10:59:51.323297 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8598880-0557-414a-bbb1-b5d0cdce0738-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"c8598880-0557-414a-bbb1-b5d0cdce0738\") " pod="openstack/nova-cell1-conductor-0" Feb 02 10:59:51 crc kubenswrapper[4782]: I0202 10:59:51.323794 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-st2pf\" (UniqueName: \"kubernetes.io/projected/c8598880-0557-414a-bbb1-b5d0cdce0738-kube-api-access-st2pf\") pod \"nova-cell1-conductor-0\" (UID: \"c8598880-0557-414a-bbb1-b5d0cdce0738\") " pod="openstack/nova-cell1-conductor-0" Feb 02 10:59:51 crc kubenswrapper[4782]: I0202 10:59:51.424906 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-st2pf\" (UniqueName: \"kubernetes.io/projected/c8598880-0557-414a-bbb1-b5d0cdce0738-kube-api-access-st2pf\") pod \"nova-cell1-conductor-0\" (UID: \"c8598880-0557-414a-bbb1-b5d0cdce0738\") " pod="openstack/nova-cell1-conductor-0" Feb 02 10:59:51 crc kubenswrapper[4782]: I0202 10:59:51.425007 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8598880-0557-414a-bbb1-b5d0cdce0738-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"c8598880-0557-414a-bbb1-b5d0cdce0738\") " pod="openstack/nova-cell1-conductor-0" Feb 02 10:59:51 crc kubenswrapper[4782]: I0202 10:59:51.425050 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8598880-0557-414a-bbb1-b5d0cdce0738-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"c8598880-0557-414a-bbb1-b5d0cdce0738\") " pod="openstack/nova-cell1-conductor-0" Feb 02 10:59:51 crc kubenswrapper[4782]: I0202 10:59:51.430485 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8598880-0557-414a-bbb1-b5d0cdce0738-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"c8598880-0557-414a-bbb1-b5d0cdce0738\") " pod="openstack/nova-cell1-conductor-0" Feb 02 10:59:51 crc 
kubenswrapper[4782]: I0202 10:59:51.451747 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-st2pf\" (UniqueName: \"kubernetes.io/projected/c8598880-0557-414a-bbb1-b5d0cdce0738-kube-api-access-st2pf\") pod \"nova-cell1-conductor-0\" (UID: \"c8598880-0557-414a-bbb1-b5d0cdce0738\") " pod="openstack/nova-cell1-conductor-0" Feb 02 10:59:51 crc kubenswrapper[4782]: I0202 10:59:51.461433 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8598880-0557-414a-bbb1-b5d0cdce0738-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"c8598880-0557-414a-bbb1-b5d0cdce0738\") " pod="openstack/nova-cell1-conductor-0" Feb 02 10:59:51 crc kubenswrapper[4782]: I0202 10:59:51.477087 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 02 10:59:51 crc kubenswrapper[4782]: I0202 10:59:51.557179 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 02 10:59:51 crc kubenswrapper[4782]: I0202 10:59:51.633992 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bada6f8-1a58-4afa-bbc5-7ad40ca5987a-combined-ca-bundle\") pod \"1bada6f8-1a58-4afa-bbc5-7ad40ca5987a\" (UID: \"1bada6f8-1a58-4afa-bbc5-7ad40ca5987a\") " Feb 02 10:59:51 crc kubenswrapper[4782]: I0202 10:59:51.634055 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2nrw5\" (UniqueName: \"kubernetes.io/projected/1bada6f8-1a58-4afa-bbc5-7ad40ca5987a-kube-api-access-2nrw5\") pod \"1bada6f8-1a58-4afa-bbc5-7ad40ca5987a\" (UID: \"1bada6f8-1a58-4afa-bbc5-7ad40ca5987a\") " Feb 02 10:59:51 crc kubenswrapper[4782]: I0202 10:59:51.634139 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1bada6f8-1a58-4afa-bbc5-7ad40ca5987a-config-data\") pod \"1bada6f8-1a58-4afa-bbc5-7ad40ca5987a\" (UID: \"1bada6f8-1a58-4afa-bbc5-7ad40ca5987a\") " Feb 02 10:59:51 crc kubenswrapper[4782]: I0202 10:59:51.634197 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1bada6f8-1a58-4afa-bbc5-7ad40ca5987a-logs\") pod \"1bada6f8-1a58-4afa-bbc5-7ad40ca5987a\" (UID: \"1bada6f8-1a58-4afa-bbc5-7ad40ca5987a\") " Feb 02 10:59:51 crc kubenswrapper[4782]: I0202 10:59:51.634234 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/1bada6f8-1a58-4afa-bbc5-7ad40ca5987a-nova-metadata-tls-certs\") pod \"1bada6f8-1a58-4afa-bbc5-7ad40ca5987a\" (UID: \"1bada6f8-1a58-4afa-bbc5-7ad40ca5987a\") " Feb 02 10:59:51 crc kubenswrapper[4782]: I0202 10:59:51.640133 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1bada6f8-1a58-4afa-bbc5-7ad40ca5987a-logs" (OuterVolumeSpecName: "logs") pod "1bada6f8-1a58-4afa-bbc5-7ad40ca5987a" (UID: "1bada6f8-1a58-4afa-bbc5-7ad40ca5987a"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:59:51 crc kubenswrapper[4782]: I0202 10:59:51.641896 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bada6f8-1a58-4afa-bbc5-7ad40ca5987a-kube-api-access-2nrw5" (OuterVolumeSpecName: "kube-api-access-2nrw5") pod "1bada6f8-1a58-4afa-bbc5-7ad40ca5987a" (UID: "1bada6f8-1a58-4afa-bbc5-7ad40ca5987a"). InnerVolumeSpecName "kube-api-access-2nrw5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:59:51 crc kubenswrapper[4782]: I0202 10:59:51.680409 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bada6f8-1a58-4afa-bbc5-7ad40ca5987a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1bada6f8-1a58-4afa-bbc5-7ad40ca5987a" (UID: "1bada6f8-1a58-4afa-bbc5-7ad40ca5987a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:59:51 crc kubenswrapper[4782]: I0202 10:59:51.692306 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bada6f8-1a58-4afa-bbc5-7ad40ca5987a-config-data" (OuterVolumeSpecName: "config-data") pod "1bada6f8-1a58-4afa-bbc5-7ad40ca5987a" (UID: "1bada6f8-1a58-4afa-bbc5-7ad40ca5987a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:59:51 crc kubenswrapper[4782]: I0202 10:59:51.703191 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bada6f8-1a58-4afa-bbc5-7ad40ca5987a-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "1bada6f8-1a58-4afa-bbc5-7ad40ca5987a" (UID: "1bada6f8-1a58-4afa-bbc5-7ad40ca5987a"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:59:51 crc kubenswrapper[4782]: I0202 10:59:51.736750 4782 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bada6f8-1a58-4afa-bbc5-7ad40ca5987a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 10:59:51 crc kubenswrapper[4782]: I0202 10:59:51.736785 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2nrw5\" (UniqueName: \"kubernetes.io/projected/1bada6f8-1a58-4afa-bbc5-7ad40ca5987a-kube-api-access-2nrw5\") on node \"crc\" DevicePath \"\"" Feb 02 10:59:51 crc kubenswrapper[4782]: I0202 10:59:51.736798 4782 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1bada6f8-1a58-4afa-bbc5-7ad40ca5987a-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 10:59:51 crc kubenswrapper[4782]: I0202 10:59:51.736806 4782 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1bada6f8-1a58-4afa-bbc5-7ad40ca5987a-logs\") on node \"crc\" DevicePath \"\"" Feb 02 10:59:51 crc kubenswrapper[4782]: I0202 10:59:51.736815 4782 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/1bada6f8-1a58-4afa-bbc5-7ad40ca5987a-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 10:59:51 crc kubenswrapper[4782]: I0202 10:59:51.999113 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 02 10:59:52 crc kubenswrapper[4782]: I0202 10:59:52.079059 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"c8598880-0557-414a-bbb1-b5d0cdce0738","Type":"ContainerStarted","Data":"f91554cf092f64ca8b301e7cb53b71665640b08e814fdf84823f2b4944b60c90"} Feb 02 10:59:52 crc kubenswrapper[4782]: I0202 10:59:52.080635 4782 generic.go:334] "Generic (PLEG): container finished" podID="1bada6f8-1a58-4afa-bbc5-7ad40ca5987a" containerID="9b8ddb9ce5a9ed6520d8b0488975e1088031145ea5915cef03d7a33718914191" exitCode=0 Feb 02 10:59:52 crc kubenswrapper[4782]: I0202 10:59:52.080680 4782 generic.go:334] "Generic (PLEG): container finished" podID="1bada6f8-1a58-4afa-bbc5-7ad40ca5987a" containerID="5d4700db537e0031b1df399b735a9bf00fe44c49223324b4f87ba2a5ddaf6394" exitCode=143 Feb 02 10:59:52 crc kubenswrapper[4782]: I0202 10:59:52.080701 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1bada6f8-1a58-4afa-bbc5-7ad40ca5987a","Type":"ContainerDied","Data":"9b8ddb9ce5a9ed6520d8b0488975e1088031145ea5915cef03d7a33718914191"} Feb 02 10:59:52 crc kubenswrapper[4782]: I0202 10:59:52.080726 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1bada6f8-1a58-4afa-bbc5-7ad40ca5987a","Type":"ContainerDied","Data":"5d4700db537e0031b1df399b735a9bf00fe44c49223324b4f87ba2a5ddaf6394"} Feb 02 10:59:52 crc kubenswrapper[4782]: I0202 10:59:52.080736 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1bada6f8-1a58-4afa-bbc5-7ad40ca5987a","Type":"ContainerDied","Data":"ea6a6c08bd3087f1ba9f590b152caf3e7fec76c7a3cd69736de0e388760f5410"} Feb 02 10:59:52 crc kubenswrapper[4782]: I0202 10:59:52.080751 4782 scope.go:117] "RemoveContainer" containerID="9b8ddb9ce5a9ed6520d8b0488975e1088031145ea5915cef03d7a33718914191" Feb 02 10:59:52 crc kubenswrapper[4782]: I0202 10:59:52.080860 4782 util.go:48] "No ready sandbox 
for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 02 10:59:52 crc kubenswrapper[4782]: I0202 10:59:52.130450 4782 scope.go:117] "RemoveContainer" containerID="5d4700db537e0031b1df399b735a9bf00fe44c49223324b4f87ba2a5ddaf6394" Feb 02 10:59:52 crc kubenswrapper[4782]: I0202 10:59:52.157879 4782 scope.go:117] "RemoveContainer" containerID="9b8ddb9ce5a9ed6520d8b0488975e1088031145ea5915cef03d7a33718914191" Feb 02 10:59:52 crc kubenswrapper[4782]: E0202 10:59:52.161170 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9b8ddb9ce5a9ed6520d8b0488975e1088031145ea5915cef03d7a33718914191\": container with ID starting with 9b8ddb9ce5a9ed6520d8b0488975e1088031145ea5915cef03d7a33718914191 not found: ID does not exist" containerID="9b8ddb9ce5a9ed6520d8b0488975e1088031145ea5915cef03d7a33718914191" Feb 02 10:59:52 crc kubenswrapper[4782]: I0202 10:59:52.161218 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b8ddb9ce5a9ed6520d8b0488975e1088031145ea5915cef03d7a33718914191"} err="failed to get container status \"9b8ddb9ce5a9ed6520d8b0488975e1088031145ea5915cef03d7a33718914191\": rpc error: code = NotFound desc = could not find container \"9b8ddb9ce5a9ed6520d8b0488975e1088031145ea5915cef03d7a33718914191\": container with ID starting with 9b8ddb9ce5a9ed6520d8b0488975e1088031145ea5915cef03d7a33718914191 not found: ID does not exist" Feb 02 10:59:52 crc kubenswrapper[4782]: I0202 10:59:52.161248 4782 scope.go:117] "RemoveContainer" containerID="5d4700db537e0031b1df399b735a9bf00fe44c49223324b4f87ba2a5ddaf6394" Feb 02 10:59:52 crc kubenswrapper[4782]: E0202 10:59:52.161610 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d4700db537e0031b1df399b735a9bf00fe44c49223324b4f87ba2a5ddaf6394\": container with ID starting with 5d4700db537e0031b1df399b735a9bf00fe44c49223324b4f87ba2a5ddaf6394 not found: ID does not exist" containerID="5d4700db537e0031b1df399b735a9bf00fe44c49223324b4f87ba2a5ddaf6394" Feb 02 10:59:52 crc kubenswrapper[4782]: I0202 10:59:52.161684 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d4700db537e0031b1df399b735a9bf00fe44c49223324b4f87ba2a5ddaf6394"} err="failed to get container status \"5d4700db537e0031b1df399b735a9bf00fe44c49223324b4f87ba2a5ddaf6394\": rpc error: code = NotFound desc = could not find container \"5d4700db537e0031b1df399b735a9bf00fe44c49223324b4f87ba2a5ddaf6394\": container with ID starting with 5d4700db537e0031b1df399b735a9bf00fe44c49223324b4f87ba2a5ddaf6394 not found: ID does not exist" Feb 02 10:59:52 crc kubenswrapper[4782]: I0202 10:59:52.161751 4782 scope.go:117] "RemoveContainer" containerID="9b8ddb9ce5a9ed6520d8b0488975e1088031145ea5915cef03d7a33718914191" Feb 02 10:59:52 crc kubenswrapper[4782]: I0202 10:59:52.163586 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b8ddb9ce5a9ed6520d8b0488975e1088031145ea5915cef03d7a33718914191"} err="failed to get container status \"9b8ddb9ce5a9ed6520d8b0488975e1088031145ea5915cef03d7a33718914191\": rpc error: code = NotFound desc = could not find container \"9b8ddb9ce5a9ed6520d8b0488975e1088031145ea5915cef03d7a33718914191\": container with ID starting with 9b8ddb9ce5a9ed6520d8b0488975e1088031145ea5915cef03d7a33718914191 not found: ID does not exist" Feb 02 10:59:52 crc kubenswrapper[4782]: I0202 
10:59:52.163621 4782 scope.go:117] "RemoveContainer" containerID="5d4700db537e0031b1df399b735a9bf00fe44c49223324b4f87ba2a5ddaf6394" Feb 02 10:59:52 crc kubenswrapper[4782]: I0202 10:59:52.164016 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d4700db537e0031b1df399b735a9bf00fe44c49223324b4f87ba2a5ddaf6394"} err="failed to get container status \"5d4700db537e0031b1df399b735a9bf00fe44c49223324b4f87ba2a5ddaf6394\": rpc error: code = NotFound desc = could not find container \"5d4700db537e0031b1df399b735a9bf00fe44c49223324b4f87ba2a5ddaf6394\": container with ID starting with 5d4700db537e0031b1df399b735a9bf00fe44c49223324b4f87ba2a5ddaf6394 not found: ID does not exist" Feb 02 10:59:52 crc kubenswrapper[4782]: E0202 10:59:52.164608 4782 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d3184b24b5105f0e162b86501b46a067a24ac88e5fac376e51713b411638a1c3" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 02 10:59:52 crc kubenswrapper[4782]: I0202 10:59:52.167769 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 10:59:52 crc kubenswrapper[4782]: E0202 10:59:52.167863 4782 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d3184b24b5105f0e162b86501b46a067a24ac88e5fac376e51713b411638a1c3" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 02 10:59:52 crc kubenswrapper[4782]: E0202 10:59:52.169540 4782 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d3184b24b5105f0e162b86501b46a067a24ac88e5fac376e51713b411638a1c3" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 02 10:59:52 crc kubenswrapper[4782]: E0202 10:59:52.169619 4782 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="e271d5c6-aeb4-4181-8712-3c80349c7900" containerName="nova-scheduler-scheduler" Feb 02 10:59:52 crc kubenswrapper[4782]: I0202 10:59:52.174850 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 10:59:52 crc kubenswrapper[4782]: I0202 10:59:52.202483 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 02 10:59:52 crc kubenswrapper[4782]: E0202 10:59:52.202933 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1bada6f8-1a58-4afa-bbc5-7ad40ca5987a" containerName="nova-metadata-metadata" Feb 02 10:59:52 crc kubenswrapper[4782]: I0202 10:59:52.202954 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="1bada6f8-1a58-4afa-bbc5-7ad40ca5987a" containerName="nova-metadata-metadata" Feb 02 10:59:52 crc kubenswrapper[4782]: E0202 10:59:52.202980 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1bada6f8-1a58-4afa-bbc5-7ad40ca5987a" containerName="nova-metadata-log" Feb 02 10:59:52 crc kubenswrapper[4782]: I0202 10:59:52.202988 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="1bada6f8-1a58-4afa-bbc5-7ad40ca5987a" containerName="nova-metadata-log" Feb 02 
10:59:52 crc kubenswrapper[4782]: I0202 10:59:52.203196 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="1bada6f8-1a58-4afa-bbc5-7ad40ca5987a" containerName="nova-metadata-metadata" Feb 02 10:59:52 crc kubenswrapper[4782]: I0202 10:59:52.203220 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="1bada6f8-1a58-4afa-bbc5-7ad40ca5987a" containerName="nova-metadata-log" Feb 02 10:59:52 crc kubenswrapper[4782]: I0202 10:59:52.204366 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 02 10:59:52 crc kubenswrapper[4782]: I0202 10:59:52.207983 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 02 10:59:52 crc kubenswrapper[4782]: I0202 10:59:52.208319 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 02 10:59:52 crc kubenswrapper[4782]: I0202 10:59:52.221247 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 10:59:52 crc kubenswrapper[4782]: I0202 10:59:52.246047 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b026c6f-1f4b-4171-a2dc-9a3b17a0a8f0-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"6b026c6f-1f4b-4171-a2dc-9a3b17a0a8f0\") " pod="openstack/nova-metadata-0" Feb 02 10:59:52 crc kubenswrapper[4782]: I0202 10:59:52.246101 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvq2b\" (UniqueName: \"kubernetes.io/projected/6b026c6f-1f4b-4171-a2dc-9a3b17a0a8f0-kube-api-access-wvq2b\") pod \"nova-metadata-0\" (UID: \"6b026c6f-1f4b-4171-a2dc-9a3b17a0a8f0\") " pod="openstack/nova-metadata-0" Feb 02 10:59:52 crc kubenswrapper[4782]: I0202 10:59:52.246161 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6b026c6f-1f4b-4171-a2dc-9a3b17a0a8f0-logs\") pod \"nova-metadata-0\" (UID: \"6b026c6f-1f4b-4171-a2dc-9a3b17a0a8f0\") " pod="openstack/nova-metadata-0" Feb 02 10:59:52 crc kubenswrapper[4782]: I0202 10:59:52.246198 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b026c6f-1f4b-4171-a2dc-9a3b17a0a8f0-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"6b026c6f-1f4b-4171-a2dc-9a3b17a0a8f0\") " pod="openstack/nova-metadata-0" Feb 02 10:59:52 crc kubenswrapper[4782]: I0202 10:59:52.246263 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b026c6f-1f4b-4171-a2dc-9a3b17a0a8f0-config-data\") pod \"nova-metadata-0\" (UID: \"6b026c6f-1f4b-4171-a2dc-9a3b17a0a8f0\") " pod="openstack/nova-metadata-0" Feb 02 10:59:52 crc kubenswrapper[4782]: I0202 10:59:52.347872 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b026c6f-1f4b-4171-a2dc-9a3b17a0a8f0-config-data\") pod \"nova-metadata-0\" (UID: \"6b026c6f-1f4b-4171-a2dc-9a3b17a0a8f0\") " pod="openstack/nova-metadata-0" Feb 02 10:59:52 crc kubenswrapper[4782]: I0202 10:59:52.347984 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/6b026c6f-1f4b-4171-a2dc-9a3b17a0a8f0-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"6b026c6f-1f4b-4171-a2dc-9a3b17a0a8f0\") " pod="openstack/nova-metadata-0" Feb 02 10:59:52 crc kubenswrapper[4782]: I0202 10:59:52.348013 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wvq2b\" (UniqueName: \"kubernetes.io/projected/6b026c6f-1f4b-4171-a2dc-9a3b17a0a8f0-kube-api-access-wvq2b\") pod \"nova-metadata-0\" (UID: \"6b026c6f-1f4b-4171-a2dc-9a3b17a0a8f0\") " pod="openstack/nova-metadata-0" Feb 02 10:59:52 crc kubenswrapper[4782]: I0202 10:59:52.348062 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6b026c6f-1f4b-4171-a2dc-9a3b17a0a8f0-logs\") pod \"nova-metadata-0\" (UID: \"6b026c6f-1f4b-4171-a2dc-9a3b17a0a8f0\") " pod="openstack/nova-metadata-0" Feb 02 10:59:52 crc kubenswrapper[4782]: I0202 10:59:52.348100 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b026c6f-1f4b-4171-a2dc-9a3b17a0a8f0-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"6b026c6f-1f4b-4171-a2dc-9a3b17a0a8f0\") " pod="openstack/nova-metadata-0" Feb 02 10:59:52 crc kubenswrapper[4782]: I0202 10:59:52.348590 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6b026c6f-1f4b-4171-a2dc-9a3b17a0a8f0-logs\") pod \"nova-metadata-0\" (UID: \"6b026c6f-1f4b-4171-a2dc-9a3b17a0a8f0\") " pod="openstack/nova-metadata-0" Feb 02 10:59:52 crc kubenswrapper[4782]: I0202 10:59:52.352574 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b026c6f-1f4b-4171-a2dc-9a3b17a0a8f0-config-data\") pod \"nova-metadata-0\" (UID: \"6b026c6f-1f4b-4171-a2dc-9a3b17a0a8f0\") " pod="openstack/nova-metadata-0" Feb 02 10:59:52 crc kubenswrapper[4782]: I0202 10:59:52.355549 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b026c6f-1f4b-4171-a2dc-9a3b17a0a8f0-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"6b026c6f-1f4b-4171-a2dc-9a3b17a0a8f0\") " pod="openstack/nova-metadata-0" Feb 02 10:59:52 crc kubenswrapper[4782]: I0202 10:59:52.355981 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b026c6f-1f4b-4171-a2dc-9a3b17a0a8f0-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"6b026c6f-1f4b-4171-a2dc-9a3b17a0a8f0\") " pod="openstack/nova-metadata-0" Feb 02 10:59:52 crc kubenswrapper[4782]: I0202 10:59:52.367319 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wvq2b\" (UniqueName: \"kubernetes.io/projected/6b026c6f-1f4b-4171-a2dc-9a3b17a0a8f0-kube-api-access-wvq2b\") pod \"nova-metadata-0\" (UID: \"6b026c6f-1f4b-4171-a2dc-9a3b17a0a8f0\") " pod="openstack/nova-metadata-0" Feb 02 10:59:52 crc kubenswrapper[4782]: I0202 10:59:52.545416 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 02 10:59:52 crc kubenswrapper[4782]: I0202 10:59:52.833440 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bada6f8-1a58-4afa-bbc5-7ad40ca5987a" path="/var/lib/kubelet/pods/1bada6f8-1a58-4afa-bbc5-7ad40ca5987a/volumes" Feb 02 10:59:52 crc kubenswrapper[4782]: I0202 10:59:52.951707 4782 patch_prober.go:28] interesting pod/machine-config-daemon-bhdgk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 10:59:52 crc kubenswrapper[4782]: I0202 10:59:52.951777 4782 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 10:59:52 crc kubenswrapper[4782]: I0202 10:59:52.951827 4782 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" Feb 02 10:59:52 crc kubenswrapper[4782]: I0202 10:59:52.952478 4782 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"cc93bfcd857ff139ba103c2136bd4c7838f73ea68a2b8fc097a6c493cab92dd0"} pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 10:59:52 crc kubenswrapper[4782]: I0202 10:59:52.952550 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" containerName="machine-config-daemon" containerID="cri-o://cc93bfcd857ff139ba103c2136bd4c7838f73ea68a2b8fc097a6c493cab92dd0" gracePeriod=600 Feb 02 10:59:52 crc kubenswrapper[4782]: I0202 10:59:52.977874 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 10:59:53 crc kubenswrapper[4782]: I0202 10:59:53.007825 4782 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6d97fcdd8f-lp4zt" podUID="aeac5df4-fc17-4840-b777-4b20a71f603b" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.150:5353: i/o timeout" Feb 02 10:59:53 crc kubenswrapper[4782]: I0202 10:59:53.100383 4782 generic.go:334] "Generic (PLEG): container finished" podID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" containerID="cc93bfcd857ff139ba103c2136bd4c7838f73ea68a2b8fc097a6c493cab92dd0" exitCode=0 Feb 02 10:59:53 crc kubenswrapper[4782]: I0202 10:59:53.100520 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" event={"ID":"7919e98f-cc47-4f3c-9c53-6313850ea7b8","Type":"ContainerDied","Data":"cc93bfcd857ff139ba103c2136bd4c7838f73ea68a2b8fc097a6c493cab92dd0"} Feb 02 10:59:53 crc kubenswrapper[4782]: I0202 10:59:53.100794 4782 scope.go:117] "RemoveContainer" containerID="723d0d966296427a3d1b5e2811fbfcf2b8df7a346539a78c1cbaf730d23723a1" Feb 02 10:59:53 crc kubenswrapper[4782]: I0202 10:59:53.103301 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"6b026c6f-1f4b-4171-a2dc-9a3b17a0a8f0","Type":"ContainerStarted","Data":"b659fbd9e60731442fb339dcfdf8314f4d7167c7f486099f43c6dfe96912afd1"} Feb 02 10:59:53 crc kubenswrapper[4782]: I0202 10:59:53.108902 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"c8598880-0557-414a-bbb1-b5d0cdce0738","Type":"ContainerStarted","Data":"da6115dcfc2a1580809cd167f7aa760cd4435d3c231e18574a892ed5f9cab1c2"} Feb 02 10:59:53 crc kubenswrapper[4782]: I0202 10:59:53.109962 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Feb 02 10:59:53 crc kubenswrapper[4782]: I0202 10:59:53.136436 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.13641828 podStartE2EDuration="2.13641828s" podCreationTimestamp="2026-02-02 10:59:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:59:53.133175397 +0000 UTC m=+1273.017368113" watchObservedRunningTime="2026-02-02 10:59:53.13641828 +0000 UTC m=+1273.020610996" Feb 02 10:59:54 crc kubenswrapper[4782]: I0202 10:59:54.118308 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6b026c6f-1f4b-4171-a2dc-9a3b17a0a8f0","Type":"ContainerStarted","Data":"ea761e00f637478bb8b568bb6c81afa34c3f7a4ec22a7b59cb0694a1f3b66f1f"} Feb 02 10:59:54 crc kubenswrapper[4782]: I0202 10:59:54.120014 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6b026c6f-1f4b-4171-a2dc-9a3b17a0a8f0","Type":"ContainerStarted","Data":"12de97960a6c2463ee72adbbcb4dfcd44c67437415fdffd257c34188bef89140"} Feb 02 10:59:54 crc kubenswrapper[4782]: I0202 10:59:54.120634 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" event={"ID":"7919e98f-cc47-4f3c-9c53-6313850ea7b8","Type":"ContainerStarted","Data":"9e7f3d9f7d6457b5c614828f06a2a5456dc06adf6cf2e31e022d381663249dca"} Feb 02 10:59:54 crc kubenswrapper[4782]: I0202 10:59:54.147856 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.147838583 podStartE2EDuration="2.147838583s" podCreationTimestamp="2026-02-02 10:59:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:59:54.140907184 +0000 UTC m=+1274.025099900" watchObservedRunningTime="2026-02-02 10:59:54.147838583 +0000 UTC m=+1274.032031299" Feb 02 10:59:55 crc kubenswrapper[4782]: I0202 10:59:55.107135 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 02 10:59:55 crc kubenswrapper[4782]: I0202 10:59:55.137457 4782 generic.go:334] "Generic (PLEG): container finished" podID="e271d5c6-aeb4-4181-8712-3c80349c7900" containerID="d3184b24b5105f0e162b86501b46a067a24ac88e5fac376e51713b411638a1c3" exitCode=0 Feb 02 10:59:55 crc kubenswrapper[4782]: I0202 10:59:55.137877 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"e271d5c6-aeb4-4181-8712-3c80349c7900","Type":"ContainerDied","Data":"d3184b24b5105f0e162b86501b46a067a24ac88e5fac376e51713b411638a1c3"} Feb 02 10:59:55 crc kubenswrapper[4782]: I0202 10:59:55.137905 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"e271d5c6-aeb4-4181-8712-3c80349c7900","Type":"ContainerDied","Data":"6984fb97283161100b3f2ea9d0020d436f7eec946eff63636e874e1d070b241f"} Feb 02 10:59:55 crc kubenswrapper[4782]: I0202 10:59:55.137921 4782 scope.go:117] "RemoveContainer" containerID="d3184b24b5105f0e162b86501b46a067a24ac88e5fac376e51713b411638a1c3" Feb 02 10:59:55 crc kubenswrapper[4782]: I0202 10:59:55.138024 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 02 10:59:55 crc kubenswrapper[4782]: I0202 10:59:55.168541 4782 generic.go:334] "Generic (PLEG): container finished" podID="f9890573-cca4-4bd8-8c38-4d4e8bff9dc9" containerID="2a94a2d8a17e25a35687317e70c9699ed67417079989757b00a53f6eefa2e744" exitCode=0 Feb 02 10:59:55 crc kubenswrapper[4782]: I0202 10:59:55.168580 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f9890573-cca4-4bd8-8c38-4d4e8bff9dc9","Type":"ContainerDied","Data":"2a94a2d8a17e25a35687317e70c9699ed67417079989757b00a53f6eefa2e744"} Feb 02 10:59:55 crc kubenswrapper[4782]: I0202 10:59:55.208667 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 02 10:59:55 crc kubenswrapper[4782]: I0202 10:59:55.208874 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="a1ccfccc-4ba0-4523-97ca-1d5b54034fd1" containerName="kube-state-metrics" containerID="cri-o://74780bfda6cf8379cc80c9697e593a95529dcae7915afebf7ba3cff8c139be7d" gracePeriod=30 Feb 02 10:59:55 crc kubenswrapper[4782]: I0202 10:59:55.211846 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hvf5g\" (UniqueName: \"kubernetes.io/projected/e271d5c6-aeb4-4181-8712-3c80349c7900-kube-api-access-hvf5g\") pod \"e271d5c6-aeb4-4181-8712-3c80349c7900\" (UID: \"e271d5c6-aeb4-4181-8712-3c80349c7900\") " Feb 02 10:59:55 crc kubenswrapper[4782]: I0202 10:59:55.212057 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e271d5c6-aeb4-4181-8712-3c80349c7900-config-data\") pod \"e271d5c6-aeb4-4181-8712-3c80349c7900\" (UID: \"e271d5c6-aeb4-4181-8712-3c80349c7900\") " Feb 02 10:59:55 crc kubenswrapper[4782]: I0202 10:59:55.212094 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e271d5c6-aeb4-4181-8712-3c80349c7900-combined-ca-bundle\") pod \"e271d5c6-aeb4-4181-8712-3c80349c7900\" (UID: \"e271d5c6-aeb4-4181-8712-3c80349c7900\") " Feb 02 10:59:55 crc kubenswrapper[4782]: I0202 10:59:55.229084 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 02 10:59:55 crc kubenswrapper[4782]: I0202 10:59:55.233727 4782 scope.go:117] "RemoveContainer" containerID="d3184b24b5105f0e162b86501b46a067a24ac88e5fac376e51713b411638a1c3" Feb 02 10:59:55 crc kubenswrapper[4782]: E0202 10:59:55.242974 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d3184b24b5105f0e162b86501b46a067a24ac88e5fac376e51713b411638a1c3\": container with ID starting with d3184b24b5105f0e162b86501b46a067a24ac88e5fac376e51713b411638a1c3 not found: ID does not exist" containerID="d3184b24b5105f0e162b86501b46a067a24ac88e5fac376e51713b411638a1c3" Feb 02 10:59:55 crc kubenswrapper[4782]: I0202 10:59:55.243033 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d3184b24b5105f0e162b86501b46a067a24ac88e5fac376e51713b411638a1c3"} err="failed to get container status \"d3184b24b5105f0e162b86501b46a067a24ac88e5fac376e51713b411638a1c3\": rpc error: code = NotFound desc = could not find container \"d3184b24b5105f0e162b86501b46a067a24ac88e5fac376e51713b411638a1c3\": container with ID starting with d3184b24b5105f0e162b86501b46a067a24ac88e5fac376e51713b411638a1c3 not found: ID does not exist" Feb 02 10:59:55 crc kubenswrapper[4782]: I0202 10:59:55.254974 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e271d5c6-aeb4-4181-8712-3c80349c7900-kube-api-access-hvf5g" (OuterVolumeSpecName: "kube-api-access-hvf5g") pod "e271d5c6-aeb4-4181-8712-3c80349c7900" (UID: "e271d5c6-aeb4-4181-8712-3c80349c7900"). InnerVolumeSpecName "kube-api-access-hvf5g". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:59:55 crc kubenswrapper[4782]: I0202 10:59:55.269023 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e271d5c6-aeb4-4181-8712-3c80349c7900-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e271d5c6-aeb4-4181-8712-3c80349c7900" (UID: "e271d5c6-aeb4-4181-8712-3c80349c7900"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:59:55 crc kubenswrapper[4782]: I0202 10:59:55.281470 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e271d5c6-aeb4-4181-8712-3c80349c7900-config-data" (OuterVolumeSpecName: "config-data") pod "e271d5c6-aeb4-4181-8712-3c80349c7900" (UID: "e271d5c6-aeb4-4181-8712-3c80349c7900"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:59:55 crc kubenswrapper[4782]: I0202 10:59:55.314398 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hvf5g\" (UniqueName: \"kubernetes.io/projected/e271d5c6-aeb4-4181-8712-3c80349c7900-kube-api-access-hvf5g\") on node \"crc\" DevicePath \"\"" Feb 02 10:59:55 crc kubenswrapper[4782]: I0202 10:59:55.314427 4782 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e271d5c6-aeb4-4181-8712-3c80349c7900-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 10:59:55 crc kubenswrapper[4782]: I0202 10:59:55.314436 4782 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e271d5c6-aeb4-4181-8712-3c80349c7900-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 10:59:55 crc kubenswrapper[4782]: I0202 10:59:55.415376 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-blg7j\" (UniqueName: \"kubernetes.io/projected/f9890573-cca4-4bd8-8c38-4d4e8bff9dc9-kube-api-access-blg7j\") pod \"f9890573-cca4-4bd8-8c38-4d4e8bff9dc9\" (UID: \"f9890573-cca4-4bd8-8c38-4d4e8bff9dc9\") " Feb 02 10:59:55 crc kubenswrapper[4782]: I0202 10:59:55.415454 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9890573-cca4-4bd8-8c38-4d4e8bff9dc9-combined-ca-bundle\") pod \"f9890573-cca4-4bd8-8c38-4d4e8bff9dc9\" (UID: \"f9890573-cca4-4bd8-8c38-4d4e8bff9dc9\") " Feb 02 10:59:55 crc kubenswrapper[4782]: I0202 10:59:55.415619 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9890573-cca4-4bd8-8c38-4d4e8bff9dc9-config-data\") pod \"f9890573-cca4-4bd8-8c38-4d4e8bff9dc9\" (UID: \"f9890573-cca4-4bd8-8c38-4d4e8bff9dc9\") " Feb 02 10:59:55 crc kubenswrapper[4782]: I0202 10:59:55.415743 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f9890573-cca4-4bd8-8c38-4d4e8bff9dc9-logs\") pod \"f9890573-cca4-4bd8-8c38-4d4e8bff9dc9\" (UID: \"f9890573-cca4-4bd8-8c38-4d4e8bff9dc9\") " Feb 02 10:59:55 crc kubenswrapper[4782]: I0202 10:59:55.416597 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f9890573-cca4-4bd8-8c38-4d4e8bff9dc9-logs" (OuterVolumeSpecName: "logs") pod "f9890573-cca4-4bd8-8c38-4d4e8bff9dc9" (UID: "f9890573-cca4-4bd8-8c38-4d4e8bff9dc9"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 10:59:55 crc kubenswrapper[4782]: I0202 10:59:55.421015 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9890573-cca4-4bd8-8c38-4d4e8bff9dc9-kube-api-access-blg7j" (OuterVolumeSpecName: "kube-api-access-blg7j") pod "f9890573-cca4-4bd8-8c38-4d4e8bff9dc9" (UID: "f9890573-cca4-4bd8-8c38-4d4e8bff9dc9"). InnerVolumeSpecName "kube-api-access-blg7j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 10:59:55 crc kubenswrapper[4782]: I0202 10:59:55.452735 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9890573-cca4-4bd8-8c38-4d4e8bff9dc9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f9890573-cca4-4bd8-8c38-4d4e8bff9dc9" (UID: "f9890573-cca4-4bd8-8c38-4d4e8bff9dc9"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:59:55 crc kubenswrapper[4782]: I0202 10:59:55.458267 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9890573-cca4-4bd8-8c38-4d4e8bff9dc9-config-data" (OuterVolumeSpecName: "config-data") pod "f9890573-cca4-4bd8-8c38-4d4e8bff9dc9" (UID: "f9890573-cca4-4bd8-8c38-4d4e8bff9dc9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 10:59:55 crc kubenswrapper[4782]: I0202 10:59:55.490061 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 02 10:59:55 crc kubenswrapper[4782]: I0202 10:59:55.497486 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 02 10:59:55 crc kubenswrapper[4782]: I0202 10:59:55.517320 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 02 10:59:55 crc kubenswrapper[4782]: I0202 10:59:55.517706 4782 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9890573-cca4-4bd8-8c38-4d4e8bff9dc9-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 10:59:55 crc kubenswrapper[4782]: I0202 10:59:55.517732 4782 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f9890573-cca4-4bd8-8c38-4d4e8bff9dc9-logs\") on node \"crc\" DevicePath \"\"" Feb 02 10:59:55 crc kubenswrapper[4782]: I0202 10:59:55.517744 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-blg7j\" (UniqueName: \"kubernetes.io/projected/f9890573-cca4-4bd8-8c38-4d4e8bff9dc9-kube-api-access-blg7j\") on node \"crc\" DevicePath \"\"" Feb 02 10:59:55 crc kubenswrapper[4782]: I0202 10:59:55.517754 4782 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9890573-cca4-4bd8-8c38-4d4e8bff9dc9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 10:59:55 crc kubenswrapper[4782]: E0202 10:59:55.518178 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9890573-cca4-4bd8-8c38-4d4e8bff9dc9" containerName="nova-api-api" Feb 02 10:59:55 crc kubenswrapper[4782]: I0202 10:59:55.518264 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9890573-cca4-4bd8-8c38-4d4e8bff9dc9" containerName="nova-api-api" Feb 02 10:59:55 crc kubenswrapper[4782]: E0202 10:59:55.518336 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9890573-cca4-4bd8-8c38-4d4e8bff9dc9" containerName="nova-api-log" Feb 02 10:59:55 crc kubenswrapper[4782]: I0202 10:59:55.518409 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9890573-cca4-4bd8-8c38-4d4e8bff9dc9" containerName="nova-api-log" Feb 02 10:59:55 crc kubenswrapper[4782]: E0202 10:59:55.518484 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e271d5c6-aeb4-4181-8712-3c80349c7900" containerName="nova-scheduler-scheduler" Feb 02 10:59:55 crc kubenswrapper[4782]: I0202 10:59:55.518547 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="e271d5c6-aeb4-4181-8712-3c80349c7900" containerName="nova-scheduler-scheduler" Feb 02 10:59:55 crc kubenswrapper[4782]: I0202 10:59:55.518860 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="e271d5c6-aeb4-4181-8712-3c80349c7900" containerName="nova-scheduler-scheduler" Feb 02 10:59:55 crc kubenswrapper[4782]: I0202 10:59:55.518964 4782 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="f9890573-cca4-4bd8-8c38-4d4e8bff9dc9" containerName="nova-api-api" Feb 02 10:59:55 crc kubenswrapper[4782]: I0202 10:59:55.519054 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9890573-cca4-4bd8-8c38-4d4e8bff9dc9" containerName="nova-api-log" Feb 02 10:59:55 crc kubenswrapper[4782]: I0202 10:59:55.519936 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 02 10:59:55 crc kubenswrapper[4782]: I0202 10:59:55.531929 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 02 10:59:55 crc kubenswrapper[4782]: I0202 10:59:55.542008 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 02 10:59:55 crc kubenswrapper[4782]: I0202 10:59:55.619293 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/feecb35c-d2a4-4c9b-8f39-8145f39b332c-config-data\") pod \"nova-scheduler-0\" (UID: \"feecb35c-d2a4-4c9b-8f39-8145f39b332c\") " pod="openstack/nova-scheduler-0" Feb 02 10:59:55 crc kubenswrapper[4782]: I0202 10:59:55.619777 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/feecb35c-d2a4-4c9b-8f39-8145f39b332c-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"feecb35c-d2a4-4c9b-8f39-8145f39b332c\") " pod="openstack/nova-scheduler-0" Feb 02 10:59:55 crc kubenswrapper[4782]: I0202 10:59:55.619986 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wg44r\" (UniqueName: \"kubernetes.io/projected/feecb35c-d2a4-4c9b-8f39-8145f39b332c-kube-api-access-wg44r\") pod \"nova-scheduler-0\" (UID: \"feecb35c-d2a4-4c9b-8f39-8145f39b332c\") " pod="openstack/nova-scheduler-0" Feb 02 10:59:55 crc kubenswrapper[4782]: I0202 10:59:55.718511 4782 util.go:48] "No ready sandbox for pod can be found. 
Feb 02 10:59:55 crc kubenswrapper[4782]: I0202 10:59:55.722689 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/feecb35c-d2a4-4c9b-8f39-8145f39b332c-config-data\") pod \"nova-scheduler-0\" (UID: \"feecb35c-d2a4-4c9b-8f39-8145f39b332c\") " pod="openstack/nova-scheduler-0"
Feb 02 10:59:55 crc kubenswrapper[4782]: I0202 10:59:55.722763 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/feecb35c-d2a4-4c9b-8f39-8145f39b332c-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"feecb35c-d2a4-4c9b-8f39-8145f39b332c\") " pod="openstack/nova-scheduler-0"
Feb 02 10:59:55 crc kubenswrapper[4782]: I0202 10:59:55.722827 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wg44r\" (UniqueName: \"kubernetes.io/projected/feecb35c-d2a4-4c9b-8f39-8145f39b332c-kube-api-access-wg44r\") pod \"nova-scheduler-0\" (UID: \"feecb35c-d2a4-4c9b-8f39-8145f39b332c\") " pod="openstack/nova-scheduler-0"
Feb 02 10:59:55 crc kubenswrapper[4782]: I0202 10:59:55.728039 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/feecb35c-d2a4-4c9b-8f39-8145f39b332c-config-data\") pod \"nova-scheduler-0\" (UID: \"feecb35c-d2a4-4c9b-8f39-8145f39b332c\") " pod="openstack/nova-scheduler-0"
Feb 02 10:59:55 crc kubenswrapper[4782]: I0202 10:59:55.731132 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/feecb35c-d2a4-4c9b-8f39-8145f39b332c-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"feecb35c-d2a4-4c9b-8f39-8145f39b332c\") " pod="openstack/nova-scheduler-0"
Feb 02 10:59:55 crc kubenswrapper[4782]: I0202 10:59:55.755327 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wg44r\" (UniqueName: \"kubernetes.io/projected/feecb35c-d2a4-4c9b-8f39-8145f39b332c-kube-api-access-wg44r\") pod \"nova-scheduler-0\" (UID: \"feecb35c-d2a4-4c9b-8f39-8145f39b332c\") " pod="openstack/nova-scheduler-0"
Feb 02 10:59:55 crc kubenswrapper[4782]: I0202 10:59:55.824329 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b9vk7\" (UniqueName: \"kubernetes.io/projected/a1ccfccc-4ba0-4523-97ca-1d5b54034fd1-kube-api-access-b9vk7\") pod \"a1ccfccc-4ba0-4523-97ca-1d5b54034fd1\" (UID: \"a1ccfccc-4ba0-4523-97ca-1d5b54034fd1\") "
Feb 02 10:59:55 crc kubenswrapper[4782]: I0202 10:59:55.827832 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1ccfccc-4ba0-4523-97ca-1d5b54034fd1-kube-api-access-b9vk7" (OuterVolumeSpecName: "kube-api-access-b9vk7") pod "a1ccfccc-4ba0-4523-97ca-1d5b54034fd1" (UID: "a1ccfccc-4ba0-4523-97ca-1d5b54034fd1"). InnerVolumeSpecName "kube-api-access-b9vk7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 10:59:55 crc kubenswrapper[4782]: I0202 10:59:55.839193 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 02 10:59:55 crc kubenswrapper[4782]: I0202 10:59:55.926233 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b9vk7\" (UniqueName: \"kubernetes.io/projected/a1ccfccc-4ba0-4523-97ca-1d5b54034fd1-kube-api-access-b9vk7\") on node \"crc\" DevicePath \"\""
Feb 02 10:59:56 crc kubenswrapper[4782]: I0202 10:59:56.177670 4782 generic.go:334] "Generic (PLEG): container finished" podID="a1ccfccc-4ba0-4523-97ca-1d5b54034fd1" containerID="74780bfda6cf8379cc80c9697e593a95529dcae7915afebf7ba3cff8c139be7d" exitCode=2
Feb 02 10:59:56 crc kubenswrapper[4782]: I0202 10:59:56.177826 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"a1ccfccc-4ba0-4523-97ca-1d5b54034fd1","Type":"ContainerDied","Data":"74780bfda6cf8379cc80c9697e593a95529dcae7915afebf7ba3cff8c139be7d"}
Feb 02 10:59:56 crc kubenswrapper[4782]: I0202 10:59:56.177931 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"a1ccfccc-4ba0-4523-97ca-1d5b54034fd1","Type":"ContainerDied","Data":"b5731da46b9909f62f299535fa86ed29a8dd25ea43d89f4988425d732dfa7580"}
Feb 02 10:59:56 crc kubenswrapper[4782]: I0202 10:59:56.177950 4782 scope.go:117] "RemoveContainer" containerID="74780bfda6cf8379cc80c9697e593a95529dcae7915afebf7ba3cff8c139be7d"
Feb 02 10:59:56 crc kubenswrapper[4782]: I0202 10:59:56.177853 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Feb 02 10:59:56 crc kubenswrapper[4782]: I0202 10:59:56.189936 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f9890573-cca4-4bd8-8c38-4d4e8bff9dc9","Type":"ContainerDied","Data":"259595f6180ca19619e9584218a7595adeeee90339f24e0dc184cf1c1a9dd391"}
Feb 02 10:59:56 crc kubenswrapper[4782]: I0202 10:59:56.189973 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 02 10:59:56 crc kubenswrapper[4782]: I0202 10:59:56.216845 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"]
Feb 02 10:59:56 crc kubenswrapper[4782]: I0202 10:59:56.227408 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"]
Feb 02 10:59:56 crc kubenswrapper[4782]: I0202 10:59:56.233711 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Feb 02 10:59:56 crc kubenswrapper[4782]: I0202 10:59:56.233876 4782 scope.go:117] "RemoveContainer" containerID="74780bfda6cf8379cc80c9697e593a95529dcae7915afebf7ba3cff8c139be7d"
Feb 02 10:59:56 crc kubenswrapper[4782]: E0202 10:59:56.234357 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"74780bfda6cf8379cc80c9697e593a95529dcae7915afebf7ba3cff8c139be7d\": container with ID starting with 74780bfda6cf8379cc80c9697e593a95529dcae7915afebf7ba3cff8c139be7d not found: ID does not exist" containerID="74780bfda6cf8379cc80c9697e593a95529dcae7915afebf7ba3cff8c139be7d"
Feb 02 10:59:56 crc kubenswrapper[4782]: I0202 10:59:56.234401 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"74780bfda6cf8379cc80c9697e593a95529dcae7915afebf7ba3cff8c139be7d"} err="failed to get container status \"74780bfda6cf8379cc80c9697e593a95529dcae7915afebf7ba3cff8c139be7d\": rpc error: code = NotFound desc = could not find container \"74780bfda6cf8379cc80c9697e593a95529dcae7915afebf7ba3cff8c139be7d\": container with ID starting with 74780bfda6cf8379cc80c9697e593a95529dcae7915afebf7ba3cff8c139be7d not found: ID does not exist"
Feb 02 10:59:56 crc kubenswrapper[4782]: I0202 10:59:56.234425 4782 scope.go:117] "RemoveContainer" containerID="2a94a2d8a17e25a35687317e70c9699ed67417079989757b00a53f6eefa2e744"
Feb 02 10:59:56 crc kubenswrapper[4782]: I0202 10:59:56.243733 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"]
Feb 02 10:59:56 crc kubenswrapper[4782]: I0202 10:59:56.251348 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"]
Feb 02 10:59:56 crc kubenswrapper[4782]: E0202 10:59:56.251797 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1ccfccc-4ba0-4523-97ca-1d5b54034fd1" containerName="kube-state-metrics"
Feb 02 10:59:56 crc kubenswrapper[4782]: I0202 10:59:56.251817 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1ccfccc-4ba0-4523-97ca-1d5b54034fd1" containerName="kube-state-metrics"
Feb 02 10:59:56 crc kubenswrapper[4782]: I0202 10:59:56.251975 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1ccfccc-4ba0-4523-97ca-1d5b54034fd1" containerName="kube-state-metrics"
Feb 02 10:59:56 crc kubenswrapper[4782]: I0202 10:59:56.252616 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
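The E-prefixed cpu_manager/memory_manager lines above record routine cleanup rather than failures: when a pod is recreated, the statefulset pod name stays the same but the UID changes, so on admission of the new pod the resource managers purge assignments still keyed by the deleted UID (and log it at error level, hence the E prefix). A toy model of that purge, not the kubelet's actual data structures; all names below come from the surrounding log:

    // stalestate.go - toy model of the managers' RemoveStaleState: on pod
    // admission, drop assignments whose pod UID is no longer active.
    package main

    import "fmt"

    type assignments map[string]map[string]bool // podUID -> set of containerNames

    func removeStaleState(a assignments, activePods map[string]bool) {
    	for uid, containers := range a {
    		if activePods[uid] {
    			continue
    		}
    		for name := range containers {
    			// the kubelet logs this at error level, hence the E-prefixed lines
    			fmt.Printf("RemoveStaleState: removing container podUID=%q containerName=%q\n", uid, name)
    		}
    		delete(a, uid)
    	}
    }

    func main() {
    	a := assignments{
    		"f9890573-cca4-4bd8-8c38-4d4e8bff9dc9": {"nova-api-api": true, "nova-api-log": true},
    	}
    	// the old nova-api-0 UID is gone once the replacement pod is admitted
    	removeStaleState(a, map[string]bool{"6124b52e-8e75-46f7-a40a-a106f60f15be": true})
    }
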
Feb 02 10:59:56 crc kubenswrapper[4782]: I0202 10:59:56.256807 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc"
Feb 02 10:59:56 crc kubenswrapper[4782]: I0202 10:59:56.257019 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config"
Feb 02 10:59:56 crc kubenswrapper[4782]: I0202 10:59:56.269405 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Feb 02 10:59:56 crc kubenswrapper[4782]: I0202 10:59:56.271190 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 02 10:59:56 crc kubenswrapper[4782]: I0202 10:59:56.275400 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Feb 02 10:59:56 crc kubenswrapper[4782]: I0202 10:59:56.277174 4782 scope.go:117] "RemoveContainer" containerID="8b34fc0928af6681374be91b55f13deb814067009586c26b07a1ad636317c066"
Feb 02 10:59:56 crc kubenswrapper[4782]: I0202 10:59:56.278759 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Feb 02 10:59:56 crc kubenswrapper[4782]: I0202 10:59:56.289164 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Feb 02 10:59:56 crc kubenswrapper[4782]: I0202 10:59:56.377051 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 02 10:59:56 crc kubenswrapper[4782]: I0202 10:59:56.435585 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6124b52e-8e75-46f7-a40a-a106f60f15be-logs\") pod \"nova-api-0\" (UID: \"6124b52e-8e75-46f7-a40a-a106f60f15be\") " pod="openstack/nova-api-0"
Feb 02 10:59:56 crc kubenswrapper[4782]: I0202 10:59:56.435679 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/6953ab25-8ddb-4ab3-b006-116f6ad534db-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"6953ab25-8ddb-4ab3-b006-116f6ad534db\") " pod="openstack/kube-state-metrics-0"
Feb 02 10:59:56 crc kubenswrapper[4782]: I0202 10:59:56.435715 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6953ab25-8ddb-4ab3-b006-116f6ad534db-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"6953ab25-8ddb-4ab3-b006-116f6ad534db\") " pod="openstack/kube-state-metrics-0"
Feb 02 10:59:56 crc kubenswrapper[4782]: I0202 10:59:56.435740 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mbk2\" (UniqueName: \"kubernetes.io/projected/6953ab25-8ddb-4ab3-b006-116f6ad534db-kube-api-access-9mbk2\") pod \"kube-state-metrics-0\" (UID: \"6953ab25-8ddb-4ab3-b006-116f6ad534db\") " pod="openstack/kube-state-metrics-0"
Feb 02 10:59:56 crc kubenswrapper[4782]: I0202 10:59:56.435781 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6124b52e-8e75-46f7-a40a-a106f60f15be-config-data\") pod \"nova-api-0\" (UID: \"6124b52e-8e75-46f7-a40a-a106f60f15be\") " pod="openstack/nova-api-0"
Feb 02 10:59:56 crc kubenswrapper[4782]: I0202 10:59:56.435811 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6124b52e-8e75-46f7-a40a-a106f60f15be-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"6124b52e-8e75-46f7-a40a-a106f60f15be\") " pod="openstack/nova-api-0"
Feb 02 10:59:56 crc kubenswrapper[4782]: I0202 10:59:56.435891 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/6953ab25-8ddb-4ab3-b006-116f6ad534db-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"6953ab25-8ddb-4ab3-b006-116f6ad534db\") " pod="openstack/kube-state-metrics-0"
Feb 02 10:59:56 crc kubenswrapper[4782]: I0202 10:59:56.435914 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttgnd\" (UniqueName: \"kubernetes.io/projected/6124b52e-8e75-46f7-a40a-a106f60f15be-kube-api-access-ttgnd\") pod \"nova-api-0\" (UID: \"6124b52e-8e75-46f7-a40a-a106f60f15be\") " pod="openstack/nova-api-0"
Feb 02 10:59:56 crc kubenswrapper[4782]: I0202 10:59:56.537362 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6124b52e-8e75-46f7-a40a-a106f60f15be-config-data\") pod \"nova-api-0\" (UID: \"6124b52e-8e75-46f7-a40a-a106f60f15be\") " pod="openstack/nova-api-0"
Feb 02 10:59:56 crc kubenswrapper[4782]: I0202 10:59:56.537582 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6124b52e-8e75-46f7-a40a-a106f60f15be-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"6124b52e-8e75-46f7-a40a-a106f60f15be\") " pod="openstack/nova-api-0"
Feb 02 10:59:56 crc kubenswrapper[4782]: I0202 10:59:56.537668 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/6953ab25-8ddb-4ab3-b006-116f6ad534db-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"6953ab25-8ddb-4ab3-b006-116f6ad534db\") " pod="openstack/kube-state-metrics-0"
Feb 02 10:59:56 crc kubenswrapper[4782]: I0202 10:59:56.537688 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ttgnd\" (UniqueName: \"kubernetes.io/projected/6124b52e-8e75-46f7-a40a-a106f60f15be-kube-api-access-ttgnd\") pod \"nova-api-0\" (UID: \"6124b52e-8e75-46f7-a40a-a106f60f15be\") " pod="openstack/nova-api-0"
Feb 02 10:59:56 crc kubenswrapper[4782]: I0202 10:59:56.537720 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6124b52e-8e75-46f7-a40a-a106f60f15be-logs\") pod \"nova-api-0\" (UID: \"6124b52e-8e75-46f7-a40a-a106f60f15be\") " pod="openstack/nova-api-0"
Feb 02 10:59:56 crc kubenswrapper[4782]: I0202 10:59:56.537755 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/6953ab25-8ddb-4ab3-b006-116f6ad534db-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"6953ab25-8ddb-4ab3-b006-116f6ad534db\") " pod="openstack/kube-state-metrics-0"
Feb 02 10:59:56 crc kubenswrapper[4782]: I0202 10:59:56.537777 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6953ab25-8ddb-4ab3-b006-116f6ad534db-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"6953ab25-8ddb-4ab3-b006-116f6ad534db\") " pod="openstack/kube-state-metrics-0"
Feb 02 10:59:56 crc kubenswrapper[4782]: I0202 10:59:56.537796 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9mbk2\" (UniqueName: \"kubernetes.io/projected/6953ab25-8ddb-4ab3-b006-116f6ad534db-kube-api-access-9mbk2\") pod \"kube-state-metrics-0\" (UID: \"6953ab25-8ddb-4ab3-b006-116f6ad534db\") " pod="openstack/kube-state-metrics-0"
Feb 02 10:59:56 crc kubenswrapper[4782]: I0202 10:59:56.538502 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6124b52e-8e75-46f7-a40a-a106f60f15be-logs\") pod \"nova-api-0\" (UID: \"6124b52e-8e75-46f7-a40a-a106f60f15be\") " pod="openstack/nova-api-0"
Feb 02 10:59:56 crc kubenswrapper[4782]: I0202 10:59:56.540964 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6124b52e-8e75-46f7-a40a-a106f60f15be-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"6124b52e-8e75-46f7-a40a-a106f60f15be\") " pod="openstack/nova-api-0"
Feb 02 10:59:56 crc kubenswrapper[4782]: I0202 10:59:56.557302 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6124b52e-8e75-46f7-a40a-a106f60f15be-config-data\") pod \"nova-api-0\" (UID: \"6124b52e-8e75-46f7-a40a-a106f60f15be\") " pod="openstack/nova-api-0"
Feb 02 10:59:56 crc kubenswrapper[4782]: I0202 10:59:56.558911 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/6953ab25-8ddb-4ab3-b006-116f6ad534db-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"6953ab25-8ddb-4ab3-b006-116f6ad534db\") " pod="openstack/kube-state-metrics-0"
Feb 02 10:59:56 crc kubenswrapper[4782]: I0202 10:59:56.559420 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6953ab25-8ddb-4ab3-b006-116f6ad534db-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"6953ab25-8ddb-4ab3-b006-116f6ad534db\") " pod="openstack/kube-state-metrics-0"
Feb 02 10:59:56 crc kubenswrapper[4782]: I0202 10:59:56.560942 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/6953ab25-8ddb-4ab3-b006-116f6ad534db-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"6953ab25-8ddb-4ab3-b006-116f6ad534db\") " pod="openstack/kube-state-metrics-0"
Feb 02 10:59:56 crc kubenswrapper[4782]: I0202 10:59:56.566997 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9mbk2\" (UniqueName: \"kubernetes.io/projected/6953ab25-8ddb-4ab3-b006-116f6ad534db-kube-api-access-9mbk2\") pod \"kube-state-metrics-0\" (UID: \"6953ab25-8ddb-4ab3-b006-116f6ad534db\") " pod="openstack/kube-state-metrics-0"
Feb 02 10:59:56 crc kubenswrapper[4782]: I0202 10:59:56.572601 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Feb 02 10:59:56 crc kubenswrapper[4782]: I0202 10:59:56.583746 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ttgnd\" (UniqueName: \"kubernetes.io/projected/6124b52e-8e75-46f7-a40a-a106f60f15be-kube-api-access-ttgnd\") pod \"nova-api-0\" (UID: \"6124b52e-8e75-46f7-a40a-a106f60f15be\") " pod="openstack/nova-api-0"
Feb 02 10:59:56 crc kubenswrapper[4782]: I0202 10:59:56.592683 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 02 10:59:56 crc kubenswrapper[4782]: I0202 10:59:56.839106 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a1ccfccc-4ba0-4523-97ca-1d5b54034fd1" path="/var/lib/kubelet/pods/a1ccfccc-4ba0-4523-97ca-1d5b54034fd1/volumes"
Feb 02 10:59:56 crc kubenswrapper[4782]: I0202 10:59:56.839713 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e271d5c6-aeb4-4181-8712-3c80349c7900" path="/var/lib/kubelet/pods/e271d5c6-aeb4-4181-8712-3c80349c7900/volumes"
Feb 02 10:59:56 crc kubenswrapper[4782]: I0202 10:59:56.840288 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f9890573-cca4-4bd8-8c38-4d4e8bff9dc9" path="/var/lib/kubelet/pods/f9890573-cca4-4bd8-8c38-4d4e8bff9dc9/volumes"
Feb 02 10:59:57 crc kubenswrapper[4782]: I0202 10:59:57.157626 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 02 10:59:57 crc kubenswrapper[4782]: I0202 10:59:57.158261 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="28015087-432c-4906-8c57-406f5bf4371b" containerName="ceilometer-central-agent" containerID="cri-o://0eab3e1922a169100d96f6fb597b0d5d6e2c417157ee64d6c06b97cc49cfa3ff" gracePeriod=30
Feb 02 10:59:57 crc kubenswrapper[4782]: I0202 10:59:57.158769 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="28015087-432c-4906-8c57-406f5bf4371b" containerName="proxy-httpd" containerID="cri-o://f8e5c120275d7db87897ef6e18aba32d130395e2a8cbe47997aa7cbceb7c4b98" gracePeriod=30
Feb 02 10:59:57 crc kubenswrapper[4782]: I0202 10:59:57.158832 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="28015087-432c-4906-8c57-406f5bf4371b" containerName="sg-core" containerID="cri-o://548d7f70bba91005ffd4c7ff8fe65d2cd5bbe9ed5956e0f90c12cb922dec83e9" gracePeriod=30
Feb 02 10:59:57 crc kubenswrapper[4782]: I0202 10:59:57.158891 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="28015087-432c-4906-8c57-406f5bf4371b" containerName="ceilometer-notification-agent" containerID="cri-o://c21c382369813b902b8d01bb5ca3b76271ac75826e3f8a48f08a8227ee3e7c71" gracePeriod=30
Feb 02 10:59:57 crc kubenswrapper[4782]: I0202 10:59:57.200918 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"feecb35c-d2a4-4c9b-8f39-8145f39b332c","Type":"ContainerStarted","Data":"78e4453f1e8fd27bcb857ea60a86f6c25b6cd43908fdb9b96ea61079f63a39b5"}
Feb 02 10:59:57 crc kubenswrapper[4782]: I0202 10:59:57.200962 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"feecb35c-d2a4-4c9b-8f39-8145f39b332c","Type":"ContainerStarted","Data":"499d89b4d3bd8290ea6e83963b172b2e55234950c9bf0978caba78dd300a35cd"}
Feb 02 10:59:57 crc kubenswrapper[4782]: I0202 10:59:57.237940 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.237922454 podStartE2EDuration="2.237922454s" podCreationTimestamp="2026-02-02 10:59:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:59:57.221172844 +0000 UTC m=+1277.105365560" watchObservedRunningTime="2026-02-02 10:59:57.237922454 +0000 UTC m=+1277.122115170"
Feb 02 10:59:57 crc kubenswrapper[4782]: I0202 10:59:57.249178 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Feb 02 10:59:57 crc kubenswrapper[4782]: I0202 10:59:57.307441 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Feb 02 10:59:57 crc kubenswrapper[4782]: W0202 10:59:57.308210 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6124b52e_8e75_46f7_a40a_a106f60f15be.slice/crio-5da223383e51d132edb447f42b09006c474d2dbe6cc27b91396e57bb1e739e76 WatchSource:0}: Error finding container 5da223383e51d132edb447f42b09006c474d2dbe6cc27b91396e57bb1e739e76: Status 404 returned error can't find the container with id 5da223383e51d132edb447f42b09006c474d2dbe6cc27b91396e57bb1e739e76
Feb 02 10:59:57 crc kubenswrapper[4782]: I0202 10:59:57.545782 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Feb 02 10:59:57 crc kubenswrapper[4782]: I0202 10:59:57.545951 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Feb 02 10:59:58 crc kubenswrapper[4782]: I0202 10:59:58.218676 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"6953ab25-8ddb-4ab3-b006-116f6ad534db","Type":"ContainerStarted","Data":"e5a85f4fdafa3c8a5d69890175c598030e16e7cc48602682b5c858af09a16882"}
Feb 02 10:59:58 crc kubenswrapper[4782]: I0202 10:59:58.218903 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"6953ab25-8ddb-4ab3-b006-116f6ad534db","Type":"ContainerStarted","Data":"eac7cf780904993861e9404cd049d1f2b95033b7eaf22713161aed6c1b5e7078"}
Feb 02 10:59:58 crc kubenswrapper[4782]: I0202 10:59:58.220218 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0"
Feb 02 10:59:58 crc kubenswrapper[4782]: I0202 10:59:58.226326 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6124b52e-8e75-46f7-a40a-a106f60f15be","Type":"ContainerStarted","Data":"586fcc49265e82e11565b807b65605fae854918b22bcf5b4c686b684a29a3be5"}
Feb 02 10:59:58 crc kubenswrapper[4782]: I0202 10:59:58.226365 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6124b52e-8e75-46f7-a40a-a106f60f15be","Type":"ContainerStarted","Data":"f8ab1e55668f923f1cff0b3a98878e6d29e7326f5926b3f0d2390bc72c0becc4"}
Feb 02 10:59:58 crc kubenswrapper[4782]: I0202 10:59:58.226378 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6124b52e-8e75-46f7-a40a-a106f60f15be","Type":"ContainerStarted","Data":"5da223383e51d132edb447f42b09006c474d2dbe6cc27b91396e57bb1e739e76"}
Feb 02 10:59:58 crc kubenswrapper[4782]: I0202 10:59:58.233254 4782 generic.go:334] "Generic (PLEG): container finished" podID="28015087-432c-4906-8c57-406f5bf4371b" containerID="f8e5c120275d7db87897ef6e18aba32d130395e2a8cbe47997aa7cbceb7c4b98" exitCode=0
Feb 02 10:59:58 crc kubenswrapper[4782]: I0202 10:59:58.233290 4782 generic.go:334] "Generic (PLEG): container finished" podID="28015087-432c-4906-8c57-406f5bf4371b" containerID="548d7f70bba91005ffd4c7ff8fe65d2cd5bbe9ed5956e0f90c12cb922dec83e9" exitCode=2
Feb 02 10:59:58 crc kubenswrapper[4782]: I0202 10:59:58.233304 4782 generic.go:334] "Generic (PLEG): container finished" podID="28015087-432c-4906-8c57-406f5bf4371b" containerID="0eab3e1922a169100d96f6fb597b0d5d6e2c417157ee64d6c06b97cc49cfa3ff" exitCode=0
Feb 02 10:59:58 crc kubenswrapper[4782]: I0202 10:59:58.233815 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"28015087-432c-4906-8c57-406f5bf4371b","Type":"ContainerDied","Data":"f8e5c120275d7db87897ef6e18aba32d130395e2a8cbe47997aa7cbceb7c4b98"}
Feb 02 10:59:58 crc kubenswrapper[4782]: I0202 10:59:58.233876 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"28015087-432c-4906-8c57-406f5bf4371b","Type":"ContainerDied","Data":"548d7f70bba91005ffd4c7ff8fe65d2cd5bbe9ed5956e0f90c12cb922dec83e9"}
Feb 02 10:59:58 crc kubenswrapper[4782]: I0202 10:59:58.233890 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"28015087-432c-4906-8c57-406f5bf4371b","Type":"ContainerDied","Data":"0eab3e1922a169100d96f6fb597b0d5d6e2c417157ee64d6c06b97cc49cfa3ff"}
Feb 02 10:59:58 crc kubenswrapper[4782]: I0202 10:59:58.265843 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.26582332 podStartE2EDuration="2.26582332s" podCreationTimestamp="2026-02-02 10:59:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 10:59:58.261450275 +0000 UTC m=+1278.145642991" watchObservedRunningTime="2026-02-02 10:59:58.26582332 +0000 UTC m=+1278.150016036"
Feb 02 10:59:58 crc kubenswrapper[4782]: I0202 10:59:58.268536 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=1.890819813 podStartE2EDuration="2.268520268s" podCreationTimestamp="2026-02-02 10:59:56 +0000 UTC" firstStartedPulling="2026-02-02 10:59:57.246338315 +0000 UTC m=+1277.130531031" lastFinishedPulling="2026-02-02 10:59:57.62403877 +0000 UTC m=+1277.508231486" observedRunningTime="2026-02-02 10:59:58.243010486 +0000 UTC m=+1278.127203222" watchObservedRunningTime="2026-02-02 10:59:58.268520268 +0000 UTC m=+1278.152712984"
Feb 02 11:00:00 crc kubenswrapper[4782]: E0202 11:00:00.106427 4782 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod28015087_432c_4906_8c57_406f5bf4371b.slice/crio-c21c382369813b902b8d01bb5ca3b76271ac75826e3f8a48f08a8227ee3e7c71.scope\": RecentStats: unable to find data in memory cache]"
Feb 02 11:00:00 crc kubenswrapper[4782]: I0202 11:00:00.147712 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500500-5d8bv"]
Feb 02 11:00:00 crc kubenswrapper[4782]: I0202 11:00:00.149025 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500500-5d8bv"
Feb 02 11:00:00 crc kubenswrapper[4782]: I0202 11:00:00.165836 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Feb 02 11:00:00 crc kubenswrapper[4782]: I0202 11:00:00.165836 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Feb 02 11:00:00 crc kubenswrapper[4782]: I0202 11:00:00.184585 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500500-5d8bv"]
Feb 02 11:00:00 crc kubenswrapper[4782]: I0202 11:00:00.204513 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/62ac376d-42fd-424f-a1bf-281bd9c9d31f-secret-volume\") pod \"collect-profiles-29500500-5d8bv\" (UID: \"62ac376d-42fd-424f-a1bf-281bd9c9d31f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500500-5d8bv"
Feb 02 11:00:00 crc kubenswrapper[4782]: I0202 11:00:00.204656 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvwxr\" (UniqueName: \"kubernetes.io/projected/62ac376d-42fd-424f-a1bf-281bd9c9d31f-kube-api-access-mvwxr\") pod \"collect-profiles-29500500-5d8bv\" (UID: \"62ac376d-42fd-424f-a1bf-281bd9c9d31f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500500-5d8bv"
Feb 02 11:00:00 crc kubenswrapper[4782]: I0202 11:00:00.204696 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/62ac376d-42fd-424f-a1bf-281bd9c9d31f-config-volume\") pod \"collect-profiles-29500500-5d8bv\" (UID: \"62ac376d-42fd-424f-a1bf-281bd9c9d31f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500500-5d8bv"
Feb 02 11:00:00 crc kubenswrapper[4782]: I0202 11:00:00.252379 4782 generic.go:334] "Generic (PLEG): container finished" podID="28015087-432c-4906-8c57-406f5bf4371b" containerID="c21c382369813b902b8d01bb5ca3b76271ac75826e3f8a48f08a8227ee3e7c71" exitCode=0
Feb 02 11:00:00 crc kubenswrapper[4782]: I0202 11:00:00.252449 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"28015087-432c-4906-8c57-406f5bf4371b","Type":"ContainerDied","Data":"c21c382369813b902b8d01bb5ca3b76271ac75826e3f8a48f08a8227ee3e7c71"}
Feb 02 11:00:00 crc kubenswrapper[4782]: I0202 11:00:00.307875 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/62ac376d-42fd-424f-a1bf-281bd9c9d31f-secret-volume\") pod \"collect-profiles-29500500-5d8bv\" (UID: \"62ac376d-42fd-424f-a1bf-281bd9c9d31f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500500-5d8bv"
Feb 02 11:00:00 crc kubenswrapper[4782]: I0202 11:00:00.308492 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mvwxr\" (UniqueName: \"kubernetes.io/projected/62ac376d-42fd-424f-a1bf-281bd9c9d31f-kube-api-access-mvwxr\") pod \"collect-profiles-29500500-5d8bv\" (UID: \"62ac376d-42fd-424f-a1bf-281bd9c9d31f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500500-5d8bv"
Feb 02 11:00:00 crc kubenswrapper[4782]: I0202 11:00:00.308530 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/62ac376d-42fd-424f-a1bf-281bd9c9d31f-config-volume\") pod \"collect-profiles-29500500-5d8bv\" (UID: \"62ac376d-42fd-424f-a1bf-281bd9c9d31f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500500-5d8bv"
Feb 02 11:00:00 crc kubenswrapper[4782]: I0202 11:00:00.309922 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/62ac376d-42fd-424f-a1bf-281bd9c9d31f-config-volume\") pod \"collect-profiles-29500500-5d8bv\" (UID: \"62ac376d-42fd-424f-a1bf-281bd9c9d31f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500500-5d8bv"
Feb 02 11:00:00 crc kubenswrapper[4782]: I0202 11:00:00.315250 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/62ac376d-42fd-424f-a1bf-281bd9c9d31f-secret-volume\") pod \"collect-profiles-29500500-5d8bv\" (UID: \"62ac376d-42fd-424f-a1bf-281bd9c9d31f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500500-5d8bv"
Feb 02 11:00:00 crc kubenswrapper[4782]: I0202 11:00:00.328116 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mvwxr\" (UniqueName: \"kubernetes.io/projected/62ac376d-42fd-424f-a1bf-281bd9c9d31f-kube-api-access-mvwxr\") pod \"collect-profiles-29500500-5d8bv\" (UID: \"62ac376d-42fd-424f-a1bf-281bd9c9d31f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500500-5d8bv"
Feb 02 11:00:00 crc kubenswrapper[4782]: I0202 11:00:00.494988 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 02 11:00:00 crc kubenswrapper[4782]: I0202 11:00:00.539854 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500500-5d8bv"
Feb 02 11:00:00 crc kubenswrapper[4782]: I0202 11:00:00.613251 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28015087-432c-4906-8c57-406f5bf4371b-config-data\") pod \"28015087-432c-4906-8c57-406f5bf4371b\" (UID: \"28015087-432c-4906-8c57-406f5bf4371b\") "
Feb 02 11:00:00 crc kubenswrapper[4782]: I0202 11:00:00.613705 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/28015087-432c-4906-8c57-406f5bf4371b-run-httpd\") pod \"28015087-432c-4906-8c57-406f5bf4371b\" (UID: \"28015087-432c-4906-8c57-406f5bf4371b\") "
Feb 02 11:00:00 crc kubenswrapper[4782]: I0202 11:00:00.613732 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28015087-432c-4906-8c57-406f5bf4371b-combined-ca-bundle\") pod \"28015087-432c-4906-8c57-406f5bf4371b\" (UID: \"28015087-432c-4906-8c57-406f5bf4371b\") "
Feb 02 11:00:00 crc kubenswrapper[4782]: I0202 11:00:00.613797 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/28015087-432c-4906-8c57-406f5bf4371b-log-httpd\") pod \"28015087-432c-4906-8c57-406f5bf4371b\" (UID: \"28015087-432c-4906-8c57-406f5bf4371b\") "
Feb 02 11:00:00 crc kubenswrapper[4782]: I0202 11:00:00.613879 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/28015087-432c-4906-8c57-406f5bf4371b-scripts\") pod \"28015087-432c-4906-8c57-406f5bf4371b\" (UID: \"28015087-432c-4906-8c57-406f5bf4371b\") "
Feb 02 11:00:00 crc kubenswrapper[4782]: I0202 11:00:00.613919 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4cksn\" (UniqueName: \"kubernetes.io/projected/28015087-432c-4906-8c57-406f5bf4371b-kube-api-access-4cksn\") pod \"28015087-432c-4906-8c57-406f5bf4371b\" (UID: \"28015087-432c-4906-8c57-406f5bf4371b\") "
Feb 02 11:00:00 crc kubenswrapper[4782]: I0202 11:00:00.614660 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/28015087-432c-4906-8c57-406f5bf4371b-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "28015087-432c-4906-8c57-406f5bf4371b" (UID: "28015087-432c-4906-8c57-406f5bf4371b"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
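A note on the job name above: the numeric suffix in collect-profiles-29500500 is the Kubernetes CronJob controller's convention of appending the scheduled time in minutes since the Unix epoch to the job name. Decoding it recovers exactly the 11:00:00 scheduling moment seen in the SyncLoop ADD entry:

    // cronname.go - decode the CronJob job-name suffix (minutes since epoch).
    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	const scheduledMinutes = 29500500 // from collect-profiles-29500500
    	t := time.Unix(scheduledMinutes*60, 0).UTC()
    	fmt.Println(t) // 2026-02-02 11:00:00 +0000 UTC, matching the SyncLoop ADD above
    }
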
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:00:00 crc kubenswrapper[4782]: I0202 11:00:00.614714 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/28015087-432c-4906-8c57-406f5bf4371b-sg-core-conf-yaml\") pod \"28015087-432c-4906-8c57-406f5bf4371b\" (UID: \"28015087-432c-4906-8c57-406f5bf4371b\") " Feb 02 11:00:00 crc kubenswrapper[4782]: I0202 11:00:00.615518 4782 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/28015087-432c-4906-8c57-406f5bf4371b-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 02 11:00:00 crc kubenswrapper[4782]: I0202 11:00:00.616592 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/28015087-432c-4906-8c57-406f5bf4371b-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "28015087-432c-4906-8c57-406f5bf4371b" (UID: "28015087-432c-4906-8c57-406f5bf4371b"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:00:00 crc kubenswrapper[4782]: I0202 11:00:00.620166 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28015087-432c-4906-8c57-406f5bf4371b-kube-api-access-4cksn" (OuterVolumeSpecName: "kube-api-access-4cksn") pod "28015087-432c-4906-8c57-406f5bf4371b" (UID: "28015087-432c-4906-8c57-406f5bf4371b"). InnerVolumeSpecName "kube-api-access-4cksn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:00:00 crc kubenswrapper[4782]: I0202 11:00:00.622126 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28015087-432c-4906-8c57-406f5bf4371b-scripts" (OuterVolumeSpecName: "scripts") pod "28015087-432c-4906-8c57-406f5bf4371b" (UID: "28015087-432c-4906-8c57-406f5bf4371b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:00:00 crc kubenswrapper[4782]: I0202 11:00:00.660671 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28015087-432c-4906-8c57-406f5bf4371b-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "28015087-432c-4906-8c57-406f5bf4371b" (UID: "28015087-432c-4906-8c57-406f5bf4371b"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:00:00 crc kubenswrapper[4782]: I0202 11:00:00.719478 4782 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/28015087-432c-4906-8c57-406f5bf4371b-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 11:00:00 crc kubenswrapper[4782]: I0202 11:00:00.719516 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4cksn\" (UniqueName: \"kubernetes.io/projected/28015087-432c-4906-8c57-406f5bf4371b-kube-api-access-4cksn\") on node \"crc\" DevicePath \"\"" Feb 02 11:00:00 crc kubenswrapper[4782]: I0202 11:00:00.719527 4782 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/28015087-432c-4906-8c57-406f5bf4371b-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 02 11:00:00 crc kubenswrapper[4782]: I0202 11:00:00.719537 4782 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/28015087-432c-4906-8c57-406f5bf4371b-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 02 11:00:00 crc kubenswrapper[4782]: I0202 11:00:00.719900 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28015087-432c-4906-8c57-406f5bf4371b-config-data" (OuterVolumeSpecName: "config-data") pod "28015087-432c-4906-8c57-406f5bf4371b" (UID: "28015087-432c-4906-8c57-406f5bf4371b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:00:00 crc kubenswrapper[4782]: I0202 11:00:00.739292 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28015087-432c-4906-8c57-406f5bf4371b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "28015087-432c-4906-8c57-406f5bf4371b" (UID: "28015087-432c-4906-8c57-406f5bf4371b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:00:00 crc kubenswrapper[4782]: I0202 11:00:00.820709 4782 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28015087-432c-4906-8c57-406f5bf4371b-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 11:00:00 crc kubenswrapper[4782]: I0202 11:00:00.820744 4782 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28015087-432c-4906-8c57-406f5bf4371b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 11:00:00 crc kubenswrapper[4782]: I0202 11:00:00.840315 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 02 11:00:01 crc kubenswrapper[4782]: I0202 11:00:01.046173 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500500-5d8bv"] Feb 02 11:00:01 crc kubenswrapper[4782]: W0202 11:00:01.058978 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod62ac376d_42fd_424f_a1bf_281bd9c9d31f.slice/crio-f3ab5332b08fd255c43419e4e5b6206b41f5cd0358a770734813c91b1d464b0f WatchSource:0}: Error finding container f3ab5332b08fd255c43419e4e5b6206b41f5cd0358a770734813c91b1d464b0f: Status 404 returned error can't find the container with id f3ab5332b08fd255c43419e4e5b6206b41f5cd0358a770734813c91b1d464b0f Feb 02 11:00:01 crc kubenswrapper[4782]: I0202 11:00:01.264542 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"28015087-432c-4906-8c57-406f5bf4371b","Type":"ContainerDied","Data":"130ac9425f514ac20eca02616de022afd5f7d855996320a48e38657ba530c248"} Feb 02 11:00:01 crc kubenswrapper[4782]: I0202 11:00:01.264910 4782 scope.go:117] "RemoveContainer" containerID="f8e5c120275d7db87897ef6e18aba32d130395e2a8cbe47997aa7cbceb7c4b98" Feb 02 11:00:01 crc kubenswrapper[4782]: I0202 11:00:01.264564 4782 util.go:48] "No ready sandbox for pod can be found. 
Feb 02 11:00:01 crc kubenswrapper[4782]: I0202 11:00:01.269387 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500500-5d8bv" event={"ID":"62ac376d-42fd-424f-a1bf-281bd9c9d31f","Type":"ContainerStarted","Data":"a290ebd90dc2cdcb55f14cdbbbcabca2eb0ae3e2b4fabd92e76c199c11dd8634"}
Feb 02 11:00:01 crc kubenswrapper[4782]: I0202 11:00:01.269428 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500500-5d8bv" event={"ID":"62ac376d-42fd-424f-a1bf-281bd9c9d31f","Type":"ContainerStarted","Data":"f3ab5332b08fd255c43419e4e5b6206b41f5cd0358a770734813c91b1d464b0f"}
Feb 02 11:00:01 crc kubenswrapper[4782]: I0202 11:00:01.288920 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 02 11:00:01 crc kubenswrapper[4782]: I0202 11:00:01.295721 4782 scope.go:117] "RemoveContainer" containerID="548d7f70bba91005ffd4c7ff8fe65d2cd5bbe9ed5956e0f90c12cb922dec83e9"
Feb 02 11:00:01 crc kubenswrapper[4782]: I0202 11:00:01.299102 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Feb 02 11:00:01 crc kubenswrapper[4782]: I0202 11:00:01.314630 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29500500-5d8bv" podStartSLOduration=1.314608197 podStartE2EDuration="1.314608197s" podCreationTimestamp="2026-02-02 11:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 11:00:01.306496514 +0000 UTC m=+1281.190689230" watchObservedRunningTime="2026-02-02 11:00:01.314608197 +0000 UTC m=+1281.198800933"
Feb 02 11:00:01 crc kubenswrapper[4782]: I0202 11:00:01.336189 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Feb 02 11:00:01 crc kubenswrapper[4782]: E0202 11:00:01.336661 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28015087-432c-4906-8c57-406f5bf4371b" containerName="ceilometer-notification-agent"
Feb 02 11:00:01 crc kubenswrapper[4782]: I0202 11:00:01.336686 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="28015087-432c-4906-8c57-406f5bf4371b" containerName="ceilometer-notification-agent"
Feb 02 11:00:01 crc kubenswrapper[4782]: E0202 11:00:01.336718 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28015087-432c-4906-8c57-406f5bf4371b" containerName="sg-core"
Feb 02 11:00:01 crc kubenswrapper[4782]: I0202 11:00:01.336726 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="28015087-432c-4906-8c57-406f5bf4371b" containerName="sg-core"
Feb 02 11:00:01 crc kubenswrapper[4782]: E0202 11:00:01.336739 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28015087-432c-4906-8c57-406f5bf4371b" containerName="proxy-httpd"
Feb 02 11:00:01 crc kubenswrapper[4782]: I0202 11:00:01.336747 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="28015087-432c-4906-8c57-406f5bf4371b" containerName="proxy-httpd"
Feb 02 11:00:01 crc kubenswrapper[4782]: E0202 11:00:01.336763 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28015087-432c-4906-8c57-406f5bf4371b" containerName="ceilometer-central-agent"
Feb 02 11:00:01 crc kubenswrapper[4782]: I0202 11:00:01.336771 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="28015087-432c-4906-8c57-406f5bf4371b" containerName="ceilometer-central-agent"
Feb 02 11:00:01 crc kubenswrapper[4782]: I0202 11:00:01.336976 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="28015087-432c-4906-8c57-406f5bf4371b" containerName="sg-core"
Feb 02 11:00:01 crc kubenswrapper[4782]: I0202 11:00:01.337000 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="28015087-432c-4906-8c57-406f5bf4371b" containerName="ceilometer-notification-agent"
Feb 02 11:00:01 crc kubenswrapper[4782]: I0202 11:00:01.337020 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="28015087-432c-4906-8c57-406f5bf4371b" containerName="proxy-httpd"
Feb 02 11:00:01 crc kubenswrapper[4782]: I0202 11:00:01.337030 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="28015087-432c-4906-8c57-406f5bf4371b" containerName="ceilometer-central-agent"
Feb 02 11:00:01 crc kubenswrapper[4782]: I0202 11:00:01.338956 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 02 11:00:01 crc kubenswrapper[4782]: I0202 11:00:01.339611 4782 scope.go:117] "RemoveContainer" containerID="c21c382369813b902b8d01bb5ca3b76271ac75826e3f8a48f08a8227ee3e7c71"
Feb 02 11:00:01 crc kubenswrapper[4782]: I0202 11:00:01.342313 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Feb 02 11:00:01 crc kubenswrapper[4782]: I0202 11:00:01.342427 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Feb 02 11:00:01 crc kubenswrapper[4782]: I0202 11:00:01.342573 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc"
Feb 02 11:00:01 crc kubenswrapper[4782]: I0202 11:00:01.351257 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 02 11:00:01 crc kubenswrapper[4782]: I0202 11:00:01.368931 4782 scope.go:117] "RemoveContainer" containerID="0eab3e1922a169100d96f6fb597b0d5d6e2c417157ee64d6c06b97cc49cfa3ff"
Feb 02 11:00:01 crc kubenswrapper[4782]: I0202 11:00:01.513702 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0"
Feb 02 11:00:01 crc kubenswrapper[4782]: I0202 11:00:01.533604 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65da50be-2bcd-4dad-aaaf-cfa5587e7544-config-data\") pod \"ceilometer-0\" (UID: \"65da50be-2bcd-4dad-aaaf-cfa5587e7544\") " pod="openstack/ceilometer-0"
Feb 02 11:00:01 crc kubenswrapper[4782]: I0202 11:00:01.533664 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/65da50be-2bcd-4dad-aaaf-cfa5587e7544-run-httpd\") pod \"ceilometer-0\" (UID: \"65da50be-2bcd-4dad-aaaf-cfa5587e7544\") " pod="openstack/ceilometer-0"
Feb 02 11:00:01 crc kubenswrapper[4782]: I0202 11:00:01.533705 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/65da50be-2bcd-4dad-aaaf-cfa5587e7544-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"65da50be-2bcd-4dad-aaaf-cfa5587e7544\") " pod="openstack/ceilometer-0"
Feb 02 11:00:01 crc kubenswrapper[4782]: I0202 11:00:01.533730 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65da50be-2bcd-4dad-aaaf-cfa5587e7544-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"65da50be-2bcd-4dad-aaaf-cfa5587e7544\") " pod="openstack/ceilometer-0"
Feb 02 11:00:01 crc kubenswrapper[4782]: I0202 11:00:01.533774 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pgncs\" (UniqueName: \"kubernetes.io/projected/65da50be-2bcd-4dad-aaaf-cfa5587e7544-kube-api-access-pgncs\") pod \"ceilometer-0\" (UID: \"65da50be-2bcd-4dad-aaaf-cfa5587e7544\") " pod="openstack/ceilometer-0"
Feb 02 11:00:01 crc kubenswrapper[4782]: I0202 11:00:01.533795 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/65da50be-2bcd-4dad-aaaf-cfa5587e7544-log-httpd\") pod \"ceilometer-0\" (UID: \"65da50be-2bcd-4dad-aaaf-cfa5587e7544\") " pod="openstack/ceilometer-0"
Feb 02 11:00:01 crc kubenswrapper[4782]: I0202 11:00:01.533847 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/65da50be-2bcd-4dad-aaaf-cfa5587e7544-scripts\") pod \"ceilometer-0\" (UID: \"65da50be-2bcd-4dad-aaaf-cfa5587e7544\") " pod="openstack/ceilometer-0"
Feb 02 11:00:01 crc kubenswrapper[4782]: I0202 11:00:01.533867 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/65da50be-2bcd-4dad-aaaf-cfa5587e7544-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"65da50be-2bcd-4dad-aaaf-cfa5587e7544\") " pod="openstack/ceilometer-0"
Feb 02 11:00:01 crc kubenswrapper[4782]: I0202 11:00:01.635166 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/65da50be-2bcd-4dad-aaaf-cfa5587e7544-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"65da50be-2bcd-4dad-aaaf-cfa5587e7544\") " pod="openstack/ceilometer-0"
Feb 02 11:00:01 crc kubenswrapper[4782]: I0202 11:00:01.637153 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65da50be-2bcd-4dad-aaaf-cfa5587e7544-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"65da50be-2bcd-4dad-aaaf-cfa5587e7544\") " pod="openstack/ceilometer-0"
Feb 02 11:00:01 crc kubenswrapper[4782]: I0202 11:00:01.637828 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pgncs\" (UniqueName: \"kubernetes.io/projected/65da50be-2bcd-4dad-aaaf-cfa5587e7544-kube-api-access-pgncs\") pod \"ceilometer-0\" (UID: \"65da50be-2bcd-4dad-aaaf-cfa5587e7544\") " pod="openstack/ceilometer-0"
Feb 02 11:00:01 crc kubenswrapper[4782]: I0202 11:00:01.637987 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/65da50be-2bcd-4dad-aaaf-cfa5587e7544-log-httpd\") pod \"ceilometer-0\" (UID: \"65da50be-2bcd-4dad-aaaf-cfa5587e7544\") " pod="openstack/ceilometer-0"
Feb 02 11:00:01 crc kubenswrapper[4782]: I0202 11:00:01.638424 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/65da50be-2bcd-4dad-aaaf-cfa5587e7544-log-httpd\") pod \"ceilometer-0\" (UID: \"65da50be-2bcd-4dad-aaaf-cfa5587e7544\") " pod="openstack/ceilometer-0"
Feb 02 11:00:01 crc kubenswrapper[4782]: I0202 11:00:01.638796 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/65da50be-2bcd-4dad-aaaf-cfa5587e7544-scripts\") pod \"ceilometer-0\" (UID: \"65da50be-2bcd-4dad-aaaf-cfa5587e7544\") " pod="openstack/ceilometer-0"
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/65da50be-2bcd-4dad-aaaf-cfa5587e7544-scripts\") pod \"ceilometer-0\" (UID: \"65da50be-2bcd-4dad-aaaf-cfa5587e7544\") " pod="openstack/ceilometer-0" Feb 02 11:00:01 crc kubenswrapper[4782]: I0202 11:00:01.639254 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/65da50be-2bcd-4dad-aaaf-cfa5587e7544-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"65da50be-2bcd-4dad-aaaf-cfa5587e7544\") " pod="openstack/ceilometer-0" Feb 02 11:00:01 crc kubenswrapper[4782]: I0202 11:00:01.639835 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65da50be-2bcd-4dad-aaaf-cfa5587e7544-config-data\") pod \"ceilometer-0\" (UID: \"65da50be-2bcd-4dad-aaaf-cfa5587e7544\") " pod="openstack/ceilometer-0" Feb 02 11:00:01 crc kubenswrapper[4782]: I0202 11:00:01.639978 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/65da50be-2bcd-4dad-aaaf-cfa5587e7544-run-httpd\") pod \"ceilometer-0\" (UID: \"65da50be-2bcd-4dad-aaaf-cfa5587e7544\") " pod="openstack/ceilometer-0" Feb 02 11:00:01 crc kubenswrapper[4782]: I0202 11:00:01.640354 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/65da50be-2bcd-4dad-aaaf-cfa5587e7544-run-httpd\") pod \"ceilometer-0\" (UID: \"65da50be-2bcd-4dad-aaaf-cfa5587e7544\") " pod="openstack/ceilometer-0" Feb 02 11:00:01 crc kubenswrapper[4782]: I0202 11:00:01.646744 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65da50be-2bcd-4dad-aaaf-cfa5587e7544-config-data\") pod \"ceilometer-0\" (UID: \"65da50be-2bcd-4dad-aaaf-cfa5587e7544\") " pod="openstack/ceilometer-0" Feb 02 11:00:01 crc kubenswrapper[4782]: I0202 11:00:01.647221 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/65da50be-2bcd-4dad-aaaf-cfa5587e7544-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"65da50be-2bcd-4dad-aaaf-cfa5587e7544\") " pod="openstack/ceilometer-0" Feb 02 11:00:01 crc kubenswrapper[4782]: I0202 11:00:01.647800 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/65da50be-2bcd-4dad-aaaf-cfa5587e7544-scripts\") pod \"ceilometer-0\" (UID: \"65da50be-2bcd-4dad-aaaf-cfa5587e7544\") " pod="openstack/ceilometer-0" Feb 02 11:00:01 crc kubenswrapper[4782]: I0202 11:00:01.656581 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65da50be-2bcd-4dad-aaaf-cfa5587e7544-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"65da50be-2bcd-4dad-aaaf-cfa5587e7544\") " pod="openstack/ceilometer-0" Feb 02 11:00:01 crc kubenswrapper[4782]: I0202 11:00:01.659265 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/65da50be-2bcd-4dad-aaaf-cfa5587e7544-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"65da50be-2bcd-4dad-aaaf-cfa5587e7544\") " pod="openstack/ceilometer-0" Feb 02 11:00:01 crc kubenswrapper[4782]: I0202 11:00:01.659364 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pgncs\" (UniqueName: 
\"kubernetes.io/projected/65da50be-2bcd-4dad-aaaf-cfa5587e7544-kube-api-access-pgncs\") pod \"ceilometer-0\" (UID: \"65da50be-2bcd-4dad-aaaf-cfa5587e7544\") " pod="openstack/ceilometer-0" Feb 02 11:00:01 crc kubenswrapper[4782]: I0202 11:00:01.664546 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 11:00:02 crc kubenswrapper[4782]: W0202 11:00:02.133891 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod65da50be_2bcd_4dad_aaaf_cfa5587e7544.slice/crio-1a14e2a1fe0a962870038a11a248f614b06beeb3efe351e24e012f8958bd54c5 WatchSource:0}: Error finding container 1a14e2a1fe0a962870038a11a248f614b06beeb3efe351e24e012f8958bd54c5: Status 404 returned error can't find the container with id 1a14e2a1fe0a962870038a11a248f614b06beeb3efe351e24e012f8958bd54c5 Feb 02 11:00:02 crc kubenswrapper[4782]: I0202 11:00:02.134635 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 02 11:00:02 crc kubenswrapper[4782]: I0202 11:00:02.280404 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"65da50be-2bcd-4dad-aaaf-cfa5587e7544","Type":"ContainerStarted","Data":"1a14e2a1fe0a962870038a11a248f614b06beeb3efe351e24e012f8958bd54c5"} Feb 02 11:00:02 crc kubenswrapper[4782]: I0202 11:00:02.282559 4782 generic.go:334] "Generic (PLEG): container finished" podID="62ac376d-42fd-424f-a1bf-281bd9c9d31f" containerID="a290ebd90dc2cdcb55f14cdbbbcabca2eb0ae3e2b4fabd92e76c199c11dd8634" exitCode=0 Feb 02 11:00:02 crc kubenswrapper[4782]: I0202 11:00:02.282596 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500500-5d8bv" event={"ID":"62ac376d-42fd-424f-a1bf-281bd9c9d31f","Type":"ContainerDied","Data":"a290ebd90dc2cdcb55f14cdbbbcabca2eb0ae3e2b4fabd92e76c199c11dd8634"} Feb 02 11:00:02 crc kubenswrapper[4782]: I0202 11:00:02.545605 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 02 11:00:02 crc kubenswrapper[4782]: I0202 11:00:02.545657 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 02 11:00:02 crc kubenswrapper[4782]: I0202 11:00:02.833395 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="28015087-432c-4906-8c57-406f5bf4371b" path="/var/lib/kubelet/pods/28015087-432c-4906-8c57-406f5bf4371b/volumes" Feb 02 11:00:03 crc kubenswrapper[4782]: I0202 11:00:03.309327 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"65da50be-2bcd-4dad-aaaf-cfa5587e7544","Type":"ContainerStarted","Data":"0868d51d1d5c37c28f3dee2c7c1c082357c57bc91bc171d8995d79cf3bfa838d"} Feb 02 11:00:03 crc kubenswrapper[4782]: I0202 11:00:03.563825 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="6b026c6f-1f4b-4171-a2dc-9a3b17a0a8f0" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.176:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 02 11:00:03 crc kubenswrapper[4782]: I0202 11:00:03.564023 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="6b026c6f-1f4b-4171-a2dc-9a3b17a0a8f0" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.176:8775/\": net/http: request canceled (Client.Timeout 
Feb 02 11:00:03 crc kubenswrapper[4782]: I0202 11:00:03.793936 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500500-5d8bv"
Feb 02 11:00:03 crc kubenswrapper[4782]: I0202 11:00:03.984749 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/62ac376d-42fd-424f-a1bf-281bd9c9d31f-config-volume\") pod \"62ac376d-42fd-424f-a1bf-281bd9c9d31f\" (UID: \"62ac376d-42fd-424f-a1bf-281bd9c9d31f\") "
Feb 02 11:00:03 crc kubenswrapper[4782]: I0202 11:00:03.985257 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mvwxr\" (UniqueName: \"kubernetes.io/projected/62ac376d-42fd-424f-a1bf-281bd9c9d31f-kube-api-access-mvwxr\") pod \"62ac376d-42fd-424f-a1bf-281bd9c9d31f\" (UID: \"62ac376d-42fd-424f-a1bf-281bd9c9d31f\") "
Feb 02 11:00:03 crc kubenswrapper[4782]: I0202 11:00:03.985440 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/62ac376d-42fd-424f-a1bf-281bd9c9d31f-secret-volume\") pod \"62ac376d-42fd-424f-a1bf-281bd9c9d31f\" (UID: \"62ac376d-42fd-424f-a1bf-281bd9c9d31f\") "
Feb 02 11:00:03 crc kubenswrapper[4782]: I0202 11:00:03.987352 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/62ac376d-42fd-424f-a1bf-281bd9c9d31f-config-volume" (OuterVolumeSpecName: "config-volume") pod "62ac376d-42fd-424f-a1bf-281bd9c9d31f" (UID: "62ac376d-42fd-424f-a1bf-281bd9c9d31f"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 11:00:03 crc kubenswrapper[4782]: I0202 11:00:03.997873 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62ac376d-42fd-424f-a1bf-281bd9c9d31f-kube-api-access-mvwxr" (OuterVolumeSpecName: "kube-api-access-mvwxr") pod "62ac376d-42fd-424f-a1bf-281bd9c9d31f" (UID: "62ac376d-42fd-424f-a1bf-281bd9c9d31f"). InnerVolumeSpecName "kube-api-access-mvwxr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 11:00:04 crc kubenswrapper[4782]: I0202 11:00:04.000672 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62ac376d-42fd-424f-a1bf-281bd9c9d31f-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "62ac376d-42fd-424f-a1bf-281bd9c9d31f" (UID: "62ac376d-42fd-424f-a1bf-281bd9c9d31f"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:00:04 crc kubenswrapper[4782]: I0202 11:00:04.087908 4782 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/62ac376d-42fd-424f-a1bf-281bd9c9d31f-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 02 11:00:04 crc kubenswrapper[4782]: I0202 11:00:04.087951 4782 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/62ac376d-42fd-424f-a1bf-281bd9c9d31f-config-volume\") on node \"crc\" DevicePath \"\"" Feb 02 11:00:04 crc kubenswrapper[4782]: I0202 11:00:04.087965 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mvwxr\" (UniqueName: \"kubernetes.io/projected/62ac376d-42fd-424f-a1bf-281bd9c9d31f-kube-api-access-mvwxr\") on node \"crc\" DevicePath \"\"" Feb 02 11:00:04 crc kubenswrapper[4782]: I0202 11:00:04.319051 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"65da50be-2bcd-4dad-aaaf-cfa5587e7544","Type":"ContainerStarted","Data":"545a19bc608ebe370b86df0b475f1b5e0cad10d33d9bfb30203572056ad9298e"} Feb 02 11:00:04 crc kubenswrapper[4782]: I0202 11:00:04.321467 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500500-5d8bv" event={"ID":"62ac376d-42fd-424f-a1bf-281bd9c9d31f","Type":"ContainerDied","Data":"f3ab5332b08fd255c43419e4e5b6206b41f5cd0358a770734813c91b1d464b0f"} Feb 02 11:00:04 crc kubenswrapper[4782]: I0202 11:00:04.321512 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f3ab5332b08fd255c43419e4e5b6206b41f5cd0358a770734813c91b1d464b0f" Feb 02 11:00:04 crc kubenswrapper[4782]: I0202 11:00:04.321524 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500500-5d8bv" Feb 02 11:00:05 crc kubenswrapper[4782]: I0202 11:00:05.331835 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"65da50be-2bcd-4dad-aaaf-cfa5587e7544","Type":"ContainerStarted","Data":"871987b493e04ee895c4b3fcea4ac6f8180213bfd71b7ee5b917cbbffc86bd75"} Feb 02 11:00:05 crc kubenswrapper[4782]: I0202 11:00:05.847750 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 02 11:00:05 crc kubenswrapper[4782]: I0202 11:00:05.873801 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 02 11:00:06 crc kubenswrapper[4782]: I0202 11:00:06.374456 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 02 11:00:06 crc kubenswrapper[4782]: I0202 11:00:06.584400 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Feb 02 11:00:06 crc kubenswrapper[4782]: I0202 11:00:06.593284 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 02 11:00:06 crc kubenswrapper[4782]: I0202 11:00:06.593355 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 02 11:00:07 crc kubenswrapper[4782]: I0202 11:00:07.675853 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="6124b52e-8e75-46f7-a40a-a106f60f15be" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.179:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 02 11:00:07 crc kubenswrapper[4782]: I0202 11:00:07.675911 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="6124b52e-8e75-46f7-a40a-a106f60f15be" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.179:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 02 11:00:08 crc kubenswrapper[4782]: I0202 11:00:08.400789 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"65da50be-2bcd-4dad-aaaf-cfa5587e7544","Type":"ContainerStarted","Data":"2e676e174e5b0ad84912798248ee3907ca2ee2696d17c9ce59a0ba613499db01"} Feb 02 11:00:08 crc kubenswrapper[4782]: I0202 11:00:08.402561 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 02 11:00:08 crc kubenswrapper[4782]: I0202 11:00:08.427137 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.276930978 podStartE2EDuration="7.427100422s" podCreationTimestamp="2026-02-02 11:00:01 +0000 UTC" firstStartedPulling="2026-02-02 11:00:02.135566927 +0000 UTC m=+1282.019759633" lastFinishedPulling="2026-02-02 11:00:07.285736361 +0000 UTC m=+1287.169929077" observedRunningTime="2026-02-02 11:00:08.424707623 +0000 UTC m=+1288.308900349" watchObservedRunningTime="2026-02-02 11:00:08.427100422 +0000 UTC m=+1288.311293138" Feb 02 11:00:12 crc kubenswrapper[4782]: I0202 11:00:12.554381 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 02 11:00:12 crc kubenswrapper[4782]: I0202 11:00:12.555133 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 02 
Feb 02 11:00:12 crc kubenswrapper[4782]: I0202 11:00:12.561134 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Feb 02 11:00:13 crc kubenswrapper[4782]: I0202 11:00:13.327660 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Feb 02 11:00:13 crc kubenswrapper[4782]: I0202 11:00:13.449867 4782 generic.go:334] "Generic (PLEG): container finished" podID="042e7186-1c2e-4a12-b06e-4f99a5d78083" containerID="098943f45717b7bac12d9fb61d93eea860022b3432e343c574730a5e13f6b7a9" exitCode=137
Feb 02 11:00:13 crc kubenswrapper[4782]: I0202 11:00:13.449943 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Feb 02 11:00:13 crc kubenswrapper[4782]: I0202 11:00:13.449986 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"042e7186-1c2e-4a12-b06e-4f99a5d78083","Type":"ContainerDied","Data":"098943f45717b7bac12d9fb61d93eea860022b3432e343c574730a5e13f6b7a9"}
Feb 02 11:00:13 crc kubenswrapper[4782]: I0202 11:00:13.450014 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"042e7186-1c2e-4a12-b06e-4f99a5d78083","Type":"ContainerDied","Data":"b7dc98ebc02a669721b0d5719711fa47b0524703f51ae30735f586363eb204e3"}
Feb 02 11:00:13 crc kubenswrapper[4782]: I0202 11:00:13.450030 4782 scope.go:117] "RemoveContainer" containerID="098943f45717b7bac12d9fb61d93eea860022b3432e343c574730a5e13f6b7a9"
Feb 02 11:00:13 crc kubenswrapper[4782]: I0202 11:00:13.466949 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/042e7186-1c2e-4a12-b06e-4f99a5d78083-combined-ca-bundle\") pod \"042e7186-1c2e-4a12-b06e-4f99a5d78083\" (UID: \"042e7186-1c2e-4a12-b06e-4f99a5d78083\") "
Feb 02 11:00:13 crc kubenswrapper[4782]: I0202 11:00:13.467072 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/042e7186-1c2e-4a12-b06e-4f99a5d78083-config-data\") pod \"042e7186-1c2e-4a12-b06e-4f99a5d78083\" (UID: \"042e7186-1c2e-4a12-b06e-4f99a5d78083\") "
Feb 02 11:00:13 crc kubenswrapper[4782]: I0202 11:00:13.467291 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cm6xq\" (UniqueName: \"kubernetes.io/projected/042e7186-1c2e-4a12-b06e-4f99a5d78083-kube-api-access-cm6xq\") pod \"042e7186-1c2e-4a12-b06e-4f99a5d78083\" (UID: \"042e7186-1c2e-4a12-b06e-4f99a5d78083\") "
Feb 02 11:00:13 crc kubenswrapper[4782]: I0202 11:00:13.485977 4782 scope.go:117] "RemoveContainer" containerID="098943f45717b7bac12d9fb61d93eea860022b3432e343c574730a5e13f6b7a9"
Feb 02 11:00:13 crc kubenswrapper[4782]: I0202 11:00:13.486104 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/042e7186-1c2e-4a12-b06e-4f99a5d78083-kube-api-access-cm6xq" (OuterVolumeSpecName: "kube-api-access-cm6xq") pod "042e7186-1c2e-4a12-b06e-4f99a5d78083" (UID: "042e7186-1c2e-4a12-b06e-4f99a5d78083"). InnerVolumeSpecName "kube-api-access-cm6xq". PluginName "kubernetes.io/projected", VolumeGidValue ""
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:00:13 crc kubenswrapper[4782]: E0202 11:00:13.487098 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"098943f45717b7bac12d9fb61d93eea860022b3432e343c574730a5e13f6b7a9\": container with ID starting with 098943f45717b7bac12d9fb61d93eea860022b3432e343c574730a5e13f6b7a9 not found: ID does not exist" containerID="098943f45717b7bac12d9fb61d93eea860022b3432e343c574730a5e13f6b7a9" Feb 02 11:00:13 crc kubenswrapper[4782]: I0202 11:00:13.487143 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"098943f45717b7bac12d9fb61d93eea860022b3432e343c574730a5e13f6b7a9"} err="failed to get container status \"098943f45717b7bac12d9fb61d93eea860022b3432e343c574730a5e13f6b7a9\": rpc error: code = NotFound desc = could not find container \"098943f45717b7bac12d9fb61d93eea860022b3432e343c574730a5e13f6b7a9\": container with ID starting with 098943f45717b7bac12d9fb61d93eea860022b3432e343c574730a5e13f6b7a9 not found: ID does not exist" Feb 02 11:00:13 crc kubenswrapper[4782]: I0202 11:00:13.493673 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/042e7186-1c2e-4a12-b06e-4f99a5d78083-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "042e7186-1c2e-4a12-b06e-4f99a5d78083" (UID: "042e7186-1c2e-4a12-b06e-4f99a5d78083"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:00:13 crc kubenswrapper[4782]: I0202 11:00:13.505250 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/042e7186-1c2e-4a12-b06e-4f99a5d78083-config-data" (OuterVolumeSpecName: "config-data") pod "042e7186-1c2e-4a12-b06e-4f99a5d78083" (UID: "042e7186-1c2e-4a12-b06e-4f99a5d78083"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:00:13 crc kubenswrapper[4782]: I0202 11:00:13.570110 4782 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/042e7186-1c2e-4a12-b06e-4f99a5d78083-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 11:00:13 crc kubenswrapper[4782]: I0202 11:00:13.570144 4782 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/042e7186-1c2e-4a12-b06e-4f99a5d78083-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 11:00:13 crc kubenswrapper[4782]: I0202 11:00:13.570157 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cm6xq\" (UniqueName: \"kubernetes.io/projected/042e7186-1c2e-4a12-b06e-4f99a5d78083-kube-api-access-cm6xq\") on node \"crc\" DevicePath \"\"" Feb 02 11:00:13 crc kubenswrapper[4782]: I0202 11:00:13.794446 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 02 11:00:13 crc kubenswrapper[4782]: I0202 11:00:13.805433 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 02 11:00:13 crc kubenswrapper[4782]: I0202 11:00:13.815167 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 02 11:00:13 crc kubenswrapper[4782]: E0202 11:00:13.815912 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62ac376d-42fd-424f-a1bf-281bd9c9d31f" containerName="collect-profiles" Feb 02 11:00:13 crc kubenswrapper[4782]: I0202 11:00:13.816014 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="62ac376d-42fd-424f-a1bf-281bd9c9d31f" containerName="collect-profiles" Feb 02 11:00:13 crc kubenswrapper[4782]: E0202 11:00:13.816119 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="042e7186-1c2e-4a12-b06e-4f99a5d78083" containerName="nova-cell1-novncproxy-novncproxy" Feb 02 11:00:13 crc kubenswrapper[4782]: I0202 11:00:13.816197 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="042e7186-1c2e-4a12-b06e-4f99a5d78083" containerName="nova-cell1-novncproxy-novncproxy" Feb 02 11:00:13 crc kubenswrapper[4782]: I0202 11:00:13.816457 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="042e7186-1c2e-4a12-b06e-4f99a5d78083" containerName="nova-cell1-novncproxy-novncproxy" Feb 02 11:00:13 crc kubenswrapper[4782]: I0202 11:00:13.816560 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="62ac376d-42fd-424f-a1bf-281bd9c9d31f" containerName="collect-profiles" Feb 02 11:00:13 crc kubenswrapper[4782]: I0202 11:00:13.818848 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 02 11:00:13 crc kubenswrapper[4782]: I0202 11:00:13.822532 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 02 11:00:13 crc kubenswrapper[4782]: I0202 11:00:13.822702 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Feb 02 11:00:13 crc kubenswrapper[4782]: I0202 11:00:13.822917 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Feb 02 11:00:13 crc kubenswrapper[4782]: I0202 11:00:13.829528 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 02 11:00:13 crc kubenswrapper[4782]: I0202 11:00:13.976415 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16441e1e-4564-492e-bdce-40eb2652687a-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"16441e1e-4564-492e-bdce-40eb2652687a\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 11:00:13 crc kubenswrapper[4782]: I0202 11:00:13.976474 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/16441e1e-4564-492e-bdce-40eb2652687a-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"16441e1e-4564-492e-bdce-40eb2652687a\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 11:00:13 crc kubenswrapper[4782]: I0202 11:00:13.976611 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6nlm6\" (UniqueName: \"kubernetes.io/projected/16441e1e-4564-492e-bdce-40eb2652687a-kube-api-access-6nlm6\") pod \"nova-cell1-novncproxy-0\" (UID: \"16441e1e-4564-492e-bdce-40eb2652687a\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 11:00:13 crc kubenswrapper[4782]: I0202 11:00:13.976740 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/16441e1e-4564-492e-bdce-40eb2652687a-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"16441e1e-4564-492e-bdce-40eb2652687a\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 11:00:13 crc kubenswrapper[4782]: I0202 11:00:13.976768 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16441e1e-4564-492e-bdce-40eb2652687a-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"16441e1e-4564-492e-bdce-40eb2652687a\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 11:00:14 crc kubenswrapper[4782]: I0202 11:00:14.077886 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16441e1e-4564-492e-bdce-40eb2652687a-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"16441e1e-4564-492e-bdce-40eb2652687a\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 11:00:14 crc kubenswrapper[4782]: I0202 11:00:14.078160 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/16441e1e-4564-492e-bdce-40eb2652687a-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"16441e1e-4564-492e-bdce-40eb2652687a\") " 
pod="openstack/nova-cell1-novncproxy-0" Feb 02 11:00:14 crc kubenswrapper[4782]: I0202 11:00:14.078215 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6nlm6\" (UniqueName: \"kubernetes.io/projected/16441e1e-4564-492e-bdce-40eb2652687a-kube-api-access-6nlm6\") pod \"nova-cell1-novncproxy-0\" (UID: \"16441e1e-4564-492e-bdce-40eb2652687a\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 11:00:14 crc kubenswrapper[4782]: I0202 11:00:14.078270 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/16441e1e-4564-492e-bdce-40eb2652687a-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"16441e1e-4564-492e-bdce-40eb2652687a\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 11:00:14 crc kubenswrapper[4782]: I0202 11:00:14.078286 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16441e1e-4564-492e-bdce-40eb2652687a-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"16441e1e-4564-492e-bdce-40eb2652687a\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 11:00:14 crc kubenswrapper[4782]: I0202 11:00:14.082250 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16441e1e-4564-492e-bdce-40eb2652687a-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"16441e1e-4564-492e-bdce-40eb2652687a\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 11:00:14 crc kubenswrapper[4782]: I0202 11:00:14.082814 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/16441e1e-4564-492e-bdce-40eb2652687a-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"16441e1e-4564-492e-bdce-40eb2652687a\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 11:00:14 crc kubenswrapper[4782]: I0202 11:00:14.083265 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/16441e1e-4564-492e-bdce-40eb2652687a-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"16441e1e-4564-492e-bdce-40eb2652687a\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 11:00:14 crc kubenswrapper[4782]: I0202 11:00:14.084497 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16441e1e-4564-492e-bdce-40eb2652687a-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"16441e1e-4564-492e-bdce-40eb2652687a\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 11:00:14 crc kubenswrapper[4782]: I0202 11:00:14.100074 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6nlm6\" (UniqueName: \"kubernetes.io/projected/16441e1e-4564-492e-bdce-40eb2652687a-kube-api-access-6nlm6\") pod \"nova-cell1-novncproxy-0\" (UID: \"16441e1e-4564-492e-bdce-40eb2652687a\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 11:00:14 crc kubenswrapper[4782]: I0202 11:00:14.135053 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 02 11:00:14 crc kubenswrapper[4782]: I0202 11:00:14.561816 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 02 11:00:14 crc kubenswrapper[4782]: W0202 11:00:14.568179 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod16441e1e_4564_492e_bdce_40eb2652687a.slice/crio-927fc192f5acc85cfd8e76906d896b81c61a8151404c989820e09f0e56560a47 WatchSource:0}: Error finding container 927fc192f5acc85cfd8e76906d896b81c61a8151404c989820e09f0e56560a47: Status 404 returned error can't find the container with id 927fc192f5acc85cfd8e76906d896b81c61a8151404c989820e09f0e56560a47 Feb 02 11:00:14 crc kubenswrapper[4782]: I0202 11:00:14.832899 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="042e7186-1c2e-4a12-b06e-4f99a5d78083" path="/var/lib/kubelet/pods/042e7186-1c2e-4a12-b06e-4f99a5d78083/volumes" Feb 02 11:00:15 crc kubenswrapper[4782]: I0202 11:00:15.471112 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"16441e1e-4564-492e-bdce-40eb2652687a","Type":"ContainerStarted","Data":"4c086beb1c634c338884796d9471267dcb8c3f5454b940b761a4fd63a110a06b"} Feb 02 11:00:15 crc kubenswrapper[4782]: I0202 11:00:15.471382 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"16441e1e-4564-492e-bdce-40eb2652687a","Type":"ContainerStarted","Data":"927fc192f5acc85cfd8e76906d896b81c61a8151404c989820e09f0e56560a47"} Feb 02 11:00:16 crc kubenswrapper[4782]: I0202 11:00:16.598883 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 02 11:00:16 crc kubenswrapper[4782]: I0202 11:00:16.599572 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 02 11:00:16 crc kubenswrapper[4782]: I0202 11:00:16.600325 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 02 11:00:16 crc kubenswrapper[4782]: I0202 11:00:16.600581 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 02 11:00:16 crc kubenswrapper[4782]: I0202 11:00:16.610334 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 02 11:00:16 crc kubenswrapper[4782]: I0202 11:00:16.614056 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 02 11:00:16 crc kubenswrapper[4782]: I0202 11:00:16.619809 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.619785215 podStartE2EDuration="3.619785215s" podCreationTimestamp="2026-02-02 11:00:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 11:00:15.501965309 +0000 UTC m=+1295.386158025" watchObservedRunningTime="2026-02-02 11:00:16.619785215 +0000 UTC m=+1296.503977931" Feb 02 11:00:16 crc kubenswrapper[4782]: I0202 11:00:16.856581 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5b856c5697-pkbw6"] Feb 02 11:00:16 crc kubenswrapper[4782]: I0202 11:00:16.858768 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b856c5697-pkbw6" Feb 02 11:00:16 crc kubenswrapper[4782]: I0202 11:00:16.929253 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3e601661-fbc5-4fee-b3fb-456f6edc48f4-dns-svc\") pod \"dnsmasq-dns-5b856c5697-pkbw6\" (UID: \"3e601661-fbc5-4fee-b3fb-456f6edc48f4\") " pod="openstack/dnsmasq-dns-5b856c5697-pkbw6" Feb 02 11:00:16 crc kubenswrapper[4782]: I0202 11:00:16.939833 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e601661-fbc5-4fee-b3fb-456f6edc48f4-config\") pod \"dnsmasq-dns-5b856c5697-pkbw6\" (UID: \"3e601661-fbc5-4fee-b3fb-456f6edc48f4\") " pod="openstack/dnsmasq-dns-5b856c5697-pkbw6" Feb 02 11:00:16 crc kubenswrapper[4782]: I0202 11:00:16.939942 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdkwq\" (UniqueName: \"kubernetes.io/projected/3e601661-fbc5-4fee-b3fb-456f6edc48f4-kube-api-access-kdkwq\") pod \"dnsmasq-dns-5b856c5697-pkbw6\" (UID: \"3e601661-fbc5-4fee-b3fb-456f6edc48f4\") " pod="openstack/dnsmasq-dns-5b856c5697-pkbw6" Feb 02 11:00:16 crc kubenswrapper[4782]: I0202 11:00:16.940151 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3e601661-fbc5-4fee-b3fb-456f6edc48f4-ovsdbserver-sb\") pod \"dnsmasq-dns-5b856c5697-pkbw6\" (UID: \"3e601661-fbc5-4fee-b3fb-456f6edc48f4\") " pod="openstack/dnsmasq-dns-5b856c5697-pkbw6" Feb 02 11:00:16 crc kubenswrapper[4782]: I0202 11:00:16.940420 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3e601661-fbc5-4fee-b3fb-456f6edc48f4-ovsdbserver-nb\") pod \"dnsmasq-dns-5b856c5697-pkbw6\" (UID: \"3e601661-fbc5-4fee-b3fb-456f6edc48f4\") " pod="openstack/dnsmasq-dns-5b856c5697-pkbw6" Feb 02 11:00:16 crc kubenswrapper[4782]: I0202 11:00:16.987739 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b856c5697-pkbw6"] Feb 02 11:00:17 crc kubenswrapper[4782]: I0202 11:00:17.042367 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3e601661-fbc5-4fee-b3fb-456f6edc48f4-ovsdbserver-nb\") pod \"dnsmasq-dns-5b856c5697-pkbw6\" (UID: \"3e601661-fbc5-4fee-b3fb-456f6edc48f4\") " pod="openstack/dnsmasq-dns-5b856c5697-pkbw6" Feb 02 11:00:17 crc kubenswrapper[4782]: I0202 11:00:17.042453 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3e601661-fbc5-4fee-b3fb-456f6edc48f4-dns-svc\") pod \"dnsmasq-dns-5b856c5697-pkbw6\" (UID: \"3e601661-fbc5-4fee-b3fb-456f6edc48f4\") " pod="openstack/dnsmasq-dns-5b856c5697-pkbw6" Feb 02 11:00:17 crc kubenswrapper[4782]: I0202 11:00:17.042489 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e601661-fbc5-4fee-b3fb-456f6edc48f4-config\") pod \"dnsmasq-dns-5b856c5697-pkbw6\" (UID: \"3e601661-fbc5-4fee-b3fb-456f6edc48f4\") " pod="openstack/dnsmasq-dns-5b856c5697-pkbw6" Feb 02 11:00:17 crc kubenswrapper[4782]: I0202 11:00:17.042515 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-kdkwq\" (UniqueName: \"kubernetes.io/projected/3e601661-fbc5-4fee-b3fb-456f6edc48f4-kube-api-access-kdkwq\") pod \"dnsmasq-dns-5b856c5697-pkbw6\" (UID: \"3e601661-fbc5-4fee-b3fb-456f6edc48f4\") " pod="openstack/dnsmasq-dns-5b856c5697-pkbw6" Feb 02 11:00:17 crc kubenswrapper[4782]: I0202 11:00:17.042632 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3e601661-fbc5-4fee-b3fb-456f6edc48f4-ovsdbserver-sb\") pod \"dnsmasq-dns-5b856c5697-pkbw6\" (UID: \"3e601661-fbc5-4fee-b3fb-456f6edc48f4\") " pod="openstack/dnsmasq-dns-5b856c5697-pkbw6" Feb 02 11:00:17 crc kubenswrapper[4782]: I0202 11:00:17.043711 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3e601661-fbc5-4fee-b3fb-456f6edc48f4-ovsdbserver-sb\") pod \"dnsmasq-dns-5b856c5697-pkbw6\" (UID: \"3e601661-fbc5-4fee-b3fb-456f6edc48f4\") " pod="openstack/dnsmasq-dns-5b856c5697-pkbw6" Feb 02 11:00:17 crc kubenswrapper[4782]: I0202 11:00:17.044307 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3e601661-fbc5-4fee-b3fb-456f6edc48f4-ovsdbserver-nb\") pod \"dnsmasq-dns-5b856c5697-pkbw6\" (UID: \"3e601661-fbc5-4fee-b3fb-456f6edc48f4\") " pod="openstack/dnsmasq-dns-5b856c5697-pkbw6" Feb 02 11:00:17 crc kubenswrapper[4782]: I0202 11:00:17.044477 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e601661-fbc5-4fee-b3fb-456f6edc48f4-config\") pod \"dnsmasq-dns-5b856c5697-pkbw6\" (UID: \"3e601661-fbc5-4fee-b3fb-456f6edc48f4\") " pod="openstack/dnsmasq-dns-5b856c5697-pkbw6" Feb 02 11:00:17 crc kubenswrapper[4782]: I0202 11:00:17.045002 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3e601661-fbc5-4fee-b3fb-456f6edc48f4-dns-svc\") pod \"dnsmasq-dns-5b856c5697-pkbw6\" (UID: \"3e601661-fbc5-4fee-b3fb-456f6edc48f4\") " pod="openstack/dnsmasq-dns-5b856c5697-pkbw6" Feb 02 11:00:17 crc kubenswrapper[4782]: I0202 11:00:17.064800 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kdkwq\" (UniqueName: \"kubernetes.io/projected/3e601661-fbc5-4fee-b3fb-456f6edc48f4-kube-api-access-kdkwq\") pod \"dnsmasq-dns-5b856c5697-pkbw6\" (UID: \"3e601661-fbc5-4fee-b3fb-456f6edc48f4\") " pod="openstack/dnsmasq-dns-5b856c5697-pkbw6" Feb 02 11:00:17 crc kubenswrapper[4782]: I0202 11:00:17.207764 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b856c5697-pkbw6" Feb 02 11:00:17 crc kubenswrapper[4782]: I0202 11:00:17.809953 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b856c5697-pkbw6"] Feb 02 11:00:18 crc kubenswrapper[4782]: I0202 11:00:18.506166 4782 generic.go:334] "Generic (PLEG): container finished" podID="3e601661-fbc5-4fee-b3fb-456f6edc48f4" containerID="3632ebdb9d373630e077154436a2fd0455ce319004676a7d22bf4fd22d09ccf1" exitCode=0 Feb 02 11:00:18 crc kubenswrapper[4782]: I0202 11:00:18.506334 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b856c5697-pkbw6" event={"ID":"3e601661-fbc5-4fee-b3fb-456f6edc48f4","Type":"ContainerDied","Data":"3632ebdb9d373630e077154436a2fd0455ce319004676a7d22bf4fd22d09ccf1"} Feb 02 11:00:18 crc kubenswrapper[4782]: I0202 11:00:18.507132 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b856c5697-pkbw6" event={"ID":"3e601661-fbc5-4fee-b3fb-456f6edc48f4","Type":"ContainerStarted","Data":"cd107aafb4197d3a9b8dcab601301a7574e5c0bd0413b81852d1995de35f6645"} Feb 02 11:00:19 crc kubenswrapper[4782]: I0202 11:00:19.135191 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 02 11:00:19 crc kubenswrapper[4782]: I0202 11:00:19.498000 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 02 11:00:19 crc kubenswrapper[4782]: I0202 11:00:19.516694 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b856c5697-pkbw6" event={"ID":"3e601661-fbc5-4fee-b3fb-456f6edc48f4","Type":"ContainerStarted","Data":"9904d146dfcfd7dad397e5d6886fa15b96c9becbf2f18f528f0d3f3a41ce062d"} Feb 02 11:00:19 crc kubenswrapper[4782]: I0202 11:00:19.516825 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="6124b52e-8e75-46f7-a40a-a106f60f15be" containerName="nova-api-log" containerID="cri-o://f8ab1e55668f923f1cff0b3a98878e6d29e7326f5926b3f0d2390bc72c0becc4" gracePeriod=30 Feb 02 11:00:19 crc kubenswrapper[4782]: I0202 11:00:19.516867 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="6124b52e-8e75-46f7-a40a-a106f60f15be" containerName="nova-api-api" containerID="cri-o://586fcc49265e82e11565b807b65605fae854918b22bcf5b4c686b684a29a3be5" gracePeriod=30 Feb 02 11:00:19 crc kubenswrapper[4782]: I0202 11:00:19.545385 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5b856c5697-pkbw6" podStartSLOduration=3.545362877 podStartE2EDuration="3.545362877s" podCreationTimestamp="2026-02-02 11:00:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 11:00:19.539004755 +0000 UTC m=+1299.423197471" watchObservedRunningTime="2026-02-02 11:00:19.545362877 +0000 UTC m=+1299.429555603" Feb 02 11:00:19 crc kubenswrapper[4782]: I0202 11:00:19.843205 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 02 11:00:19 crc kubenswrapper[4782]: I0202 11:00:19.843549 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="65da50be-2bcd-4dad-aaaf-cfa5587e7544" containerName="ceilometer-central-agent" containerID="cri-o://0868d51d1d5c37c28f3dee2c7c1c082357c57bc91bc171d8995d79cf3bfa838d" gracePeriod=30 Feb 02 11:00:19 crc kubenswrapper[4782]: I0202 
Feb 02 11:00:19 crc kubenswrapper[4782]: I0202 11:00:19.843598 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="65da50be-2bcd-4dad-aaaf-cfa5587e7544" containerName="sg-core" containerID="cri-o://871987b493e04ee895c4b3fcea4ac6f8180213bfd71b7ee5b917cbbffc86bd75" gracePeriod=30
Feb 02 11:00:19 crc kubenswrapper[4782]: I0202 11:00:19.843704 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="65da50be-2bcd-4dad-aaaf-cfa5587e7544" containerName="proxy-httpd" containerID="cri-o://2e676e174e5b0ad84912798248ee3907ca2ee2696d17c9ce59a0ba613499db01" gracePeriod=30
Feb 02 11:00:19 crc kubenswrapper[4782]: I0202 11:00:19.857745 4782 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="65da50be-2bcd-4dad-aaaf-cfa5587e7544" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.181:3000/\": EOF"
Feb 02 11:00:20 crc kubenswrapper[4782]: I0202 11:00:20.530218 4782 generic.go:334] "Generic (PLEG): container finished" podID="65da50be-2bcd-4dad-aaaf-cfa5587e7544" containerID="2e676e174e5b0ad84912798248ee3907ca2ee2696d17c9ce59a0ba613499db01" exitCode=0
Feb 02 11:00:20 crc kubenswrapper[4782]: I0202 11:00:20.530612 4782 generic.go:334] "Generic (PLEG): container finished" podID="65da50be-2bcd-4dad-aaaf-cfa5587e7544" containerID="871987b493e04ee895c4b3fcea4ac6f8180213bfd71b7ee5b917cbbffc86bd75" exitCode=2
Feb 02 11:00:20 crc kubenswrapper[4782]: I0202 11:00:20.530625 4782 generic.go:334] "Generic (PLEG): container finished" podID="65da50be-2bcd-4dad-aaaf-cfa5587e7544" containerID="0868d51d1d5c37c28f3dee2c7c1c082357c57bc91bc171d8995d79cf3bfa838d" exitCode=0
Feb 02 11:00:20 crc kubenswrapper[4782]: I0202 11:00:20.530688 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"65da50be-2bcd-4dad-aaaf-cfa5587e7544","Type":"ContainerDied","Data":"2e676e174e5b0ad84912798248ee3907ca2ee2696d17c9ce59a0ba613499db01"}
Feb 02 11:00:20 crc kubenswrapper[4782]: I0202 11:00:20.530718 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"65da50be-2bcd-4dad-aaaf-cfa5587e7544","Type":"ContainerDied","Data":"871987b493e04ee895c4b3fcea4ac6f8180213bfd71b7ee5b917cbbffc86bd75"}
Feb 02 11:00:20 crc kubenswrapper[4782]: I0202 11:00:20.530734 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"65da50be-2bcd-4dad-aaaf-cfa5587e7544","Type":"ContainerDied","Data":"0868d51d1d5c37c28f3dee2c7c1c082357c57bc91bc171d8995d79cf3bfa838d"}
Feb 02 11:00:20 crc kubenswrapper[4782]: I0202 11:00:20.533743 4782 generic.go:334] "Generic (PLEG): container finished" podID="6124b52e-8e75-46f7-a40a-a106f60f15be" containerID="f8ab1e55668f923f1cff0b3a98878e6d29e7326f5926b3f0d2390bc72c0becc4" exitCode=143
Feb 02 11:00:20 crc kubenswrapper[4782]: I0202 11:00:20.533787 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6124b52e-8e75-46f7-a40a-a106f60f15be","Type":"ContainerDied","Data":"f8ab1e55668f923f1cff0b3a98878e6d29e7326f5926b3f0d2390bc72c0becc4"}
Feb 02 11:00:20 crc kubenswrapper[4782]: I0202 11:00:20.534030 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5b856c5697-pkbw6"
Feb 02 11:00:20 crc kubenswrapper[4782]: I0202 11:00:20.931223 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 02 11:00:21 crc kubenswrapper[4782]: I0202 11:00:21.013227 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65da50be-2bcd-4dad-aaaf-cfa5587e7544-config-data\") pod \"65da50be-2bcd-4dad-aaaf-cfa5587e7544\" (UID: \"65da50be-2bcd-4dad-aaaf-cfa5587e7544\") "
Feb 02 11:00:21 crc kubenswrapper[4782]: I0202 11:00:21.013273 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/65da50be-2bcd-4dad-aaaf-cfa5587e7544-run-httpd\") pod \"65da50be-2bcd-4dad-aaaf-cfa5587e7544\" (UID: \"65da50be-2bcd-4dad-aaaf-cfa5587e7544\") "
Feb 02 11:00:21 crc kubenswrapper[4782]: I0202 11:00:21.013314 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pgncs\" (UniqueName: \"kubernetes.io/projected/65da50be-2bcd-4dad-aaaf-cfa5587e7544-kube-api-access-pgncs\") pod \"65da50be-2bcd-4dad-aaaf-cfa5587e7544\" (UID: \"65da50be-2bcd-4dad-aaaf-cfa5587e7544\") "
Feb 02 11:00:21 crc kubenswrapper[4782]: I0202 11:00:21.013357 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/65da50be-2bcd-4dad-aaaf-cfa5587e7544-sg-core-conf-yaml\") pod \"65da50be-2bcd-4dad-aaaf-cfa5587e7544\" (UID: \"65da50be-2bcd-4dad-aaaf-cfa5587e7544\") "
Feb 02 11:00:21 crc kubenswrapper[4782]: I0202 11:00:21.013391 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/65da50be-2bcd-4dad-aaaf-cfa5587e7544-scripts\") pod \"65da50be-2bcd-4dad-aaaf-cfa5587e7544\" (UID: \"65da50be-2bcd-4dad-aaaf-cfa5587e7544\") "
Feb 02 11:00:21 crc kubenswrapper[4782]: I0202 11:00:21.013469 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65da50be-2bcd-4dad-aaaf-cfa5587e7544-combined-ca-bundle\") pod \"65da50be-2bcd-4dad-aaaf-cfa5587e7544\" (UID: \"65da50be-2bcd-4dad-aaaf-cfa5587e7544\") "
Feb 02 11:00:21 crc kubenswrapper[4782]: I0202 11:00:21.013504 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/65da50be-2bcd-4dad-aaaf-cfa5587e7544-log-httpd\") pod \"65da50be-2bcd-4dad-aaaf-cfa5587e7544\" (UID: \"65da50be-2bcd-4dad-aaaf-cfa5587e7544\") "
Feb 02 11:00:21 crc kubenswrapper[4782]: I0202 11:00:21.013659 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/65da50be-2bcd-4dad-aaaf-cfa5587e7544-ceilometer-tls-certs\") pod \"65da50be-2bcd-4dad-aaaf-cfa5587e7544\" (UID: \"65da50be-2bcd-4dad-aaaf-cfa5587e7544\") "
Feb 02 11:00:21 crc kubenswrapper[4782]: I0202 11:00:21.013657 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/65da50be-2bcd-4dad-aaaf-cfa5587e7544-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "65da50be-2bcd-4dad-aaaf-cfa5587e7544" (UID: "65da50be-2bcd-4dad-aaaf-cfa5587e7544"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:00:21 crc kubenswrapper[4782]: I0202 11:00:21.017546 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/65da50be-2bcd-4dad-aaaf-cfa5587e7544-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "65da50be-2bcd-4dad-aaaf-cfa5587e7544" (UID: "65da50be-2bcd-4dad-aaaf-cfa5587e7544"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:00:21 crc kubenswrapper[4782]: I0202 11:00:21.031877 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65da50be-2bcd-4dad-aaaf-cfa5587e7544-scripts" (OuterVolumeSpecName: "scripts") pod "65da50be-2bcd-4dad-aaaf-cfa5587e7544" (UID: "65da50be-2bcd-4dad-aaaf-cfa5587e7544"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:00:21 crc kubenswrapper[4782]: I0202 11:00:21.037175 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65da50be-2bcd-4dad-aaaf-cfa5587e7544-kube-api-access-pgncs" (OuterVolumeSpecName: "kube-api-access-pgncs") pod "65da50be-2bcd-4dad-aaaf-cfa5587e7544" (UID: "65da50be-2bcd-4dad-aaaf-cfa5587e7544"). InnerVolumeSpecName "kube-api-access-pgncs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:00:21 crc kubenswrapper[4782]: I0202 11:00:21.102762 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65da50be-2bcd-4dad-aaaf-cfa5587e7544-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "65da50be-2bcd-4dad-aaaf-cfa5587e7544" (UID: "65da50be-2bcd-4dad-aaaf-cfa5587e7544"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:00:21 crc kubenswrapper[4782]: I0202 11:00:21.116100 4782 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/65da50be-2bcd-4dad-aaaf-cfa5587e7544-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 02 11:00:21 crc kubenswrapper[4782]: I0202 11:00:21.116138 4782 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/65da50be-2bcd-4dad-aaaf-cfa5587e7544-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 11:00:21 crc kubenswrapper[4782]: I0202 11:00:21.116150 4782 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/65da50be-2bcd-4dad-aaaf-cfa5587e7544-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 02 11:00:21 crc kubenswrapper[4782]: I0202 11:00:21.116160 4782 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/65da50be-2bcd-4dad-aaaf-cfa5587e7544-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 02 11:00:21 crc kubenswrapper[4782]: I0202 11:00:21.116171 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pgncs\" (UniqueName: \"kubernetes.io/projected/65da50be-2bcd-4dad-aaaf-cfa5587e7544-kube-api-access-pgncs\") on node \"crc\" DevicePath \"\"" Feb 02 11:00:21 crc kubenswrapper[4782]: I0202 11:00:21.131092 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65da50be-2bcd-4dad-aaaf-cfa5587e7544-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "65da50be-2bcd-4dad-aaaf-cfa5587e7544" (UID: "65da50be-2bcd-4dad-aaaf-cfa5587e7544"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:00:21 crc kubenswrapper[4782]: I0202 11:00:21.195878 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65da50be-2bcd-4dad-aaaf-cfa5587e7544-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "65da50be-2bcd-4dad-aaaf-cfa5587e7544" (UID: "65da50be-2bcd-4dad-aaaf-cfa5587e7544"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:00:21 crc kubenswrapper[4782]: I0202 11:00:21.214229 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65da50be-2bcd-4dad-aaaf-cfa5587e7544-config-data" (OuterVolumeSpecName: "config-data") pod "65da50be-2bcd-4dad-aaaf-cfa5587e7544" (UID: "65da50be-2bcd-4dad-aaaf-cfa5587e7544"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:00:21 crc kubenswrapper[4782]: I0202 11:00:21.217089 4782 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/65da50be-2bcd-4dad-aaaf-cfa5587e7544-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 11:00:21 crc kubenswrapper[4782]: I0202 11:00:21.217115 4782 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65da50be-2bcd-4dad-aaaf-cfa5587e7544-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 11:00:21 crc kubenswrapper[4782]: I0202 11:00:21.217125 4782 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65da50be-2bcd-4dad-aaaf-cfa5587e7544-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 11:00:21 crc kubenswrapper[4782]: I0202 11:00:21.548860 4782 generic.go:334] "Generic (PLEG): container finished" podID="65da50be-2bcd-4dad-aaaf-cfa5587e7544" containerID="545a19bc608ebe370b86df0b475f1b5e0cad10d33d9bfb30203572056ad9298e" exitCode=0 Feb 02 11:00:21 crc kubenswrapper[4782]: I0202 11:00:21.549888 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"65da50be-2bcd-4dad-aaaf-cfa5587e7544","Type":"ContainerDied","Data":"545a19bc608ebe370b86df0b475f1b5e0cad10d33d9bfb30203572056ad9298e"} Feb 02 11:00:21 crc kubenswrapper[4782]: I0202 11:00:21.549915 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"65da50be-2bcd-4dad-aaaf-cfa5587e7544","Type":"ContainerDied","Data":"1a14e2a1fe0a962870038a11a248f614b06beeb3efe351e24e012f8958bd54c5"} Feb 02 11:00:21 crc kubenswrapper[4782]: I0202 11:00:21.549932 4782 scope.go:117] "RemoveContainer" containerID="2e676e174e5b0ad84912798248ee3907ca2ee2696d17c9ce59a0ba613499db01" Feb 02 11:00:21 crc kubenswrapper[4782]: I0202 11:00:21.549948 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 02 11:00:21 crc kubenswrapper[4782]: I0202 11:00:21.575332 4782 scope.go:117] "RemoveContainer" containerID="871987b493e04ee895c4b3fcea4ac6f8180213bfd71b7ee5b917cbbffc86bd75" Feb 02 11:00:21 crc kubenswrapper[4782]: I0202 11:00:21.593745 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 02 11:00:21 crc kubenswrapper[4782]: I0202 11:00:21.615325 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 02 11:00:21 crc kubenswrapper[4782]: I0202 11:00:21.621911 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 02 11:00:21 crc kubenswrapper[4782]: E0202 11:00:21.622305 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65da50be-2bcd-4dad-aaaf-cfa5587e7544" containerName="sg-core" Feb 02 11:00:21 crc kubenswrapper[4782]: I0202 11:00:21.622324 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="65da50be-2bcd-4dad-aaaf-cfa5587e7544" containerName="sg-core" Feb 02 11:00:21 crc kubenswrapper[4782]: E0202 11:00:21.622335 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65da50be-2bcd-4dad-aaaf-cfa5587e7544" containerName="ceilometer-central-agent" Feb 02 11:00:21 crc kubenswrapper[4782]: I0202 11:00:21.622348 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="65da50be-2bcd-4dad-aaaf-cfa5587e7544" containerName="ceilometer-central-agent" Feb 02 11:00:21 crc kubenswrapper[4782]: E0202 11:00:21.622378 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65da50be-2bcd-4dad-aaaf-cfa5587e7544" containerName="ceilometer-notification-agent" Feb 02 11:00:21 crc kubenswrapper[4782]: I0202 11:00:21.622387 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="65da50be-2bcd-4dad-aaaf-cfa5587e7544" containerName="ceilometer-notification-agent" Feb 02 11:00:21 crc kubenswrapper[4782]: E0202 11:00:21.622403 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65da50be-2bcd-4dad-aaaf-cfa5587e7544" containerName="proxy-httpd" Feb 02 11:00:21 crc kubenswrapper[4782]: I0202 11:00:21.622412 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="65da50be-2bcd-4dad-aaaf-cfa5587e7544" containerName="proxy-httpd" Feb 02 11:00:21 crc kubenswrapper[4782]: I0202 11:00:21.622605 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="65da50be-2bcd-4dad-aaaf-cfa5587e7544" containerName="ceilometer-central-agent" Feb 02 11:00:21 crc kubenswrapper[4782]: I0202 11:00:21.622623 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="65da50be-2bcd-4dad-aaaf-cfa5587e7544" containerName="sg-core" Feb 02 11:00:21 crc kubenswrapper[4782]: I0202 11:00:21.622634 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="65da50be-2bcd-4dad-aaaf-cfa5587e7544" containerName="proxy-httpd" Feb 02 11:00:21 crc kubenswrapper[4782]: I0202 11:00:21.622666 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="65da50be-2bcd-4dad-aaaf-cfa5587e7544" containerName="ceilometer-notification-agent" Feb 02 11:00:21 crc kubenswrapper[4782]: I0202 11:00:21.624450 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 02 11:00:21 crc kubenswrapper[4782]: I0202 11:00:21.630575 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 02 11:00:21 crc kubenswrapper[4782]: I0202 11:00:21.632671 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 02 11:00:21 crc kubenswrapper[4782]: I0202 11:00:21.632891 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 02 11:00:21 crc kubenswrapper[4782]: I0202 11:00:21.638494 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 02 11:00:21 crc kubenswrapper[4782]: I0202 11:00:21.639077 4782 scope.go:117] "RemoveContainer" containerID="545a19bc608ebe370b86df0b475f1b5e0cad10d33d9bfb30203572056ad9298e" Feb 02 11:00:21 crc kubenswrapper[4782]: I0202 11:00:21.674129 4782 scope.go:117] "RemoveContainer" containerID="0868d51d1d5c37c28f3dee2c7c1c082357c57bc91bc171d8995d79cf3bfa838d" Feb 02 11:00:21 crc kubenswrapper[4782]: I0202 11:00:21.698505 4782 scope.go:117] "RemoveContainer" containerID="2e676e174e5b0ad84912798248ee3907ca2ee2696d17c9ce59a0ba613499db01" Feb 02 11:00:21 crc kubenswrapper[4782]: E0202 11:00:21.699046 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2e676e174e5b0ad84912798248ee3907ca2ee2696d17c9ce59a0ba613499db01\": container with ID starting with 2e676e174e5b0ad84912798248ee3907ca2ee2696d17c9ce59a0ba613499db01 not found: ID does not exist" containerID="2e676e174e5b0ad84912798248ee3907ca2ee2696d17c9ce59a0ba613499db01" Feb 02 11:00:21 crc kubenswrapper[4782]: I0202 11:00:21.699085 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e676e174e5b0ad84912798248ee3907ca2ee2696d17c9ce59a0ba613499db01"} err="failed to get container status \"2e676e174e5b0ad84912798248ee3907ca2ee2696d17c9ce59a0ba613499db01\": rpc error: code = NotFound desc = could not find container \"2e676e174e5b0ad84912798248ee3907ca2ee2696d17c9ce59a0ba613499db01\": container with ID starting with 2e676e174e5b0ad84912798248ee3907ca2ee2696d17c9ce59a0ba613499db01 not found: ID does not exist" Feb 02 11:00:21 crc kubenswrapper[4782]: I0202 11:00:21.699112 4782 scope.go:117] "RemoveContainer" containerID="871987b493e04ee895c4b3fcea4ac6f8180213bfd71b7ee5b917cbbffc86bd75" Feb 02 11:00:21 crc kubenswrapper[4782]: E0202 11:00:21.699832 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"871987b493e04ee895c4b3fcea4ac6f8180213bfd71b7ee5b917cbbffc86bd75\": container with ID starting with 871987b493e04ee895c4b3fcea4ac6f8180213bfd71b7ee5b917cbbffc86bd75 not found: ID does not exist" containerID="871987b493e04ee895c4b3fcea4ac6f8180213bfd71b7ee5b917cbbffc86bd75" Feb 02 11:00:21 crc kubenswrapper[4782]: I0202 11:00:21.699862 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"871987b493e04ee895c4b3fcea4ac6f8180213bfd71b7ee5b917cbbffc86bd75"} err="failed to get container status \"871987b493e04ee895c4b3fcea4ac6f8180213bfd71b7ee5b917cbbffc86bd75\": rpc error: code = NotFound desc = could not find container \"871987b493e04ee895c4b3fcea4ac6f8180213bfd71b7ee5b917cbbffc86bd75\": container with ID starting with 871987b493e04ee895c4b3fcea4ac6f8180213bfd71b7ee5b917cbbffc86bd75 not found: ID does not exist" Feb 02 11:00:21 
crc kubenswrapper[4782]: I0202 11:00:21.699881 4782 scope.go:117] "RemoveContainer" containerID="545a19bc608ebe370b86df0b475f1b5e0cad10d33d9bfb30203572056ad9298e" Feb 02 11:00:21 crc kubenswrapper[4782]: E0202 11:00:21.700292 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"545a19bc608ebe370b86df0b475f1b5e0cad10d33d9bfb30203572056ad9298e\": container with ID starting with 545a19bc608ebe370b86df0b475f1b5e0cad10d33d9bfb30203572056ad9298e not found: ID does not exist" containerID="545a19bc608ebe370b86df0b475f1b5e0cad10d33d9bfb30203572056ad9298e" Feb 02 11:00:21 crc kubenswrapper[4782]: I0202 11:00:21.700319 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"545a19bc608ebe370b86df0b475f1b5e0cad10d33d9bfb30203572056ad9298e"} err="failed to get container status \"545a19bc608ebe370b86df0b475f1b5e0cad10d33d9bfb30203572056ad9298e\": rpc error: code = NotFound desc = could not find container \"545a19bc608ebe370b86df0b475f1b5e0cad10d33d9bfb30203572056ad9298e\": container with ID starting with 545a19bc608ebe370b86df0b475f1b5e0cad10d33d9bfb30203572056ad9298e not found: ID does not exist" Feb 02 11:00:21 crc kubenswrapper[4782]: I0202 11:00:21.700332 4782 scope.go:117] "RemoveContainer" containerID="0868d51d1d5c37c28f3dee2c7c1c082357c57bc91bc171d8995d79cf3bfa838d" Feb 02 11:00:21 crc kubenswrapper[4782]: E0202 11:00:21.700819 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0868d51d1d5c37c28f3dee2c7c1c082357c57bc91bc171d8995d79cf3bfa838d\": container with ID starting with 0868d51d1d5c37c28f3dee2c7c1c082357c57bc91bc171d8995d79cf3bfa838d not found: ID does not exist" containerID="0868d51d1d5c37c28f3dee2c7c1c082357c57bc91bc171d8995d79cf3bfa838d" Feb 02 11:00:21 crc kubenswrapper[4782]: I0202 11:00:21.700847 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0868d51d1d5c37c28f3dee2c7c1c082357c57bc91bc171d8995d79cf3bfa838d"} err="failed to get container status \"0868d51d1d5c37c28f3dee2c7c1c082357c57bc91bc171d8995d79cf3bfa838d\": rpc error: code = NotFound desc = could not find container \"0868d51d1d5c37c28f3dee2c7c1c082357c57bc91bc171d8995d79cf3bfa838d\": container with ID starting with 0868d51d1d5c37c28f3dee2c7c1c082357c57bc91bc171d8995d79cf3bfa838d not found: ID does not exist" Feb 02 11:00:21 crc kubenswrapper[4782]: I0202 11:00:21.729696 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/497f3642-7f3b-417c-aa52-2ed3ddbcac75-config-data\") pod \"ceilometer-0\" (UID: \"497f3642-7f3b-417c-aa52-2ed3ddbcac75\") " pod="openstack/ceilometer-0" Feb 02 11:00:21 crc kubenswrapper[4782]: I0202 11:00:21.729941 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/497f3642-7f3b-417c-aa52-2ed3ddbcac75-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"497f3642-7f3b-417c-aa52-2ed3ddbcac75\") " pod="openstack/ceilometer-0" Feb 02 11:00:21 crc kubenswrapper[4782]: I0202 11:00:21.729974 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/497f3642-7f3b-417c-aa52-2ed3ddbcac75-run-httpd\") pod \"ceilometer-0\" (UID: \"497f3642-7f3b-417c-aa52-2ed3ddbcac75\") " 
pod="openstack/ceilometer-0" Feb 02 11:00:21 crc kubenswrapper[4782]: I0202 11:00:21.730010 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/497f3642-7f3b-417c-aa52-2ed3ddbcac75-log-httpd\") pod \"ceilometer-0\" (UID: \"497f3642-7f3b-417c-aa52-2ed3ddbcac75\") " pod="openstack/ceilometer-0" Feb 02 11:00:21 crc kubenswrapper[4782]: I0202 11:00:21.730031 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/497f3642-7f3b-417c-aa52-2ed3ddbcac75-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"497f3642-7f3b-417c-aa52-2ed3ddbcac75\") " pod="openstack/ceilometer-0" Feb 02 11:00:21 crc kubenswrapper[4782]: I0202 11:00:21.730127 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/497f3642-7f3b-417c-aa52-2ed3ddbcac75-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"497f3642-7f3b-417c-aa52-2ed3ddbcac75\") " pod="openstack/ceilometer-0" Feb 02 11:00:21 crc kubenswrapper[4782]: I0202 11:00:21.730315 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/497f3642-7f3b-417c-aa52-2ed3ddbcac75-scripts\") pod \"ceilometer-0\" (UID: \"497f3642-7f3b-417c-aa52-2ed3ddbcac75\") " pod="openstack/ceilometer-0" Feb 02 11:00:21 crc kubenswrapper[4782]: I0202 11:00:21.730439 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8ztm\" (UniqueName: \"kubernetes.io/projected/497f3642-7f3b-417c-aa52-2ed3ddbcac75-kube-api-access-d8ztm\") pod \"ceilometer-0\" (UID: \"497f3642-7f3b-417c-aa52-2ed3ddbcac75\") " pod="openstack/ceilometer-0" Feb 02 11:00:21 crc kubenswrapper[4782]: I0202 11:00:21.831131 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/497f3642-7f3b-417c-aa52-2ed3ddbcac75-config-data\") pod \"ceilometer-0\" (UID: \"497f3642-7f3b-417c-aa52-2ed3ddbcac75\") " pod="openstack/ceilometer-0" Feb 02 11:00:21 crc kubenswrapper[4782]: I0202 11:00:21.831191 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/497f3642-7f3b-417c-aa52-2ed3ddbcac75-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"497f3642-7f3b-417c-aa52-2ed3ddbcac75\") " pod="openstack/ceilometer-0" Feb 02 11:00:21 crc kubenswrapper[4782]: I0202 11:00:21.831221 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/497f3642-7f3b-417c-aa52-2ed3ddbcac75-run-httpd\") pod \"ceilometer-0\" (UID: \"497f3642-7f3b-417c-aa52-2ed3ddbcac75\") " pod="openstack/ceilometer-0" Feb 02 11:00:21 crc kubenswrapper[4782]: I0202 11:00:21.831282 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/497f3642-7f3b-417c-aa52-2ed3ddbcac75-log-httpd\") pod \"ceilometer-0\" (UID: \"497f3642-7f3b-417c-aa52-2ed3ddbcac75\") " pod="openstack/ceilometer-0" Feb 02 11:00:21 crc kubenswrapper[4782]: I0202 11:00:21.831318 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/497f3642-7f3b-417c-aa52-2ed3ddbcac75-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"497f3642-7f3b-417c-aa52-2ed3ddbcac75\") " pod="openstack/ceilometer-0" Feb 02 11:00:21 crc kubenswrapper[4782]: I0202 11:00:21.831353 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/497f3642-7f3b-417c-aa52-2ed3ddbcac75-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"497f3642-7f3b-417c-aa52-2ed3ddbcac75\") " pod="openstack/ceilometer-0" Feb 02 11:00:21 crc kubenswrapper[4782]: I0202 11:00:21.831380 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/497f3642-7f3b-417c-aa52-2ed3ddbcac75-scripts\") pod \"ceilometer-0\" (UID: \"497f3642-7f3b-417c-aa52-2ed3ddbcac75\") " pod="openstack/ceilometer-0" Feb 02 11:00:21 crc kubenswrapper[4782]: I0202 11:00:21.831415 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d8ztm\" (UniqueName: \"kubernetes.io/projected/497f3642-7f3b-417c-aa52-2ed3ddbcac75-kube-api-access-d8ztm\") pod \"ceilometer-0\" (UID: \"497f3642-7f3b-417c-aa52-2ed3ddbcac75\") " pod="openstack/ceilometer-0" Feb 02 11:00:21 crc kubenswrapper[4782]: I0202 11:00:21.832381 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/497f3642-7f3b-417c-aa52-2ed3ddbcac75-log-httpd\") pod \"ceilometer-0\" (UID: \"497f3642-7f3b-417c-aa52-2ed3ddbcac75\") " pod="openstack/ceilometer-0" Feb 02 11:00:21 crc kubenswrapper[4782]: I0202 11:00:21.832619 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/497f3642-7f3b-417c-aa52-2ed3ddbcac75-run-httpd\") pod \"ceilometer-0\" (UID: \"497f3642-7f3b-417c-aa52-2ed3ddbcac75\") " pod="openstack/ceilometer-0" Feb 02 11:00:21 crc kubenswrapper[4782]: I0202 11:00:21.836429 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/497f3642-7f3b-417c-aa52-2ed3ddbcac75-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"497f3642-7f3b-417c-aa52-2ed3ddbcac75\") " pod="openstack/ceilometer-0" Feb 02 11:00:21 crc kubenswrapper[4782]: I0202 11:00:21.837138 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/497f3642-7f3b-417c-aa52-2ed3ddbcac75-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"497f3642-7f3b-417c-aa52-2ed3ddbcac75\") " pod="openstack/ceilometer-0" Feb 02 11:00:21 crc kubenswrapper[4782]: I0202 11:00:21.837225 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/497f3642-7f3b-417c-aa52-2ed3ddbcac75-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"497f3642-7f3b-417c-aa52-2ed3ddbcac75\") " pod="openstack/ceilometer-0" Feb 02 11:00:21 crc kubenswrapper[4782]: I0202 11:00:21.838831 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/497f3642-7f3b-417c-aa52-2ed3ddbcac75-config-data\") pod \"ceilometer-0\" (UID: \"497f3642-7f3b-417c-aa52-2ed3ddbcac75\") " pod="openstack/ceilometer-0" Feb 02 11:00:21 crc kubenswrapper[4782]: I0202 11:00:21.849766 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/497f3642-7f3b-417c-aa52-2ed3ddbcac75-scripts\") pod \"ceilometer-0\" (UID: \"497f3642-7f3b-417c-aa52-2ed3ddbcac75\") " pod="openstack/ceilometer-0" Feb 02 11:00:21 crc kubenswrapper[4782]: I0202 11:00:21.861291 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d8ztm\" (UniqueName: \"kubernetes.io/projected/497f3642-7f3b-417c-aa52-2ed3ddbcac75-kube-api-access-d8ztm\") pod \"ceilometer-0\" (UID: \"497f3642-7f3b-417c-aa52-2ed3ddbcac75\") " pod="openstack/ceilometer-0" Feb 02 11:00:21 crc kubenswrapper[4782]: I0202 11:00:21.960100 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 11:00:22 crc kubenswrapper[4782]: I0202 11:00:22.426720 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 02 11:00:22 crc kubenswrapper[4782]: W0202 11:00:22.435916 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod497f3642_7f3b_417c_aa52_2ed3ddbcac75.slice/crio-64999423f31f3794e6c490461b475ade46dbc3e6082014de1c1140111ca7c591 WatchSource:0}: Error finding container 64999423f31f3794e6c490461b475ade46dbc3e6082014de1c1140111ca7c591: Status 404 returned error can't find the container with id 64999423f31f3794e6c490461b475ade46dbc3e6082014de1c1140111ca7c591 Feb 02 11:00:22 crc kubenswrapper[4782]: I0202 11:00:22.444213 4782 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 02 11:00:22 crc kubenswrapper[4782]: I0202 11:00:22.561200 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"497f3642-7f3b-417c-aa52-2ed3ddbcac75","Type":"ContainerStarted","Data":"64999423f31f3794e6c490461b475ade46dbc3e6082014de1c1140111ca7c591"} Feb 02 11:00:22 crc kubenswrapper[4782]: I0202 11:00:22.831523 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="65da50be-2bcd-4dad-aaaf-cfa5587e7544" path="/var/lib/kubelet/pods/65da50be-2bcd-4dad-aaaf-cfa5587e7544/volumes" Feb 02 11:00:23 crc kubenswrapper[4782]: I0202 11:00:23.113439 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 02 11:00:23 crc kubenswrapper[4782]: I0202 11:00:23.270500 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6124b52e-8e75-46f7-a40a-a106f60f15be-logs\") pod \"6124b52e-8e75-46f7-a40a-a106f60f15be\" (UID: \"6124b52e-8e75-46f7-a40a-a106f60f15be\") " Feb 02 11:00:23 crc kubenswrapper[4782]: I0202 11:00:23.270588 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ttgnd\" (UniqueName: \"kubernetes.io/projected/6124b52e-8e75-46f7-a40a-a106f60f15be-kube-api-access-ttgnd\") pod \"6124b52e-8e75-46f7-a40a-a106f60f15be\" (UID: \"6124b52e-8e75-46f7-a40a-a106f60f15be\") " Feb 02 11:00:23 crc kubenswrapper[4782]: I0202 11:00:23.270758 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6124b52e-8e75-46f7-a40a-a106f60f15be-combined-ca-bundle\") pod \"6124b52e-8e75-46f7-a40a-a106f60f15be\" (UID: \"6124b52e-8e75-46f7-a40a-a106f60f15be\") " Feb 02 11:00:23 crc kubenswrapper[4782]: I0202 11:00:23.270812 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6124b52e-8e75-46f7-a40a-a106f60f15be-config-data\") pod \"6124b52e-8e75-46f7-a40a-a106f60f15be\" (UID: \"6124b52e-8e75-46f7-a40a-a106f60f15be\") " Feb 02 11:00:23 crc kubenswrapper[4782]: I0202 11:00:23.271169 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6124b52e-8e75-46f7-a40a-a106f60f15be-logs" (OuterVolumeSpecName: "logs") pod "6124b52e-8e75-46f7-a40a-a106f60f15be" (UID: "6124b52e-8e75-46f7-a40a-a106f60f15be"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:00:23 crc kubenswrapper[4782]: I0202 11:00:23.271578 4782 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6124b52e-8e75-46f7-a40a-a106f60f15be-logs\") on node \"crc\" DevicePath \"\"" Feb 02 11:00:23 crc kubenswrapper[4782]: I0202 11:00:23.285920 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6124b52e-8e75-46f7-a40a-a106f60f15be-kube-api-access-ttgnd" (OuterVolumeSpecName: "kube-api-access-ttgnd") pod "6124b52e-8e75-46f7-a40a-a106f60f15be" (UID: "6124b52e-8e75-46f7-a40a-a106f60f15be"). InnerVolumeSpecName "kube-api-access-ttgnd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:00:23 crc kubenswrapper[4782]: I0202 11:00:23.309751 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6124b52e-8e75-46f7-a40a-a106f60f15be-config-data" (OuterVolumeSpecName: "config-data") pod "6124b52e-8e75-46f7-a40a-a106f60f15be" (UID: "6124b52e-8e75-46f7-a40a-a106f60f15be"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:00:23 crc kubenswrapper[4782]: I0202 11:00:23.323980 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6124b52e-8e75-46f7-a40a-a106f60f15be-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6124b52e-8e75-46f7-a40a-a106f60f15be" (UID: "6124b52e-8e75-46f7-a40a-a106f60f15be"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:00:23 crc kubenswrapper[4782]: I0202 11:00:23.380177 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ttgnd\" (UniqueName: \"kubernetes.io/projected/6124b52e-8e75-46f7-a40a-a106f60f15be-kube-api-access-ttgnd\") on node \"crc\" DevicePath \"\"" Feb 02 11:00:23 crc kubenswrapper[4782]: I0202 11:00:23.380220 4782 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6124b52e-8e75-46f7-a40a-a106f60f15be-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 11:00:23 crc kubenswrapper[4782]: I0202 11:00:23.380230 4782 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6124b52e-8e75-46f7-a40a-a106f60f15be-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 11:00:23 crc kubenswrapper[4782]: I0202 11:00:23.571057 4782 generic.go:334] "Generic (PLEG): container finished" podID="6124b52e-8e75-46f7-a40a-a106f60f15be" containerID="586fcc49265e82e11565b807b65605fae854918b22bcf5b4c686b684a29a3be5" exitCode=0 Feb 02 11:00:23 crc kubenswrapper[4782]: I0202 11:00:23.571122 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6124b52e-8e75-46f7-a40a-a106f60f15be","Type":"ContainerDied","Data":"586fcc49265e82e11565b807b65605fae854918b22bcf5b4c686b684a29a3be5"} Feb 02 11:00:23 crc kubenswrapper[4782]: I0202 11:00:23.571146 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6124b52e-8e75-46f7-a40a-a106f60f15be","Type":"ContainerDied","Data":"5da223383e51d132edb447f42b09006c474d2dbe6cc27b91396e57bb1e739e76"} Feb 02 11:00:23 crc kubenswrapper[4782]: I0202 11:00:23.571164 4782 scope.go:117] "RemoveContainer" containerID="586fcc49265e82e11565b807b65605fae854918b22bcf5b4c686b684a29a3be5" Feb 02 11:00:23 crc kubenswrapper[4782]: I0202 11:00:23.571186 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 02 11:00:23 crc kubenswrapper[4782]: I0202 11:00:23.577198 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"497f3642-7f3b-417c-aa52-2ed3ddbcac75","Type":"ContainerStarted","Data":"46a450a4fb1e112f420ad3a53c0cc5db48370b4a72c1654cd86cff0553015607"} Feb 02 11:00:23 crc kubenswrapper[4782]: I0202 11:00:23.606134 4782 scope.go:117] "RemoveContainer" containerID="f8ab1e55668f923f1cff0b3a98878e6d29e7326f5926b3f0d2390bc72c0becc4" Feb 02 11:00:23 crc kubenswrapper[4782]: I0202 11:00:23.614719 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 02 11:00:23 crc kubenswrapper[4782]: I0202 11:00:23.626475 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 02 11:00:23 crc kubenswrapper[4782]: I0202 11:00:23.637754 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 02 11:00:23 crc kubenswrapper[4782]: I0202 11:00:23.638069 4782 scope.go:117] "RemoveContainer" containerID="586fcc49265e82e11565b807b65605fae854918b22bcf5b4c686b684a29a3be5" Feb 02 11:00:23 crc kubenswrapper[4782]: E0202 11:00:23.638229 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6124b52e-8e75-46f7-a40a-a106f60f15be" containerName="nova-api-log" Feb 02 11:00:23 crc kubenswrapper[4782]: I0202 11:00:23.638249 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="6124b52e-8e75-46f7-a40a-a106f60f15be" containerName="nova-api-log" Feb 02 11:00:23 crc kubenswrapper[4782]: E0202 11:00:23.638285 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6124b52e-8e75-46f7-a40a-a106f60f15be" containerName="nova-api-api" Feb 02 11:00:23 crc kubenswrapper[4782]: I0202 11:00:23.638293 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="6124b52e-8e75-46f7-a40a-a106f60f15be" containerName="nova-api-api" Feb 02 11:00:23 crc kubenswrapper[4782]: I0202 11:00:23.638499 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="6124b52e-8e75-46f7-a40a-a106f60f15be" containerName="nova-api-api" Feb 02 11:00:23 crc kubenswrapper[4782]: I0202 11:00:23.638520 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="6124b52e-8e75-46f7-a40a-a106f60f15be" containerName="nova-api-log" Feb 02 11:00:23 crc kubenswrapper[4782]: I0202 11:00:23.639651 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 02 11:00:23 crc kubenswrapper[4782]: E0202 11:00:23.644056 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"586fcc49265e82e11565b807b65605fae854918b22bcf5b4c686b684a29a3be5\": container with ID starting with 586fcc49265e82e11565b807b65605fae854918b22bcf5b4c686b684a29a3be5 not found: ID does not exist" containerID="586fcc49265e82e11565b807b65605fae854918b22bcf5b4c686b684a29a3be5" Feb 02 11:00:23 crc kubenswrapper[4782]: I0202 11:00:23.644116 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"586fcc49265e82e11565b807b65605fae854918b22bcf5b4c686b684a29a3be5"} err="failed to get container status \"586fcc49265e82e11565b807b65605fae854918b22bcf5b4c686b684a29a3be5\": rpc error: code = NotFound desc = could not find container \"586fcc49265e82e11565b807b65605fae854918b22bcf5b4c686b684a29a3be5\": container with ID starting with 586fcc49265e82e11565b807b65605fae854918b22bcf5b4c686b684a29a3be5 not found: ID does not exist" Feb 02 11:00:23 crc kubenswrapper[4782]: I0202 11:00:23.644147 4782 scope.go:117] "RemoveContainer" containerID="f8ab1e55668f923f1cff0b3a98878e6d29e7326f5926b3f0d2390bc72c0becc4" Feb 02 11:00:23 crc kubenswrapper[4782]: E0202 11:00:23.645308 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f8ab1e55668f923f1cff0b3a98878e6d29e7326f5926b3f0d2390bc72c0becc4\": container with ID starting with f8ab1e55668f923f1cff0b3a98878e6d29e7326f5926b3f0d2390bc72c0becc4 not found: ID does not exist" containerID="f8ab1e55668f923f1cff0b3a98878e6d29e7326f5926b3f0d2390bc72c0becc4" Feb 02 11:00:23 crc kubenswrapper[4782]: I0202 11:00:23.645350 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8ab1e55668f923f1cff0b3a98878e6d29e7326f5926b3f0d2390bc72c0becc4"} err="failed to get container status \"f8ab1e55668f923f1cff0b3a98878e6d29e7326f5926b3f0d2390bc72c0becc4\": rpc error: code = NotFound desc = could not find container \"f8ab1e55668f923f1cff0b3a98878e6d29e7326f5926b3f0d2390bc72c0becc4\": container with ID starting with f8ab1e55668f923f1cff0b3a98878e6d29e7326f5926b3f0d2390bc72c0becc4 not found: ID does not exist" Feb 02 11:00:23 crc kubenswrapper[4782]: I0202 11:00:23.650173 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 02 11:00:23 crc kubenswrapper[4782]: I0202 11:00:23.650523 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 02 11:00:23 crc kubenswrapper[4782]: I0202 11:00:23.651140 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Feb 02 11:00:23 crc kubenswrapper[4782]: I0202 11:00:23.653770 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 02 11:00:23 crc kubenswrapper[4782]: I0202 11:00:23.687073 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1654143f-6a4d-400a-9879-aeddb7807563-internal-tls-certs\") pod \"nova-api-0\" (UID: \"1654143f-6a4d-400a-9879-aeddb7807563\") " pod="openstack/nova-api-0" Feb 02 11:00:23 crc kubenswrapper[4782]: I0202 11:00:23.687465 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-8scw9\" (UniqueName: \"kubernetes.io/projected/1654143f-6a4d-400a-9879-aeddb7807563-kube-api-access-8scw9\") pod \"nova-api-0\" (UID: \"1654143f-6a4d-400a-9879-aeddb7807563\") " pod="openstack/nova-api-0" Feb 02 11:00:23 crc kubenswrapper[4782]: I0202 11:00:23.687499 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1654143f-6a4d-400a-9879-aeddb7807563-config-data\") pod \"nova-api-0\" (UID: \"1654143f-6a4d-400a-9879-aeddb7807563\") " pod="openstack/nova-api-0" Feb 02 11:00:23 crc kubenswrapper[4782]: I0202 11:00:23.687520 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1654143f-6a4d-400a-9879-aeddb7807563-public-tls-certs\") pod \"nova-api-0\" (UID: \"1654143f-6a4d-400a-9879-aeddb7807563\") " pod="openstack/nova-api-0" Feb 02 11:00:23 crc kubenswrapper[4782]: I0202 11:00:23.687615 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1654143f-6a4d-400a-9879-aeddb7807563-logs\") pod \"nova-api-0\" (UID: \"1654143f-6a4d-400a-9879-aeddb7807563\") " pod="openstack/nova-api-0" Feb 02 11:00:23 crc kubenswrapper[4782]: I0202 11:00:23.687696 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1654143f-6a4d-400a-9879-aeddb7807563-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"1654143f-6a4d-400a-9879-aeddb7807563\") " pod="openstack/nova-api-0" Feb 02 11:00:23 crc kubenswrapper[4782]: I0202 11:00:23.788691 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8scw9\" (UniqueName: \"kubernetes.io/projected/1654143f-6a4d-400a-9879-aeddb7807563-kube-api-access-8scw9\") pod \"nova-api-0\" (UID: \"1654143f-6a4d-400a-9879-aeddb7807563\") " pod="openstack/nova-api-0" Feb 02 11:00:23 crc kubenswrapper[4782]: I0202 11:00:23.788943 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1654143f-6a4d-400a-9879-aeddb7807563-config-data\") pod \"nova-api-0\" (UID: \"1654143f-6a4d-400a-9879-aeddb7807563\") " pod="openstack/nova-api-0" Feb 02 11:00:23 crc kubenswrapper[4782]: I0202 11:00:23.789046 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1654143f-6a4d-400a-9879-aeddb7807563-public-tls-certs\") pod \"nova-api-0\" (UID: \"1654143f-6a4d-400a-9879-aeddb7807563\") " pod="openstack/nova-api-0" Feb 02 11:00:23 crc kubenswrapper[4782]: I0202 11:00:23.789214 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1654143f-6a4d-400a-9879-aeddb7807563-logs\") pod \"nova-api-0\" (UID: \"1654143f-6a4d-400a-9879-aeddb7807563\") " pod="openstack/nova-api-0" Feb 02 11:00:23 crc kubenswrapper[4782]: I0202 11:00:23.789347 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1654143f-6a4d-400a-9879-aeddb7807563-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"1654143f-6a4d-400a-9879-aeddb7807563\") " pod="openstack/nova-api-0" Feb 02 11:00:23 crc kubenswrapper[4782]: I0202 11:00:23.789484 4782 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1654143f-6a4d-400a-9879-aeddb7807563-internal-tls-certs\") pod \"nova-api-0\" (UID: \"1654143f-6a4d-400a-9879-aeddb7807563\") " pod="openstack/nova-api-0" Feb 02 11:00:23 crc kubenswrapper[4782]: I0202 11:00:23.791310 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1654143f-6a4d-400a-9879-aeddb7807563-logs\") pod \"nova-api-0\" (UID: \"1654143f-6a4d-400a-9879-aeddb7807563\") " pod="openstack/nova-api-0" Feb 02 11:00:23 crc kubenswrapper[4782]: I0202 11:00:23.793929 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1654143f-6a4d-400a-9879-aeddb7807563-internal-tls-certs\") pod \"nova-api-0\" (UID: \"1654143f-6a4d-400a-9879-aeddb7807563\") " pod="openstack/nova-api-0" Feb 02 11:00:23 crc kubenswrapper[4782]: I0202 11:00:23.795066 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1654143f-6a4d-400a-9879-aeddb7807563-config-data\") pod \"nova-api-0\" (UID: \"1654143f-6a4d-400a-9879-aeddb7807563\") " pod="openstack/nova-api-0" Feb 02 11:00:23 crc kubenswrapper[4782]: I0202 11:00:23.797633 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1654143f-6a4d-400a-9879-aeddb7807563-public-tls-certs\") pod \"nova-api-0\" (UID: \"1654143f-6a4d-400a-9879-aeddb7807563\") " pod="openstack/nova-api-0" Feb 02 11:00:23 crc kubenswrapper[4782]: I0202 11:00:23.797741 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1654143f-6a4d-400a-9879-aeddb7807563-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"1654143f-6a4d-400a-9879-aeddb7807563\") " pod="openstack/nova-api-0" Feb 02 11:00:23 crc kubenswrapper[4782]: I0202 11:00:23.806457 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8scw9\" (UniqueName: \"kubernetes.io/projected/1654143f-6a4d-400a-9879-aeddb7807563-kube-api-access-8scw9\") pod \"nova-api-0\" (UID: \"1654143f-6a4d-400a-9879-aeddb7807563\") " pod="openstack/nova-api-0" Feb 02 11:00:23 crc kubenswrapper[4782]: I0202 11:00:23.985025 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 02 11:00:24 crc kubenswrapper[4782]: I0202 11:00:24.135761 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Feb 02 11:00:24 crc kubenswrapper[4782]: I0202 11:00:24.170788 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Feb 02 11:00:24 crc kubenswrapper[4782]: I0202 11:00:24.542216 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 02 11:00:24 crc kubenswrapper[4782]: W0202 11:00:24.545861 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1654143f_6a4d_400a_9879_aeddb7807563.slice/crio-1bbf1b70356ad1417db8ddfc5e871c0f1d62f0532842271b9f250d7cab9a701c WatchSource:0}: Error finding container 1bbf1b70356ad1417db8ddfc5e871c0f1d62f0532842271b9f250d7cab9a701c: Status 404 returned error can't find the container with id 1bbf1b70356ad1417db8ddfc5e871c0f1d62f0532842271b9f250d7cab9a701c Feb 02 11:00:24 crc kubenswrapper[4782]: I0202 11:00:24.587142 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1654143f-6a4d-400a-9879-aeddb7807563","Type":"ContainerStarted","Data":"1bbf1b70356ad1417db8ddfc5e871c0f1d62f0532842271b9f250d7cab9a701c"} Feb 02 11:00:24 crc kubenswrapper[4782]: I0202 11:00:24.588948 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"497f3642-7f3b-417c-aa52-2ed3ddbcac75","Type":"ContainerStarted","Data":"bbba23b489e8538f8cd4964c5dafc1fdbf720f48e53f9541bcc5bed2b196da47"} Feb 02 11:00:24 crc kubenswrapper[4782]: I0202 11:00:24.588971 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"497f3642-7f3b-417c-aa52-2ed3ddbcac75","Type":"ContainerStarted","Data":"ff86f60fa3072a6f2315da2b189baa4e07115cbc11bced2bb2789b7a5ef65ffc"} Feb 02 11:00:24 crc kubenswrapper[4782]: I0202 11:00:24.612939 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Feb 02 11:00:24 crc kubenswrapper[4782]: I0202 11:00:24.804776 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-lxwch"] Feb 02 11:00:24 crc kubenswrapper[4782]: I0202 11:00:24.806168 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-lxwch" Feb 02 11:00:24 crc kubenswrapper[4782]: I0202 11:00:24.810983 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Feb 02 11:00:24 crc kubenswrapper[4782]: I0202 11:00:24.811834 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Feb 02 11:00:24 crc kubenswrapper[4782]: I0202 11:00:24.819134 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-lxwch"] Feb 02 11:00:24 crc kubenswrapper[4782]: I0202 11:00:24.851943 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d921bd77-679d-4722-8238-a75dc4f3b6b5-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-lxwch\" (UID: \"d921bd77-679d-4722-8238-a75dc4f3b6b5\") " pod="openstack/nova-cell1-cell-mapping-lxwch" Feb 02 11:00:24 crc kubenswrapper[4782]: I0202 11:00:24.852028 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d921bd77-679d-4722-8238-a75dc4f3b6b5-config-data\") pod \"nova-cell1-cell-mapping-lxwch\" (UID: \"d921bd77-679d-4722-8238-a75dc4f3b6b5\") " pod="openstack/nova-cell1-cell-mapping-lxwch" Feb 02 11:00:24 crc kubenswrapper[4782]: I0202 11:00:24.852052 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d921bd77-679d-4722-8238-a75dc4f3b6b5-scripts\") pod \"nova-cell1-cell-mapping-lxwch\" (UID: \"d921bd77-679d-4722-8238-a75dc4f3b6b5\") " pod="openstack/nova-cell1-cell-mapping-lxwch" Feb 02 11:00:24 crc kubenswrapper[4782]: I0202 11:00:24.852133 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6r28v\" (UniqueName: \"kubernetes.io/projected/d921bd77-679d-4722-8238-a75dc4f3b6b5-kube-api-access-6r28v\") pod \"nova-cell1-cell-mapping-lxwch\" (UID: \"d921bd77-679d-4722-8238-a75dc4f3b6b5\") " pod="openstack/nova-cell1-cell-mapping-lxwch" Feb 02 11:00:24 crc kubenswrapper[4782]: I0202 11:00:24.852813 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6124b52e-8e75-46f7-a40a-a106f60f15be" path="/var/lib/kubelet/pods/6124b52e-8e75-46f7-a40a-a106f60f15be/volumes" Feb 02 11:00:24 crc kubenswrapper[4782]: I0202 11:00:24.953995 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d921bd77-679d-4722-8238-a75dc4f3b6b5-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-lxwch\" (UID: \"d921bd77-679d-4722-8238-a75dc4f3b6b5\") " pod="openstack/nova-cell1-cell-mapping-lxwch" Feb 02 11:00:24 crc kubenswrapper[4782]: I0202 11:00:24.954068 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d921bd77-679d-4722-8238-a75dc4f3b6b5-config-data\") pod \"nova-cell1-cell-mapping-lxwch\" (UID: \"d921bd77-679d-4722-8238-a75dc4f3b6b5\") " pod="openstack/nova-cell1-cell-mapping-lxwch" Feb 02 11:00:24 crc kubenswrapper[4782]: I0202 11:00:24.954092 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d921bd77-679d-4722-8238-a75dc4f3b6b5-scripts\") pod \"nova-cell1-cell-mapping-lxwch\" (UID: 
\"d921bd77-679d-4722-8238-a75dc4f3b6b5\") " pod="openstack/nova-cell1-cell-mapping-lxwch" Feb 02 11:00:24 crc kubenswrapper[4782]: I0202 11:00:24.954161 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6r28v\" (UniqueName: \"kubernetes.io/projected/d921bd77-679d-4722-8238-a75dc4f3b6b5-kube-api-access-6r28v\") pod \"nova-cell1-cell-mapping-lxwch\" (UID: \"d921bd77-679d-4722-8238-a75dc4f3b6b5\") " pod="openstack/nova-cell1-cell-mapping-lxwch" Feb 02 11:00:24 crc kubenswrapper[4782]: I0202 11:00:24.958918 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d921bd77-679d-4722-8238-a75dc4f3b6b5-scripts\") pod \"nova-cell1-cell-mapping-lxwch\" (UID: \"d921bd77-679d-4722-8238-a75dc4f3b6b5\") " pod="openstack/nova-cell1-cell-mapping-lxwch" Feb 02 11:00:24 crc kubenswrapper[4782]: I0202 11:00:24.959583 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d921bd77-679d-4722-8238-a75dc4f3b6b5-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-lxwch\" (UID: \"d921bd77-679d-4722-8238-a75dc4f3b6b5\") " pod="openstack/nova-cell1-cell-mapping-lxwch" Feb 02 11:00:24 crc kubenswrapper[4782]: I0202 11:00:24.973303 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d921bd77-679d-4722-8238-a75dc4f3b6b5-config-data\") pod \"nova-cell1-cell-mapping-lxwch\" (UID: \"d921bd77-679d-4722-8238-a75dc4f3b6b5\") " pod="openstack/nova-cell1-cell-mapping-lxwch" Feb 02 11:00:24 crc kubenswrapper[4782]: I0202 11:00:24.983471 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6r28v\" (UniqueName: \"kubernetes.io/projected/d921bd77-679d-4722-8238-a75dc4f3b6b5-kube-api-access-6r28v\") pod \"nova-cell1-cell-mapping-lxwch\" (UID: \"d921bd77-679d-4722-8238-a75dc4f3b6b5\") " pod="openstack/nova-cell1-cell-mapping-lxwch" Feb 02 11:00:25 crc kubenswrapper[4782]: I0202 11:00:25.158112 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-lxwch" Feb 02 11:00:25 crc kubenswrapper[4782]: I0202 11:00:25.621338 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1654143f-6a4d-400a-9879-aeddb7807563","Type":"ContainerStarted","Data":"d70d47acfa259ce9552512a4e9675a0ff84ac81e446f881063253f9b1e1e6a6e"} Feb 02 11:00:25 crc kubenswrapper[4782]: I0202 11:00:25.621697 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1654143f-6a4d-400a-9879-aeddb7807563","Type":"ContainerStarted","Data":"d52bae39a14dabc04cc5275e974bb2daff531c99bf25ba1e69a69291dc498ead"} Feb 02 11:00:25 crc kubenswrapper[4782]: I0202 11:00:25.652748 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.652726151 podStartE2EDuration="2.652726151s" podCreationTimestamp="2026-02-02 11:00:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 11:00:25.649830467 +0000 UTC m=+1305.534023193" watchObservedRunningTime="2026-02-02 11:00:25.652726151 +0000 UTC m=+1305.536918867" Feb 02 11:00:25 crc kubenswrapper[4782]: W0202 11:00:25.666225 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd921bd77_679d_4722_8238_a75dc4f3b6b5.slice/crio-84942b643547128e4555c440451002542b7d7973d8f3829145644799317de442 WatchSource:0}: Error finding container 84942b643547128e4555c440451002542b7d7973d8f3829145644799317de442: Status 404 returned error can't find the container with id 84942b643547128e4555c440451002542b7d7973d8f3829145644799317de442 Feb 02 11:00:25 crc kubenswrapper[4782]: I0202 11:00:25.669585 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-lxwch"] Feb 02 11:00:26 crc kubenswrapper[4782]: I0202 11:00:26.631672 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-lxwch" event={"ID":"d921bd77-679d-4722-8238-a75dc4f3b6b5","Type":"ContainerStarted","Data":"31af4ce695c2a4475fc8775213cd57460451e43f7c30b86186b9592d2359448f"} Feb 02 11:00:26 crc kubenswrapper[4782]: I0202 11:00:26.633219 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-lxwch" event={"ID":"d921bd77-679d-4722-8238-a75dc4f3b6b5","Type":"ContainerStarted","Data":"84942b643547128e4555c440451002542b7d7973d8f3829145644799317de442"} Feb 02 11:00:26 crc kubenswrapper[4782]: I0202 11:00:26.657008 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-lxwch" podStartSLOduration=2.656992169 podStartE2EDuration="2.656992169s" podCreationTimestamp="2026-02-02 11:00:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 11:00:26.649946627 +0000 UTC m=+1306.534139343" watchObservedRunningTime="2026-02-02 11:00:26.656992169 +0000 UTC m=+1306.541184885" Feb 02 11:00:27 crc kubenswrapper[4782]: I0202 11:00:27.209215 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5b856c5697-pkbw6" Feb 02 11:00:27 crc kubenswrapper[4782]: I0202 11:00:27.269184 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-566b5b7845-ztd4g"] Feb 02 11:00:27 crc kubenswrapper[4782]: I0202 11:00:27.269455 4782 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-566b5b7845-ztd4g" podUID="639e44fb-7faa-4907-b02e-8c985f846925" containerName="dnsmasq-dns" containerID="cri-o://9e659836c7d1179abcc6a3d9248bc937fe2b60cc342a9cd47e1803c7cc55a544" gracePeriod=10 Feb 02 11:00:27 crc kubenswrapper[4782]: I0202 11:00:27.651773 4782 generic.go:334] "Generic (PLEG): container finished" podID="639e44fb-7faa-4907-b02e-8c985f846925" containerID="9e659836c7d1179abcc6a3d9248bc937fe2b60cc342a9cd47e1803c7cc55a544" exitCode=0 Feb 02 11:00:27 crc kubenswrapper[4782]: I0202 11:00:27.651877 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-566b5b7845-ztd4g" event={"ID":"639e44fb-7faa-4907-b02e-8c985f846925","Type":"ContainerDied","Data":"9e659836c7d1179abcc6a3d9248bc937fe2b60cc342a9cd47e1803c7cc55a544"} Feb 02 11:00:27 crc kubenswrapper[4782]: I0202 11:00:27.661963 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"497f3642-7f3b-417c-aa52-2ed3ddbcac75","Type":"ContainerStarted","Data":"f3d87444550f41bd00e5c006bf7055a00889453d07496499637cd29b4b017976"} Feb 02 11:00:27 crc kubenswrapper[4782]: I0202 11:00:27.662053 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 02 11:00:27 crc kubenswrapper[4782]: I0202 11:00:27.700082 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.215083405 podStartE2EDuration="6.70006176s" podCreationTimestamp="2026-02-02 11:00:21 +0000 UTC" firstStartedPulling="2026-02-02 11:00:22.443962325 +0000 UTC m=+1302.328155041" lastFinishedPulling="2026-02-02 11:00:26.92894068 +0000 UTC m=+1306.813133396" observedRunningTime="2026-02-02 11:00:27.695307974 +0000 UTC m=+1307.579500700" watchObservedRunningTime="2026-02-02 11:00:27.70006176 +0000 UTC m=+1307.584254476" Feb 02 11:00:27 crc kubenswrapper[4782]: I0202 11:00:27.855196 4782 util.go:48] "No ready sandbox for pod can be found. 
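The "Observed pod startup duration" entry for ceilometer-0 above ties its fields together as podStartSLOduration = podStartE2EDuration - (lastFinishedPulling - firstStartedPulling): image-pull time is excluded from the SLO figure, which is why 6.70s end-to-end shrinks to 2.215s after subtracting the 4.485s pull window. The arithmetic, reproduced with the logged timestamps:

package main

import (
	"fmt"
	"time"
)

func mustParse(s string) time.Time {
	t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2026-02-02 11:00:21 +0000 UTC")
	firstPull := mustParse("2026-02-02 11:00:22.443962325 +0000 UTC")
	lastPull := mustParse("2026-02-02 11:00:26.92894068 +0000 UTC")
	running := mustParse("2026-02-02 11:00:27.70006176 +0000 UTC")

	e2e := running.Sub(created)          // podStartE2EDuration: 6.70006176s
	slo := e2e - lastPull.Sub(firstPull) // podStartSLOduration: 2.215083405s
	fmt.Println(e2e, slo)
}
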
Need to start a new one" pod="openstack/dnsmasq-dns-566b5b7845-ztd4g" Feb 02 11:00:28 crc kubenswrapper[4782]: I0202 11:00:28.016738 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/639e44fb-7faa-4907-b02e-8c985f846925-config\") pod \"639e44fb-7faa-4907-b02e-8c985f846925\" (UID: \"639e44fb-7faa-4907-b02e-8c985f846925\") " Feb 02 11:00:28 crc kubenswrapper[4782]: I0202 11:00:28.016816 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/639e44fb-7faa-4907-b02e-8c985f846925-ovsdbserver-sb\") pod \"639e44fb-7faa-4907-b02e-8c985f846925\" (UID: \"639e44fb-7faa-4907-b02e-8c985f846925\") " Feb 02 11:00:28 crc kubenswrapper[4782]: I0202 11:00:28.016908 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/639e44fb-7faa-4907-b02e-8c985f846925-dns-svc\") pod \"639e44fb-7faa-4907-b02e-8c985f846925\" (UID: \"639e44fb-7faa-4907-b02e-8c985f846925\") " Feb 02 11:00:28 crc kubenswrapper[4782]: I0202 11:00:28.016947 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5rmlz\" (UniqueName: \"kubernetes.io/projected/639e44fb-7faa-4907-b02e-8c985f846925-kube-api-access-5rmlz\") pod \"639e44fb-7faa-4907-b02e-8c985f846925\" (UID: \"639e44fb-7faa-4907-b02e-8c985f846925\") " Feb 02 11:00:28 crc kubenswrapper[4782]: I0202 11:00:28.016972 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/639e44fb-7faa-4907-b02e-8c985f846925-ovsdbserver-nb\") pod \"639e44fb-7faa-4907-b02e-8c985f846925\" (UID: \"639e44fb-7faa-4907-b02e-8c985f846925\") " Feb 02 11:00:28 crc kubenswrapper[4782]: I0202 11:00:28.023873 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/639e44fb-7faa-4907-b02e-8c985f846925-kube-api-access-5rmlz" (OuterVolumeSpecName: "kube-api-access-5rmlz") pod "639e44fb-7faa-4907-b02e-8c985f846925" (UID: "639e44fb-7faa-4907-b02e-8c985f846925"). InnerVolumeSpecName "kube-api-access-5rmlz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:00:28 crc kubenswrapper[4782]: I0202 11:00:28.084503 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/639e44fb-7faa-4907-b02e-8c985f846925-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "639e44fb-7faa-4907-b02e-8c985f846925" (UID: "639e44fb-7faa-4907-b02e-8c985f846925"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:00:28 crc kubenswrapper[4782]: I0202 11:00:28.098182 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/639e44fb-7faa-4907-b02e-8c985f846925-config" (OuterVolumeSpecName: "config") pod "639e44fb-7faa-4907-b02e-8c985f846925" (UID: "639e44fb-7faa-4907-b02e-8c985f846925"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:00:28 crc kubenswrapper[4782]: I0202 11:00:28.104603 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/639e44fb-7faa-4907-b02e-8c985f846925-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "639e44fb-7faa-4907-b02e-8c985f846925" (UID: "639e44fb-7faa-4907-b02e-8c985f846925"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:00:28 crc kubenswrapper[4782]: I0202 11:00:28.111697 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/639e44fb-7faa-4907-b02e-8c985f846925-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "639e44fb-7faa-4907-b02e-8c985f846925" (UID: "639e44fb-7faa-4907-b02e-8c985f846925"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:00:28 crc kubenswrapper[4782]: I0202 11:00:28.120023 4782 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/639e44fb-7faa-4907-b02e-8c985f846925-config\") on node \"crc\" DevicePath \"\"" Feb 02 11:00:28 crc kubenswrapper[4782]: I0202 11:00:28.120056 4782 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/639e44fb-7faa-4907-b02e-8c985f846925-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 02 11:00:28 crc kubenswrapper[4782]: I0202 11:00:28.120066 4782 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/639e44fb-7faa-4907-b02e-8c985f846925-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 02 11:00:28 crc kubenswrapper[4782]: I0202 11:00:28.120076 4782 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/639e44fb-7faa-4907-b02e-8c985f846925-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 02 11:00:28 crc kubenswrapper[4782]: I0202 11:00:28.120086 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5rmlz\" (UniqueName: \"kubernetes.io/projected/639e44fb-7faa-4907-b02e-8c985f846925-kube-api-access-5rmlz\") on node \"crc\" DevicePath \"\"" Feb 02 11:00:28 crc kubenswrapper[4782]: I0202 11:00:28.672068 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-566b5b7845-ztd4g" Feb 02 11:00:28 crc kubenswrapper[4782]: I0202 11:00:28.672338 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-566b5b7845-ztd4g" event={"ID":"639e44fb-7faa-4907-b02e-8c985f846925","Type":"ContainerDied","Data":"40e8da6bacf81b0807d18f0e00cf0e73a4f50618c9435b4a018769c28384c37e"} Feb 02 11:00:28 crc kubenswrapper[4782]: I0202 11:00:28.673238 4782 scope.go:117] "RemoveContainer" containerID="9e659836c7d1179abcc6a3d9248bc937fe2b60cc342a9cd47e1803c7cc55a544" Feb 02 11:00:28 crc kubenswrapper[4782]: I0202 11:00:28.694911 4782 scope.go:117] "RemoveContainer" containerID="c2eed060c399f072bbf74377e28b3c99e19fd6ac4fff9760114980a82bb5c7d6" Feb 02 11:00:28 crc kubenswrapper[4782]: I0202 11:00:28.712169 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-566b5b7845-ztd4g"] Feb 02 11:00:28 crc kubenswrapper[4782]: I0202 11:00:28.727859 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-566b5b7845-ztd4g"] Feb 02 11:00:28 crc kubenswrapper[4782]: I0202 11:00:28.831017 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="639e44fb-7faa-4907-b02e-8c985f846925" path="/var/lib/kubelet/pods/639e44fb-7faa-4907-b02e-8c985f846925/volumes" Feb 02 11:00:31 crc kubenswrapper[4782]: I0202 11:00:31.712276 4782 generic.go:334] "Generic (PLEG): container finished" podID="d921bd77-679d-4722-8238-a75dc4f3b6b5" containerID="31af4ce695c2a4475fc8775213cd57460451e43f7c30b86186b9592d2359448f" exitCode=0 Feb 02 11:00:31 crc kubenswrapper[4782]: I0202 11:00:31.712316 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-lxwch" event={"ID":"d921bd77-679d-4722-8238-a75dc4f3b6b5","Type":"ContainerDied","Data":"31af4ce695c2a4475fc8775213cd57460451e43f7c30b86186b9592d2359448f"} Feb 02 11:00:32 crc kubenswrapper[4782]: I0202 11:00:32.591019 4782 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-566b5b7845-ztd4g" podUID="639e44fb-7faa-4907-b02e-8c985f846925" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.172:5353: i/o timeout" Feb 02 11:00:33 crc kubenswrapper[4782]: I0202 11:00:33.075807 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-lxwch" Feb 02 11:00:33 crc kubenswrapper[4782]: I0202 11:00:33.241768 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d921bd77-679d-4722-8238-a75dc4f3b6b5-config-data\") pod \"d921bd77-679d-4722-8238-a75dc4f3b6b5\" (UID: \"d921bd77-679d-4722-8238-a75dc4f3b6b5\") " Feb 02 11:00:33 crc kubenswrapper[4782]: I0202 11:00:33.242555 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d921bd77-679d-4722-8238-a75dc4f3b6b5-scripts\") pod \"d921bd77-679d-4722-8238-a75dc4f3b6b5\" (UID: \"d921bd77-679d-4722-8238-a75dc4f3b6b5\") " Feb 02 11:00:33 crc kubenswrapper[4782]: I0202 11:00:33.242604 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d921bd77-679d-4722-8238-a75dc4f3b6b5-combined-ca-bundle\") pod \"d921bd77-679d-4722-8238-a75dc4f3b6b5\" (UID: \"d921bd77-679d-4722-8238-a75dc4f3b6b5\") " Feb 02 11:00:33 crc kubenswrapper[4782]: I0202 11:00:33.242731 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6r28v\" (UniqueName: \"kubernetes.io/projected/d921bd77-679d-4722-8238-a75dc4f3b6b5-kube-api-access-6r28v\") pod \"d921bd77-679d-4722-8238-a75dc4f3b6b5\" (UID: \"d921bd77-679d-4722-8238-a75dc4f3b6b5\") " Feb 02 11:00:33 crc kubenswrapper[4782]: I0202 11:00:33.247873 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d921bd77-679d-4722-8238-a75dc4f3b6b5-kube-api-access-6r28v" (OuterVolumeSpecName: "kube-api-access-6r28v") pod "d921bd77-679d-4722-8238-a75dc4f3b6b5" (UID: "d921bd77-679d-4722-8238-a75dc4f3b6b5"). InnerVolumeSpecName "kube-api-access-6r28v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:00:33 crc kubenswrapper[4782]: I0202 11:00:33.248174 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d921bd77-679d-4722-8238-a75dc4f3b6b5-scripts" (OuterVolumeSpecName: "scripts") pod "d921bd77-679d-4722-8238-a75dc4f3b6b5" (UID: "d921bd77-679d-4722-8238-a75dc4f3b6b5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:00:33 crc kubenswrapper[4782]: I0202 11:00:33.267808 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d921bd77-679d-4722-8238-a75dc4f3b6b5-config-data" (OuterVolumeSpecName: "config-data") pod "d921bd77-679d-4722-8238-a75dc4f3b6b5" (UID: "d921bd77-679d-4722-8238-a75dc4f3b6b5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:00:33 crc kubenswrapper[4782]: I0202 11:00:33.278354 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d921bd77-679d-4722-8238-a75dc4f3b6b5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d921bd77-679d-4722-8238-a75dc4f3b6b5" (UID: "d921bd77-679d-4722-8238-a75dc4f3b6b5"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:00:33 crc kubenswrapper[4782]: I0202 11:00:33.345359 4782 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d921bd77-679d-4722-8238-a75dc4f3b6b5-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 11:00:33 crc kubenswrapper[4782]: I0202 11:00:33.345399 4782 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d921bd77-679d-4722-8238-a75dc4f3b6b5-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 11:00:33 crc kubenswrapper[4782]: I0202 11:00:33.345412 4782 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d921bd77-679d-4722-8238-a75dc4f3b6b5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 11:00:33 crc kubenswrapper[4782]: I0202 11:00:33.345425 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6r28v\" (UniqueName: \"kubernetes.io/projected/d921bd77-679d-4722-8238-a75dc4f3b6b5-kube-api-access-6r28v\") on node \"crc\" DevicePath \"\"" Feb 02 11:00:33 crc kubenswrapper[4782]: I0202 11:00:33.735368 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-lxwch" event={"ID":"d921bd77-679d-4722-8238-a75dc4f3b6b5","Type":"ContainerDied","Data":"84942b643547128e4555c440451002542b7d7973d8f3829145644799317de442"} Feb 02 11:00:33 crc kubenswrapper[4782]: I0202 11:00:33.735408 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="84942b643547128e4555c440451002542b7d7973d8f3829145644799317de442" Feb 02 11:00:33 crc kubenswrapper[4782]: I0202 11:00:33.735477 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-lxwch" Feb 02 11:00:33 crc kubenswrapper[4782]: I0202 11:00:33.986178 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 02 11:00:33 crc kubenswrapper[4782]: I0202 11:00:33.986304 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 02 11:00:34 crc kubenswrapper[4782]: I0202 11:00:34.013475 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 02 11:00:34 crc kubenswrapper[4782]: I0202 11:00:34.027364 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 02 11:00:34 crc kubenswrapper[4782]: I0202 11:00:34.027634 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="feecb35c-d2a4-4c9b-8f39-8145f39b332c" containerName="nova-scheduler-scheduler" containerID="cri-o://78e4453f1e8fd27bcb857ea60a86f6c25b6cd43908fdb9b96ea61079f63a39b5" gracePeriod=30 Feb 02 11:00:34 crc kubenswrapper[4782]: I0202 11:00:34.149663 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 11:00:34 crc kubenswrapper[4782]: I0202 11:00:34.150048 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="6b026c6f-1f4b-4171-a2dc-9a3b17a0a8f0" containerName="nova-metadata-log" containerID="cri-o://12de97960a6c2463ee72adbbcb4dfcd44c67437415fdffd257c34188bef89140" gracePeriod=30 Feb 02 11:00:34 crc kubenswrapper[4782]: I0202 11:00:34.150331 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="6b026c6f-1f4b-4171-a2dc-9a3b17a0a8f0" 
containerName="nova-metadata-metadata" containerID="cri-o://ea761e00f637478bb8b568bb6c81afa34c3f7a4ec22a7b59cb0694a1f3b66f1f" gracePeriod=30 Feb 02 11:00:34 crc kubenswrapper[4782]: I0202 11:00:34.744752 4782 generic.go:334] "Generic (PLEG): container finished" podID="6b026c6f-1f4b-4171-a2dc-9a3b17a0a8f0" containerID="12de97960a6c2463ee72adbbcb4dfcd44c67437415fdffd257c34188bef89140" exitCode=143 Feb 02 11:00:34 crc kubenswrapper[4782]: I0202 11:00:34.744831 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6b026c6f-1f4b-4171-a2dc-9a3b17a0a8f0","Type":"ContainerDied","Data":"12de97960a6c2463ee72adbbcb4dfcd44c67437415fdffd257c34188bef89140"} Feb 02 11:00:34 crc kubenswrapper[4782]: I0202 11:00:34.745273 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="1654143f-6a4d-400a-9879-aeddb7807563" containerName="nova-api-log" containerID="cri-o://d52bae39a14dabc04cc5275e974bb2daff531c99bf25ba1e69a69291dc498ead" gracePeriod=30 Feb 02 11:00:34 crc kubenswrapper[4782]: I0202 11:00:34.745337 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="1654143f-6a4d-400a-9879-aeddb7807563" containerName="nova-api-api" containerID="cri-o://d70d47acfa259ce9552512a4e9675a0ff84ac81e446f881063253f9b1e1e6a6e" gracePeriod=30 Feb 02 11:00:34 crc kubenswrapper[4782]: I0202 11:00:34.750829 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="1654143f-6a4d-400a-9879-aeddb7807563" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.185:8774/\": EOF" Feb 02 11:00:34 crc kubenswrapper[4782]: I0202 11:00:34.750849 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="1654143f-6a4d-400a-9879-aeddb7807563" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.185:8774/\": EOF" Feb 02 11:00:35 crc kubenswrapper[4782]: I0202 11:00:35.577114 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 02 11:00:35 crc kubenswrapper[4782]: I0202 11:00:35.688897 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/feecb35c-d2a4-4c9b-8f39-8145f39b332c-config-data\") pod \"feecb35c-d2a4-4c9b-8f39-8145f39b332c\" (UID: \"feecb35c-d2a4-4c9b-8f39-8145f39b332c\") " Feb 02 11:00:35 crc kubenswrapper[4782]: I0202 11:00:35.689074 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/feecb35c-d2a4-4c9b-8f39-8145f39b332c-combined-ca-bundle\") pod \"feecb35c-d2a4-4c9b-8f39-8145f39b332c\" (UID: \"feecb35c-d2a4-4c9b-8f39-8145f39b332c\") " Feb 02 11:00:35 crc kubenswrapper[4782]: I0202 11:00:35.689134 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wg44r\" (UniqueName: \"kubernetes.io/projected/feecb35c-d2a4-4c9b-8f39-8145f39b332c-kube-api-access-wg44r\") pod \"feecb35c-d2a4-4c9b-8f39-8145f39b332c\" (UID: \"feecb35c-d2a4-4c9b-8f39-8145f39b332c\") " Feb 02 11:00:35 crc kubenswrapper[4782]: I0202 11:00:35.698857 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/feecb35c-d2a4-4c9b-8f39-8145f39b332c-kube-api-access-wg44r" (OuterVolumeSpecName: "kube-api-access-wg44r") pod "feecb35c-d2a4-4c9b-8f39-8145f39b332c" (UID: "feecb35c-d2a4-4c9b-8f39-8145f39b332c"). InnerVolumeSpecName "kube-api-access-wg44r". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:00:35 crc kubenswrapper[4782]: I0202 11:00:35.723911 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/feecb35c-d2a4-4c9b-8f39-8145f39b332c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "feecb35c-d2a4-4c9b-8f39-8145f39b332c" (UID: "feecb35c-d2a4-4c9b-8f39-8145f39b332c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:00:35 crc kubenswrapper[4782]: I0202 11:00:35.729820 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/feecb35c-d2a4-4c9b-8f39-8145f39b332c-config-data" (OuterVolumeSpecName: "config-data") pod "feecb35c-d2a4-4c9b-8f39-8145f39b332c" (UID: "feecb35c-d2a4-4c9b-8f39-8145f39b332c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:00:35 crc kubenswrapper[4782]: I0202 11:00:35.769026 4782 generic.go:334] "Generic (PLEG): container finished" podID="feecb35c-d2a4-4c9b-8f39-8145f39b332c" containerID="78e4453f1e8fd27bcb857ea60a86f6c25b6cd43908fdb9b96ea61079f63a39b5" exitCode=0 Feb 02 11:00:35 crc kubenswrapper[4782]: I0202 11:00:35.769359 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 02 11:00:35 crc kubenswrapper[4782]: I0202 11:00:35.769462 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"feecb35c-d2a4-4c9b-8f39-8145f39b332c","Type":"ContainerDied","Data":"78e4453f1e8fd27bcb857ea60a86f6c25b6cd43908fdb9b96ea61079f63a39b5"} Feb 02 11:00:35 crc kubenswrapper[4782]: I0202 11:00:35.769543 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"feecb35c-d2a4-4c9b-8f39-8145f39b332c","Type":"ContainerDied","Data":"499d89b4d3bd8290ea6e83963b172b2e55234950c9bf0978caba78dd300a35cd"} Feb 02 11:00:35 crc kubenswrapper[4782]: I0202 11:00:35.769567 4782 scope.go:117] "RemoveContainer" containerID="78e4453f1e8fd27bcb857ea60a86f6c25b6cd43908fdb9b96ea61079f63a39b5" Feb 02 11:00:35 crc kubenswrapper[4782]: I0202 11:00:35.787611 4782 generic.go:334] "Generic (PLEG): container finished" podID="1654143f-6a4d-400a-9879-aeddb7807563" containerID="d52bae39a14dabc04cc5275e974bb2daff531c99bf25ba1e69a69291dc498ead" exitCode=143 Feb 02 11:00:35 crc kubenswrapper[4782]: I0202 11:00:35.787706 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1654143f-6a4d-400a-9879-aeddb7807563","Type":"ContainerDied","Data":"d52bae39a14dabc04cc5275e974bb2daff531c99bf25ba1e69a69291dc498ead"} Feb 02 11:00:35 crc kubenswrapper[4782]: I0202 11:00:35.790837 4782 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/feecb35c-d2a4-4c9b-8f39-8145f39b332c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 11:00:35 crc kubenswrapper[4782]: I0202 11:00:35.790869 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wg44r\" (UniqueName: \"kubernetes.io/projected/feecb35c-d2a4-4c9b-8f39-8145f39b332c-kube-api-access-wg44r\") on node \"crc\" DevicePath \"\"" Feb 02 11:00:35 crc kubenswrapper[4782]: I0202 11:00:35.790879 4782 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/feecb35c-d2a4-4c9b-8f39-8145f39b332c-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 11:00:35 crc kubenswrapper[4782]: I0202 11:00:35.824366 4782 scope.go:117] "RemoveContainer" containerID="78e4453f1e8fd27bcb857ea60a86f6c25b6cd43908fdb9b96ea61079f63a39b5" Feb 02 11:00:35 crc kubenswrapper[4782]: I0202 11:00:35.832949 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 02 11:00:35 crc kubenswrapper[4782]: E0202 11:00:35.833159 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"78e4453f1e8fd27bcb857ea60a86f6c25b6cd43908fdb9b96ea61079f63a39b5\": container with ID starting with 78e4453f1e8fd27bcb857ea60a86f6c25b6cd43908fdb9b96ea61079f63a39b5 not found: ID does not exist" containerID="78e4453f1e8fd27bcb857ea60a86f6c25b6cd43908fdb9b96ea61079f63a39b5" Feb 02 11:00:35 crc kubenswrapper[4782]: I0202 11:00:35.833211 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"78e4453f1e8fd27bcb857ea60a86f6c25b6cd43908fdb9b96ea61079f63a39b5"} err="failed to get container status \"78e4453f1e8fd27bcb857ea60a86f6c25b6cd43908fdb9b96ea61079f63a39b5\": rpc error: code = NotFound desc = could not find container \"78e4453f1e8fd27bcb857ea60a86f6c25b6cd43908fdb9b96ea61079f63a39b5\": container with ID starting with 
Feb 02 11:00:35 crc kubenswrapper[4782]: I0202 11:00:35.833211 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"78e4453f1e8fd27bcb857ea60a86f6c25b6cd43908fdb9b96ea61079f63a39b5"} err="failed to get container status \"78e4453f1e8fd27bcb857ea60a86f6c25b6cd43908fdb9b96ea61079f63a39b5\": rpc error: code = NotFound desc = could not find container \"78e4453f1e8fd27bcb857ea60a86f6c25b6cd43908fdb9b96ea61079f63a39b5\": container with ID starting with 78e4453f1e8fd27bcb857ea60a86f6c25b6cd43908fdb9b96ea61079f63a39b5 not found: ID does not exist"
Feb 02 11:00:35 crc kubenswrapper[4782]: I0202 11:00:35.841269 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 02 11:00:35 crc kubenswrapper[4782]: I0202 11:00:35.854456 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"]
Feb 02 11:00:35 crc kubenswrapper[4782]: E0202 11:00:35.854994 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d921bd77-679d-4722-8238-a75dc4f3b6b5" containerName="nova-manage"
Feb 02 11:00:35 crc kubenswrapper[4782]: I0202 11:00:35.855058 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="d921bd77-679d-4722-8238-a75dc4f3b6b5" containerName="nova-manage"
Feb 02 11:00:35 crc kubenswrapper[4782]: E0202 11:00:35.855150 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="639e44fb-7faa-4907-b02e-8c985f846925" containerName="dnsmasq-dns"
Feb 02 11:00:35 crc kubenswrapper[4782]: I0202 11:00:35.855214 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="639e44fb-7faa-4907-b02e-8c985f846925" containerName="dnsmasq-dns"
Feb 02 11:00:35 crc kubenswrapper[4782]: E0202 11:00:35.855275 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="feecb35c-d2a4-4c9b-8f39-8145f39b332c" containerName="nova-scheduler-scheduler"
Feb 02 11:00:35 crc kubenswrapper[4782]: I0202 11:00:35.855325 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="feecb35c-d2a4-4c9b-8f39-8145f39b332c" containerName="nova-scheduler-scheduler"
Feb 02 11:00:35 crc kubenswrapper[4782]: E0202 11:00:35.855379 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="639e44fb-7faa-4907-b02e-8c985f846925" containerName="init"
Feb 02 11:00:35 crc kubenswrapper[4782]: I0202 11:00:35.856135 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="639e44fb-7faa-4907-b02e-8c985f846925" containerName="init"
Feb 02 11:00:35 crc kubenswrapper[4782]: I0202 11:00:35.856425 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="639e44fb-7faa-4907-b02e-8c985f846925" containerName="dnsmasq-dns"
Feb 02 11:00:35 crc kubenswrapper[4782]: I0202 11:00:35.856517 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="d921bd77-679d-4722-8238-a75dc4f3b6b5" containerName="nova-manage"
Feb 02 11:00:35 crc kubenswrapper[4782]: I0202 11:00:35.856582 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="feecb35c-d2a4-4c9b-8f39-8145f39b332c" containerName="nova-scheduler-scheduler"
Feb 02 11:00:35 crc kubenswrapper[4782]: I0202 11:00:35.857233 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 02 11:00:35 crc kubenswrapper[4782]: I0202 11:00:35.859513 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 02 11:00:35 crc kubenswrapper[4782]: I0202 11:00:35.891976 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 02 11:00:35 crc kubenswrapper[4782]: I0202 11:00:35.994724 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8b2np\" (UniqueName: \"kubernetes.io/projected/47aff64c-0afc-4b3c-9e90-cbe926943170-kube-api-access-8b2np\") pod \"nova-scheduler-0\" (UID: \"47aff64c-0afc-4b3c-9e90-cbe926943170\") " pod="openstack/nova-scheduler-0" Feb 02 11:00:35 crc kubenswrapper[4782]: I0202 11:00:35.995797 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47aff64c-0afc-4b3c-9e90-cbe926943170-config-data\") pod \"nova-scheduler-0\" (UID: \"47aff64c-0afc-4b3c-9e90-cbe926943170\") " pod="openstack/nova-scheduler-0" Feb 02 11:00:35 crc kubenswrapper[4782]: I0202 11:00:35.995998 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47aff64c-0afc-4b3c-9e90-cbe926943170-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"47aff64c-0afc-4b3c-9e90-cbe926943170\") " pod="openstack/nova-scheduler-0" Feb 02 11:00:36 crc kubenswrapper[4782]: I0202 11:00:36.098043 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47aff64c-0afc-4b3c-9e90-cbe926943170-config-data\") pod \"nova-scheduler-0\" (UID: \"47aff64c-0afc-4b3c-9e90-cbe926943170\") " pod="openstack/nova-scheduler-0" Feb 02 11:00:36 crc kubenswrapper[4782]: I0202 11:00:36.098312 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47aff64c-0afc-4b3c-9e90-cbe926943170-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"47aff64c-0afc-4b3c-9e90-cbe926943170\") " pod="openstack/nova-scheduler-0" Feb 02 11:00:36 crc kubenswrapper[4782]: I0202 11:00:36.098485 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8b2np\" (UniqueName: \"kubernetes.io/projected/47aff64c-0afc-4b3c-9e90-cbe926943170-kube-api-access-8b2np\") pod \"nova-scheduler-0\" (UID: \"47aff64c-0afc-4b3c-9e90-cbe926943170\") " pod="openstack/nova-scheduler-0" Feb 02 11:00:36 crc kubenswrapper[4782]: I0202 11:00:36.103147 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47aff64c-0afc-4b3c-9e90-cbe926943170-config-data\") pod \"nova-scheduler-0\" (UID: \"47aff64c-0afc-4b3c-9e90-cbe926943170\") " pod="openstack/nova-scheduler-0" Feb 02 11:00:36 crc kubenswrapper[4782]: I0202 11:00:36.105271 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47aff64c-0afc-4b3c-9e90-cbe926943170-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"47aff64c-0afc-4b3c-9e90-cbe926943170\") " pod="openstack/nova-scheduler-0" Feb 02 11:00:36 crc kubenswrapper[4782]: I0202 11:00:36.116699 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8b2np\" (UniqueName: 
\"kubernetes.io/projected/47aff64c-0afc-4b3c-9e90-cbe926943170-kube-api-access-8b2np\") pod \"nova-scheduler-0\" (UID: \"47aff64c-0afc-4b3c-9e90-cbe926943170\") " pod="openstack/nova-scheduler-0" Feb 02 11:00:36 crc kubenswrapper[4782]: I0202 11:00:36.180813 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 02 11:00:36 crc kubenswrapper[4782]: I0202 11:00:36.649442 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 02 11:00:36 crc kubenswrapper[4782]: I0202 11:00:36.798737 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"47aff64c-0afc-4b3c-9e90-cbe926943170","Type":"ContainerStarted","Data":"981442e0b581c5dd5c71520353e24009e588d100fa7e01f322a93b8d6454d0c0"} Feb 02 11:00:36 crc kubenswrapper[4782]: I0202 11:00:36.833892 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="feecb35c-d2a4-4c9b-8f39-8145f39b332c" path="/var/lib/kubelet/pods/feecb35c-d2a4-4c9b-8f39-8145f39b332c/volumes" Feb 02 11:00:37 crc kubenswrapper[4782]: I0202 11:00:37.570125 4782 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="6b026c6f-1f4b-4171-a2dc-9a3b17a0a8f0" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.176:8775/\": read tcp 10.217.0.2:32898->10.217.0.176:8775: read: connection reset by peer" Feb 02 11:00:37 crc kubenswrapper[4782]: I0202 11:00:37.570155 4782 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="6b026c6f-1f4b-4171-a2dc-9a3b17a0a8f0" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.176:8775/\": read tcp 10.217.0.2:32896->10.217.0.176:8775: read: connection reset by peer" Feb 02 11:00:37 crc kubenswrapper[4782]: I0202 11:00:37.810175 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"47aff64c-0afc-4b3c-9e90-cbe926943170","Type":"ContainerStarted","Data":"1af5ad0efe11469a83d7e0d5e41c5bd015d9f9c013f5fb5fd6fda8e2cb52b798"} Feb 02 11:00:37 crc kubenswrapper[4782]: I0202 11:00:37.815416 4782 generic.go:334] "Generic (PLEG): container finished" podID="6b026c6f-1f4b-4171-a2dc-9a3b17a0a8f0" containerID="ea761e00f637478bb8b568bb6c81afa34c3f7a4ec22a7b59cb0694a1f3b66f1f" exitCode=0 Feb 02 11:00:37 crc kubenswrapper[4782]: I0202 11:00:37.815454 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6b026c6f-1f4b-4171-a2dc-9a3b17a0a8f0","Type":"ContainerDied","Data":"ea761e00f637478bb8b568bb6c81afa34c3f7a4ec22a7b59cb0694a1f3b66f1f"} Feb 02 11:00:37 crc kubenswrapper[4782]: I0202 11:00:37.829072 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.829055277 podStartE2EDuration="2.829055277s" podCreationTimestamp="2026-02-02 11:00:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 11:00:37.826839863 +0000 UTC m=+1317.711032579" watchObservedRunningTime="2026-02-02 11:00:37.829055277 +0000 UTC m=+1317.713247993" Feb 02 11:00:38 crc kubenswrapper[4782]: I0202 11:00:38.035367 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 02 11:00:38 crc kubenswrapper[4782]: I0202 11:00:38.136846 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b026c6f-1f4b-4171-a2dc-9a3b17a0a8f0-combined-ca-bundle\") pod \"6b026c6f-1f4b-4171-a2dc-9a3b17a0a8f0\" (UID: \"6b026c6f-1f4b-4171-a2dc-9a3b17a0a8f0\") " Feb 02 11:00:38 crc kubenswrapper[4782]: I0202 11:00:38.136998 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6b026c6f-1f4b-4171-a2dc-9a3b17a0a8f0-logs\") pod \"6b026c6f-1f4b-4171-a2dc-9a3b17a0a8f0\" (UID: \"6b026c6f-1f4b-4171-a2dc-9a3b17a0a8f0\") " Feb 02 11:00:38 crc kubenswrapper[4782]: I0202 11:00:38.137063 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wvq2b\" (UniqueName: \"kubernetes.io/projected/6b026c6f-1f4b-4171-a2dc-9a3b17a0a8f0-kube-api-access-wvq2b\") pod \"6b026c6f-1f4b-4171-a2dc-9a3b17a0a8f0\" (UID: \"6b026c6f-1f4b-4171-a2dc-9a3b17a0a8f0\") " Feb 02 11:00:38 crc kubenswrapper[4782]: I0202 11:00:38.137116 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b026c6f-1f4b-4171-a2dc-9a3b17a0a8f0-nova-metadata-tls-certs\") pod \"6b026c6f-1f4b-4171-a2dc-9a3b17a0a8f0\" (UID: \"6b026c6f-1f4b-4171-a2dc-9a3b17a0a8f0\") " Feb 02 11:00:38 crc kubenswrapper[4782]: I0202 11:00:38.137153 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b026c6f-1f4b-4171-a2dc-9a3b17a0a8f0-config-data\") pod \"6b026c6f-1f4b-4171-a2dc-9a3b17a0a8f0\" (UID: \"6b026c6f-1f4b-4171-a2dc-9a3b17a0a8f0\") " Feb 02 11:00:38 crc kubenswrapper[4782]: I0202 11:00:38.137458 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6b026c6f-1f4b-4171-a2dc-9a3b17a0a8f0-logs" (OuterVolumeSpecName: "logs") pod "6b026c6f-1f4b-4171-a2dc-9a3b17a0a8f0" (UID: "6b026c6f-1f4b-4171-a2dc-9a3b17a0a8f0"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:00:38 crc kubenswrapper[4782]: I0202 11:00:38.137814 4782 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6b026c6f-1f4b-4171-a2dc-9a3b17a0a8f0-logs\") on node \"crc\" DevicePath \"\"" Feb 02 11:00:38 crc kubenswrapper[4782]: I0202 11:00:38.147328 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b026c6f-1f4b-4171-a2dc-9a3b17a0a8f0-kube-api-access-wvq2b" (OuterVolumeSpecName: "kube-api-access-wvq2b") pod "6b026c6f-1f4b-4171-a2dc-9a3b17a0a8f0" (UID: "6b026c6f-1f4b-4171-a2dc-9a3b17a0a8f0"). InnerVolumeSpecName "kube-api-access-wvq2b". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:00:38 crc kubenswrapper[4782]: I0202 11:00:38.184718 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b026c6f-1f4b-4171-a2dc-9a3b17a0a8f0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6b026c6f-1f4b-4171-a2dc-9a3b17a0a8f0" (UID: "6b026c6f-1f4b-4171-a2dc-9a3b17a0a8f0"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:00:38 crc kubenswrapper[4782]: I0202 11:00:38.185339 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b026c6f-1f4b-4171-a2dc-9a3b17a0a8f0-config-data" (OuterVolumeSpecName: "config-data") pod "6b026c6f-1f4b-4171-a2dc-9a3b17a0a8f0" (UID: "6b026c6f-1f4b-4171-a2dc-9a3b17a0a8f0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:00:38 crc kubenswrapper[4782]: I0202 11:00:38.213836 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b026c6f-1f4b-4171-a2dc-9a3b17a0a8f0-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "6b026c6f-1f4b-4171-a2dc-9a3b17a0a8f0" (UID: "6b026c6f-1f4b-4171-a2dc-9a3b17a0a8f0"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:00:38 crc kubenswrapper[4782]: I0202 11:00:38.240056 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wvq2b\" (UniqueName: \"kubernetes.io/projected/6b026c6f-1f4b-4171-a2dc-9a3b17a0a8f0-kube-api-access-wvq2b\") on node \"crc\" DevicePath \"\"" Feb 02 11:00:38 crc kubenswrapper[4782]: I0202 11:00:38.240095 4782 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b026c6f-1f4b-4171-a2dc-9a3b17a0a8f0-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 11:00:38 crc kubenswrapper[4782]: I0202 11:00:38.240109 4782 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b026c6f-1f4b-4171-a2dc-9a3b17a0a8f0-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 11:00:38 crc kubenswrapper[4782]: I0202 11:00:38.240120 4782 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b026c6f-1f4b-4171-a2dc-9a3b17a0a8f0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 11:00:38 crc kubenswrapper[4782]: I0202 11:00:38.825704 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 02 11:00:38 crc kubenswrapper[4782]: I0202 11:00:38.831767 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6b026c6f-1f4b-4171-a2dc-9a3b17a0a8f0","Type":"ContainerDied","Data":"b659fbd9e60731442fb339dcfdf8314f4d7167c7f486099f43c6dfe96912afd1"} Feb 02 11:00:38 crc kubenswrapper[4782]: I0202 11:00:38.831814 4782 scope.go:117] "RemoveContainer" containerID="ea761e00f637478bb8b568bb6c81afa34c3f7a4ec22a7b59cb0694a1f3b66f1f" Feb 02 11:00:38 crc kubenswrapper[4782]: I0202 11:00:38.866324 4782 scope.go:117] "RemoveContainer" containerID="12de97960a6c2463ee72adbbcb4dfcd44c67437415fdffd257c34188bef89140" Feb 02 11:00:38 crc kubenswrapper[4782]: I0202 11:00:38.883100 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 11:00:38 crc kubenswrapper[4782]: I0202 11:00:38.904044 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 11:00:38 crc kubenswrapper[4782]: I0202 11:00:38.912915 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 02 11:00:38 crc kubenswrapper[4782]: E0202 11:00:38.913400 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b026c6f-1f4b-4171-a2dc-9a3b17a0a8f0" containerName="nova-metadata-metadata" Feb 02 11:00:38 crc kubenswrapper[4782]: I0202 11:00:38.913417 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b026c6f-1f4b-4171-a2dc-9a3b17a0a8f0" containerName="nova-metadata-metadata" Feb 02 11:00:38 crc kubenswrapper[4782]: E0202 11:00:38.913429 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b026c6f-1f4b-4171-a2dc-9a3b17a0a8f0" containerName="nova-metadata-log" Feb 02 11:00:38 crc kubenswrapper[4782]: I0202 11:00:38.913437 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b026c6f-1f4b-4171-a2dc-9a3b17a0a8f0" containerName="nova-metadata-log" Feb 02 11:00:38 crc kubenswrapper[4782]: I0202 11:00:38.913597 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b026c6f-1f4b-4171-a2dc-9a3b17a0a8f0" containerName="nova-metadata-metadata" Feb 02 11:00:38 crc kubenswrapper[4782]: I0202 11:00:38.913615 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b026c6f-1f4b-4171-a2dc-9a3b17a0a8f0" containerName="nova-metadata-log" Feb 02 11:00:38 crc kubenswrapper[4782]: I0202 11:00:38.914663 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 02 11:00:38 crc kubenswrapper[4782]: I0202 11:00:38.917078 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 02 11:00:38 crc kubenswrapper[4782]: I0202 11:00:38.917251 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 02 11:00:38 crc kubenswrapper[4782]: I0202 11:00:38.936477 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 11:00:39 crc kubenswrapper[4782]: I0202 11:00:39.055908 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ffbaaa30-f515-494a-94af-a7a83fb44ada-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"ffbaaa30-f515-494a-94af-a7a83fb44ada\") " pod="openstack/nova-metadata-0" Feb 02 11:00:39 crc kubenswrapper[4782]: I0202 11:00:39.056319 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tbrmz\" (UniqueName: \"kubernetes.io/projected/ffbaaa30-f515-494a-94af-a7a83fb44ada-kube-api-access-tbrmz\") pod \"nova-metadata-0\" (UID: \"ffbaaa30-f515-494a-94af-a7a83fb44ada\") " pod="openstack/nova-metadata-0" Feb 02 11:00:39 crc kubenswrapper[4782]: I0202 11:00:39.056365 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ffbaaa30-f515-494a-94af-a7a83fb44ada-config-data\") pod \"nova-metadata-0\" (UID: \"ffbaaa30-f515-494a-94af-a7a83fb44ada\") " pod="openstack/nova-metadata-0" Feb 02 11:00:39 crc kubenswrapper[4782]: I0202 11:00:39.056399 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffbaaa30-f515-494a-94af-a7a83fb44ada-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ffbaaa30-f515-494a-94af-a7a83fb44ada\") " pod="openstack/nova-metadata-0" Feb 02 11:00:39 crc kubenswrapper[4782]: I0202 11:00:39.056548 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ffbaaa30-f515-494a-94af-a7a83fb44ada-logs\") pod \"nova-metadata-0\" (UID: \"ffbaaa30-f515-494a-94af-a7a83fb44ada\") " pod="openstack/nova-metadata-0" Feb 02 11:00:39 crc kubenswrapper[4782]: I0202 11:00:39.158246 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ffbaaa30-f515-494a-94af-a7a83fb44ada-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"ffbaaa30-f515-494a-94af-a7a83fb44ada\") " pod="openstack/nova-metadata-0" Feb 02 11:00:39 crc kubenswrapper[4782]: I0202 11:00:39.158326 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tbrmz\" (UniqueName: \"kubernetes.io/projected/ffbaaa30-f515-494a-94af-a7a83fb44ada-kube-api-access-tbrmz\") pod \"nova-metadata-0\" (UID: \"ffbaaa30-f515-494a-94af-a7a83fb44ada\") " pod="openstack/nova-metadata-0" Feb 02 11:00:39 crc kubenswrapper[4782]: I0202 11:00:39.158358 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ffbaaa30-f515-494a-94af-a7a83fb44ada-config-data\") pod \"nova-metadata-0\" (UID: \"ffbaaa30-f515-494a-94af-a7a83fb44ada\") " 
pod="openstack/nova-metadata-0" Feb 02 11:00:39 crc kubenswrapper[4782]: I0202 11:00:39.158384 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffbaaa30-f515-494a-94af-a7a83fb44ada-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ffbaaa30-f515-494a-94af-a7a83fb44ada\") " pod="openstack/nova-metadata-0" Feb 02 11:00:39 crc kubenswrapper[4782]: I0202 11:00:39.158462 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ffbaaa30-f515-494a-94af-a7a83fb44ada-logs\") pod \"nova-metadata-0\" (UID: \"ffbaaa30-f515-494a-94af-a7a83fb44ada\") " pod="openstack/nova-metadata-0" Feb 02 11:00:39 crc kubenswrapper[4782]: I0202 11:00:39.158954 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ffbaaa30-f515-494a-94af-a7a83fb44ada-logs\") pod \"nova-metadata-0\" (UID: \"ffbaaa30-f515-494a-94af-a7a83fb44ada\") " pod="openstack/nova-metadata-0" Feb 02 11:00:39 crc kubenswrapper[4782]: I0202 11:00:39.162718 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ffbaaa30-f515-494a-94af-a7a83fb44ada-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"ffbaaa30-f515-494a-94af-a7a83fb44ada\") " pod="openstack/nova-metadata-0" Feb 02 11:00:39 crc kubenswrapper[4782]: I0202 11:00:39.175355 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ffbaaa30-f515-494a-94af-a7a83fb44ada-config-data\") pod \"nova-metadata-0\" (UID: \"ffbaaa30-f515-494a-94af-a7a83fb44ada\") " pod="openstack/nova-metadata-0" Feb 02 11:00:39 crc kubenswrapper[4782]: I0202 11:00:39.177165 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffbaaa30-f515-494a-94af-a7a83fb44ada-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ffbaaa30-f515-494a-94af-a7a83fb44ada\") " pod="openstack/nova-metadata-0" Feb 02 11:00:39 crc kubenswrapper[4782]: I0202 11:00:39.185294 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tbrmz\" (UniqueName: \"kubernetes.io/projected/ffbaaa30-f515-494a-94af-a7a83fb44ada-kube-api-access-tbrmz\") pod \"nova-metadata-0\" (UID: \"ffbaaa30-f515-494a-94af-a7a83fb44ada\") " pod="openstack/nova-metadata-0" Feb 02 11:00:39 crc kubenswrapper[4782]: I0202 11:00:39.236077 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 02 11:00:39 crc kubenswrapper[4782]: I0202 11:00:39.675213 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 11:00:39 crc kubenswrapper[4782]: I0202 11:00:39.840897 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ffbaaa30-f515-494a-94af-a7a83fb44ada","Type":"ContainerStarted","Data":"d6a1a0455927cdb8fd79ed04008c814b244bef301a5077be9e8485bc608e5379"} Feb 02 11:00:40 crc kubenswrapper[4782]: I0202 11:00:40.841052 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6b026c6f-1f4b-4171-a2dc-9a3b17a0a8f0" path="/var/lib/kubelet/pods/6b026c6f-1f4b-4171-a2dc-9a3b17a0a8f0/volumes" Feb 02 11:00:40 crc kubenswrapper[4782]: I0202 11:00:40.859233 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ffbaaa30-f515-494a-94af-a7a83fb44ada","Type":"ContainerStarted","Data":"351eb5141db5c89db50467f9952689f66cb8650b4e10b31c0b90e4c5183d2ee8"} Feb 02 11:00:40 crc kubenswrapper[4782]: I0202 11:00:40.859299 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ffbaaa30-f515-494a-94af-a7a83fb44ada","Type":"ContainerStarted","Data":"6c16bff9bf25b5a0cd3fe3f261487e2bbd11f9af4e63d13936b683c0f26ad4a7"} Feb 02 11:00:40 crc kubenswrapper[4782]: I0202 11:00:40.892131 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.892114642 podStartE2EDuration="2.892114642s" podCreationTimestamp="2026-02-02 11:00:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 11:00:40.888944341 +0000 UTC m=+1320.773137057" watchObservedRunningTime="2026-02-02 11:00:40.892114642 +0000 UTC m=+1320.776307348" Feb 02 11:00:41 crc kubenswrapper[4782]: I0202 11:00:41.181111 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 02 11:00:41 crc kubenswrapper[4782]: I0202 11:00:41.515679 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 02 11:00:41 crc kubenswrapper[4782]: I0202 11:00:41.601515 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1654143f-6a4d-400a-9879-aeddb7807563-config-data\") pod \"1654143f-6a4d-400a-9879-aeddb7807563\" (UID: \"1654143f-6a4d-400a-9879-aeddb7807563\") " Feb 02 11:00:41 crc kubenswrapper[4782]: I0202 11:00:41.601765 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1654143f-6a4d-400a-9879-aeddb7807563-public-tls-certs\") pod \"1654143f-6a4d-400a-9879-aeddb7807563\" (UID: \"1654143f-6a4d-400a-9879-aeddb7807563\") " Feb 02 11:00:41 crc kubenswrapper[4782]: I0202 11:00:41.601809 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1654143f-6a4d-400a-9879-aeddb7807563-combined-ca-bundle\") pod \"1654143f-6a4d-400a-9879-aeddb7807563\" (UID: \"1654143f-6a4d-400a-9879-aeddb7807563\") " Feb 02 11:00:41 crc kubenswrapper[4782]: I0202 11:00:41.601842 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1654143f-6a4d-400a-9879-aeddb7807563-internal-tls-certs\") pod \"1654143f-6a4d-400a-9879-aeddb7807563\" (UID: \"1654143f-6a4d-400a-9879-aeddb7807563\") " Feb 02 11:00:41 crc kubenswrapper[4782]: I0202 11:00:41.601907 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1654143f-6a4d-400a-9879-aeddb7807563-logs\") pod \"1654143f-6a4d-400a-9879-aeddb7807563\" (UID: \"1654143f-6a4d-400a-9879-aeddb7807563\") " Feb 02 11:00:41 crc kubenswrapper[4782]: I0202 11:00:41.601946 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8scw9\" (UniqueName: \"kubernetes.io/projected/1654143f-6a4d-400a-9879-aeddb7807563-kube-api-access-8scw9\") pod \"1654143f-6a4d-400a-9879-aeddb7807563\" (UID: \"1654143f-6a4d-400a-9879-aeddb7807563\") " Feb 02 11:00:41 crc kubenswrapper[4782]: I0202 11:00:41.602860 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1654143f-6a4d-400a-9879-aeddb7807563-logs" (OuterVolumeSpecName: "logs") pod "1654143f-6a4d-400a-9879-aeddb7807563" (UID: "1654143f-6a4d-400a-9879-aeddb7807563"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:00:41 crc kubenswrapper[4782]: I0202 11:00:41.612207 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1654143f-6a4d-400a-9879-aeddb7807563-kube-api-access-8scw9" (OuterVolumeSpecName: "kube-api-access-8scw9") pod "1654143f-6a4d-400a-9879-aeddb7807563" (UID: "1654143f-6a4d-400a-9879-aeddb7807563"). InnerVolumeSpecName "kube-api-access-8scw9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:00:41 crc kubenswrapper[4782]: I0202 11:00:41.631282 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1654143f-6a4d-400a-9879-aeddb7807563-config-data" (OuterVolumeSpecName: "config-data") pod "1654143f-6a4d-400a-9879-aeddb7807563" (UID: "1654143f-6a4d-400a-9879-aeddb7807563"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:00:41 crc kubenswrapper[4782]: I0202 11:00:41.632961 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1654143f-6a4d-400a-9879-aeddb7807563-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1654143f-6a4d-400a-9879-aeddb7807563" (UID: "1654143f-6a4d-400a-9879-aeddb7807563"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:00:41 crc kubenswrapper[4782]: I0202 11:00:41.647634 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1654143f-6a4d-400a-9879-aeddb7807563-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "1654143f-6a4d-400a-9879-aeddb7807563" (UID: "1654143f-6a4d-400a-9879-aeddb7807563"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:00:41 crc kubenswrapper[4782]: I0202 11:00:41.648999 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1654143f-6a4d-400a-9879-aeddb7807563-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "1654143f-6a4d-400a-9879-aeddb7807563" (UID: "1654143f-6a4d-400a-9879-aeddb7807563"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:00:41 crc kubenswrapper[4782]: I0202 11:00:41.703703 4782 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1654143f-6a4d-400a-9879-aeddb7807563-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 11:00:41 crc kubenswrapper[4782]: I0202 11:00:41.703736 4782 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1654143f-6a4d-400a-9879-aeddb7807563-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 11:00:41 crc kubenswrapper[4782]: I0202 11:00:41.703745 4782 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1654143f-6a4d-400a-9879-aeddb7807563-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 11:00:41 crc kubenswrapper[4782]: I0202 11:00:41.703754 4782 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1654143f-6a4d-400a-9879-aeddb7807563-logs\") on node \"crc\" DevicePath \"\"" Feb 02 11:00:41 crc kubenswrapper[4782]: I0202 11:00:41.703765 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8scw9\" (UniqueName: \"kubernetes.io/projected/1654143f-6a4d-400a-9879-aeddb7807563-kube-api-access-8scw9\") on node \"crc\" DevicePath \"\"" Feb 02 11:00:41 crc kubenswrapper[4782]: I0202 11:00:41.703774 4782 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1654143f-6a4d-400a-9879-aeddb7807563-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 11:00:41 crc kubenswrapper[4782]: I0202 11:00:41.869338 4782 generic.go:334] "Generic (PLEG): container finished" podID="1654143f-6a4d-400a-9879-aeddb7807563" containerID="d70d47acfa259ce9552512a4e9675a0ff84ac81e446f881063253f9b1e1e6a6e" exitCode=0 Feb 02 11:00:41 crc kubenswrapper[4782]: I0202 11:00:41.869799 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 02 11:00:41 crc kubenswrapper[4782]: I0202 11:00:41.869802 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1654143f-6a4d-400a-9879-aeddb7807563","Type":"ContainerDied","Data":"d70d47acfa259ce9552512a4e9675a0ff84ac81e446f881063253f9b1e1e6a6e"} Feb 02 11:00:41 crc kubenswrapper[4782]: I0202 11:00:41.869936 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1654143f-6a4d-400a-9879-aeddb7807563","Type":"ContainerDied","Data":"1bbf1b70356ad1417db8ddfc5e871c0f1d62f0532842271b9f250d7cab9a701c"} Feb 02 11:00:41 crc kubenswrapper[4782]: I0202 11:00:41.869959 4782 scope.go:117] "RemoveContainer" containerID="d70d47acfa259ce9552512a4e9675a0ff84ac81e446f881063253f9b1e1e6a6e" Feb 02 11:00:41 crc kubenswrapper[4782]: I0202 11:00:41.895765 4782 scope.go:117] "RemoveContainer" containerID="d52bae39a14dabc04cc5275e974bb2daff531c99bf25ba1e69a69291dc498ead" Feb 02 11:00:41 crc kubenswrapper[4782]: I0202 11:00:41.905145 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 02 11:00:41 crc kubenswrapper[4782]: I0202 11:00:41.916213 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 02 11:00:41 crc kubenswrapper[4782]: I0202 11:00:41.922749 4782 scope.go:117] "RemoveContainer" containerID="d70d47acfa259ce9552512a4e9675a0ff84ac81e446f881063253f9b1e1e6a6e" Feb 02 11:00:41 crc kubenswrapper[4782]: E0202 11:00:41.925135 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d70d47acfa259ce9552512a4e9675a0ff84ac81e446f881063253f9b1e1e6a6e\": container with ID starting with d70d47acfa259ce9552512a4e9675a0ff84ac81e446f881063253f9b1e1e6a6e not found: ID does not exist" containerID="d70d47acfa259ce9552512a4e9675a0ff84ac81e446f881063253f9b1e1e6a6e" Feb 02 11:00:41 crc kubenswrapper[4782]: I0202 11:00:41.925169 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d70d47acfa259ce9552512a4e9675a0ff84ac81e446f881063253f9b1e1e6a6e"} err="failed to get container status \"d70d47acfa259ce9552512a4e9675a0ff84ac81e446f881063253f9b1e1e6a6e\": rpc error: code = NotFound desc = could not find container \"d70d47acfa259ce9552512a4e9675a0ff84ac81e446f881063253f9b1e1e6a6e\": container with ID starting with d70d47acfa259ce9552512a4e9675a0ff84ac81e446f881063253f9b1e1e6a6e not found: ID does not exist" Feb 02 11:00:41 crc kubenswrapper[4782]: I0202 11:00:41.925210 4782 scope.go:117] "RemoveContainer" containerID="d52bae39a14dabc04cc5275e974bb2daff531c99bf25ba1e69a69291dc498ead" Feb 02 11:00:41 crc kubenswrapper[4782]: E0202 11:00:41.925625 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d52bae39a14dabc04cc5275e974bb2daff531c99bf25ba1e69a69291dc498ead\": container with ID starting with d52bae39a14dabc04cc5275e974bb2daff531c99bf25ba1e69a69291dc498ead not found: ID does not exist" containerID="d52bae39a14dabc04cc5275e974bb2daff531c99bf25ba1e69a69291dc498ead" Feb 02 11:00:41 crc kubenswrapper[4782]: I0202 11:00:41.925671 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d52bae39a14dabc04cc5275e974bb2daff531c99bf25ba1e69a69291dc498ead"} err="failed to get container status \"d52bae39a14dabc04cc5275e974bb2daff531c99bf25ba1e69a69291dc498ead\": rpc error: code = NotFound desc = could not 
find container \"d52bae39a14dabc04cc5275e974bb2daff531c99bf25ba1e69a69291dc498ead\": container with ID starting with d52bae39a14dabc04cc5275e974bb2daff531c99bf25ba1e69a69291dc498ead not found: ID does not exist" Feb 02 11:00:41 crc kubenswrapper[4782]: I0202 11:00:41.932426 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 02 11:00:41 crc kubenswrapper[4782]: E0202 11:00:41.932862 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1654143f-6a4d-400a-9879-aeddb7807563" containerName="nova-api-log" Feb 02 11:00:41 crc kubenswrapper[4782]: I0202 11:00:41.932893 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="1654143f-6a4d-400a-9879-aeddb7807563" containerName="nova-api-log" Feb 02 11:00:41 crc kubenswrapper[4782]: E0202 11:00:41.932916 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1654143f-6a4d-400a-9879-aeddb7807563" containerName="nova-api-api" Feb 02 11:00:41 crc kubenswrapper[4782]: I0202 11:00:41.932925 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="1654143f-6a4d-400a-9879-aeddb7807563" containerName="nova-api-api" Feb 02 11:00:41 crc kubenswrapper[4782]: I0202 11:00:41.933735 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="1654143f-6a4d-400a-9879-aeddb7807563" containerName="nova-api-log" Feb 02 11:00:41 crc kubenswrapper[4782]: I0202 11:00:41.938455 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="1654143f-6a4d-400a-9879-aeddb7807563" containerName="nova-api-api" Feb 02 11:00:41 crc kubenswrapper[4782]: I0202 11:00:41.940377 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 02 11:00:41 crc kubenswrapper[4782]: I0202 11:00:41.948159 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Feb 02 11:00:41 crc kubenswrapper[4782]: I0202 11:00:41.948289 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 02 11:00:41 crc kubenswrapper[4782]: I0202 11:00:41.948597 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 02 11:00:41 crc kubenswrapper[4782]: I0202 11:00:41.965848 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 02 11:00:42 crc kubenswrapper[4782]: I0202 11:00:42.007945 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3797650-67c5-417c-9b38-52a581a6bbd3-config-data\") pod \"nova-api-0\" (UID: \"c3797650-67c5-417c-9b38-52a581a6bbd3\") " pod="openstack/nova-api-0" Feb 02 11:00:42 crc kubenswrapper[4782]: I0202 11:00:42.007999 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c3797650-67c5-417c-9b38-52a581a6bbd3-logs\") pod \"nova-api-0\" (UID: \"c3797650-67c5-417c-9b38-52a581a6bbd3\") " pod="openstack/nova-api-0" Feb 02 11:00:42 crc kubenswrapper[4782]: I0202 11:00:42.008024 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tg7wl\" (UniqueName: \"kubernetes.io/projected/c3797650-67c5-417c-9b38-52a581a6bbd3-kube-api-access-tg7wl\") pod \"nova-api-0\" (UID: \"c3797650-67c5-417c-9b38-52a581a6bbd3\") " pod="openstack/nova-api-0" Feb 02 11:00:42 crc kubenswrapper[4782]: I0202 11:00:42.008066 4782 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3797650-67c5-417c-9b38-52a581a6bbd3-public-tls-certs\") pod \"nova-api-0\" (UID: \"c3797650-67c5-417c-9b38-52a581a6bbd3\") " pod="openstack/nova-api-0" Feb 02 11:00:42 crc kubenswrapper[4782]: I0202 11:00:42.008091 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3797650-67c5-417c-9b38-52a581a6bbd3-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c3797650-67c5-417c-9b38-52a581a6bbd3\") " pod="openstack/nova-api-0" Feb 02 11:00:42 crc kubenswrapper[4782]: I0202 11:00:42.008117 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3797650-67c5-417c-9b38-52a581a6bbd3-internal-tls-certs\") pod \"nova-api-0\" (UID: \"c3797650-67c5-417c-9b38-52a581a6bbd3\") " pod="openstack/nova-api-0" Feb 02 11:00:42 crc kubenswrapper[4782]: I0202 11:00:42.109524 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3797650-67c5-417c-9b38-52a581a6bbd3-config-data\") pod \"nova-api-0\" (UID: \"c3797650-67c5-417c-9b38-52a581a6bbd3\") " pod="openstack/nova-api-0" Feb 02 11:00:42 crc kubenswrapper[4782]: I0202 11:00:42.109970 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c3797650-67c5-417c-9b38-52a581a6bbd3-logs\") pod \"nova-api-0\" (UID: \"c3797650-67c5-417c-9b38-52a581a6bbd3\") " pod="openstack/nova-api-0" Feb 02 11:00:42 crc kubenswrapper[4782]: I0202 11:00:42.109995 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tg7wl\" (UniqueName: \"kubernetes.io/projected/c3797650-67c5-417c-9b38-52a581a6bbd3-kube-api-access-tg7wl\") pod \"nova-api-0\" (UID: \"c3797650-67c5-417c-9b38-52a581a6bbd3\") " pod="openstack/nova-api-0" Feb 02 11:00:42 crc kubenswrapper[4782]: I0202 11:00:42.110029 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3797650-67c5-417c-9b38-52a581a6bbd3-public-tls-certs\") pod \"nova-api-0\" (UID: \"c3797650-67c5-417c-9b38-52a581a6bbd3\") " pod="openstack/nova-api-0" Feb 02 11:00:42 crc kubenswrapper[4782]: I0202 11:00:42.110061 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3797650-67c5-417c-9b38-52a581a6bbd3-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c3797650-67c5-417c-9b38-52a581a6bbd3\") " pod="openstack/nova-api-0" Feb 02 11:00:42 crc kubenswrapper[4782]: I0202 11:00:42.110086 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3797650-67c5-417c-9b38-52a581a6bbd3-internal-tls-certs\") pod \"nova-api-0\" (UID: \"c3797650-67c5-417c-9b38-52a581a6bbd3\") " pod="openstack/nova-api-0" Feb 02 11:00:42 crc kubenswrapper[4782]: I0202 11:00:42.110507 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c3797650-67c5-417c-9b38-52a581a6bbd3-logs\") pod \"nova-api-0\" (UID: \"c3797650-67c5-417c-9b38-52a581a6bbd3\") " pod="openstack/nova-api-0" Feb 02 11:00:42 crc kubenswrapper[4782]: I0202 11:00:42.117209 4782 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3797650-67c5-417c-9b38-52a581a6bbd3-internal-tls-certs\") pod \"nova-api-0\" (UID: \"c3797650-67c5-417c-9b38-52a581a6bbd3\") " pod="openstack/nova-api-0" Feb 02 11:00:42 crc kubenswrapper[4782]: I0202 11:00:42.118023 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3797650-67c5-417c-9b38-52a581a6bbd3-public-tls-certs\") pod \"nova-api-0\" (UID: \"c3797650-67c5-417c-9b38-52a581a6bbd3\") " pod="openstack/nova-api-0" Feb 02 11:00:42 crc kubenswrapper[4782]: I0202 11:00:42.118677 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3797650-67c5-417c-9b38-52a581a6bbd3-config-data\") pod \"nova-api-0\" (UID: \"c3797650-67c5-417c-9b38-52a581a6bbd3\") " pod="openstack/nova-api-0" Feb 02 11:00:42 crc kubenswrapper[4782]: I0202 11:00:42.118878 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3797650-67c5-417c-9b38-52a581a6bbd3-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c3797650-67c5-417c-9b38-52a581a6bbd3\") " pod="openstack/nova-api-0" Feb 02 11:00:42 crc kubenswrapper[4782]: I0202 11:00:42.132386 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tg7wl\" (UniqueName: \"kubernetes.io/projected/c3797650-67c5-417c-9b38-52a581a6bbd3-kube-api-access-tg7wl\") pod \"nova-api-0\" (UID: \"c3797650-67c5-417c-9b38-52a581a6bbd3\") " pod="openstack/nova-api-0" Feb 02 11:00:42 crc kubenswrapper[4782]: I0202 11:00:42.281359 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 02 11:00:42 crc kubenswrapper[4782]: I0202 11:00:42.770344 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 02 11:00:42 crc kubenswrapper[4782]: I0202 11:00:42.832216 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1654143f-6a4d-400a-9879-aeddb7807563" path="/var/lib/kubelet/pods/1654143f-6a4d-400a-9879-aeddb7807563/volumes" Feb 02 11:00:42 crc kubenswrapper[4782]: I0202 11:00:42.880348 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c3797650-67c5-417c-9b38-52a581a6bbd3","Type":"ContainerStarted","Data":"6450d74b86443cb9709b6f730ffb3efeca50f058ab740f98f73d44bc60828a54"} Feb 02 11:00:43 crc kubenswrapper[4782]: I0202 11:00:43.890054 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c3797650-67c5-417c-9b38-52a581a6bbd3","Type":"ContainerStarted","Data":"c90cfd6022005ab700f8909532a9d49ab89f1d3cb4cd119bd69548e74e6a831b"} Feb 02 11:00:43 crc kubenswrapper[4782]: I0202 11:00:43.890439 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c3797650-67c5-417c-9b38-52a581a6bbd3","Type":"ContainerStarted","Data":"4f8bf525f8073b8f6e5a8de4fd4bb78495effe3a7f677b1c9da0997e98931f05"} Feb 02 11:00:43 crc kubenswrapper[4782]: I0202 11:00:43.911591 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.911569906 podStartE2EDuration="2.911569906s" podCreationTimestamp="2026-02-02 11:00:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 11:00:43.907792347 +0000 UTC m=+1323.791985073" watchObservedRunningTime="2026-02-02 11:00:43.911569906 +0000 UTC m=+1323.795762622" Feb 02 11:00:44 crc kubenswrapper[4782]: I0202 11:00:44.236741 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 02 11:00:44 crc kubenswrapper[4782]: I0202 11:00:44.236787 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 02 11:00:46 crc kubenswrapper[4782]: I0202 11:00:46.180967 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 02 11:00:46 crc kubenswrapper[4782]: I0202 11:00:46.206760 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 02 11:00:46 crc kubenswrapper[4782]: I0202 11:00:46.942440 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 02 11:00:49 crc kubenswrapper[4782]: I0202 11:00:49.236867 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 02 11:00:49 crc kubenswrapper[4782]: I0202 11:00:49.237213 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 02 11:00:50 crc kubenswrapper[4782]: I0202 11:00:50.251845 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="ffbaaa30-f515-494a-94af-a7a83fb44ada" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.188:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 02 11:00:50 crc kubenswrapper[4782]: I0202 11:00:50.251874 4782 prober.go:107] "Probe failed" 
probeType="Startup" pod="openstack/nova-metadata-0" podUID="ffbaaa30-f515-494a-94af-a7a83fb44ada" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.188:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 02 11:00:51 crc kubenswrapper[4782]: I0202 11:00:51.974431 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 02 11:00:52 crc kubenswrapper[4782]: I0202 11:00:52.282767 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 02 11:00:52 crc kubenswrapper[4782]: I0202 11:00:52.283299 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 02 11:00:53 crc kubenswrapper[4782]: I0202 11:00:53.297865 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="c3797650-67c5-417c-9b38-52a581a6bbd3" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.189:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 02 11:00:53 crc kubenswrapper[4782]: I0202 11:00:53.297928 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="c3797650-67c5-417c-9b38-52a581a6bbd3" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.189:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 02 11:00:59 crc kubenswrapper[4782]: I0202 11:00:59.242050 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 02 11:00:59 crc kubenswrapper[4782]: I0202 11:00:59.242635 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 02 11:00:59 crc kubenswrapper[4782]: I0202 11:00:59.249430 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 02 11:00:59 crc kubenswrapper[4782]: I0202 11:00:59.250402 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 02 11:01:00 crc kubenswrapper[4782]: I0202 11:01:00.163322 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29500501-wcsmz"] Feb 02 11:01:00 crc kubenswrapper[4782]: I0202 11:01:00.165143 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29500501-wcsmz" Feb 02 11:01:00 crc kubenswrapper[4782]: I0202 11:01:00.171306 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29500501-wcsmz"] Feb 02 11:01:00 crc kubenswrapper[4782]: I0202 11:01:00.263831 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9e752213-09b8-4c8e-a5b6-9cfbf9cea168-fernet-keys\") pod \"keystone-cron-29500501-wcsmz\" (UID: \"9e752213-09b8-4c8e-a5b6-9cfbf9cea168\") " pod="openstack/keystone-cron-29500501-wcsmz" Feb 02 11:01:00 crc kubenswrapper[4782]: I0202 11:01:00.263922 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fn6m9\" (UniqueName: \"kubernetes.io/projected/9e752213-09b8-4c8e-a5b6-9cfbf9cea168-kube-api-access-fn6m9\") pod \"keystone-cron-29500501-wcsmz\" (UID: \"9e752213-09b8-4c8e-a5b6-9cfbf9cea168\") " pod="openstack/keystone-cron-29500501-wcsmz" Feb 02 11:01:00 crc kubenswrapper[4782]: I0202 11:01:00.263982 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e752213-09b8-4c8e-a5b6-9cfbf9cea168-combined-ca-bundle\") pod \"keystone-cron-29500501-wcsmz\" (UID: \"9e752213-09b8-4c8e-a5b6-9cfbf9cea168\") " pod="openstack/keystone-cron-29500501-wcsmz" Feb 02 11:01:00 crc kubenswrapper[4782]: I0202 11:01:00.264166 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e752213-09b8-4c8e-a5b6-9cfbf9cea168-config-data\") pod \"keystone-cron-29500501-wcsmz\" (UID: \"9e752213-09b8-4c8e-a5b6-9cfbf9cea168\") " pod="openstack/keystone-cron-29500501-wcsmz" Feb 02 11:01:00 crc kubenswrapper[4782]: I0202 11:01:00.365735 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e752213-09b8-4c8e-a5b6-9cfbf9cea168-combined-ca-bundle\") pod \"keystone-cron-29500501-wcsmz\" (UID: \"9e752213-09b8-4c8e-a5b6-9cfbf9cea168\") " pod="openstack/keystone-cron-29500501-wcsmz" Feb 02 11:01:00 crc kubenswrapper[4782]: I0202 11:01:00.366031 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e752213-09b8-4c8e-a5b6-9cfbf9cea168-config-data\") pod \"keystone-cron-29500501-wcsmz\" (UID: \"9e752213-09b8-4c8e-a5b6-9cfbf9cea168\") " pod="openstack/keystone-cron-29500501-wcsmz" Feb 02 11:01:00 crc kubenswrapper[4782]: I0202 11:01:00.366175 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9e752213-09b8-4c8e-a5b6-9cfbf9cea168-fernet-keys\") pod \"keystone-cron-29500501-wcsmz\" (UID: \"9e752213-09b8-4c8e-a5b6-9cfbf9cea168\") " pod="openstack/keystone-cron-29500501-wcsmz" Feb 02 11:01:00 crc kubenswrapper[4782]: I0202 11:01:00.366915 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fn6m9\" (UniqueName: \"kubernetes.io/projected/9e752213-09b8-4c8e-a5b6-9cfbf9cea168-kube-api-access-fn6m9\") pod \"keystone-cron-29500501-wcsmz\" (UID: \"9e752213-09b8-4c8e-a5b6-9cfbf9cea168\") " pod="openstack/keystone-cron-29500501-wcsmz" Feb 02 11:01:00 crc kubenswrapper[4782]: I0202 11:01:00.372183 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9e752213-09b8-4c8e-a5b6-9cfbf9cea168-fernet-keys\") pod \"keystone-cron-29500501-wcsmz\" (UID: \"9e752213-09b8-4c8e-a5b6-9cfbf9cea168\") " pod="openstack/keystone-cron-29500501-wcsmz" Feb 02 11:01:00 crc kubenswrapper[4782]: I0202 11:01:00.372359 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e752213-09b8-4c8e-a5b6-9cfbf9cea168-combined-ca-bundle\") pod \"keystone-cron-29500501-wcsmz\" (UID: \"9e752213-09b8-4c8e-a5b6-9cfbf9cea168\") " pod="openstack/keystone-cron-29500501-wcsmz" Feb 02 11:01:00 crc kubenswrapper[4782]: I0202 11:01:00.381174 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e752213-09b8-4c8e-a5b6-9cfbf9cea168-config-data\") pod \"keystone-cron-29500501-wcsmz\" (UID: \"9e752213-09b8-4c8e-a5b6-9cfbf9cea168\") " pod="openstack/keystone-cron-29500501-wcsmz" Feb 02 11:01:00 crc kubenswrapper[4782]: I0202 11:01:00.385515 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fn6m9\" (UniqueName: \"kubernetes.io/projected/9e752213-09b8-4c8e-a5b6-9cfbf9cea168-kube-api-access-fn6m9\") pod \"keystone-cron-29500501-wcsmz\" (UID: \"9e752213-09b8-4c8e-a5b6-9cfbf9cea168\") " pod="openstack/keystone-cron-29500501-wcsmz" Feb 02 11:01:00 crc kubenswrapper[4782]: I0202 11:01:00.494599 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29500501-wcsmz" Feb 02 11:01:00 crc kubenswrapper[4782]: I0202 11:01:00.950621 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29500501-wcsmz"] Feb 02 11:01:01 crc kubenswrapper[4782]: I0202 11:01:01.025132 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29500501-wcsmz" event={"ID":"9e752213-09b8-4c8e-a5b6-9cfbf9cea168","Type":"ContainerStarted","Data":"7e03e111d2613782d02b9f6f81cddd918da59c7a09ed0861cd79bccaf1bd7406"} Feb 02 11:01:02 crc kubenswrapper[4782]: I0202 11:01:02.034347 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29500501-wcsmz" event={"ID":"9e752213-09b8-4c8e-a5b6-9cfbf9cea168","Type":"ContainerStarted","Data":"87f6259d01839c61f9982bb1cdc9bafb3471a9d3bf6c80349740746cc49506e7"} Feb 02 11:01:02 crc kubenswrapper[4782]: I0202 11:01:02.057527 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29500501-wcsmz" podStartSLOduration=2.057501714 podStartE2EDuration="2.057501714s" podCreationTimestamp="2026-02-02 11:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 11:01:02.055595989 +0000 UTC m=+1341.939788705" watchObservedRunningTime="2026-02-02 11:01:02.057501714 +0000 UTC m=+1341.941694450" Feb 02 11:01:02 crc kubenswrapper[4782]: I0202 11:01:02.291299 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 02 11:01:02 crc kubenswrapper[4782]: I0202 11:01:02.292003 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 02 11:01:02 crc kubenswrapper[4782]: I0202 11:01:02.292273 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 02 11:01:02 crc kubenswrapper[4782]: I0202 11:01:02.297430 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack/nova-api-0" Feb 02 11:01:03 crc kubenswrapper[4782]: I0202 11:01:03.044970 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 02 11:01:03 crc kubenswrapper[4782]: I0202 11:01:03.051920 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 02 11:01:04 crc kubenswrapper[4782]: I0202 11:01:04.054451 4782 generic.go:334] "Generic (PLEG): container finished" podID="9e752213-09b8-4c8e-a5b6-9cfbf9cea168" containerID="87f6259d01839c61f9982bb1cdc9bafb3471a9d3bf6c80349740746cc49506e7" exitCode=0 Feb 02 11:01:04 crc kubenswrapper[4782]: I0202 11:01:04.054545 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29500501-wcsmz" event={"ID":"9e752213-09b8-4c8e-a5b6-9cfbf9cea168","Type":"ContainerDied","Data":"87f6259d01839c61f9982bb1cdc9bafb3471a9d3bf6c80349740746cc49506e7"} Feb 02 11:01:05 crc kubenswrapper[4782]: I0202 11:01:05.357349 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29500501-wcsmz" Feb 02 11:01:05 crc kubenswrapper[4782]: I0202 11:01:05.455222 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e752213-09b8-4c8e-a5b6-9cfbf9cea168-config-data\") pod \"9e752213-09b8-4c8e-a5b6-9cfbf9cea168\" (UID: \"9e752213-09b8-4c8e-a5b6-9cfbf9cea168\") " Feb 02 11:01:05 crc kubenswrapper[4782]: I0202 11:01:05.455300 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9e752213-09b8-4c8e-a5b6-9cfbf9cea168-fernet-keys\") pod \"9e752213-09b8-4c8e-a5b6-9cfbf9cea168\" (UID: \"9e752213-09b8-4c8e-a5b6-9cfbf9cea168\") " Feb 02 11:01:05 crc kubenswrapper[4782]: I0202 11:01:05.455383 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fn6m9\" (UniqueName: \"kubernetes.io/projected/9e752213-09b8-4c8e-a5b6-9cfbf9cea168-kube-api-access-fn6m9\") pod \"9e752213-09b8-4c8e-a5b6-9cfbf9cea168\" (UID: \"9e752213-09b8-4c8e-a5b6-9cfbf9cea168\") " Feb 02 11:01:05 crc kubenswrapper[4782]: I0202 11:01:05.455424 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e752213-09b8-4c8e-a5b6-9cfbf9cea168-combined-ca-bundle\") pod \"9e752213-09b8-4c8e-a5b6-9cfbf9cea168\" (UID: \"9e752213-09b8-4c8e-a5b6-9cfbf9cea168\") " Feb 02 11:01:05 crc kubenswrapper[4782]: I0202 11:01:05.461625 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e752213-09b8-4c8e-a5b6-9cfbf9cea168-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "9e752213-09b8-4c8e-a5b6-9cfbf9cea168" (UID: "9e752213-09b8-4c8e-a5b6-9cfbf9cea168"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:01:05 crc kubenswrapper[4782]: I0202 11:01:05.461868 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e752213-09b8-4c8e-a5b6-9cfbf9cea168-kube-api-access-fn6m9" (OuterVolumeSpecName: "kube-api-access-fn6m9") pod "9e752213-09b8-4c8e-a5b6-9cfbf9cea168" (UID: "9e752213-09b8-4c8e-a5b6-9cfbf9cea168"). InnerVolumeSpecName "kube-api-access-fn6m9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:01:05 crc kubenswrapper[4782]: I0202 11:01:05.488886 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e752213-09b8-4c8e-a5b6-9cfbf9cea168-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9e752213-09b8-4c8e-a5b6-9cfbf9cea168" (UID: "9e752213-09b8-4c8e-a5b6-9cfbf9cea168"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:01:05 crc kubenswrapper[4782]: I0202 11:01:05.509058 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e752213-09b8-4c8e-a5b6-9cfbf9cea168-config-data" (OuterVolumeSpecName: "config-data") pod "9e752213-09b8-4c8e-a5b6-9cfbf9cea168" (UID: "9e752213-09b8-4c8e-a5b6-9cfbf9cea168"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:01:05 crc kubenswrapper[4782]: I0202 11:01:05.557935 4782 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e752213-09b8-4c8e-a5b6-9cfbf9cea168-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 11:01:05 crc kubenswrapper[4782]: I0202 11:01:05.557979 4782 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9e752213-09b8-4c8e-a5b6-9cfbf9cea168-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 02 11:01:05 crc kubenswrapper[4782]: I0202 11:01:05.557992 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fn6m9\" (UniqueName: \"kubernetes.io/projected/9e752213-09b8-4c8e-a5b6-9cfbf9cea168-kube-api-access-fn6m9\") on node \"crc\" DevicePath \"\"" Feb 02 11:01:05 crc kubenswrapper[4782]: I0202 11:01:05.558007 4782 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e752213-09b8-4c8e-a5b6-9cfbf9cea168-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 11:01:06 crc kubenswrapper[4782]: I0202 11:01:06.080083 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29500501-wcsmz" event={"ID":"9e752213-09b8-4c8e-a5b6-9cfbf9cea168","Type":"ContainerDied","Data":"7e03e111d2613782d02b9f6f81cddd918da59c7a09ed0861cd79bccaf1bd7406"} Feb 02 11:01:06 crc kubenswrapper[4782]: I0202 11:01:06.080363 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7e03e111d2613782d02b9f6f81cddd918da59c7a09ed0861cd79bccaf1bd7406" Feb 02 11:01:06 crc kubenswrapper[4782]: I0202 11:01:06.080136 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29500501-wcsmz" Feb 02 11:01:11 crc kubenswrapper[4782]: I0202 11:01:11.143914 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 02 11:01:12 crc kubenswrapper[4782]: I0202 11:01:12.266025 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 02 11:01:16 crc kubenswrapper[4782]: I0202 11:01:16.634769 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9" containerName="rabbitmq" containerID="cri-o://8604d1c3c048376b925140bf95b36fa52a1c0e9474ad6d8f17f938507dee28c7" gracePeriod=604795 Feb 02 11:01:17 crc kubenswrapper[4782]: I0202 11:01:17.095734 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="02fc338c-2f8c-4e17-8d5f-7a919f4237a2" containerName="rabbitmq" containerID="cri-o://1ee82a85497580a923b17b361c957e0e8638a120e7df5248f943224523148dd7" gracePeriod=604796 Feb 02 11:01:20 crc kubenswrapper[4782]: I0202 11:01:20.701462 4782 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.97:5671: connect: connection refused" Feb 02 11:01:21 crc kubenswrapper[4782]: I0202 11:01:21.106782 4782 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="02fc338c-2f8c-4e17-8d5f-7a919f4237a2" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.98:5671: connect: connection refused" Feb 02 11:01:23 crc kubenswrapper[4782]: I0202 11:01:23.216281 4782 generic.go:334] "Generic (PLEG): container finished" podID="e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9" containerID="8604d1c3c048376b925140bf95b36fa52a1c0e9474ad6d8f17f938507dee28c7" exitCode=0 Feb 02 11:01:23 crc kubenswrapper[4782]: I0202 11:01:23.216718 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9","Type":"ContainerDied","Data":"8604d1c3c048376b925140bf95b36fa52a1c0e9474ad6d8f17f938507dee28c7"} Feb 02 11:01:23 crc kubenswrapper[4782]: I0202 11:01:23.347924 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 02 11:01:23 crc kubenswrapper[4782]: I0202 11:01:23.497081 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j8b77\" (UniqueName: \"kubernetes.io/projected/e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9-kube-api-access-j8b77\") pod \"e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9\" (UID: \"e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9\") " Feb 02 11:01:23 crc kubenswrapper[4782]: I0202 11:01:23.497149 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9-plugins-conf\") pod \"e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9\" (UID: \"e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9\") " Feb 02 11:01:23 crc kubenswrapper[4782]: I0202 11:01:23.497172 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9-rabbitmq-confd\") pod \"e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9\" (UID: \"e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9\") " Feb 02 11:01:23 crc kubenswrapper[4782]: I0202 11:01:23.497191 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9-config-data\") pod \"e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9\" (UID: \"e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9\") " Feb 02 11:01:23 crc kubenswrapper[4782]: I0202 11:01:23.497232 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9-rabbitmq-tls\") pod \"e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9\" (UID: \"e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9\") " Feb 02 11:01:23 crc kubenswrapper[4782]: I0202 11:01:23.497324 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9-server-conf\") pod \"e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9\" (UID: \"e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9\") " Feb 02 11:01:23 crc kubenswrapper[4782]: I0202 11:01:23.497348 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9-pod-info\") pod \"e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9\" (UID: \"e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9\") " Feb 02 11:01:23 crc kubenswrapper[4782]: I0202 11:01:23.497385 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9-erlang-cookie-secret\") pod \"e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9\" (UID: \"e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9\") " Feb 02 11:01:23 crc kubenswrapper[4782]: I0202 11:01:23.497494 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9-rabbitmq-erlang-cookie\") pod \"e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9\" (UID: \"e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9\") " Feb 02 11:01:23 crc kubenswrapper[4782]: I0202 11:01:23.497517 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9-rabbitmq-plugins\") pod 
\"e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9\" (UID: \"e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9\") " Feb 02 11:01:23 crc kubenswrapper[4782]: I0202 11:01:23.497539 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9\" (UID: \"e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9\") " Feb 02 11:01:23 crc kubenswrapper[4782]: I0202 11:01:23.498606 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9" (UID: "e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:01:23 crc kubenswrapper[4782]: I0202 11:01:23.507032 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9" (UID: "e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:01:23 crc kubenswrapper[4782]: I0202 11:01:23.507243 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9" (UID: "e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:01:23 crc kubenswrapper[4782]: I0202 11:01:23.518398 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9" (UID: "e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:01:23 crc kubenswrapper[4782]: I0202 11:01:23.519369 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9" (UID: "e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:01:23 crc kubenswrapper[4782]: I0202 11:01:23.523730 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "persistence") pod "e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9" (UID: "e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 02 11:01:23 crc kubenswrapper[4782]: I0202 11:01:23.524877 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9-kube-api-access-j8b77" (OuterVolumeSpecName: "kube-api-access-j8b77") pod "e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9" (UID: "e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9"). InnerVolumeSpecName "kube-api-access-j8b77". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:01:23 crc kubenswrapper[4782]: I0202 11:01:23.542210 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9-pod-info" (OuterVolumeSpecName: "pod-info") pod "e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9" (UID: "e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 02 11:01:23 crc kubenswrapper[4782]: I0202 11:01:23.602312 4782 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9-pod-info\") on node \"crc\" DevicePath \"\"" Feb 02 11:01:23 crc kubenswrapper[4782]: I0202 11:01:23.602347 4782 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 02 11:01:23 crc kubenswrapper[4782]: I0202 11:01:23.602358 4782 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 02 11:01:23 crc kubenswrapper[4782]: I0202 11:01:23.602366 4782 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 02 11:01:23 crc kubenswrapper[4782]: I0202 11:01:23.602388 4782 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Feb 02 11:01:23 crc kubenswrapper[4782]: I0202 11:01:23.602401 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j8b77\" (UniqueName: \"kubernetes.io/projected/e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9-kube-api-access-j8b77\") on node \"crc\" DevicePath \"\"" Feb 02 11:01:23 crc kubenswrapper[4782]: I0202 11:01:23.602412 4782 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 02 11:01:23 crc kubenswrapper[4782]: I0202 11:01:23.602423 4782 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 02 11:01:23 crc kubenswrapper[4782]: I0202 11:01:23.606418 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9-config-data" (OuterVolumeSpecName: "config-data") pod "e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9" (UID: "e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:01:23 crc kubenswrapper[4782]: I0202 11:01:23.669893 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9-server-conf" (OuterVolumeSpecName: "server-conf") pod "e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9" (UID: "e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:01:23 crc kubenswrapper[4782]: I0202 11:01:23.704480 4782 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 11:01:23 crc kubenswrapper[4782]: I0202 11:01:23.704514 4782 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9-server-conf\") on node \"crc\" DevicePath \"\"" Feb 02 11:01:23 crc kubenswrapper[4782]: I0202 11:01:23.710932 4782 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Feb 02 11:01:23 crc kubenswrapper[4782]: I0202 11:01:23.748479 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9" (UID: "e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:01:23 crc kubenswrapper[4782]: I0202 11:01:23.748492 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 02 11:01:23 crc kubenswrapper[4782]: I0202 11:01:23.807748 4782 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 02 11:01:23 crc kubenswrapper[4782]: I0202 11:01:23.807786 4782 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Feb 02 11:01:23 crc kubenswrapper[4782]: I0202 11:01:23.909117 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/02fc338c-2f8c-4e17-8d5f-7a919f4237a2-rabbitmq-confd\") pod \"02fc338c-2f8c-4e17-8d5f-7a919f4237a2\" (UID: \"02fc338c-2f8c-4e17-8d5f-7a919f4237a2\") " Feb 02 11:01:23 crc kubenswrapper[4782]: I0202 11:01:23.909430 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/02fc338c-2f8c-4e17-8d5f-7a919f4237a2-rabbitmq-plugins\") pod \"02fc338c-2f8c-4e17-8d5f-7a919f4237a2\" (UID: \"02fc338c-2f8c-4e17-8d5f-7a919f4237a2\") " Feb 02 11:01:23 crc kubenswrapper[4782]: I0202 11:01:23.909508 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/02fc338c-2f8c-4e17-8d5f-7a919f4237a2-config-data\") pod \"02fc338c-2f8c-4e17-8d5f-7a919f4237a2\" (UID: \"02fc338c-2f8c-4e17-8d5f-7a919f4237a2\") " Feb 02 11:01:23 crc kubenswrapper[4782]: I0202 11:01:23.909534 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/02fc338c-2f8c-4e17-8d5f-7a919f4237a2-rabbitmq-erlang-cookie\") pod \"02fc338c-2f8c-4e17-8d5f-7a919f4237a2\" (UID: \"02fc338c-2f8c-4e17-8d5f-7a919f4237a2\") " Feb 02 11:01:23 crc kubenswrapper[4782]: I0202 11:01:23.909553 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/02fc338c-2f8c-4e17-8d5f-7a919f4237a2-plugins-conf\") pod \"02fc338c-2f8c-4e17-8d5f-7a919f4237a2\" (UID: \"02fc338c-2f8c-4e17-8d5f-7a919f4237a2\") " Feb 02 11:01:23 crc kubenswrapper[4782]: I0202 11:01:23.909576 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/02fc338c-2f8c-4e17-8d5f-7a919f4237a2-pod-info\") pod \"02fc338c-2f8c-4e17-8d5f-7a919f4237a2\" (UID: \"02fc338c-2f8c-4e17-8d5f-7a919f4237a2\") " Feb 02 11:01:23 crc kubenswrapper[4782]: I0202 11:01:23.909609 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/02fc338c-2f8c-4e17-8d5f-7a919f4237a2-rabbitmq-tls\") pod \"02fc338c-2f8c-4e17-8d5f-7a919f4237a2\" (UID: \"02fc338c-2f8c-4e17-8d5f-7a919f4237a2\") " Feb 02 11:01:23 crc kubenswrapper[4782]: I0202 11:01:23.909627 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/02fc338c-2f8c-4e17-8d5f-7a919f4237a2-server-conf\") pod \"02fc338c-2f8c-4e17-8d5f-7a919f4237a2\" (UID: \"02fc338c-2f8c-4e17-8d5f-7a919f4237a2\") " Feb 02 11:01:23 crc kubenswrapper[4782]: I0202 11:01:23.909672 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"02fc338c-2f8c-4e17-8d5f-7a919f4237a2\" (UID: \"02fc338c-2f8c-4e17-8d5f-7a919f4237a2\") " Feb 02 11:01:23 crc kubenswrapper[4782]: I0202 11:01:23.909730 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/02fc338c-2f8c-4e17-8d5f-7a919f4237a2-erlang-cookie-secret\") pod \"02fc338c-2f8c-4e17-8d5f-7a919f4237a2\" (UID: \"02fc338c-2f8c-4e17-8d5f-7a919f4237a2\") " Feb 02 11:01:23 crc kubenswrapper[4782]: I0202 11:01:23.909753 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vp4jx\" (UniqueName: \"kubernetes.io/projected/02fc338c-2f8c-4e17-8d5f-7a919f4237a2-kube-api-access-vp4jx\") pod \"02fc338c-2f8c-4e17-8d5f-7a919f4237a2\" (UID: \"02fc338c-2f8c-4e17-8d5f-7a919f4237a2\") " Feb 02 11:01:23 crc kubenswrapper[4782]: I0202 11:01:23.910202 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/02fc338c-2f8c-4e17-8d5f-7a919f4237a2-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "02fc338c-2f8c-4e17-8d5f-7a919f4237a2" (UID: "02fc338c-2f8c-4e17-8d5f-7a919f4237a2"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:01:23 crc kubenswrapper[4782]: I0202 11:01:23.910664 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/02fc338c-2f8c-4e17-8d5f-7a919f4237a2-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "02fc338c-2f8c-4e17-8d5f-7a919f4237a2" (UID: "02fc338c-2f8c-4e17-8d5f-7a919f4237a2"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:01:23 crc kubenswrapper[4782]: I0202 11:01:23.911205 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/02fc338c-2f8c-4e17-8d5f-7a919f4237a2-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "02fc338c-2f8c-4e17-8d5f-7a919f4237a2" (UID: "02fc338c-2f8c-4e17-8d5f-7a919f4237a2"). 
InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:01:23 crc kubenswrapper[4782]: I0202 11:01:23.929688 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02fc338c-2f8c-4e17-8d5f-7a919f4237a2-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "02fc338c-2f8c-4e17-8d5f-7a919f4237a2" (UID: "02fc338c-2f8c-4e17-8d5f-7a919f4237a2"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:01:23 crc kubenswrapper[4782]: I0202 11:01:23.929901 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/02fc338c-2f8c-4e17-8d5f-7a919f4237a2-pod-info" (OuterVolumeSpecName: "pod-info") pod "02fc338c-2f8c-4e17-8d5f-7a919f4237a2" (UID: "02fc338c-2f8c-4e17-8d5f-7a919f4237a2"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 02 11:01:23 crc kubenswrapper[4782]: I0202 11:01:23.930456 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "persistence") pod "02fc338c-2f8c-4e17-8d5f-7a919f4237a2" (UID: "02fc338c-2f8c-4e17-8d5f-7a919f4237a2"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 02 11:01:23 crc kubenswrapper[4782]: I0202 11:01:23.933956 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02fc338c-2f8c-4e17-8d5f-7a919f4237a2-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "02fc338c-2f8c-4e17-8d5f-7a919f4237a2" (UID: "02fc338c-2f8c-4e17-8d5f-7a919f4237a2"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:01:23 crc kubenswrapper[4782]: I0202 11:01:23.944935 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02fc338c-2f8c-4e17-8d5f-7a919f4237a2-kube-api-access-vp4jx" (OuterVolumeSpecName: "kube-api-access-vp4jx") pod "02fc338c-2f8c-4e17-8d5f-7a919f4237a2" (UID: "02fc338c-2f8c-4e17-8d5f-7a919f4237a2"). InnerVolumeSpecName "kube-api-access-vp4jx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:01:23 crc kubenswrapper[4782]: I0202 11:01:23.951573 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/02fc338c-2f8c-4e17-8d5f-7a919f4237a2-config-data" (OuterVolumeSpecName: "config-data") pod "02fc338c-2f8c-4e17-8d5f-7a919f4237a2" (UID: "02fc338c-2f8c-4e17-8d5f-7a919f4237a2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.006472 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/02fc338c-2f8c-4e17-8d5f-7a919f4237a2-server-conf" (OuterVolumeSpecName: "server-conf") pod "02fc338c-2f8c-4e17-8d5f-7a919f4237a2" (UID: "02fc338c-2f8c-4e17-8d5f-7a919f4237a2"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.011886 4782 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/02fc338c-2f8c-4e17-8d5f-7a919f4237a2-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.011934 4782 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/02fc338c-2f8c-4e17-8d5f-7a919f4237a2-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.011954 4782 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/02fc338c-2f8c-4e17-8d5f-7a919f4237a2-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.011968 4782 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/02fc338c-2f8c-4e17-8d5f-7a919f4237a2-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.011981 4782 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/02fc338c-2f8c-4e17-8d5f-7a919f4237a2-pod-info\") on node \"crc\" DevicePath \"\"" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.011990 4782 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/02fc338c-2f8c-4e17-8d5f-7a919f4237a2-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.012000 4782 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/02fc338c-2f8c-4e17-8d5f-7a919f4237a2-server-conf\") on node \"crc\" DevicePath \"\"" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.012026 4782 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.012040 4782 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/02fc338c-2f8c-4e17-8d5f-7a919f4237a2-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.012050 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vp4jx\" (UniqueName: \"kubernetes.io/projected/02fc338c-2f8c-4e17-8d5f-7a919f4237a2-kube-api-access-vp4jx\") on node \"crc\" DevicePath \"\"" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.041485 4782 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.087837 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02fc338c-2f8c-4e17-8d5f-7a919f4237a2-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "02fc338c-2f8c-4e17-8d5f-7a919f4237a2" (UID: "02fc338c-2f8c-4e17-8d5f-7a919f4237a2"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.114860 4782 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.114898 4782 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/02fc338c-2f8c-4e17-8d5f-7a919f4237a2-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.226665 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.226681 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9","Type":"ContainerDied","Data":"f5e9c30a1317f44fa0c6c47fbf157d56fc7c3351c9ee3750a55dc1c7bb0afa43"} Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.226725 4782 scope.go:117] "RemoveContainer" containerID="8604d1c3c048376b925140bf95b36fa52a1c0e9474ad6d8f17f938507dee28c7" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.229839 4782 generic.go:334] "Generic (PLEG): container finished" podID="02fc338c-2f8c-4e17-8d5f-7a919f4237a2" containerID="1ee82a85497580a923b17b361c957e0e8638a120e7df5248f943224523148dd7" exitCode=0 Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.229930 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"02fc338c-2f8c-4e17-8d5f-7a919f4237a2","Type":"ContainerDied","Data":"1ee82a85497580a923b17b361c957e0e8638a120e7df5248f943224523148dd7"} Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.230060 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"02fc338c-2f8c-4e17-8d5f-7a919f4237a2","Type":"ContainerDied","Data":"541f3f929dc2bd6facceff225b79637fd9688a5a59dfd43d26c677249a42c37f"} Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.229943 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.281297 4782 scope.go:117] "RemoveContainer" containerID="4b22530b4335201f0edeaaeb102aa0e0c1fe781965be9a91cd8a38308cd04cdb" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.324554 4782 scope.go:117] "RemoveContainer" containerID="1ee82a85497580a923b17b361c957e0e8638a120e7df5248f943224523148dd7" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.327803 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.353842 4782 scope.go:117] "RemoveContainer" containerID="391b7b9a21ec8dc296a8482b1cb6f12d695c7d443dbc6915daefa5e4abc60ecb" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.357873 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.367759 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.377359 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 02 11:01:24 crc kubenswrapper[4782]: E0202 11:01:24.377915 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02fc338c-2f8c-4e17-8d5f-7a919f4237a2" containerName="setup-container" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.377931 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="02fc338c-2f8c-4e17-8d5f-7a919f4237a2" containerName="setup-container" Feb 02 11:01:24 crc kubenswrapper[4782]: E0202 11:01:24.377945 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e752213-09b8-4c8e-a5b6-9cfbf9cea168" containerName="keystone-cron" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.377953 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e752213-09b8-4c8e-a5b6-9cfbf9cea168" containerName="keystone-cron" Feb 02 11:01:24 crc kubenswrapper[4782]: E0202 11:01:24.377965 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9" containerName="rabbitmq" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.377973 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9" containerName="rabbitmq" Feb 02 11:01:24 crc kubenswrapper[4782]: E0202 11:01:24.377989 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9" containerName="setup-container" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.377997 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9" containerName="setup-container" Feb 02 11:01:24 crc kubenswrapper[4782]: E0202 11:01:24.378015 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02fc338c-2f8c-4e17-8d5f-7a919f4237a2" containerName="rabbitmq" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.378022 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="02fc338c-2f8c-4e17-8d5f-7a919f4237a2" containerName="rabbitmq" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.378231 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9" containerName="rabbitmq" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.378256 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e752213-09b8-4c8e-a5b6-9cfbf9cea168" 
containerName="keystone-cron" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.378270 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="02fc338c-2f8c-4e17-8d5f-7a919f4237a2" containerName="rabbitmq" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.379489 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.380831 4782 scope.go:117] "RemoveContainer" containerID="1ee82a85497580a923b17b361c957e0e8638a120e7df5248f943224523148dd7" Feb 02 11:01:24 crc kubenswrapper[4782]: E0202 11:01:24.383688 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1ee82a85497580a923b17b361c957e0e8638a120e7df5248f943224523148dd7\": container with ID starting with 1ee82a85497580a923b17b361c957e0e8638a120e7df5248f943224523148dd7 not found: ID does not exist" containerID="1ee82a85497580a923b17b361c957e0e8638a120e7df5248f943224523148dd7" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.383743 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ee82a85497580a923b17b361c957e0e8638a120e7df5248f943224523148dd7"} err="failed to get container status \"1ee82a85497580a923b17b361c957e0e8638a120e7df5248f943224523148dd7\": rpc error: code = NotFound desc = could not find container \"1ee82a85497580a923b17b361c957e0e8638a120e7df5248f943224523148dd7\": container with ID starting with 1ee82a85497580a923b17b361c957e0e8638a120e7df5248f943224523148dd7 not found: ID does not exist" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.383776 4782 scope.go:117] "RemoveContainer" containerID="391b7b9a21ec8dc296a8482b1cb6f12d695c7d443dbc6915daefa5e4abc60ecb" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.384330 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.384546 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.384759 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Feb 02 11:01:24 crc kubenswrapper[4782]: E0202 11:01:24.385198 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"391b7b9a21ec8dc296a8482b1cb6f12d695c7d443dbc6915daefa5e4abc60ecb\": container with ID starting with 391b7b9a21ec8dc296a8482b1cb6f12d695c7d443dbc6915daefa5e4abc60ecb not found: ID does not exist" containerID="391b7b9a21ec8dc296a8482b1cb6f12d695c7d443dbc6915daefa5e4abc60ecb" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.385229 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"391b7b9a21ec8dc296a8482b1cb6f12d695c7d443dbc6915daefa5e4abc60ecb"} err="failed to get container status \"391b7b9a21ec8dc296a8482b1cb6f12d695c7d443dbc6915daefa5e4abc60ecb\": rpc error: code = NotFound desc = could not find container \"391b7b9a21ec8dc296a8482b1cb6f12d695c7d443dbc6915daefa5e4abc60ecb\": container with ID starting with 391b7b9a21ec8dc296a8482b1cb6f12d695c7d443dbc6915daefa5e4abc60ecb not found: ID does not exist" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.387395 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 02 
11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.389076 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.389112 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-fsk8v" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.389717 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.389915 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.397525 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.406447 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.408027 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.413663 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.413957 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.414161 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.420257 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.420371 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.420398 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.423986 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-l8s6k" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.428902 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.525695 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8d450a8e-fd5c-40fe-a4ff-ab265dab04df-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d450a8e-fd5c-40fe-a4ff-ab265dab04df\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.525801 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjlqq\" (UniqueName: \"kubernetes.io/projected/b5c627ac-51a8-46a5-9ccd-62072de19909-kube-api-access-xjlqq\") pod \"rabbitmq-server-0\" (UID: \"b5c627ac-51a8-46a5-9ccd-62072de19909\") " pod="openstack/rabbitmq-server-0" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.525850 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") 
pod \"rabbitmq-cell1-server-0\" (UID: \"8d450a8e-fd5c-40fe-a4ff-ab265dab04df\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.526234 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b5c627ac-51a8-46a5-9ccd-62072de19909-config-data\") pod \"rabbitmq-server-0\" (UID: \"b5c627ac-51a8-46a5-9ccd-62072de19909\") " pod="openstack/rabbitmq-server-0" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.526316 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b5c627ac-51a8-46a5-9ccd-62072de19909-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"b5c627ac-51a8-46a5-9ccd-62072de19909\") " pod="openstack/rabbitmq-server-0" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.526399 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8d450a8e-fd5c-40fe-a4ff-ab265dab04df-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d450a8e-fd5c-40fe-a4ff-ab265dab04df\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.526536 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b5c627ac-51a8-46a5-9ccd-62072de19909-pod-info\") pod \"rabbitmq-server-0\" (UID: \"b5c627ac-51a8-46a5-9ccd-62072de19909\") " pod="openstack/rabbitmq-server-0" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.526588 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8d450a8e-fd5c-40fe-a4ff-ab265dab04df-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d450a8e-fd5c-40fe-a4ff-ab265dab04df\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.526678 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-server-0\" (UID: \"b5c627ac-51a8-46a5-9ccd-62072de19909\") " pod="openstack/rabbitmq-server-0" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.526699 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b5c627ac-51a8-46a5-9ccd-62072de19909-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"b5c627ac-51a8-46a5-9ccd-62072de19909\") " pod="openstack/rabbitmq-server-0" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.526774 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8d450a8e-fd5c-40fe-a4ff-ab265dab04df-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d450a8e-fd5c-40fe-a4ff-ab265dab04df\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.526836 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8d450a8e-fd5c-40fe-a4ff-ab265dab04df-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"8d450a8e-fd5c-40fe-a4ff-ab265dab04df\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.526867 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b5c627ac-51a8-46a5-9ccd-62072de19909-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"b5c627ac-51a8-46a5-9ccd-62072de19909\") " pod="openstack/rabbitmq-server-0" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.526954 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b5c627ac-51a8-46a5-9ccd-62072de19909-server-conf\") pod \"rabbitmq-server-0\" (UID: \"b5c627ac-51a8-46a5-9ccd-62072de19909\") " pod="openstack/rabbitmq-server-0" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.527008 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8d450a8e-fd5c-40fe-a4ff-ab265dab04df-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d450a8e-fd5c-40fe-a4ff-ab265dab04df\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.527081 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8d450a8e-fd5c-40fe-a4ff-ab265dab04df-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d450a8e-fd5c-40fe-a4ff-ab265dab04df\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.527145 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8d450a8e-fd5c-40fe-a4ff-ab265dab04df-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d450a8e-fd5c-40fe-a4ff-ab265dab04df\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.527183 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b5c627ac-51a8-46a5-9ccd-62072de19909-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"b5c627ac-51a8-46a5-9ccd-62072de19909\") " pod="openstack/rabbitmq-server-0" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.527267 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8d450a8e-fd5c-40fe-a4ff-ab265dab04df-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d450a8e-fd5c-40fe-a4ff-ab265dab04df\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.527372 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6gcnz\" (UniqueName: \"kubernetes.io/projected/8d450a8e-fd5c-40fe-a4ff-ab265dab04df-kube-api-access-6gcnz\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d450a8e-fd5c-40fe-a4ff-ab265dab04df\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.527404 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b5c627ac-51a8-46a5-9ccd-62072de19909-plugins-conf\") pod \"rabbitmq-server-0\" (UID: 
\"b5c627ac-51a8-46a5-9ccd-62072de19909\") " pod="openstack/rabbitmq-server-0" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.527478 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b5c627ac-51a8-46a5-9ccd-62072de19909-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"b5c627ac-51a8-46a5-9ccd-62072de19909\") " pod="openstack/rabbitmq-server-0" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.629268 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6gcnz\" (UniqueName: \"kubernetes.io/projected/8d450a8e-fd5c-40fe-a4ff-ab265dab04df-kube-api-access-6gcnz\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d450a8e-fd5c-40fe-a4ff-ab265dab04df\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.629657 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b5c627ac-51a8-46a5-9ccd-62072de19909-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"b5c627ac-51a8-46a5-9ccd-62072de19909\") " pod="openstack/rabbitmq-server-0" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.629675 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b5c627ac-51a8-46a5-9ccd-62072de19909-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"b5c627ac-51a8-46a5-9ccd-62072de19909\") " pod="openstack/rabbitmq-server-0" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.629707 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8d450a8e-fd5c-40fe-a4ff-ab265dab04df-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d450a8e-fd5c-40fe-a4ff-ab265dab04df\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.630206 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b5c627ac-51a8-46a5-9ccd-62072de19909-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"b5c627ac-51a8-46a5-9ccd-62072de19909\") " pod="openstack/rabbitmq-server-0" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.630480 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b5c627ac-51a8-46a5-9ccd-62072de19909-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"b5c627ac-51a8-46a5-9ccd-62072de19909\") " pod="openstack/rabbitmq-server-0" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.630562 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xjlqq\" (UniqueName: \"kubernetes.io/projected/b5c627ac-51a8-46a5-9ccd-62072de19909-kube-api-access-xjlqq\") pod \"rabbitmq-server-0\" (UID: \"b5c627ac-51a8-46a5-9ccd-62072de19909\") " pod="openstack/rabbitmq-server-0" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.630598 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d450a8e-fd5c-40fe-a4ff-ab265dab04df\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.630660 4782 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b5c627ac-51a8-46a5-9ccd-62072de19909-config-data\") pod \"rabbitmq-server-0\" (UID: \"b5c627ac-51a8-46a5-9ccd-62072de19909\") " pod="openstack/rabbitmq-server-0" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.630688 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b5c627ac-51a8-46a5-9ccd-62072de19909-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"b5c627ac-51a8-46a5-9ccd-62072de19909\") " pod="openstack/rabbitmq-server-0" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.630721 4782 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d450a8e-fd5c-40fe-a4ff-ab265dab04df\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/rabbitmq-cell1-server-0" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.630734 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8d450a8e-fd5c-40fe-a4ff-ab265dab04df-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d450a8e-fd5c-40fe-a4ff-ab265dab04df\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.630828 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b5c627ac-51a8-46a5-9ccd-62072de19909-pod-info\") pod \"rabbitmq-server-0\" (UID: \"b5c627ac-51a8-46a5-9ccd-62072de19909\") " pod="openstack/rabbitmq-server-0" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.630862 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8d450a8e-fd5c-40fe-a4ff-ab265dab04df-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d450a8e-fd5c-40fe-a4ff-ab265dab04df\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.630907 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-server-0\" (UID: \"b5c627ac-51a8-46a5-9ccd-62072de19909\") " pod="openstack/rabbitmq-server-0" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.630927 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b5c627ac-51a8-46a5-9ccd-62072de19909-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"b5c627ac-51a8-46a5-9ccd-62072de19909\") " pod="openstack/rabbitmq-server-0" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.630953 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8d450a8e-fd5c-40fe-a4ff-ab265dab04df-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d450a8e-fd5c-40fe-a4ff-ab265dab04df\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.630999 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8d450a8e-fd5c-40fe-a4ff-ab265dab04df-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"8d450a8e-fd5c-40fe-a4ff-ab265dab04df\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.631029 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b5c627ac-51a8-46a5-9ccd-62072de19909-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"b5c627ac-51a8-46a5-9ccd-62072de19909\") " pod="openstack/rabbitmq-server-0" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.631105 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b5c627ac-51a8-46a5-9ccd-62072de19909-server-conf\") pod \"rabbitmq-server-0\" (UID: \"b5c627ac-51a8-46a5-9ccd-62072de19909\") " pod="openstack/rabbitmq-server-0" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.631411 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b5c627ac-51a8-46a5-9ccd-62072de19909-config-data\") pod \"rabbitmq-server-0\" (UID: \"b5c627ac-51a8-46a5-9ccd-62072de19909\") " pod="openstack/rabbitmq-server-0" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.632017 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8d450a8e-fd5c-40fe-a4ff-ab265dab04df-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d450a8e-fd5c-40fe-a4ff-ab265dab04df\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.632280 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8d450a8e-fd5c-40fe-a4ff-ab265dab04df-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d450a8e-fd5c-40fe-a4ff-ab265dab04df\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.632326 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8d450a8e-fd5c-40fe-a4ff-ab265dab04df-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d450a8e-fd5c-40fe-a4ff-ab265dab04df\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.632785 4782 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-server-0\" (UID: \"b5c627ac-51a8-46a5-9ccd-62072de19909\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/rabbitmq-server-0" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.633748 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8d450a8e-fd5c-40fe-a4ff-ab265dab04df-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d450a8e-fd5c-40fe-a4ff-ab265dab04df\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.633851 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b5c627ac-51a8-46a5-9ccd-62072de19909-server-conf\") pod \"rabbitmq-server-0\" (UID: \"b5c627ac-51a8-46a5-9ccd-62072de19909\") " pod="openstack/rabbitmq-server-0" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.634023 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" 
(UniqueName: \"kubernetes.io/projected/8d450a8e-fd5c-40fe-a4ff-ab265dab04df-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d450a8e-fd5c-40fe-a4ff-ab265dab04df\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.634092 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8d450a8e-fd5c-40fe-a4ff-ab265dab04df-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d450a8e-fd5c-40fe-a4ff-ab265dab04df\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.634127 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8d450a8e-fd5c-40fe-a4ff-ab265dab04df-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d450a8e-fd5c-40fe-a4ff-ab265dab04df\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.634175 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b5c627ac-51a8-46a5-9ccd-62072de19909-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"b5c627ac-51a8-46a5-9ccd-62072de19909\") " pod="openstack/rabbitmq-server-0" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.634199 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8d450a8e-fd5c-40fe-a4ff-ab265dab04df-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d450a8e-fd5c-40fe-a4ff-ab265dab04df\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.635134 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b5c627ac-51a8-46a5-9ccd-62072de19909-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"b5c627ac-51a8-46a5-9ccd-62072de19909\") " pod="openstack/rabbitmq-server-0" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.639310 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8d450a8e-fd5c-40fe-a4ff-ab265dab04df-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d450a8e-fd5c-40fe-a4ff-ab265dab04df\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.643613 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b5c627ac-51a8-46a5-9ccd-62072de19909-pod-info\") pod \"rabbitmq-server-0\" (UID: \"b5c627ac-51a8-46a5-9ccd-62072de19909\") " pod="openstack/rabbitmq-server-0" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.648151 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b5c627ac-51a8-46a5-9ccd-62072de19909-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"b5c627ac-51a8-46a5-9ccd-62072de19909\") " pod="openstack/rabbitmq-server-0" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.648724 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8d450a8e-fd5c-40fe-a4ff-ab265dab04df-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d450a8e-fd5c-40fe-a4ff-ab265dab04df\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 11:01:24 crc 
Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.649218 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8d450a8e-fd5c-40fe-a4ff-ab265dab04df-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d450a8e-fd5c-40fe-a4ff-ab265dab04df\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.649226 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b5c627ac-51a8-46a5-9ccd-62072de19909-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"b5c627ac-51a8-46a5-9ccd-62072de19909\") " pod="openstack/rabbitmq-server-0" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.652267 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6gcnz\" (UniqueName: \"kubernetes.io/projected/8d450a8e-fd5c-40fe-a4ff-ab265dab04df-kube-api-access-6gcnz\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d450a8e-fd5c-40fe-a4ff-ab265dab04df\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.652926 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8d450a8e-fd5c-40fe-a4ff-ab265dab04df-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d450a8e-fd5c-40fe-a4ff-ab265dab04df\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.654899 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b5c627ac-51a8-46a5-9ccd-62072de19909-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"b5c627ac-51a8-46a5-9ccd-62072de19909\") " pod="openstack/rabbitmq-server-0" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.655240 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xjlqq\" (UniqueName: \"kubernetes.io/projected/b5c627ac-51a8-46a5-9ccd-62072de19909-kube-api-access-xjlqq\") pod \"rabbitmq-server-0\" (UID: \"b5c627ac-51a8-46a5-9ccd-62072de19909\") " pod="openstack/rabbitmq-server-0" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.656250 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8d450a8e-fd5c-40fe-a4ff-ab265dab04df-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d450a8e-fd5c-40fe-a4ff-ab265dab04df\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.682863 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d450a8e-fd5c-40fe-a4ff-ab265dab04df\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.683524 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-server-0\" (UID: \"b5c627ac-51a8-46a5-9ccd-62072de19909\") " pod="openstack/rabbitmq-server-0" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.708437 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.741444 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.855243 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02fc338c-2f8c-4e17-8d5f-7a919f4237a2" path="/var/lib/kubelet/pods/02fc338c-2f8c-4e17-8d5f-7a919f4237a2/volumes" Feb 02 11:01:24 crc kubenswrapper[4782]: I0202 11:01:24.865103 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9" path="/var/lib/kubelet/pods/e326d5b8-cced-4bdd-858a-3d5b7f8dd2d9/volumes" Feb 02 11:01:25 crc kubenswrapper[4782]: I0202 11:01:25.223556 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 02 11:01:25 crc kubenswrapper[4782]: W0202 11:01:25.236275 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8d450a8e_fd5c_40fe_a4ff_ab265dab04df.slice/crio-08f5eec9f9bf335c4b09a95de088e615af6e87580ccdc2b4af3050069d2e3f98 WatchSource:0}: Error finding container 08f5eec9f9bf335c4b09a95de088e615af6e87580ccdc2b4af3050069d2e3f98: Status 404 returned error can't find the container with id 08f5eec9f9bf335c4b09a95de088e615af6e87580ccdc2b4af3050069d2e3f98 Feb 02 11:01:25 crc kubenswrapper[4782]: I0202 11:01:25.324613 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 02 11:01:25 crc kubenswrapper[4782]: W0202 11:01:25.343003 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb5c627ac_51a8_46a5_9ccd_62072de19909.slice/crio-528fcacb1809f7190a566f9367576b4aa05ec67284d891586d4a8decce1d2b76 WatchSource:0}: Error finding container 528fcacb1809f7190a566f9367576b4aa05ec67284d891586d4a8decce1d2b76: Status 404 returned error can't find the container with id 528fcacb1809f7190a566f9367576b4aa05ec67284d891586d4a8decce1d2b76 Feb 02 11:01:25 crc kubenswrapper[4782]: I0202 11:01:25.964319 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6447ccbd8f-xx6pn"] Feb 02 11:01:25 crc kubenswrapper[4782]: I0202 11:01:25.972876 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6447ccbd8f-xx6pn" Feb 02 11:01:25 crc kubenswrapper[4782]: I0202 11:01:25.980947 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Feb 02 11:01:26 crc kubenswrapper[4782]: I0202 11:01:26.057631 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zfcmz\" (UniqueName: \"kubernetes.io/projected/dc5c0a01-6fa0-45ed-93d7-ecb829ff0a84-kube-api-access-zfcmz\") pod \"dnsmasq-dns-6447ccbd8f-xx6pn\" (UID: \"dc5c0a01-6fa0-45ed-93d7-ecb829ff0a84\") " pod="openstack/dnsmasq-dns-6447ccbd8f-xx6pn" Feb 02 11:01:26 crc kubenswrapper[4782]: I0202 11:01:26.057735 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dc5c0a01-6fa0-45ed-93d7-ecb829ff0a84-config\") pod \"dnsmasq-dns-6447ccbd8f-xx6pn\" (UID: \"dc5c0a01-6fa0-45ed-93d7-ecb829ff0a84\") " pod="openstack/dnsmasq-dns-6447ccbd8f-xx6pn" Feb 02 11:01:26 crc kubenswrapper[4782]: I0202 11:01:26.057777 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/dc5c0a01-6fa0-45ed-93d7-ecb829ff0a84-openstack-edpm-ipam\") pod \"dnsmasq-dns-6447ccbd8f-xx6pn\" (UID: \"dc5c0a01-6fa0-45ed-93d7-ecb829ff0a84\") " pod="openstack/dnsmasq-dns-6447ccbd8f-xx6pn" Feb 02 11:01:26 crc kubenswrapper[4782]: I0202 11:01:26.057802 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dc5c0a01-6fa0-45ed-93d7-ecb829ff0a84-dns-svc\") pod \"dnsmasq-dns-6447ccbd8f-xx6pn\" (UID: \"dc5c0a01-6fa0-45ed-93d7-ecb829ff0a84\") " pod="openstack/dnsmasq-dns-6447ccbd8f-xx6pn" Feb 02 11:01:26 crc kubenswrapper[4782]: I0202 11:01:26.057818 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dc5c0a01-6fa0-45ed-93d7-ecb829ff0a84-ovsdbserver-sb\") pod \"dnsmasq-dns-6447ccbd8f-xx6pn\" (UID: \"dc5c0a01-6fa0-45ed-93d7-ecb829ff0a84\") " pod="openstack/dnsmasq-dns-6447ccbd8f-xx6pn" Feb 02 11:01:26 crc kubenswrapper[4782]: I0202 11:01:26.057905 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dc5c0a01-6fa0-45ed-93d7-ecb829ff0a84-ovsdbserver-nb\") pod \"dnsmasq-dns-6447ccbd8f-xx6pn\" (UID: \"dc5c0a01-6fa0-45ed-93d7-ecb829ff0a84\") " pod="openstack/dnsmasq-dns-6447ccbd8f-xx6pn" Feb 02 11:01:26 crc kubenswrapper[4782]: I0202 11:01:26.115851 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6447ccbd8f-xx6pn"] Feb 02 11:01:26 crc kubenswrapper[4782]: I0202 11:01:26.159513 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zfcmz\" (UniqueName: \"kubernetes.io/projected/dc5c0a01-6fa0-45ed-93d7-ecb829ff0a84-kube-api-access-zfcmz\") pod \"dnsmasq-dns-6447ccbd8f-xx6pn\" (UID: \"dc5c0a01-6fa0-45ed-93d7-ecb829ff0a84\") " pod="openstack/dnsmasq-dns-6447ccbd8f-xx6pn" Feb 02 11:01:26 crc kubenswrapper[4782]: I0202 11:01:26.160120 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dc5c0a01-6fa0-45ed-93d7-ecb829ff0a84-config\") pod \"dnsmasq-dns-6447ccbd8f-xx6pn\" (UID: 
\"dc5c0a01-6fa0-45ed-93d7-ecb829ff0a84\") " pod="openstack/dnsmasq-dns-6447ccbd8f-xx6pn" Feb 02 11:01:26 crc kubenswrapper[4782]: I0202 11:01:26.160251 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/dc5c0a01-6fa0-45ed-93d7-ecb829ff0a84-openstack-edpm-ipam\") pod \"dnsmasq-dns-6447ccbd8f-xx6pn\" (UID: \"dc5c0a01-6fa0-45ed-93d7-ecb829ff0a84\") " pod="openstack/dnsmasq-dns-6447ccbd8f-xx6pn" Feb 02 11:01:26 crc kubenswrapper[4782]: I0202 11:01:26.160337 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dc5c0a01-6fa0-45ed-93d7-ecb829ff0a84-dns-svc\") pod \"dnsmasq-dns-6447ccbd8f-xx6pn\" (UID: \"dc5c0a01-6fa0-45ed-93d7-ecb829ff0a84\") " pod="openstack/dnsmasq-dns-6447ccbd8f-xx6pn" Feb 02 11:01:26 crc kubenswrapper[4782]: I0202 11:01:26.160410 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dc5c0a01-6fa0-45ed-93d7-ecb829ff0a84-ovsdbserver-sb\") pod \"dnsmasq-dns-6447ccbd8f-xx6pn\" (UID: \"dc5c0a01-6fa0-45ed-93d7-ecb829ff0a84\") " pod="openstack/dnsmasq-dns-6447ccbd8f-xx6pn" Feb 02 11:01:26 crc kubenswrapper[4782]: I0202 11:01:26.160558 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dc5c0a01-6fa0-45ed-93d7-ecb829ff0a84-ovsdbserver-nb\") pod \"dnsmasq-dns-6447ccbd8f-xx6pn\" (UID: \"dc5c0a01-6fa0-45ed-93d7-ecb829ff0a84\") " pod="openstack/dnsmasq-dns-6447ccbd8f-xx6pn" Feb 02 11:01:26 crc kubenswrapper[4782]: I0202 11:01:26.161875 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dc5c0a01-6fa0-45ed-93d7-ecb829ff0a84-config\") pod \"dnsmasq-dns-6447ccbd8f-xx6pn\" (UID: \"dc5c0a01-6fa0-45ed-93d7-ecb829ff0a84\") " pod="openstack/dnsmasq-dns-6447ccbd8f-xx6pn" Feb 02 11:01:26 crc kubenswrapper[4782]: I0202 11:01:26.162454 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dc5c0a01-6fa0-45ed-93d7-ecb829ff0a84-dns-svc\") pod \"dnsmasq-dns-6447ccbd8f-xx6pn\" (UID: \"dc5c0a01-6fa0-45ed-93d7-ecb829ff0a84\") " pod="openstack/dnsmasq-dns-6447ccbd8f-xx6pn" Feb 02 11:01:26 crc kubenswrapper[4782]: I0202 11:01:26.162687 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dc5c0a01-6fa0-45ed-93d7-ecb829ff0a84-ovsdbserver-sb\") pod \"dnsmasq-dns-6447ccbd8f-xx6pn\" (UID: \"dc5c0a01-6fa0-45ed-93d7-ecb829ff0a84\") " pod="openstack/dnsmasq-dns-6447ccbd8f-xx6pn" Feb 02 11:01:26 crc kubenswrapper[4782]: I0202 11:01:26.163001 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/dc5c0a01-6fa0-45ed-93d7-ecb829ff0a84-openstack-edpm-ipam\") pod \"dnsmasq-dns-6447ccbd8f-xx6pn\" (UID: \"dc5c0a01-6fa0-45ed-93d7-ecb829ff0a84\") " pod="openstack/dnsmasq-dns-6447ccbd8f-xx6pn" Feb 02 11:01:26 crc kubenswrapper[4782]: I0202 11:01:26.163410 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dc5c0a01-6fa0-45ed-93d7-ecb829ff0a84-ovsdbserver-nb\") pod \"dnsmasq-dns-6447ccbd8f-xx6pn\" (UID: \"dc5c0a01-6fa0-45ed-93d7-ecb829ff0a84\") " pod="openstack/dnsmasq-dns-6447ccbd8f-xx6pn" Feb 02 11:01:26 crc 
Feb 02 11:01:26 crc kubenswrapper[4782]: I0202 11:01:26.258615 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b5c627ac-51a8-46a5-9ccd-62072de19909","Type":"ContainerStarted","Data":"528fcacb1809f7190a566f9367576b4aa05ec67284d891586d4a8decce1d2b76"} Feb 02 11:01:26 crc kubenswrapper[4782]: I0202 11:01:26.262612 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"8d450a8e-fd5c-40fe-a4ff-ab265dab04df","Type":"ContainerStarted","Data":"08f5eec9f9bf335c4b09a95de088e615af6e87580ccdc2b4af3050069d2e3f98"} Feb 02 11:01:26 crc kubenswrapper[4782]: I0202 11:01:26.367900 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zfcmz\" (UniqueName: \"kubernetes.io/projected/dc5c0a01-6fa0-45ed-93d7-ecb829ff0a84-kube-api-access-zfcmz\") pod \"dnsmasq-dns-6447ccbd8f-xx6pn\" (UID: \"dc5c0a01-6fa0-45ed-93d7-ecb829ff0a84\") " pod="openstack/dnsmasq-dns-6447ccbd8f-xx6pn" Feb 02 11:01:26 crc kubenswrapper[4782]: I0202 11:01:26.605849 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6447ccbd8f-xx6pn" Feb 02 11:01:27 crc kubenswrapper[4782]: I0202 11:01:27.122963 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6447ccbd8f-xx6pn"] Feb 02 11:01:27 crc kubenswrapper[4782]: I0202 11:01:27.286491 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6447ccbd8f-xx6pn" event={"ID":"dc5c0a01-6fa0-45ed-93d7-ecb829ff0a84","Type":"ContainerStarted","Data":"4f96a694a6c0c378d5195b8d4732411eb5a8003c68c5e227f0522f0850f13d8c"} Feb 02 11:01:27 crc kubenswrapper[4782]: I0202 11:01:27.298672 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b5c627ac-51a8-46a5-9ccd-62072de19909","Type":"ContainerStarted","Data":"b21fa9690961d9343a49a1e5b91cb13364053d8ff62ae86442d552092e66f129"} Feb 02 11:01:27 crc kubenswrapper[4782]: I0202 11:01:27.301077 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"8d450a8e-fd5c-40fe-a4ff-ab265dab04df","Type":"ContainerStarted","Data":"f83420861b6699b6337be09f1f27e658dd2355b35fe71a8cacee58949d08a256"} Feb 02 11:01:27 crc kubenswrapper[4782]: I0202 11:01:27.312230 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-clr4m"] Feb 02 11:01:27 crc kubenswrapper[4782]: I0202 11:01:27.313344 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-clr4m" Feb 02 11:01:27 crc kubenswrapper[4782]: I0202 11:01:27.315279 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 02 11:01:27 crc kubenswrapper[4782]: I0202 11:01:27.316277 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 02 11:01:27 crc kubenswrapper[4782]: I0202 11:01:27.316896 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 02 11:01:27 crc kubenswrapper[4782]: I0202 11:01:27.317456 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jhgxt" Feb 02 11:01:27 crc kubenswrapper[4782]: I0202 11:01:27.441479 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/99553aeb-f0fe-47e8-9d2a-64f4b49be76c-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-clr4m\" (UID: \"99553aeb-f0fe-47e8-9d2a-64f4b49be76c\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-clr4m" Feb 02 11:01:27 crc kubenswrapper[4782]: I0202 11:01:27.441563 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rdr8\" (UniqueName: \"kubernetes.io/projected/99553aeb-f0fe-47e8-9d2a-64f4b49be76c-kube-api-access-6rdr8\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-clr4m\" (UID: \"99553aeb-f0fe-47e8-9d2a-64f4b49be76c\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-clr4m" Feb 02 11:01:27 crc kubenswrapper[4782]: I0202 11:01:27.442198 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99553aeb-f0fe-47e8-9d2a-64f4b49be76c-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-clr4m\" (UID: \"99553aeb-f0fe-47e8-9d2a-64f4b49be76c\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-clr4m" Feb 02 11:01:27 crc kubenswrapper[4782]: I0202 11:01:27.442615 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/99553aeb-f0fe-47e8-9d2a-64f4b49be76c-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-clr4m\" (UID: \"99553aeb-f0fe-47e8-9d2a-64f4b49be76c\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-clr4m" Feb 02 11:01:27 crc kubenswrapper[4782]: I0202 11:01:27.468020 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-clr4m"] Feb 02 11:01:27 crc kubenswrapper[4782]: I0202 11:01:27.544363 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6rdr8\" (UniqueName: \"kubernetes.io/projected/99553aeb-f0fe-47e8-9d2a-64f4b49be76c-kube-api-access-6rdr8\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-clr4m\" (UID: \"99553aeb-f0fe-47e8-9d2a-64f4b49be76c\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-clr4m" Feb 02 11:01:27 crc kubenswrapper[4782]: I0202 11:01:27.544428 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/99553aeb-f0fe-47e8-9d2a-64f4b49be76c-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-clr4m\" (UID: \"99553aeb-f0fe-47e8-9d2a-64f4b49be76c\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-clr4m" Feb 02 11:01:27 crc kubenswrapper[4782]: I0202 11:01:27.544524 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/99553aeb-f0fe-47e8-9d2a-64f4b49be76c-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-clr4m\" (UID: \"99553aeb-f0fe-47e8-9d2a-64f4b49be76c\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-clr4m" Feb 02 11:01:27 crc kubenswrapper[4782]: I0202 11:01:27.544612 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/99553aeb-f0fe-47e8-9d2a-64f4b49be76c-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-clr4m\" (UID: \"99553aeb-f0fe-47e8-9d2a-64f4b49be76c\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-clr4m" Feb 02 11:01:27 crc kubenswrapper[4782]: I0202 11:01:27.559815 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/99553aeb-f0fe-47e8-9d2a-64f4b49be76c-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-clr4m\" (UID: \"99553aeb-f0fe-47e8-9d2a-64f4b49be76c\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-clr4m" Feb 02 11:01:27 crc kubenswrapper[4782]: I0202 11:01:27.560857 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99553aeb-f0fe-47e8-9d2a-64f4b49be76c-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-clr4m\" (UID: \"99553aeb-f0fe-47e8-9d2a-64f4b49be76c\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-clr4m" Feb 02 11:01:27 crc kubenswrapper[4782]: I0202 11:01:27.561864 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/99553aeb-f0fe-47e8-9d2a-64f4b49be76c-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-clr4m\" (UID: \"99553aeb-f0fe-47e8-9d2a-64f4b49be76c\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-clr4m" Feb 02 11:01:27 crc kubenswrapper[4782]: I0202 11:01:27.567528 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6rdr8\" (UniqueName: \"kubernetes.io/projected/99553aeb-f0fe-47e8-9d2a-64f4b49be76c-kube-api-access-6rdr8\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-clr4m\" (UID: \"99553aeb-f0fe-47e8-9d2a-64f4b49be76c\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-clr4m" Feb 02 11:01:27 crc kubenswrapper[4782]: I0202 11:01:27.628679 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-clr4m" Feb 02 11:01:28 crc kubenswrapper[4782]: I0202 11:01:28.166679 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-clr4m"] Feb 02 11:01:28 crc kubenswrapper[4782]: I0202 11:01:28.313845 4782 generic.go:334] "Generic (PLEG): container finished" podID="dc5c0a01-6fa0-45ed-93d7-ecb829ff0a84" containerID="64a4e00006dcb48d8a2c4ca10d2313323ebb2c4f6e7fed9822224da02b26dc5e" exitCode=0 Feb 02 11:01:28 crc kubenswrapper[4782]: I0202 11:01:28.313926 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6447ccbd8f-xx6pn" event={"ID":"dc5c0a01-6fa0-45ed-93d7-ecb829ff0a84","Type":"ContainerDied","Data":"64a4e00006dcb48d8a2c4ca10d2313323ebb2c4f6e7fed9822224da02b26dc5e"} Feb 02 11:01:28 crc kubenswrapper[4782]: I0202 11:01:28.318177 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-clr4m" event={"ID":"99553aeb-f0fe-47e8-9d2a-64f4b49be76c","Type":"ContainerStarted","Data":"666456605afd38f240297301ca5448a11381e94c4ea47bb87e5894af3411f5d0"} Feb 02 11:01:29 crc kubenswrapper[4782]: I0202 11:01:29.331300 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6447ccbd8f-xx6pn" event={"ID":"dc5c0a01-6fa0-45ed-93d7-ecb829ff0a84","Type":"ContainerStarted","Data":"72f241797192e02c052224493112540287781032ec83d38da5767774f61f56da"} Feb 02 11:01:29 crc kubenswrapper[4782]: I0202 11:01:29.332818 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6447ccbd8f-xx6pn" Feb 02 11:01:29 crc kubenswrapper[4782]: I0202 11:01:29.357206 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6447ccbd8f-xx6pn" podStartSLOduration=4.357180861 podStartE2EDuration="4.357180861s" podCreationTimestamp="2026-02-02 11:01:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 11:01:29.349813939 +0000 UTC m=+1369.234006655" watchObservedRunningTime="2026-02-02 11:01:29.357180861 +0000 UTC m=+1369.241373577" Feb 02 11:01:36 crc kubenswrapper[4782]: I0202 11:01:36.606839 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6447ccbd8f-xx6pn" Feb 02 11:01:36 crc kubenswrapper[4782]: I0202 11:01:36.677746 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b856c5697-pkbw6"] Feb 02 11:01:36 crc kubenswrapper[4782]: I0202 11:01:36.680387 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5b856c5697-pkbw6" podUID="3e601661-fbc5-4fee-b3fb-456f6edc48f4" containerName="dnsmasq-dns" containerID="cri-o://9904d146dfcfd7dad397e5d6886fa15b96c9becbf2f18f528f0d3f3a41ce062d" gracePeriod=10 Feb 02 11:01:36 crc kubenswrapper[4782]: I0202 11:01:36.845976 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-79794c8ddf-x6sht"] Feb 02 11:01:36 crc kubenswrapper[4782]: I0202 11:01:36.860363 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-79794c8ddf-x6sht"] Feb 02 11:01:36 crc kubenswrapper[4782]: I0202 11:01:36.860470 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-79794c8ddf-x6sht" Feb 02 11:01:36 crc kubenswrapper[4782]: I0202 11:01:36.924836 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/2b42d8a9-18c7-4a14-86b0-ab5fd02a39d1-openstack-edpm-ipam\") pod \"dnsmasq-dns-79794c8ddf-x6sht\" (UID: \"2b42d8a9-18c7-4a14-86b0-ab5fd02a39d1\") " pod="openstack/dnsmasq-dns-79794c8ddf-x6sht" Feb 02 11:01:36 crc kubenswrapper[4782]: I0202 11:01:36.924898 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2b42d8a9-18c7-4a14-86b0-ab5fd02a39d1-ovsdbserver-sb\") pod \"dnsmasq-dns-79794c8ddf-x6sht\" (UID: \"2b42d8a9-18c7-4a14-86b0-ab5fd02a39d1\") " pod="openstack/dnsmasq-dns-79794c8ddf-x6sht" Feb 02 11:01:36 crc kubenswrapper[4782]: I0202 11:01:36.924992 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b42d8a9-18c7-4a14-86b0-ab5fd02a39d1-config\") pod \"dnsmasq-dns-79794c8ddf-x6sht\" (UID: \"2b42d8a9-18c7-4a14-86b0-ab5fd02a39d1\") " pod="openstack/dnsmasq-dns-79794c8ddf-x6sht" Feb 02 11:01:36 crc kubenswrapper[4782]: I0202 11:01:36.925010 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fks5j\" (UniqueName: \"kubernetes.io/projected/2b42d8a9-18c7-4a14-86b0-ab5fd02a39d1-kube-api-access-fks5j\") pod \"dnsmasq-dns-79794c8ddf-x6sht\" (UID: \"2b42d8a9-18c7-4a14-86b0-ab5fd02a39d1\") " pod="openstack/dnsmasq-dns-79794c8ddf-x6sht" Feb 02 11:01:36 crc kubenswrapper[4782]: I0202 11:01:36.925226 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2b42d8a9-18c7-4a14-86b0-ab5fd02a39d1-ovsdbserver-nb\") pod \"dnsmasq-dns-79794c8ddf-x6sht\" (UID: \"2b42d8a9-18c7-4a14-86b0-ab5fd02a39d1\") " pod="openstack/dnsmasq-dns-79794c8ddf-x6sht" Feb 02 11:01:36 crc kubenswrapper[4782]: I0202 11:01:36.925494 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2b42d8a9-18c7-4a14-86b0-ab5fd02a39d1-dns-svc\") pod \"dnsmasq-dns-79794c8ddf-x6sht\" (UID: \"2b42d8a9-18c7-4a14-86b0-ab5fd02a39d1\") " pod="openstack/dnsmasq-dns-79794c8ddf-x6sht" Feb 02 11:01:37 crc kubenswrapper[4782]: I0202 11:01:37.027845 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/2b42d8a9-18c7-4a14-86b0-ab5fd02a39d1-openstack-edpm-ipam\") pod \"dnsmasq-dns-79794c8ddf-x6sht\" (UID: \"2b42d8a9-18c7-4a14-86b0-ab5fd02a39d1\") " pod="openstack/dnsmasq-dns-79794c8ddf-x6sht" Feb 02 11:01:37 crc kubenswrapper[4782]: I0202 11:01:37.027899 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2b42d8a9-18c7-4a14-86b0-ab5fd02a39d1-ovsdbserver-sb\") pod \"dnsmasq-dns-79794c8ddf-x6sht\" (UID: \"2b42d8a9-18c7-4a14-86b0-ab5fd02a39d1\") " pod="openstack/dnsmasq-dns-79794c8ddf-x6sht" Feb 02 11:01:37 crc kubenswrapper[4782]: I0202 11:01:37.027955 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b42d8a9-18c7-4a14-86b0-ab5fd02a39d1-config\") pod 
\"dnsmasq-dns-79794c8ddf-x6sht\" (UID: \"2b42d8a9-18c7-4a14-86b0-ab5fd02a39d1\") " pod="openstack/dnsmasq-dns-79794c8ddf-x6sht" Feb 02 11:01:37 crc kubenswrapper[4782]: I0202 11:01:37.027970 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fks5j\" (UniqueName: \"kubernetes.io/projected/2b42d8a9-18c7-4a14-86b0-ab5fd02a39d1-kube-api-access-fks5j\") pod \"dnsmasq-dns-79794c8ddf-x6sht\" (UID: \"2b42d8a9-18c7-4a14-86b0-ab5fd02a39d1\") " pod="openstack/dnsmasq-dns-79794c8ddf-x6sht" Feb 02 11:01:37 crc kubenswrapper[4782]: I0202 11:01:37.028004 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2b42d8a9-18c7-4a14-86b0-ab5fd02a39d1-ovsdbserver-nb\") pod \"dnsmasq-dns-79794c8ddf-x6sht\" (UID: \"2b42d8a9-18c7-4a14-86b0-ab5fd02a39d1\") " pod="openstack/dnsmasq-dns-79794c8ddf-x6sht" Feb 02 11:01:37 crc kubenswrapper[4782]: I0202 11:01:37.028056 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2b42d8a9-18c7-4a14-86b0-ab5fd02a39d1-dns-svc\") pod \"dnsmasq-dns-79794c8ddf-x6sht\" (UID: \"2b42d8a9-18c7-4a14-86b0-ab5fd02a39d1\") " pod="openstack/dnsmasq-dns-79794c8ddf-x6sht" Feb 02 11:01:37 crc kubenswrapper[4782]: I0202 11:01:37.028918 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2b42d8a9-18c7-4a14-86b0-ab5fd02a39d1-ovsdbserver-sb\") pod \"dnsmasq-dns-79794c8ddf-x6sht\" (UID: \"2b42d8a9-18c7-4a14-86b0-ab5fd02a39d1\") " pod="openstack/dnsmasq-dns-79794c8ddf-x6sht" Feb 02 11:01:37 crc kubenswrapper[4782]: I0202 11:01:37.028939 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/2b42d8a9-18c7-4a14-86b0-ab5fd02a39d1-openstack-edpm-ipam\") pod \"dnsmasq-dns-79794c8ddf-x6sht\" (UID: \"2b42d8a9-18c7-4a14-86b0-ab5fd02a39d1\") " pod="openstack/dnsmasq-dns-79794c8ddf-x6sht" Feb 02 11:01:37 crc kubenswrapper[4782]: I0202 11:01:37.028978 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b42d8a9-18c7-4a14-86b0-ab5fd02a39d1-config\") pod \"dnsmasq-dns-79794c8ddf-x6sht\" (UID: \"2b42d8a9-18c7-4a14-86b0-ab5fd02a39d1\") " pod="openstack/dnsmasq-dns-79794c8ddf-x6sht" Feb 02 11:01:37 crc kubenswrapper[4782]: I0202 11:01:37.029059 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2b42d8a9-18c7-4a14-86b0-ab5fd02a39d1-dns-svc\") pod \"dnsmasq-dns-79794c8ddf-x6sht\" (UID: \"2b42d8a9-18c7-4a14-86b0-ab5fd02a39d1\") " pod="openstack/dnsmasq-dns-79794c8ddf-x6sht" Feb 02 11:01:37 crc kubenswrapper[4782]: I0202 11:01:37.029180 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2b42d8a9-18c7-4a14-86b0-ab5fd02a39d1-ovsdbserver-nb\") pod \"dnsmasq-dns-79794c8ddf-x6sht\" (UID: \"2b42d8a9-18c7-4a14-86b0-ab5fd02a39d1\") " pod="openstack/dnsmasq-dns-79794c8ddf-x6sht" Feb 02 11:01:37 crc kubenswrapper[4782]: I0202 11:01:37.055164 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fks5j\" (UniqueName: \"kubernetes.io/projected/2b42d8a9-18c7-4a14-86b0-ab5fd02a39d1-kube-api-access-fks5j\") pod \"dnsmasq-dns-79794c8ddf-x6sht\" (UID: \"2b42d8a9-18c7-4a14-86b0-ab5fd02a39d1\") " 
pod="openstack/dnsmasq-dns-79794c8ddf-x6sht" Feb 02 11:01:37 crc kubenswrapper[4782]: I0202 11:01:37.203108 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79794c8ddf-x6sht" Feb 02 11:01:37 crc kubenswrapper[4782]: I0202 11:01:37.209084 4782 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5b856c5697-pkbw6" podUID="3e601661-fbc5-4fee-b3fb-456f6edc48f4" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.183:5353: connect: connection refused" Feb 02 11:01:37 crc kubenswrapper[4782]: I0202 11:01:37.421422 4782 generic.go:334] "Generic (PLEG): container finished" podID="3e601661-fbc5-4fee-b3fb-456f6edc48f4" containerID="9904d146dfcfd7dad397e5d6886fa15b96c9becbf2f18f528f0d3f3a41ce062d" exitCode=0 Feb 02 11:01:37 crc kubenswrapper[4782]: I0202 11:01:37.421474 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b856c5697-pkbw6" event={"ID":"3e601661-fbc5-4fee-b3fb-456f6edc48f4","Type":"ContainerDied","Data":"9904d146dfcfd7dad397e5d6886fa15b96c9becbf2f18f528f0d3f3a41ce062d"} Feb 02 11:01:39 crc kubenswrapper[4782]: I0202 11:01:39.593730 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b856c5697-pkbw6" Feb 02 11:01:39 crc kubenswrapper[4782]: I0202 11:01:39.680184 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e601661-fbc5-4fee-b3fb-456f6edc48f4-config\") pod \"3e601661-fbc5-4fee-b3fb-456f6edc48f4\" (UID: \"3e601661-fbc5-4fee-b3fb-456f6edc48f4\") " Feb 02 11:01:39 crc kubenswrapper[4782]: I0202 11:01:39.680308 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3e601661-fbc5-4fee-b3fb-456f6edc48f4-ovsdbserver-sb\") pod \"3e601661-fbc5-4fee-b3fb-456f6edc48f4\" (UID: \"3e601661-fbc5-4fee-b3fb-456f6edc48f4\") " Feb 02 11:01:39 crc kubenswrapper[4782]: I0202 11:01:39.680348 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3e601661-fbc5-4fee-b3fb-456f6edc48f4-dns-svc\") pod \"3e601661-fbc5-4fee-b3fb-456f6edc48f4\" (UID: \"3e601661-fbc5-4fee-b3fb-456f6edc48f4\") " Feb 02 11:01:39 crc kubenswrapper[4782]: I0202 11:01:39.680371 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kdkwq\" (UniqueName: \"kubernetes.io/projected/3e601661-fbc5-4fee-b3fb-456f6edc48f4-kube-api-access-kdkwq\") pod \"3e601661-fbc5-4fee-b3fb-456f6edc48f4\" (UID: \"3e601661-fbc5-4fee-b3fb-456f6edc48f4\") " Feb 02 11:01:39 crc kubenswrapper[4782]: I0202 11:01:39.680476 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3e601661-fbc5-4fee-b3fb-456f6edc48f4-ovsdbserver-nb\") pod \"3e601661-fbc5-4fee-b3fb-456f6edc48f4\" (UID: \"3e601661-fbc5-4fee-b3fb-456f6edc48f4\") " Feb 02 11:01:39 crc kubenswrapper[4782]: I0202 11:01:39.685404 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e601661-fbc5-4fee-b3fb-456f6edc48f4-kube-api-access-kdkwq" (OuterVolumeSpecName: "kube-api-access-kdkwq") pod "3e601661-fbc5-4fee-b3fb-456f6edc48f4" (UID: "3e601661-fbc5-4fee-b3fb-456f6edc48f4"). InnerVolumeSpecName "kube-api-access-kdkwq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:01:39 crc kubenswrapper[4782]: I0202 11:01:39.728321 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e601661-fbc5-4fee-b3fb-456f6edc48f4-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3e601661-fbc5-4fee-b3fb-456f6edc48f4" (UID: "3e601661-fbc5-4fee-b3fb-456f6edc48f4"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:01:39 crc kubenswrapper[4782]: I0202 11:01:39.731144 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e601661-fbc5-4fee-b3fb-456f6edc48f4-config" (OuterVolumeSpecName: "config") pod "3e601661-fbc5-4fee-b3fb-456f6edc48f4" (UID: "3e601661-fbc5-4fee-b3fb-456f6edc48f4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:01:39 crc kubenswrapper[4782]: I0202 11:01:39.750539 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e601661-fbc5-4fee-b3fb-456f6edc48f4-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3e601661-fbc5-4fee-b3fb-456f6edc48f4" (UID: "3e601661-fbc5-4fee-b3fb-456f6edc48f4"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:01:39 crc kubenswrapper[4782]: I0202 11:01:39.752433 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e601661-fbc5-4fee-b3fb-456f6edc48f4-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "3e601661-fbc5-4fee-b3fb-456f6edc48f4" (UID: "3e601661-fbc5-4fee-b3fb-456f6edc48f4"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:01:39 crc kubenswrapper[4782]: I0202 11:01:39.782775 4782 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3e601661-fbc5-4fee-b3fb-456f6edc48f4-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 02 11:01:39 crc kubenswrapper[4782]: I0202 11:01:39.782836 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kdkwq\" (UniqueName: \"kubernetes.io/projected/3e601661-fbc5-4fee-b3fb-456f6edc48f4-kube-api-access-kdkwq\") on node \"crc\" DevicePath \"\"" Feb 02 11:01:39 crc kubenswrapper[4782]: I0202 11:01:39.782849 4782 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3e601661-fbc5-4fee-b3fb-456f6edc48f4-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 02 11:01:39 crc kubenswrapper[4782]: I0202 11:01:39.782858 4782 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e601661-fbc5-4fee-b3fb-456f6edc48f4-config\") on node \"crc\" DevicePath \"\"" Feb 02 11:01:39 crc kubenswrapper[4782]: I0202 11:01:39.782867 4782 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3e601661-fbc5-4fee-b3fb-456f6edc48f4-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 02 11:01:39 crc kubenswrapper[4782]: I0202 11:01:39.815680 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-79794c8ddf-x6sht"] Feb 02 11:01:39 crc kubenswrapper[4782]: W0202 11:01:39.816228 4782 manager.go:1169] Failed to process watch event {EventType:0 
Feb 02 11:01:40 crc kubenswrapper[4782]: I0202 11:01:40.453065 4782 generic.go:334] "Generic (PLEG): container finished" podID="2b42d8a9-18c7-4a14-86b0-ab5fd02a39d1" containerID="6f20530bb72c77a28a0b759dfecb8abeba2d4c4c9ec2b1e203807cb88c440c27" exitCode=0
Feb 02 11:01:40 crc kubenswrapper[4782]: I0202 11:01:40.453158 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79794c8ddf-x6sht" event={"ID":"2b42d8a9-18c7-4a14-86b0-ab5fd02a39d1","Type":"ContainerDied","Data":"6f20530bb72c77a28a0b759dfecb8abeba2d4c4c9ec2b1e203807cb88c440c27"}
Feb 02 11:01:40 crc kubenswrapper[4782]: I0202 11:01:40.453413 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79794c8ddf-x6sht" event={"ID":"2b42d8a9-18c7-4a14-86b0-ab5fd02a39d1","Type":"ContainerStarted","Data":"65ae78131f6705fae79d726446209449b899ee5d5e41b756ff8cdcf0ea494dca"}
Feb 02 11:01:40 crc kubenswrapper[4782]: I0202 11:01:40.456623 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b856c5697-pkbw6" event={"ID":"3e601661-fbc5-4fee-b3fb-456f6edc48f4","Type":"ContainerDied","Data":"cd107aafb4197d3a9b8dcab601301a7574e5c0bd0413b81852d1995de35f6645"}
Feb 02 11:01:40 crc kubenswrapper[4782]: I0202 11:01:40.456717 4782 scope.go:117] "RemoveContainer" containerID="9904d146dfcfd7dad397e5d6886fa15b96c9becbf2f18f528f0d3f3a41ce062d"
Feb 02 11:01:40 crc kubenswrapper[4782]: I0202 11:01:40.456834 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b856c5697-pkbw6"
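[editor's note] The "Generic (PLEG)" lines are produced by diffing container state between two relists: a container that was running in the old snapshot and is exited in the new one yields ContainerDied, a newly running one yields ContainerStarted. A compressed sketch of that diff with simplified types (the real PLEG also tracks sandboxes and groups events per pod):

package main

import "fmt"

type state int

const (
	running state = iota
	exited
)

type event struct{ Type, ContainerID string }

// diff emits PLEG-style lifecycle events by comparing two snapshots
// of container state keyed by container ID.
func diff(prev, cur map[string]state) []event {
	var evs []event
	for id, s := range cur {
		old, seen := prev[id]
		switch {
		case !seen && s == running:
			evs = append(evs, event{"ContainerStarted", id})
		case seen && old == running && s == exited:
			evs = append(evs, event{"ContainerDied", id})
		}
	}
	return evs
}

func main() {
	prev := map[string]state{"9904d146": running}
	cur := map[string]state{"9904d146": exited, "65ae7813": running}
	fmt.Println(diff(prev, cur)) // one ContainerDied, one ContainerStarted
}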
Need to start a new one" pod="openstack/dnsmasq-dns-5b856c5697-pkbw6" Feb 02 11:01:40 crc kubenswrapper[4782]: I0202 11:01:40.462511 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-clr4m" event={"ID":"99553aeb-f0fe-47e8-9d2a-64f4b49be76c","Type":"ContainerStarted","Data":"1918681cf715e6155198ca866454b6e4a0c53baf344cbc2db5089e48edd2cc36"} Feb 02 11:01:40 crc kubenswrapper[4782]: I0202 11:01:40.509589 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-clr4m" podStartSLOduration=2.355003128 podStartE2EDuration="13.509570041s" podCreationTimestamp="2026-02-02 11:01:27 +0000 UTC" firstStartedPulling="2026-02-02 11:01:28.172474976 +0000 UTC m=+1368.056667692" lastFinishedPulling="2026-02-02 11:01:39.327041889 +0000 UTC m=+1379.211234605" observedRunningTime="2026-02-02 11:01:40.502751695 +0000 UTC m=+1380.386944411" watchObservedRunningTime="2026-02-02 11:01:40.509570041 +0000 UTC m=+1380.393762757" Feb 02 11:01:40 crc kubenswrapper[4782]: I0202 11:01:40.578491 4782 scope.go:117] "RemoveContainer" containerID="3632ebdb9d373630e077154436a2fd0455ce319004676a7d22bf4fd22d09ccf1" Feb 02 11:01:40 crc kubenswrapper[4782]: I0202 11:01:40.683413 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b856c5697-pkbw6"] Feb 02 11:01:40 crc kubenswrapper[4782]: I0202 11:01:40.697456 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5b856c5697-pkbw6"] Feb 02 11:01:40 crc kubenswrapper[4782]: I0202 11:01:40.833789 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e601661-fbc5-4fee-b3fb-456f6edc48f4" path="/var/lib/kubelet/pods/3e601661-fbc5-4fee-b3fb-456f6edc48f4/volumes" Feb 02 11:01:41 crc kubenswrapper[4782]: I0202 11:01:41.472005 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79794c8ddf-x6sht" event={"ID":"2b42d8a9-18c7-4a14-86b0-ab5fd02a39d1","Type":"ContainerStarted","Data":"699e07b8f64810448ea2047e5e2c614ebf51ac1e603c9d37e164e29edf07c224"} Feb 02 11:01:41 crc kubenswrapper[4782]: I0202 11:01:41.472383 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-79794c8ddf-x6sht" Feb 02 11:01:41 crc kubenswrapper[4782]: I0202 11:01:41.495435 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-79794c8ddf-x6sht" podStartSLOduration=5.495412798 podStartE2EDuration="5.495412798s" podCreationTimestamp="2026-02-02 11:01:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 11:01:41.490109936 +0000 UTC m=+1381.374302652" watchObservedRunningTime="2026-02-02 11:01:41.495412798 +0000 UTC m=+1381.379605524" Feb 02 11:01:47 crc kubenswrapper[4782]: I0202 11:01:47.204723 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-79794c8ddf-x6sht" Feb 02 11:01:47 crc kubenswrapper[4782]: I0202 11:01:47.267942 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6447ccbd8f-xx6pn"] Feb 02 11:01:47 crc kubenswrapper[4782]: I0202 11:01:47.268202 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6447ccbd8f-xx6pn" podUID="dc5c0a01-6fa0-45ed-93d7-ecb829ff0a84" containerName="dnsmasq-dns" 
containerID="cri-o://72f241797192e02c052224493112540287781032ec83d38da5767774f61f56da" gracePeriod=10 Feb 02 11:01:47 crc kubenswrapper[4782]: I0202 11:01:47.561438 4782 generic.go:334] "Generic (PLEG): container finished" podID="dc5c0a01-6fa0-45ed-93d7-ecb829ff0a84" containerID="72f241797192e02c052224493112540287781032ec83d38da5767774f61f56da" exitCode=0 Feb 02 11:01:47 crc kubenswrapper[4782]: I0202 11:01:47.562556 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6447ccbd8f-xx6pn" event={"ID":"dc5c0a01-6fa0-45ed-93d7-ecb829ff0a84","Type":"ContainerDied","Data":"72f241797192e02c052224493112540287781032ec83d38da5767774f61f56da"} Feb 02 11:01:47 crc kubenswrapper[4782]: I0202 11:01:47.936159 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6447ccbd8f-xx6pn" Feb 02 11:01:48 crc kubenswrapper[4782]: I0202 11:01:48.069237 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/dc5c0a01-6fa0-45ed-93d7-ecb829ff0a84-openstack-edpm-ipam\") pod \"dc5c0a01-6fa0-45ed-93d7-ecb829ff0a84\" (UID: \"dc5c0a01-6fa0-45ed-93d7-ecb829ff0a84\") " Feb 02 11:01:48 crc kubenswrapper[4782]: I0202 11:01:48.069313 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dc5c0a01-6fa0-45ed-93d7-ecb829ff0a84-ovsdbserver-nb\") pod \"dc5c0a01-6fa0-45ed-93d7-ecb829ff0a84\" (UID: \"dc5c0a01-6fa0-45ed-93d7-ecb829ff0a84\") " Feb 02 11:01:48 crc kubenswrapper[4782]: I0202 11:01:48.069397 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dc5c0a01-6fa0-45ed-93d7-ecb829ff0a84-dns-svc\") pod \"dc5c0a01-6fa0-45ed-93d7-ecb829ff0a84\" (UID: \"dc5c0a01-6fa0-45ed-93d7-ecb829ff0a84\") " Feb 02 11:01:48 crc kubenswrapper[4782]: I0202 11:01:48.069419 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zfcmz\" (UniqueName: \"kubernetes.io/projected/dc5c0a01-6fa0-45ed-93d7-ecb829ff0a84-kube-api-access-zfcmz\") pod \"dc5c0a01-6fa0-45ed-93d7-ecb829ff0a84\" (UID: \"dc5c0a01-6fa0-45ed-93d7-ecb829ff0a84\") " Feb 02 11:01:48 crc kubenswrapper[4782]: I0202 11:01:48.069498 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dc5c0a01-6fa0-45ed-93d7-ecb829ff0a84-ovsdbserver-sb\") pod \"dc5c0a01-6fa0-45ed-93d7-ecb829ff0a84\" (UID: \"dc5c0a01-6fa0-45ed-93d7-ecb829ff0a84\") " Feb 02 11:01:48 crc kubenswrapper[4782]: I0202 11:01:48.069536 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dc5c0a01-6fa0-45ed-93d7-ecb829ff0a84-config\") pod \"dc5c0a01-6fa0-45ed-93d7-ecb829ff0a84\" (UID: \"dc5c0a01-6fa0-45ed-93d7-ecb829ff0a84\") " Feb 02 11:01:48 crc kubenswrapper[4782]: I0202 11:01:48.074695 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc5c0a01-6fa0-45ed-93d7-ecb829ff0a84-kube-api-access-zfcmz" (OuterVolumeSpecName: "kube-api-access-zfcmz") pod "dc5c0a01-6fa0-45ed-93d7-ecb829ff0a84" (UID: "dc5c0a01-6fa0-45ed-93d7-ecb829ff0a84"). InnerVolumeSpecName "kube-api-access-zfcmz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:01:48 crc kubenswrapper[4782]: I0202 11:01:48.133777 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc5c0a01-6fa0-45ed-93d7-ecb829ff0a84-config" (OuterVolumeSpecName: "config") pod "dc5c0a01-6fa0-45ed-93d7-ecb829ff0a84" (UID: "dc5c0a01-6fa0-45ed-93d7-ecb829ff0a84"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:01:48 crc kubenswrapper[4782]: I0202 11:01:48.135390 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc5c0a01-6fa0-45ed-93d7-ecb829ff0a84-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "dc5c0a01-6fa0-45ed-93d7-ecb829ff0a84" (UID: "dc5c0a01-6fa0-45ed-93d7-ecb829ff0a84"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:01:48 crc kubenswrapper[4782]: I0202 11:01:48.141298 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc5c0a01-6fa0-45ed-93d7-ecb829ff0a84-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "dc5c0a01-6fa0-45ed-93d7-ecb829ff0a84" (UID: "dc5c0a01-6fa0-45ed-93d7-ecb829ff0a84"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:01:48 crc kubenswrapper[4782]: I0202 11:01:48.146177 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc5c0a01-6fa0-45ed-93d7-ecb829ff0a84-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "dc5c0a01-6fa0-45ed-93d7-ecb829ff0a84" (UID: "dc5c0a01-6fa0-45ed-93d7-ecb829ff0a84"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:01:48 crc kubenswrapper[4782]: I0202 11:01:48.155336 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc5c0a01-6fa0-45ed-93d7-ecb829ff0a84-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "dc5c0a01-6fa0-45ed-93d7-ecb829ff0a84" (UID: "dc5c0a01-6fa0-45ed-93d7-ecb829ff0a84"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:01:48 crc kubenswrapper[4782]: I0202 11:01:48.171583 4782 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dc5c0a01-6fa0-45ed-93d7-ecb829ff0a84-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 02 11:01:48 crc kubenswrapper[4782]: I0202 11:01:48.171623 4782 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dc5c0a01-6fa0-45ed-93d7-ecb829ff0a84-config\") on node \"crc\" DevicePath \"\"" Feb 02 11:01:48 crc kubenswrapper[4782]: I0202 11:01:48.171649 4782 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/dc5c0a01-6fa0-45ed-93d7-ecb829ff0a84-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 02 11:01:48 crc kubenswrapper[4782]: I0202 11:01:48.171658 4782 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dc5c0a01-6fa0-45ed-93d7-ecb829ff0a84-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 02 11:01:48 crc kubenswrapper[4782]: I0202 11:01:48.171668 4782 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dc5c0a01-6fa0-45ed-93d7-ecb829ff0a84-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 02 11:01:48 crc kubenswrapper[4782]: I0202 11:01:48.171676 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zfcmz\" (UniqueName: \"kubernetes.io/projected/dc5c0a01-6fa0-45ed-93d7-ecb829ff0a84-kube-api-access-zfcmz\") on node \"crc\" DevicePath \"\"" Feb 02 11:01:48 crc kubenswrapper[4782]: I0202 11:01:48.572932 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6447ccbd8f-xx6pn" event={"ID":"dc5c0a01-6fa0-45ed-93d7-ecb829ff0a84","Type":"ContainerDied","Data":"4f96a694a6c0c378d5195b8d4732411eb5a8003c68c5e227f0522f0850f13d8c"} Feb 02 11:01:48 crc kubenswrapper[4782]: I0202 11:01:48.572988 4782 scope.go:117] "RemoveContainer" containerID="72f241797192e02c052224493112540287781032ec83d38da5767774f61f56da" Feb 02 11:01:48 crc kubenswrapper[4782]: I0202 11:01:48.573038 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6447ccbd8f-xx6pn" Feb 02 11:01:48 crc kubenswrapper[4782]: I0202 11:01:48.594548 4782 scope.go:117] "RemoveContainer" containerID="64a4e00006dcb48d8a2c4ca10d2313323ebb2c4f6e7fed9822224da02b26dc5e" Feb 02 11:01:48 crc kubenswrapper[4782]: I0202 11:01:48.604872 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6447ccbd8f-xx6pn"] Feb 02 11:01:48 crc kubenswrapper[4782]: I0202 11:01:48.626061 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6447ccbd8f-xx6pn"] Feb 02 11:01:48 crc kubenswrapper[4782]: I0202 11:01:48.832343 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc5c0a01-6fa0-45ed-93d7-ecb829ff0a84" path="/var/lib/kubelet/pods/dc5c0a01-6fa0-45ed-93d7-ecb829ff0a84/volumes" Feb 02 11:01:50 crc kubenswrapper[4782]: I0202 11:01:50.595305 4782 generic.go:334] "Generic (PLEG): container finished" podID="99553aeb-f0fe-47e8-9d2a-64f4b49be76c" containerID="1918681cf715e6155198ca866454b6e4a0c53baf344cbc2db5089e48edd2cc36" exitCode=0 Feb 02 11:01:50 crc kubenswrapper[4782]: I0202 11:01:50.595405 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-clr4m" event={"ID":"99553aeb-f0fe-47e8-9d2a-64f4b49be76c","Type":"ContainerDied","Data":"1918681cf715e6155198ca866454b6e4a0c53baf344cbc2db5089e48edd2cc36"} Feb 02 11:01:52 crc kubenswrapper[4782]: I0202 11:01:52.262542 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-clr4m" Feb 02 11:01:52 crc kubenswrapper[4782]: I0202 11:01:52.355439 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/99553aeb-f0fe-47e8-9d2a-64f4b49be76c-inventory\") pod \"99553aeb-f0fe-47e8-9d2a-64f4b49be76c\" (UID: \"99553aeb-f0fe-47e8-9d2a-64f4b49be76c\") " Feb 02 11:01:52 crc kubenswrapper[4782]: I0202 11:01:52.355718 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/99553aeb-f0fe-47e8-9d2a-64f4b49be76c-ssh-key-openstack-edpm-ipam\") pod \"99553aeb-f0fe-47e8-9d2a-64f4b49be76c\" (UID: \"99553aeb-f0fe-47e8-9d2a-64f4b49be76c\") " Feb 02 11:01:52 crc kubenswrapper[4782]: I0202 11:01:52.355822 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6rdr8\" (UniqueName: \"kubernetes.io/projected/99553aeb-f0fe-47e8-9d2a-64f4b49be76c-kube-api-access-6rdr8\") pod \"99553aeb-f0fe-47e8-9d2a-64f4b49be76c\" (UID: \"99553aeb-f0fe-47e8-9d2a-64f4b49be76c\") " Feb 02 11:01:52 crc kubenswrapper[4782]: I0202 11:01:52.355917 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99553aeb-f0fe-47e8-9d2a-64f4b49be76c-repo-setup-combined-ca-bundle\") pod \"99553aeb-f0fe-47e8-9d2a-64f4b49be76c\" (UID: \"99553aeb-f0fe-47e8-9d2a-64f4b49be76c\") " Feb 02 11:01:52 crc kubenswrapper[4782]: I0202 11:01:52.378899 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/99553aeb-f0fe-47e8-9d2a-64f4b49be76c-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "99553aeb-f0fe-47e8-9d2a-64f4b49be76c" (UID: "99553aeb-f0fe-47e8-9d2a-64f4b49be76c"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:01:52 crc kubenswrapper[4782]: I0202 11:01:52.379030 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/99553aeb-f0fe-47e8-9d2a-64f4b49be76c-kube-api-access-6rdr8" (OuterVolumeSpecName: "kube-api-access-6rdr8") pod "99553aeb-f0fe-47e8-9d2a-64f4b49be76c" (UID: "99553aeb-f0fe-47e8-9d2a-64f4b49be76c"). InnerVolumeSpecName "kube-api-access-6rdr8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:01:52 crc kubenswrapper[4782]: I0202 11:01:52.416816 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/99553aeb-f0fe-47e8-9d2a-64f4b49be76c-inventory" (OuterVolumeSpecName: "inventory") pod "99553aeb-f0fe-47e8-9d2a-64f4b49be76c" (UID: "99553aeb-f0fe-47e8-9d2a-64f4b49be76c"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:01:52 crc kubenswrapper[4782]: I0202 11:01:52.419130 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/99553aeb-f0fe-47e8-9d2a-64f4b49be76c-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "99553aeb-f0fe-47e8-9d2a-64f4b49be76c" (UID: "99553aeb-f0fe-47e8-9d2a-64f4b49be76c"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:01:52 crc kubenswrapper[4782]: I0202 11:01:52.457656 4782 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/99553aeb-f0fe-47e8-9d2a-64f4b49be76c-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 02 11:01:52 crc kubenswrapper[4782]: I0202 11:01:52.457690 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6rdr8\" (UniqueName: \"kubernetes.io/projected/99553aeb-f0fe-47e8-9d2a-64f4b49be76c-kube-api-access-6rdr8\") on node \"crc\" DevicePath \"\"" Feb 02 11:01:52 crc kubenswrapper[4782]: I0202 11:01:52.457699 4782 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99553aeb-f0fe-47e8-9d2a-64f4b49be76c-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 11:01:52 crc kubenswrapper[4782]: I0202 11:01:52.457709 4782 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/99553aeb-f0fe-47e8-9d2a-64f4b49be76c-inventory\") on node \"crc\" DevicePath \"\"" Feb 02 11:01:52 crc kubenswrapper[4782]: I0202 11:01:52.616790 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-clr4m" event={"ID":"99553aeb-f0fe-47e8-9d2a-64f4b49be76c","Type":"ContainerDied","Data":"666456605afd38f240297301ca5448a11381e94c4ea47bb87e5894af3411f5d0"} Feb 02 11:01:52 crc kubenswrapper[4782]: I0202 11:01:52.617030 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="666456605afd38f240297301ca5448a11381e94c4ea47bb87e5894af3411f5d0" Feb 02 11:01:52 crc kubenswrapper[4782]: I0202 11:01:52.616856 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-clr4m" Feb 02 11:01:52 crc kubenswrapper[4782]: I0202 11:01:52.737740 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-cldvc"] Feb 02 11:01:52 crc kubenswrapper[4782]: E0202 11:01:52.738446 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc5c0a01-6fa0-45ed-93d7-ecb829ff0a84" containerName="dnsmasq-dns" Feb 02 11:01:52 crc kubenswrapper[4782]: I0202 11:01:52.738539 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc5c0a01-6fa0-45ed-93d7-ecb829ff0a84" containerName="dnsmasq-dns" Feb 02 11:01:52 crc kubenswrapper[4782]: E0202 11:01:52.738611 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc5c0a01-6fa0-45ed-93d7-ecb829ff0a84" containerName="init" Feb 02 11:01:52 crc kubenswrapper[4782]: I0202 11:01:52.738708 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc5c0a01-6fa0-45ed-93d7-ecb829ff0a84" containerName="init" Feb 02 11:01:52 crc kubenswrapper[4782]: E0202 11:01:52.738793 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e601661-fbc5-4fee-b3fb-456f6edc48f4" containerName="init" Feb 02 11:01:52 crc kubenswrapper[4782]: I0202 11:01:52.738862 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e601661-fbc5-4fee-b3fb-456f6edc48f4" containerName="init" Feb 02 11:01:52 crc kubenswrapper[4782]: E0202 11:01:52.738941 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e601661-fbc5-4fee-b3fb-456f6edc48f4" containerName="dnsmasq-dns" Feb 02 11:01:52 crc kubenswrapper[4782]: I0202 11:01:52.739004 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e601661-fbc5-4fee-b3fb-456f6edc48f4" containerName="dnsmasq-dns" Feb 02 11:01:52 crc kubenswrapper[4782]: E0202 11:01:52.739065 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99553aeb-f0fe-47e8-9d2a-64f4b49be76c" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 02 11:01:52 crc kubenswrapper[4782]: I0202 11:01:52.739351 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="99553aeb-f0fe-47e8-9d2a-64f4b49be76c" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 02 11:01:52 crc kubenswrapper[4782]: I0202 11:01:52.739602 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e601661-fbc5-4fee-b3fb-456f6edc48f4" containerName="dnsmasq-dns" Feb 02 11:01:52 crc kubenswrapper[4782]: I0202 11:01:52.739720 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="99553aeb-f0fe-47e8-9d2a-64f4b49be76c" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 02 11:01:52 crc kubenswrapper[4782]: I0202 11:01:52.739789 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc5c0a01-6fa0-45ed-93d7-ecb829ff0a84" containerName="dnsmasq-dns" Feb 02 11:01:52 crc kubenswrapper[4782]: I0202 11:01:52.740479 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-cldvc" Feb 02 11:01:52 crc kubenswrapper[4782]: I0202 11:01:52.743513 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 02 11:01:52 crc kubenswrapper[4782]: I0202 11:01:52.743603 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jhgxt" Feb 02 11:01:52 crc kubenswrapper[4782]: I0202 11:01:52.743975 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 02 11:01:52 crc kubenswrapper[4782]: I0202 11:01:52.744246 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 02 11:01:52 crc kubenswrapper[4782]: I0202 11:01:52.756568 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-cldvc"] Feb 02 11:01:52 crc kubenswrapper[4782]: I0202 11:01:52.869431 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/37960174-d26b-460f-abd9-934dee1ecc8c-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-cldvc\" (UID: \"37960174-d26b-460f-abd9-934dee1ecc8c\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-cldvc" Feb 02 11:01:52 crc kubenswrapper[4782]: I0202 11:01:52.870741 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/37960174-d26b-460f-abd9-934dee1ecc8c-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-cldvc\" (UID: \"37960174-d26b-460f-abd9-934dee1ecc8c\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-cldvc" Feb 02 11:01:52 crc kubenswrapper[4782]: I0202 11:01:52.871032 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37960174-d26b-460f-abd9-934dee1ecc8c-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-cldvc\" (UID: \"37960174-d26b-460f-abd9-934dee1ecc8c\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-cldvc" Feb 02 11:01:52 crc kubenswrapper[4782]: I0202 11:01:52.871193 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5ns4\" (UniqueName: \"kubernetes.io/projected/37960174-d26b-460f-abd9-934dee1ecc8c-kube-api-access-n5ns4\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-cldvc\" (UID: \"37960174-d26b-460f-abd9-934dee1ecc8c\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-cldvc" Feb 02 11:01:52 crc kubenswrapper[4782]: I0202 11:01:52.972804 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n5ns4\" (UniqueName: \"kubernetes.io/projected/37960174-d26b-460f-abd9-934dee1ecc8c-kube-api-access-n5ns4\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-cldvc\" (UID: \"37960174-d26b-460f-abd9-934dee1ecc8c\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-cldvc" Feb 02 11:01:52 crc kubenswrapper[4782]: I0202 11:01:52.972899 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/37960174-d26b-460f-abd9-934dee1ecc8c-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-cldvc\" (UID: \"37960174-d26b-460f-abd9-934dee1ecc8c\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-cldvc" Feb 02 11:01:52 crc kubenswrapper[4782]: I0202 11:01:52.972986 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/37960174-d26b-460f-abd9-934dee1ecc8c-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-cldvc\" (UID: \"37960174-d26b-460f-abd9-934dee1ecc8c\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-cldvc" Feb 02 11:01:52 crc kubenswrapper[4782]: I0202 11:01:52.973052 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37960174-d26b-460f-abd9-934dee1ecc8c-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-cldvc\" (UID: \"37960174-d26b-460f-abd9-934dee1ecc8c\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-cldvc" Feb 02 11:01:52 crc kubenswrapper[4782]: I0202 11:01:52.977541 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/37960174-d26b-460f-abd9-934dee1ecc8c-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-cldvc\" (UID: \"37960174-d26b-460f-abd9-934dee1ecc8c\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-cldvc" Feb 02 11:01:52 crc kubenswrapper[4782]: I0202 11:01:52.977785 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37960174-d26b-460f-abd9-934dee1ecc8c-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-cldvc\" (UID: \"37960174-d26b-460f-abd9-934dee1ecc8c\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-cldvc" Feb 02 11:01:52 crc kubenswrapper[4782]: I0202 11:01:52.982058 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/37960174-d26b-460f-abd9-934dee1ecc8c-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-cldvc\" (UID: \"37960174-d26b-460f-abd9-934dee1ecc8c\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-cldvc" Feb 02 11:01:52 crc kubenswrapper[4782]: I0202 11:01:52.992095 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n5ns4\" (UniqueName: \"kubernetes.io/projected/37960174-d26b-460f-abd9-934dee1ecc8c-kube-api-access-n5ns4\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-cldvc\" (UID: \"37960174-d26b-460f-abd9-934dee1ecc8c\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-cldvc" Feb 02 11:01:53 crc kubenswrapper[4782]: I0202 11:01:53.068856 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-cldvc" Feb 02 11:01:53 crc kubenswrapper[4782]: I0202 11:01:53.594210 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-cldvc"] Feb 02 11:01:53 crc kubenswrapper[4782]: I0202 11:01:53.628039 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-cldvc" event={"ID":"37960174-d26b-460f-abd9-934dee1ecc8c","Type":"ContainerStarted","Data":"9892e007cdd650bc40bcfc18f9718fe4bea24b499dbb56ceb32f38212b4a0b4c"} Feb 02 11:01:54 crc kubenswrapper[4782]: I0202 11:01:54.635519 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-cldvc" event={"ID":"37960174-d26b-460f-abd9-934dee1ecc8c","Type":"ContainerStarted","Data":"47940927ded7a9aac258b8c6a3364ef69283f34e697e95ad52e93cc9f65a9e0c"} Feb 02 11:01:54 crc kubenswrapper[4782]: I0202 11:01:54.656635 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-cldvc" podStartSLOduration=2.221861235 podStartE2EDuration="2.656612645s" podCreationTimestamp="2026-02-02 11:01:52 +0000 UTC" firstStartedPulling="2026-02-02 11:01:53.60081 +0000 UTC m=+1393.485002706" lastFinishedPulling="2026-02-02 11:01:54.0355614 +0000 UTC m=+1393.919754116" observedRunningTime="2026-02-02 11:01:54.647935686 +0000 UTC m=+1394.532128402" watchObservedRunningTime="2026-02-02 11:01:54.656612645 +0000 UTC m=+1394.540805361" Feb 02 11:01:58 crc kubenswrapper[4782]: I0202 11:01:58.698800 4782 generic.go:334] "Generic (PLEG): container finished" podID="8d450a8e-fd5c-40fe-a4ff-ab265dab04df" containerID="f83420861b6699b6337be09f1f27e658dd2355b35fe71a8cacee58949d08a256" exitCode=0 Feb 02 11:01:58 crc kubenswrapper[4782]: I0202 11:01:58.699399 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"8d450a8e-fd5c-40fe-a4ff-ab265dab04df","Type":"ContainerDied","Data":"f83420861b6699b6337be09f1f27e658dd2355b35fe71a8cacee58949d08a256"} Feb 02 11:01:59 crc kubenswrapper[4782]: I0202 11:01:59.710098 4782 generic.go:334] "Generic (PLEG): container finished" podID="b5c627ac-51a8-46a5-9ccd-62072de19909" containerID="b21fa9690961d9343a49a1e5b91cb13364053d8ff62ae86442d552092e66f129" exitCode=0 Feb 02 11:01:59 crc kubenswrapper[4782]: I0202 11:01:59.710183 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b5c627ac-51a8-46a5-9ccd-62072de19909","Type":"ContainerDied","Data":"b21fa9690961d9343a49a1e5b91cb13364053d8ff62ae86442d552092e66f129"} Feb 02 11:01:59 crc kubenswrapper[4782]: I0202 11:01:59.712138 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"8d450a8e-fd5c-40fe-a4ff-ab265dab04df","Type":"ContainerStarted","Data":"b4fad3233a676b9abbca18b3f365bd7065b34384e9bbcb3dccd20882b22a5d71"} Feb 02 11:01:59 crc kubenswrapper[4782]: I0202 11:01:59.712566 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 02 11:01:59 crc kubenswrapper[4782]: I0202 11:01:59.791243 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=35.791218194 podStartE2EDuration="35.791218194s" podCreationTimestamp="2026-02-02 11:01:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 11:01:59.783096141 +0000 UTC m=+1399.667288857" watchObservedRunningTime="2026-02-02 11:01:59.791218194 +0000 UTC m=+1399.675410910" Feb 02 11:02:00 crc kubenswrapper[4782]: I0202 11:02:00.723671 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b5c627ac-51a8-46a5-9ccd-62072de19909","Type":"ContainerStarted","Data":"a8e5b10f8708ed589b76977f28fa86e036b3cb9461dd8c306639ff0cc78ff17b"} Feb 02 11:02:00 crc kubenswrapper[4782]: I0202 11:02:00.723858 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Feb 02 11:02:00 crc kubenswrapper[4782]: I0202 11:02:00.746086 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=36.746067422 podStartE2EDuration="36.746067422s" podCreationTimestamp="2026-02-02 11:01:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 11:02:00.743040305 +0000 UTC m=+1400.627233021" watchObservedRunningTime="2026-02-02 11:02:00.746067422 +0000 UTC m=+1400.630260138" Feb 02 11:02:14 crc kubenswrapper[4782]: I0202 11:02:14.713837 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Feb 02 11:02:14 crc kubenswrapper[4782]: I0202 11:02:14.744846 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Feb 02 11:02:22 crc kubenswrapper[4782]: I0202 11:02:22.951301 4782 patch_prober.go:28] interesting pod/machine-config-daemon-bhdgk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 11:02:22 crc kubenswrapper[4782]: I0202 11:02:22.952902 4782 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 11:02:47 crc kubenswrapper[4782]: I0202 11:02:47.751498 4782 scope.go:117] "RemoveContainer" containerID="ce6913cbdfadb84393c08d16197643efcccd14ec7c86e1016dba2acac54b37e6" Feb 02 11:02:52 crc kubenswrapper[4782]: I0202 11:02:52.950919 4782 patch_prober.go:28] interesting pod/machine-config-daemon-bhdgk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 11:02:52 crc kubenswrapper[4782]: I0202 11:02:52.951439 4782 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 11:03:17 crc kubenswrapper[4782]: I0202 11:03:17.235928 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-m8c9k"] Feb 02 11:03:17 crc kubenswrapper[4782]: I0202 11:03:17.239001 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-m8c9k" Feb 02 11:03:17 crc kubenswrapper[4782]: I0202 11:03:17.283242 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-m8c9k"] Feb 02 11:03:17 crc kubenswrapper[4782]: I0202 11:03:17.440371 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bce8e421-84b9-4ee0-ad9e-2b3c2a796078-utilities\") pod \"redhat-operators-m8c9k\" (UID: \"bce8e421-84b9-4ee0-ad9e-2b3c2a796078\") " pod="openshift-marketplace/redhat-operators-m8c9k" Feb 02 11:03:17 crc kubenswrapper[4782]: I0202 11:03:17.440494 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6zxx\" (UniqueName: \"kubernetes.io/projected/bce8e421-84b9-4ee0-ad9e-2b3c2a796078-kube-api-access-b6zxx\") pod \"redhat-operators-m8c9k\" (UID: \"bce8e421-84b9-4ee0-ad9e-2b3c2a796078\") " pod="openshift-marketplace/redhat-operators-m8c9k" Feb 02 11:03:17 crc kubenswrapper[4782]: I0202 11:03:17.440580 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bce8e421-84b9-4ee0-ad9e-2b3c2a796078-catalog-content\") pod \"redhat-operators-m8c9k\" (UID: \"bce8e421-84b9-4ee0-ad9e-2b3c2a796078\") " pod="openshift-marketplace/redhat-operators-m8c9k" Feb 02 11:03:17 crc kubenswrapper[4782]: I0202 11:03:17.542477 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bce8e421-84b9-4ee0-ad9e-2b3c2a796078-catalog-content\") pod \"redhat-operators-m8c9k\" (UID: \"bce8e421-84b9-4ee0-ad9e-2b3c2a796078\") " pod="openshift-marketplace/redhat-operators-m8c9k" Feb 02 11:03:17 crc kubenswrapper[4782]: I0202 11:03:17.542630 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bce8e421-84b9-4ee0-ad9e-2b3c2a796078-utilities\") pod \"redhat-operators-m8c9k\" (UID: \"bce8e421-84b9-4ee0-ad9e-2b3c2a796078\") " pod="openshift-marketplace/redhat-operators-m8c9k" Feb 02 11:03:17 crc kubenswrapper[4782]: I0202 11:03:17.542679 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6zxx\" (UniqueName: \"kubernetes.io/projected/bce8e421-84b9-4ee0-ad9e-2b3c2a796078-kube-api-access-b6zxx\") pod \"redhat-operators-m8c9k\" (UID: \"bce8e421-84b9-4ee0-ad9e-2b3c2a796078\") " pod="openshift-marketplace/redhat-operators-m8c9k" Feb 02 11:03:17 crc kubenswrapper[4782]: I0202 11:03:17.542975 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bce8e421-84b9-4ee0-ad9e-2b3c2a796078-catalog-content\") pod \"redhat-operators-m8c9k\" (UID: \"bce8e421-84b9-4ee0-ad9e-2b3c2a796078\") " pod="openshift-marketplace/redhat-operators-m8c9k" Feb 02 11:03:17 crc kubenswrapper[4782]: I0202 11:03:17.543021 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bce8e421-84b9-4ee0-ad9e-2b3c2a796078-utilities\") pod \"redhat-operators-m8c9k\" (UID: \"bce8e421-84b9-4ee0-ad9e-2b3c2a796078\") " pod="openshift-marketplace/redhat-operators-m8c9k" Feb 02 11:03:17 crc kubenswrapper[4782]: I0202 11:03:17.562521 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-b6zxx\" (UniqueName: \"kubernetes.io/projected/bce8e421-84b9-4ee0-ad9e-2b3c2a796078-kube-api-access-b6zxx\") pod \"redhat-operators-m8c9k\" (UID: \"bce8e421-84b9-4ee0-ad9e-2b3c2a796078\") " pod="openshift-marketplace/redhat-operators-m8c9k" Feb 02 11:03:17 crc kubenswrapper[4782]: I0202 11:03:17.564658 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-m8c9k" Feb 02 11:03:18 crc kubenswrapper[4782]: I0202 11:03:18.030906 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-m8c9k"] Feb 02 11:03:18 crc kubenswrapper[4782]: I0202 11:03:18.386325 4782 generic.go:334] "Generic (PLEG): container finished" podID="bce8e421-84b9-4ee0-ad9e-2b3c2a796078" containerID="ef01010eb4e8b6106d0c73bc61dfd2027e920cedf0a46ae77001a60bc7c81e16" exitCode=0 Feb 02 11:03:18 crc kubenswrapper[4782]: I0202 11:03:18.386407 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m8c9k" event={"ID":"bce8e421-84b9-4ee0-ad9e-2b3c2a796078","Type":"ContainerDied","Data":"ef01010eb4e8b6106d0c73bc61dfd2027e920cedf0a46ae77001a60bc7c81e16"} Feb 02 11:03:18 crc kubenswrapper[4782]: I0202 11:03:18.386465 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m8c9k" event={"ID":"bce8e421-84b9-4ee0-ad9e-2b3c2a796078","Type":"ContainerStarted","Data":"71b29af3479b58a4f03f268e4531acdc74483b21f89fd5cf404c553a55ac74b5"} Feb 02 11:03:19 crc kubenswrapper[4782]: I0202 11:03:19.400011 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m8c9k" event={"ID":"bce8e421-84b9-4ee0-ad9e-2b3c2a796078","Type":"ContainerStarted","Data":"d9bc9d85801cbec2af1c48a5469e5af98036013aef50feb743d0dbdc5520d742"} Feb 02 11:03:22 crc kubenswrapper[4782]: I0202 11:03:22.951438 4782 patch_prober.go:28] interesting pod/machine-config-daemon-bhdgk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 11:03:22 crc kubenswrapper[4782]: I0202 11:03:22.952010 4782 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 11:03:22 crc kubenswrapper[4782]: I0202 11:03:22.952054 4782 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" Feb 02 11:03:22 crc kubenswrapper[4782]: I0202 11:03:22.952860 4782 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9e7f3d9f7d6457b5c614828f06a2a5456dc06adf6cf2e31e022d381663249dca"} pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 11:03:22 crc kubenswrapper[4782]: I0202 11:03:22.952918 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" containerName="machine-config-daemon" 
containerID="cri-o://9e7f3d9f7d6457b5c614828f06a2a5456dc06adf6cf2e31e022d381663249dca" gracePeriod=600 Feb 02 11:03:23 crc kubenswrapper[4782]: I0202 11:03:23.433856 4782 generic.go:334] "Generic (PLEG): container finished" podID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" containerID="9e7f3d9f7d6457b5c614828f06a2a5456dc06adf6cf2e31e022d381663249dca" exitCode=0 Feb 02 11:03:23 crc kubenswrapper[4782]: I0202 11:03:23.433909 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" event={"ID":"7919e98f-cc47-4f3c-9c53-6313850ea7b8","Type":"ContainerDied","Data":"9e7f3d9f7d6457b5c614828f06a2a5456dc06adf6cf2e31e022d381663249dca"} Feb 02 11:03:23 crc kubenswrapper[4782]: I0202 11:03:23.433948 4782 scope.go:117] "RemoveContainer" containerID="cc93bfcd857ff139ba103c2136bd4c7838f73ea68a2b8fc097a6c493cab92dd0" Feb 02 11:03:24 crc kubenswrapper[4782]: I0202 11:03:24.446384 4782 generic.go:334] "Generic (PLEG): container finished" podID="bce8e421-84b9-4ee0-ad9e-2b3c2a796078" containerID="d9bc9d85801cbec2af1c48a5469e5af98036013aef50feb743d0dbdc5520d742" exitCode=0 Feb 02 11:03:24 crc kubenswrapper[4782]: I0202 11:03:24.446464 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m8c9k" event={"ID":"bce8e421-84b9-4ee0-ad9e-2b3c2a796078","Type":"ContainerDied","Data":"d9bc9d85801cbec2af1c48a5469e5af98036013aef50feb743d0dbdc5520d742"} Feb 02 11:03:24 crc kubenswrapper[4782]: I0202 11:03:24.452045 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" event={"ID":"7919e98f-cc47-4f3c-9c53-6313850ea7b8","Type":"ContainerStarted","Data":"5bd9469df7c42cfd147763cb8f1b67e82d85e708d8dde6eea1a93320f7dbc9c8"} Feb 02 11:03:25 crc kubenswrapper[4782]: I0202 11:03:25.467294 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m8c9k" event={"ID":"bce8e421-84b9-4ee0-ad9e-2b3c2a796078","Type":"ContainerStarted","Data":"caa47f71c94962c3730999f349623bbbb309370b473743e36757fa416481e0ec"} Feb 02 11:03:25 crc kubenswrapper[4782]: I0202 11:03:25.492621 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-m8c9k" podStartSLOduration=1.922370742 podStartE2EDuration="8.49259919s" podCreationTimestamp="2026-02-02 11:03:17 +0000 UTC" firstStartedPulling="2026-02-02 11:03:18.388454727 +0000 UTC m=+1478.272647443" lastFinishedPulling="2026-02-02 11:03:24.958683175 +0000 UTC m=+1484.842875891" observedRunningTime="2026-02-02 11:03:25.484620571 +0000 UTC m=+1485.368813307" watchObservedRunningTime="2026-02-02 11:03:25.49259919 +0000 UTC m=+1485.376791906" Feb 02 11:03:27 crc kubenswrapper[4782]: I0202 11:03:27.594758 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-m8c9k" Feb 02 11:03:27 crc kubenswrapper[4782]: I0202 11:03:27.595139 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-m8c9k" Feb 02 11:03:28 crc kubenswrapper[4782]: I0202 11:03:28.651684 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-m8c9k" podUID="bce8e421-84b9-4ee0-ad9e-2b3c2a796078" containerName="registry-server" probeResult="failure" output=< Feb 02 11:03:28 crc kubenswrapper[4782]: timeout: failed to connect service ":50051" within 1s Feb 02 11:03:28 crc kubenswrapper[4782]: > Feb 02 11:03:38 crc 
Feb 02 11:03:38 crc kubenswrapper[4782]: I0202 11:03:38.611051 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-m8c9k" podUID="bce8e421-84b9-4ee0-ad9e-2b3c2a796078" containerName="registry-server" probeResult="failure" output=<
Feb 02 11:03:38 crc kubenswrapper[4782]: timeout: failed to connect service ":50051" within 1s
Feb 02 11:03:38 crc kubenswrapper[4782]: >
Feb 02 11:03:47 crc kubenswrapper[4782]: I0202 11:03:47.618691 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-m8c9k"
Feb 02 11:03:47 crc kubenswrapper[4782]: I0202 11:03:47.666104 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-m8c9k"
Feb 02 11:03:47 crc kubenswrapper[4782]: I0202 11:03:47.843006 4782 scope.go:117] "RemoveContainer" containerID="602ab9da4d7f46c94dc61771e2c8b8b42a379bdff5c5bea8faa66082cc751118"
Feb 02 11:03:47 crc kubenswrapper[4782]: I0202 11:03:47.872969 4782 scope.go:117] "RemoveContainer" containerID="d77ce47c81331449fe1e66732ce31fcd1c20618737ae71f8b83041e70b41f489"
Feb 02 11:03:47 crc kubenswrapper[4782]: I0202 11:03:47.924018 4782 scope.go:117] "RemoveContainer" containerID="be4a6da8c7bc821537f4458c73cbb541469bc749b77b4d5f396a7e71bf22fd01"
Feb 02 11:03:48 crc kubenswrapper[4782]: I0202 11:03:48.441100 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-m8c9k"]
Feb 02 11:03:48 crc kubenswrapper[4782]: I0202 11:03:48.805732 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-m8c9k" podUID="bce8e421-84b9-4ee0-ad9e-2b3c2a796078" containerName="registry-server" containerID="cri-o://caa47f71c94962c3730999f349623bbbb309370b473743e36757fa416481e0ec" gracePeriod=2
Feb 02 11:03:49 crc kubenswrapper[4782]: I0202 11:03:49.271331 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-m8c9k"
Feb 02 11:03:49 crc kubenswrapper[4782]: I0202 11:03:49.400714 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b6zxx\" (UniqueName: \"kubernetes.io/projected/bce8e421-84b9-4ee0-ad9e-2b3c2a796078-kube-api-access-b6zxx\") pod \"bce8e421-84b9-4ee0-ad9e-2b3c2a796078\" (UID: \"bce8e421-84b9-4ee0-ad9e-2b3c2a796078\") "
Feb 02 11:03:49 crc kubenswrapper[4782]: I0202 11:03:49.401084 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bce8e421-84b9-4ee0-ad9e-2b3c2a796078-utilities\") pod \"bce8e421-84b9-4ee0-ad9e-2b3c2a796078\" (UID: \"bce8e421-84b9-4ee0-ad9e-2b3c2a796078\") "
Feb 02 11:03:49 crc kubenswrapper[4782]: I0202 11:03:49.401136 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bce8e421-84b9-4ee0-ad9e-2b3c2a796078-catalog-content\") pod \"bce8e421-84b9-4ee0-ad9e-2b3c2a796078\" (UID: \"bce8e421-84b9-4ee0-ad9e-2b3c2a796078\") "
Feb 02 11:03:49 crc kubenswrapper[4782]: I0202 11:03:49.402085 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bce8e421-84b9-4ee0-ad9e-2b3c2a796078-utilities" (OuterVolumeSpecName: "utilities") pod "bce8e421-84b9-4ee0-ad9e-2b3c2a796078" (UID: "bce8e421-84b9-4ee0-ad9e-2b3c2a796078"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:03:49 crc kubenswrapper[4782]: I0202 11:03:49.406489 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bce8e421-84b9-4ee0-ad9e-2b3c2a796078-kube-api-access-b6zxx" (OuterVolumeSpecName: "kube-api-access-b6zxx") pod "bce8e421-84b9-4ee0-ad9e-2b3c2a796078" (UID: "bce8e421-84b9-4ee0-ad9e-2b3c2a796078"). InnerVolumeSpecName "kube-api-access-b6zxx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:03:49 crc kubenswrapper[4782]: I0202 11:03:49.504296 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b6zxx\" (UniqueName: \"kubernetes.io/projected/bce8e421-84b9-4ee0-ad9e-2b3c2a796078-kube-api-access-b6zxx\") on node \"crc\" DevicePath \"\"" Feb 02 11:03:49 crc kubenswrapper[4782]: I0202 11:03:49.504413 4782 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bce8e421-84b9-4ee0-ad9e-2b3c2a796078-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 11:03:49 crc kubenswrapper[4782]: I0202 11:03:49.519883 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bce8e421-84b9-4ee0-ad9e-2b3c2a796078-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bce8e421-84b9-4ee0-ad9e-2b3c2a796078" (UID: "bce8e421-84b9-4ee0-ad9e-2b3c2a796078"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:03:49 crc kubenswrapper[4782]: I0202 11:03:49.605693 4782 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bce8e421-84b9-4ee0-ad9e-2b3c2a796078-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 11:03:49 crc kubenswrapper[4782]: I0202 11:03:49.817712 4782 generic.go:334] "Generic (PLEG): container finished" podID="bce8e421-84b9-4ee0-ad9e-2b3c2a796078" containerID="caa47f71c94962c3730999f349623bbbb309370b473743e36757fa416481e0ec" exitCode=0 Feb 02 11:03:49 crc kubenswrapper[4782]: I0202 11:03:49.817765 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m8c9k" event={"ID":"bce8e421-84b9-4ee0-ad9e-2b3c2a796078","Type":"ContainerDied","Data":"caa47f71c94962c3730999f349623bbbb309370b473743e36757fa416481e0ec"} Feb 02 11:03:49 crc kubenswrapper[4782]: I0202 11:03:49.817798 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m8c9k" event={"ID":"bce8e421-84b9-4ee0-ad9e-2b3c2a796078","Type":"ContainerDied","Data":"71b29af3479b58a4f03f268e4531acdc74483b21f89fd5cf404c553a55ac74b5"} Feb 02 11:03:49 crc kubenswrapper[4782]: I0202 11:03:49.817795 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-m8c9k" Feb 02 11:03:49 crc kubenswrapper[4782]: I0202 11:03:49.817878 4782 scope.go:117] "RemoveContainer" containerID="caa47f71c94962c3730999f349623bbbb309370b473743e36757fa416481e0ec" Feb 02 11:03:49 crc kubenswrapper[4782]: I0202 11:03:49.839676 4782 scope.go:117] "RemoveContainer" containerID="d9bc9d85801cbec2af1c48a5469e5af98036013aef50feb743d0dbdc5520d742" Feb 02 11:03:49 crc kubenswrapper[4782]: I0202 11:03:49.855388 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-m8c9k"] Feb 02 11:03:49 crc kubenswrapper[4782]: I0202 11:03:49.862865 4782 scope.go:117] "RemoveContainer" containerID="ef01010eb4e8b6106d0c73bc61dfd2027e920cedf0a46ae77001a60bc7c81e16" Feb 02 11:03:49 crc kubenswrapper[4782]: I0202 11:03:49.866255 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-m8c9k"] Feb 02 11:03:49 crc kubenswrapper[4782]: I0202 11:03:49.910325 4782 scope.go:117] "RemoveContainer" containerID="caa47f71c94962c3730999f349623bbbb309370b473743e36757fa416481e0ec" Feb 02 11:03:49 crc kubenswrapper[4782]: E0202 11:03:49.910842 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"caa47f71c94962c3730999f349623bbbb309370b473743e36757fa416481e0ec\": container with ID starting with caa47f71c94962c3730999f349623bbbb309370b473743e36757fa416481e0ec not found: ID does not exist" containerID="caa47f71c94962c3730999f349623bbbb309370b473743e36757fa416481e0ec" Feb 02 11:03:49 crc kubenswrapper[4782]: I0202 11:03:49.910873 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"caa47f71c94962c3730999f349623bbbb309370b473743e36757fa416481e0ec"} err="failed to get container status \"caa47f71c94962c3730999f349623bbbb309370b473743e36757fa416481e0ec\": rpc error: code = NotFound desc = could not find container \"caa47f71c94962c3730999f349623bbbb309370b473743e36757fa416481e0ec\": container with ID starting with caa47f71c94962c3730999f349623bbbb309370b473743e36757fa416481e0ec not found: ID does not exist" Feb 02 11:03:49 crc kubenswrapper[4782]: I0202 11:03:49.910892 4782 scope.go:117] "RemoveContainer" containerID="d9bc9d85801cbec2af1c48a5469e5af98036013aef50feb743d0dbdc5520d742" Feb 02 11:03:49 crc kubenswrapper[4782]: E0202 11:03:49.911503 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d9bc9d85801cbec2af1c48a5469e5af98036013aef50feb743d0dbdc5520d742\": container with ID starting with d9bc9d85801cbec2af1c48a5469e5af98036013aef50feb743d0dbdc5520d742 not found: ID does not exist" containerID="d9bc9d85801cbec2af1c48a5469e5af98036013aef50feb743d0dbdc5520d742" Feb 02 11:03:49 crc kubenswrapper[4782]: I0202 11:03:49.911551 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d9bc9d85801cbec2af1c48a5469e5af98036013aef50feb743d0dbdc5520d742"} err="failed to get container status \"d9bc9d85801cbec2af1c48a5469e5af98036013aef50feb743d0dbdc5520d742\": rpc error: code = NotFound desc = could not find container \"d9bc9d85801cbec2af1c48a5469e5af98036013aef50feb743d0dbdc5520d742\": container with ID starting with d9bc9d85801cbec2af1c48a5469e5af98036013aef50feb743d0dbdc5520d742 not found: ID does not exist" Feb 02 11:03:49 crc kubenswrapper[4782]: I0202 11:03:49.911586 4782 scope.go:117] "RemoveContainer" 
containerID="ef01010eb4e8b6106d0c73bc61dfd2027e920cedf0a46ae77001a60bc7c81e16" Feb 02 11:03:49 crc kubenswrapper[4782]: E0202 11:03:49.911929 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ef01010eb4e8b6106d0c73bc61dfd2027e920cedf0a46ae77001a60bc7c81e16\": container with ID starting with ef01010eb4e8b6106d0c73bc61dfd2027e920cedf0a46ae77001a60bc7c81e16 not found: ID does not exist" containerID="ef01010eb4e8b6106d0c73bc61dfd2027e920cedf0a46ae77001a60bc7c81e16" Feb 02 11:03:49 crc kubenswrapper[4782]: I0202 11:03:49.911956 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef01010eb4e8b6106d0c73bc61dfd2027e920cedf0a46ae77001a60bc7c81e16"} err="failed to get container status \"ef01010eb4e8b6106d0c73bc61dfd2027e920cedf0a46ae77001a60bc7c81e16\": rpc error: code = NotFound desc = could not find container \"ef01010eb4e8b6106d0c73bc61dfd2027e920cedf0a46ae77001a60bc7c81e16\": container with ID starting with ef01010eb4e8b6106d0c73bc61dfd2027e920cedf0a46ae77001a60bc7c81e16 not found: ID does not exist" Feb 02 11:03:50 crc kubenswrapper[4782]: I0202 11:03:50.832140 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bce8e421-84b9-4ee0-ad9e-2b3c2a796078" path="/var/lib/kubelet/pods/bce8e421-84b9-4ee0-ad9e-2b3c2a796078/volumes" Feb 02 11:05:13 crc kubenswrapper[4782]: I0202 11:05:13.578610 4782 generic.go:334] "Generic (PLEG): container finished" podID="37960174-d26b-460f-abd9-934dee1ecc8c" containerID="47940927ded7a9aac258b8c6a3364ef69283f34e697e95ad52e93cc9f65a9e0c" exitCode=0 Feb 02 11:05:13 crc kubenswrapper[4782]: I0202 11:05:13.578698 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-cldvc" event={"ID":"37960174-d26b-460f-abd9-934dee1ecc8c","Type":"ContainerDied","Data":"47940927ded7a9aac258b8c6a3364ef69283f34e697e95ad52e93cc9f65a9e0c"} Feb 02 11:05:14 crc kubenswrapper[4782]: I0202 11:05:14.997595 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-cldvc" Feb 02 11:05:15 crc kubenswrapper[4782]: I0202 11:05:15.082259 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/37960174-d26b-460f-abd9-934dee1ecc8c-ssh-key-openstack-edpm-ipam\") pod \"37960174-d26b-460f-abd9-934dee1ecc8c\" (UID: \"37960174-d26b-460f-abd9-934dee1ecc8c\") " Feb 02 11:05:15 crc kubenswrapper[4782]: I0202 11:05:15.082325 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/37960174-d26b-460f-abd9-934dee1ecc8c-inventory\") pod \"37960174-d26b-460f-abd9-934dee1ecc8c\" (UID: \"37960174-d26b-460f-abd9-934dee1ecc8c\") " Feb 02 11:05:15 crc kubenswrapper[4782]: I0202 11:05:15.082404 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n5ns4\" (UniqueName: \"kubernetes.io/projected/37960174-d26b-460f-abd9-934dee1ecc8c-kube-api-access-n5ns4\") pod \"37960174-d26b-460f-abd9-934dee1ecc8c\" (UID: \"37960174-d26b-460f-abd9-934dee1ecc8c\") " Feb 02 11:05:15 crc kubenswrapper[4782]: I0202 11:05:15.082550 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37960174-d26b-460f-abd9-934dee1ecc8c-bootstrap-combined-ca-bundle\") pod \"37960174-d26b-460f-abd9-934dee1ecc8c\" (UID: \"37960174-d26b-460f-abd9-934dee1ecc8c\") " Feb 02 11:05:15 crc kubenswrapper[4782]: I0202 11:05:15.087486 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37960174-d26b-460f-abd9-934dee1ecc8c-kube-api-access-n5ns4" (OuterVolumeSpecName: "kube-api-access-n5ns4") pod "37960174-d26b-460f-abd9-934dee1ecc8c" (UID: "37960174-d26b-460f-abd9-934dee1ecc8c"). InnerVolumeSpecName "kube-api-access-n5ns4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:05:15 crc kubenswrapper[4782]: I0202 11:05:15.088830 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37960174-d26b-460f-abd9-934dee1ecc8c-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "37960174-d26b-460f-abd9-934dee1ecc8c" (UID: "37960174-d26b-460f-abd9-934dee1ecc8c"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:05:15 crc kubenswrapper[4782]: I0202 11:05:15.110800 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37960174-d26b-460f-abd9-934dee1ecc8c-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "37960174-d26b-460f-abd9-934dee1ecc8c" (UID: "37960174-d26b-460f-abd9-934dee1ecc8c"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:05:15 crc kubenswrapper[4782]: I0202 11:05:15.111200 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37960174-d26b-460f-abd9-934dee1ecc8c-inventory" (OuterVolumeSpecName: "inventory") pod "37960174-d26b-460f-abd9-934dee1ecc8c" (UID: "37960174-d26b-460f-abd9-934dee1ecc8c"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:05:15 crc kubenswrapper[4782]: I0202 11:05:15.184777 4782 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37960174-d26b-460f-abd9-934dee1ecc8c-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 11:05:15 crc kubenswrapper[4782]: I0202 11:05:15.184817 4782 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/37960174-d26b-460f-abd9-934dee1ecc8c-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 02 11:05:15 crc kubenswrapper[4782]: I0202 11:05:15.184829 4782 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/37960174-d26b-460f-abd9-934dee1ecc8c-inventory\") on node \"crc\" DevicePath \"\"" Feb 02 11:05:15 crc kubenswrapper[4782]: I0202 11:05:15.184838 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n5ns4\" (UniqueName: \"kubernetes.io/projected/37960174-d26b-460f-abd9-934dee1ecc8c-kube-api-access-n5ns4\") on node \"crc\" DevicePath \"\"" Feb 02 11:05:15 crc kubenswrapper[4782]: I0202 11:05:15.595608 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-cldvc" event={"ID":"37960174-d26b-460f-abd9-934dee1ecc8c","Type":"ContainerDied","Data":"9892e007cdd650bc40bcfc18f9718fe4bea24b499dbb56ceb32f38212b4a0b4c"} Feb 02 11:05:15 crc kubenswrapper[4782]: I0202 11:05:15.595859 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9892e007cdd650bc40bcfc18f9718fe4bea24b499dbb56ceb32f38212b4a0b4c" Feb 02 11:05:15 crc kubenswrapper[4782]: I0202 11:05:15.595840 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-cldvc" Feb 02 11:05:15 crc kubenswrapper[4782]: I0202 11:05:15.682876 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-9xz78"] Feb 02 11:05:15 crc kubenswrapper[4782]: E0202 11:05:15.683247 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bce8e421-84b9-4ee0-ad9e-2b3c2a796078" containerName="extract-content" Feb 02 11:05:15 crc kubenswrapper[4782]: I0202 11:05:15.683259 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="bce8e421-84b9-4ee0-ad9e-2b3c2a796078" containerName="extract-content" Feb 02 11:05:15 crc kubenswrapper[4782]: E0202 11:05:15.683273 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bce8e421-84b9-4ee0-ad9e-2b3c2a796078" containerName="extract-utilities" Feb 02 11:05:15 crc kubenswrapper[4782]: I0202 11:05:15.683279 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="bce8e421-84b9-4ee0-ad9e-2b3c2a796078" containerName="extract-utilities" Feb 02 11:05:15 crc kubenswrapper[4782]: E0202 11:05:15.683296 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37960174-d26b-460f-abd9-934dee1ecc8c" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 02 11:05:15 crc kubenswrapper[4782]: I0202 11:05:15.683305 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="37960174-d26b-460f-abd9-934dee1ecc8c" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 02 11:05:15 crc kubenswrapper[4782]: E0202 11:05:15.683323 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bce8e421-84b9-4ee0-ad9e-2b3c2a796078" containerName="registry-server" Feb 02 11:05:15 crc kubenswrapper[4782]: I0202 11:05:15.683329 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="bce8e421-84b9-4ee0-ad9e-2b3c2a796078" containerName="registry-server" Feb 02 11:05:15 crc kubenswrapper[4782]: I0202 11:05:15.683482 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="37960174-d26b-460f-abd9-934dee1ecc8c" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 02 11:05:15 crc kubenswrapper[4782]: I0202 11:05:15.683494 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="bce8e421-84b9-4ee0-ad9e-2b3c2a796078" containerName="registry-server" Feb 02 11:05:15 crc kubenswrapper[4782]: I0202 11:05:15.684088 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-9xz78" Feb 02 11:05:15 crc kubenswrapper[4782]: I0202 11:05:15.686307 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 02 11:05:15 crc kubenswrapper[4782]: I0202 11:05:15.686522 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 02 11:05:15 crc kubenswrapper[4782]: I0202 11:05:15.686748 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 02 11:05:15 crc kubenswrapper[4782]: I0202 11:05:15.697330 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jhgxt" Feb 02 11:05:15 crc kubenswrapper[4782]: I0202 11:05:15.704807 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-9xz78"] Feb 02 11:05:15 crc kubenswrapper[4782]: I0202 11:05:15.793224 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdg7q\" (UniqueName: \"kubernetes.io/projected/5a24fab5-51cc-4f0a-a823-c9748efd8410-kube-api-access-qdg7q\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-9xz78\" (UID: \"5a24fab5-51cc-4f0a-a823-c9748efd8410\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-9xz78" Feb 02 11:05:15 crc kubenswrapper[4782]: I0202 11:05:15.793348 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5a24fab5-51cc-4f0a-a823-c9748efd8410-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-9xz78\" (UID: \"5a24fab5-51cc-4f0a-a823-c9748efd8410\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-9xz78" Feb 02 11:05:15 crc kubenswrapper[4782]: I0202 11:05:15.793392 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5a24fab5-51cc-4f0a-a823-c9748efd8410-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-9xz78\" (UID: \"5a24fab5-51cc-4f0a-a823-c9748efd8410\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-9xz78" Feb 02 11:05:15 crc kubenswrapper[4782]: I0202 11:05:15.895328 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qdg7q\" (UniqueName: \"kubernetes.io/projected/5a24fab5-51cc-4f0a-a823-c9748efd8410-kube-api-access-qdg7q\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-9xz78\" (UID: \"5a24fab5-51cc-4f0a-a823-c9748efd8410\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-9xz78" Feb 02 11:05:15 crc kubenswrapper[4782]: I0202 11:05:15.895453 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5a24fab5-51cc-4f0a-a823-c9748efd8410-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-9xz78\" (UID: \"5a24fab5-51cc-4f0a-a823-c9748efd8410\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-9xz78" Feb 02 11:05:15 crc kubenswrapper[4782]: I0202 11:05:15.895502 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/5a24fab5-51cc-4f0a-a823-c9748efd8410-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-9xz78\" (UID: \"5a24fab5-51cc-4f0a-a823-c9748efd8410\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-9xz78" Feb 02 11:05:15 crc kubenswrapper[4782]: I0202 11:05:15.899145 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5a24fab5-51cc-4f0a-a823-c9748efd8410-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-9xz78\" (UID: \"5a24fab5-51cc-4f0a-a823-c9748efd8410\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-9xz78" Feb 02 11:05:15 crc kubenswrapper[4782]: I0202 11:05:15.900202 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5a24fab5-51cc-4f0a-a823-c9748efd8410-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-9xz78\" (UID: \"5a24fab5-51cc-4f0a-a823-c9748efd8410\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-9xz78" Feb 02 11:05:15 crc kubenswrapper[4782]: I0202 11:05:15.917557 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qdg7q\" (UniqueName: \"kubernetes.io/projected/5a24fab5-51cc-4f0a-a823-c9748efd8410-kube-api-access-qdg7q\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-9xz78\" (UID: \"5a24fab5-51cc-4f0a-a823-c9748efd8410\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-9xz78" Feb 02 11:05:16 crc kubenswrapper[4782]: I0202 11:05:16.000945 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-9xz78" Feb 02 11:05:16 crc kubenswrapper[4782]: I0202 11:05:16.539084 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-9xz78"] Feb 02 11:05:16 crc kubenswrapper[4782]: I0202 11:05:16.605144 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-9xz78" event={"ID":"5a24fab5-51cc-4f0a-a823-c9748efd8410","Type":"ContainerStarted","Data":"be7ce3ccb69a5745054007321e53120f9506050fe2a04ddb2bd3dfef26a90754"} Feb 02 11:05:17 crc kubenswrapper[4782]: I0202 11:05:17.618834 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-9xz78" event={"ID":"5a24fab5-51cc-4f0a-a823-c9748efd8410","Type":"ContainerStarted","Data":"65e9d4460cda578d85c98c8eacb6e70446a4235a9df02ce23f87a954cc50ea96"} Feb 02 11:05:41 crc kubenswrapper[4782]: I0202 11:05:41.080720 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-9xz78" podStartSLOduration=25.671238254 podStartE2EDuration="26.080700977s" podCreationTimestamp="2026-02-02 11:05:15 +0000 UTC" firstStartedPulling="2026-02-02 11:05:16.547993778 +0000 UTC m=+1596.432186504" lastFinishedPulling="2026-02-02 11:05:16.957456521 +0000 UTC m=+1596.841649227" observedRunningTime="2026-02-02 11:05:17.64131944 +0000 UTC m=+1597.525512156" watchObservedRunningTime="2026-02-02 11:05:41.080700977 +0000 UTC m=+1620.964893693" Feb 02 11:05:41 crc kubenswrapper[4782]: I0202 11:05:41.088298 4782 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/redhat-marketplace-gknrm"] Feb 02 11:05:41 crc kubenswrapper[4782]: I0202 11:05:41.090482 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gknrm" Feb 02 11:05:41 crc kubenswrapper[4782]: I0202 11:05:41.118221 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gknrm"] Feb 02 11:05:41 crc kubenswrapper[4782]: I0202 11:05:41.164440 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e4d69bec-195f-4b80-95b7-8e69a4259cc7-utilities\") pod \"redhat-marketplace-gknrm\" (UID: \"e4d69bec-195f-4b80-95b7-8e69a4259cc7\") " pod="openshift-marketplace/redhat-marketplace-gknrm" Feb 02 11:05:41 crc kubenswrapper[4782]: I0202 11:05:41.164519 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e4d69bec-195f-4b80-95b7-8e69a4259cc7-catalog-content\") pod \"redhat-marketplace-gknrm\" (UID: \"e4d69bec-195f-4b80-95b7-8e69a4259cc7\") " pod="openshift-marketplace/redhat-marketplace-gknrm" Feb 02 11:05:41 crc kubenswrapper[4782]: I0202 11:05:41.164558 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s26m7\" (UniqueName: \"kubernetes.io/projected/e4d69bec-195f-4b80-95b7-8e69a4259cc7-kube-api-access-s26m7\") pod \"redhat-marketplace-gknrm\" (UID: \"e4d69bec-195f-4b80-95b7-8e69a4259cc7\") " pod="openshift-marketplace/redhat-marketplace-gknrm" Feb 02 11:05:41 crc kubenswrapper[4782]: I0202 11:05:41.266301 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e4d69bec-195f-4b80-95b7-8e69a4259cc7-utilities\") pod \"redhat-marketplace-gknrm\" (UID: \"e4d69bec-195f-4b80-95b7-8e69a4259cc7\") " pod="openshift-marketplace/redhat-marketplace-gknrm" Feb 02 11:05:41 crc kubenswrapper[4782]: I0202 11:05:41.266386 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e4d69bec-195f-4b80-95b7-8e69a4259cc7-catalog-content\") pod \"redhat-marketplace-gknrm\" (UID: \"e4d69bec-195f-4b80-95b7-8e69a4259cc7\") " pod="openshift-marketplace/redhat-marketplace-gknrm" Feb 02 11:05:41 crc kubenswrapper[4782]: I0202 11:05:41.266416 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s26m7\" (UniqueName: \"kubernetes.io/projected/e4d69bec-195f-4b80-95b7-8e69a4259cc7-kube-api-access-s26m7\") pod \"redhat-marketplace-gknrm\" (UID: \"e4d69bec-195f-4b80-95b7-8e69a4259cc7\") " pod="openshift-marketplace/redhat-marketplace-gknrm" Feb 02 11:05:41 crc kubenswrapper[4782]: I0202 11:05:41.266861 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e4d69bec-195f-4b80-95b7-8e69a4259cc7-utilities\") pod \"redhat-marketplace-gknrm\" (UID: \"e4d69bec-195f-4b80-95b7-8e69a4259cc7\") " pod="openshift-marketplace/redhat-marketplace-gknrm" Feb 02 11:05:41 crc kubenswrapper[4782]: I0202 11:05:41.267228 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e4d69bec-195f-4b80-95b7-8e69a4259cc7-catalog-content\") pod \"redhat-marketplace-gknrm\" (UID: \"e4d69bec-195f-4b80-95b7-8e69a4259cc7\") 
" pod="openshift-marketplace/redhat-marketplace-gknrm" Feb 02 11:05:41 crc kubenswrapper[4782]: I0202 11:05:41.284467 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s26m7\" (UniqueName: \"kubernetes.io/projected/e4d69bec-195f-4b80-95b7-8e69a4259cc7-kube-api-access-s26m7\") pod \"redhat-marketplace-gknrm\" (UID: \"e4d69bec-195f-4b80-95b7-8e69a4259cc7\") " pod="openshift-marketplace/redhat-marketplace-gknrm" Feb 02 11:05:41 crc kubenswrapper[4782]: I0202 11:05:41.410176 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gknrm" Feb 02 11:05:42 crc kubenswrapper[4782]: I0202 11:05:42.080743 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gknrm"] Feb 02 11:05:42 crc kubenswrapper[4782]: I0202 11:05:42.859485 4782 generic.go:334] "Generic (PLEG): container finished" podID="e4d69bec-195f-4b80-95b7-8e69a4259cc7" containerID="d41fed06f6a9e80ce0f00d888b7232c8425123e53aed653f5e19d93aadfeb531" exitCode=0 Feb 02 11:05:42 crc kubenswrapper[4782]: I0202 11:05:42.859596 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gknrm" event={"ID":"e4d69bec-195f-4b80-95b7-8e69a4259cc7","Type":"ContainerDied","Data":"d41fed06f6a9e80ce0f00d888b7232c8425123e53aed653f5e19d93aadfeb531"} Feb 02 11:05:42 crc kubenswrapper[4782]: I0202 11:05:42.859848 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gknrm" event={"ID":"e4d69bec-195f-4b80-95b7-8e69a4259cc7","Type":"ContainerStarted","Data":"9fafdbb90ddb155007c16973867b739af133dc546a62267221c20a82aadd27f2"} Feb 02 11:05:42 crc kubenswrapper[4782]: I0202 11:05:42.862472 4782 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 02 11:05:43 crc kubenswrapper[4782]: I0202 11:05:43.870733 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gknrm" event={"ID":"e4d69bec-195f-4b80-95b7-8e69a4259cc7","Type":"ContainerStarted","Data":"122875593d7d45fb68fdba3a585aab08e15f8cf528d61d09ed40050cbd0d8187"} Feb 02 11:05:44 crc kubenswrapper[4782]: I0202 11:05:44.880839 4782 generic.go:334] "Generic (PLEG): container finished" podID="e4d69bec-195f-4b80-95b7-8e69a4259cc7" containerID="122875593d7d45fb68fdba3a585aab08e15f8cf528d61d09ed40050cbd0d8187" exitCode=0 Feb 02 11:05:44 crc kubenswrapper[4782]: I0202 11:05:44.880943 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gknrm" event={"ID":"e4d69bec-195f-4b80-95b7-8e69a4259cc7","Type":"ContainerDied","Data":"122875593d7d45fb68fdba3a585aab08e15f8cf528d61d09ed40050cbd0d8187"} Feb 02 11:05:45 crc kubenswrapper[4782]: I0202 11:05:45.891674 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gknrm" event={"ID":"e4d69bec-195f-4b80-95b7-8e69a4259cc7","Type":"ContainerStarted","Data":"a27cac3e59a75b1814744efb9a5a123845baad2ac852be800505bbe1204c108d"} Feb 02 11:05:45 crc kubenswrapper[4782]: I0202 11:05:45.913794 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-gknrm" podStartSLOduration=2.5067260449999997 podStartE2EDuration="4.913775026s" podCreationTimestamp="2026-02-02 11:05:41 +0000 UTC" firstStartedPulling="2026-02-02 11:05:42.862173185 +0000 UTC m=+1622.746365891" lastFinishedPulling="2026-02-02 11:05:45.269222156 
+0000 UTC m=+1625.153414872" observedRunningTime="2026-02-02 11:05:45.911372157 +0000 UTC m=+1625.795564883" watchObservedRunningTime="2026-02-02 11:05:45.913775026 +0000 UTC m=+1625.797967732" Feb 02 11:05:51 crc kubenswrapper[4782]: I0202 11:05:51.410700 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-gknrm" Feb 02 11:05:51 crc kubenswrapper[4782]: I0202 11:05:51.411682 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-gknrm" Feb 02 11:05:51 crc kubenswrapper[4782]: I0202 11:05:51.463886 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-gknrm" Feb 02 11:05:51 crc kubenswrapper[4782]: I0202 11:05:51.988456 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-gknrm" Feb 02 11:05:52 crc kubenswrapper[4782]: I0202 11:05:52.067381 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gknrm"] Feb 02 11:05:52 crc kubenswrapper[4782]: I0202 11:05:52.951490 4782 patch_prober.go:28] interesting pod/machine-config-daemon-bhdgk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 11:05:52 crc kubenswrapper[4782]: I0202 11:05:52.951544 4782 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 11:05:53 crc kubenswrapper[4782]: I0202 11:05:53.953125 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-gknrm" podUID="e4d69bec-195f-4b80-95b7-8e69a4259cc7" containerName="registry-server" containerID="cri-o://a27cac3e59a75b1814744efb9a5a123845baad2ac852be800505bbe1204c108d" gracePeriod=2 Feb 02 11:05:54 crc kubenswrapper[4782]: I0202 11:05:54.442278 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gknrm" Feb 02 11:05:54 crc kubenswrapper[4782]: I0202 11:05:54.530068 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e4d69bec-195f-4b80-95b7-8e69a4259cc7-utilities\") pod \"e4d69bec-195f-4b80-95b7-8e69a4259cc7\" (UID: \"e4d69bec-195f-4b80-95b7-8e69a4259cc7\") " Feb 02 11:05:54 crc kubenswrapper[4782]: I0202 11:05:54.530111 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e4d69bec-195f-4b80-95b7-8e69a4259cc7-catalog-content\") pod \"e4d69bec-195f-4b80-95b7-8e69a4259cc7\" (UID: \"e4d69bec-195f-4b80-95b7-8e69a4259cc7\") " Feb 02 11:05:54 crc kubenswrapper[4782]: I0202 11:05:54.530161 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s26m7\" (UniqueName: \"kubernetes.io/projected/e4d69bec-195f-4b80-95b7-8e69a4259cc7-kube-api-access-s26m7\") pod \"e4d69bec-195f-4b80-95b7-8e69a4259cc7\" (UID: \"e4d69bec-195f-4b80-95b7-8e69a4259cc7\") " Feb 02 11:05:54 crc kubenswrapper[4782]: I0202 11:05:54.533653 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e4d69bec-195f-4b80-95b7-8e69a4259cc7-utilities" (OuterVolumeSpecName: "utilities") pod "e4d69bec-195f-4b80-95b7-8e69a4259cc7" (UID: "e4d69bec-195f-4b80-95b7-8e69a4259cc7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:05:54 crc kubenswrapper[4782]: I0202 11:05:54.535839 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4d69bec-195f-4b80-95b7-8e69a4259cc7-kube-api-access-s26m7" (OuterVolumeSpecName: "kube-api-access-s26m7") pod "e4d69bec-195f-4b80-95b7-8e69a4259cc7" (UID: "e4d69bec-195f-4b80-95b7-8e69a4259cc7"). InnerVolumeSpecName "kube-api-access-s26m7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:05:54 crc kubenswrapper[4782]: I0202 11:05:54.553467 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e4d69bec-195f-4b80-95b7-8e69a4259cc7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e4d69bec-195f-4b80-95b7-8e69a4259cc7" (UID: "e4d69bec-195f-4b80-95b7-8e69a4259cc7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:05:54 crc kubenswrapper[4782]: I0202 11:05:54.632558 4782 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e4d69bec-195f-4b80-95b7-8e69a4259cc7-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 11:05:54 crc kubenswrapper[4782]: I0202 11:05:54.632593 4782 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e4d69bec-195f-4b80-95b7-8e69a4259cc7-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 11:05:54 crc kubenswrapper[4782]: I0202 11:05:54.632605 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s26m7\" (UniqueName: \"kubernetes.io/projected/e4d69bec-195f-4b80-95b7-8e69a4259cc7-kube-api-access-s26m7\") on node \"crc\" DevicePath \"\"" Feb 02 11:05:54 crc kubenswrapper[4782]: I0202 11:05:54.964080 4782 generic.go:334] "Generic (PLEG): container finished" podID="e4d69bec-195f-4b80-95b7-8e69a4259cc7" containerID="a27cac3e59a75b1814744efb9a5a123845baad2ac852be800505bbe1204c108d" exitCode=0 Feb 02 11:05:54 crc kubenswrapper[4782]: I0202 11:05:54.964121 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gknrm" event={"ID":"e4d69bec-195f-4b80-95b7-8e69a4259cc7","Type":"ContainerDied","Data":"a27cac3e59a75b1814744efb9a5a123845baad2ac852be800505bbe1204c108d"} Feb 02 11:05:54 crc kubenswrapper[4782]: I0202 11:05:54.964154 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gknrm" event={"ID":"e4d69bec-195f-4b80-95b7-8e69a4259cc7","Type":"ContainerDied","Data":"9fafdbb90ddb155007c16973867b739af133dc546a62267221c20a82aadd27f2"} Feb 02 11:05:54 crc kubenswrapper[4782]: I0202 11:05:54.964171 4782 scope.go:117] "RemoveContainer" containerID="a27cac3e59a75b1814744efb9a5a123845baad2ac852be800505bbe1204c108d" Feb 02 11:05:54 crc kubenswrapper[4782]: I0202 11:05:54.964912 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gknrm" Feb 02 11:05:54 crc kubenswrapper[4782]: I0202 11:05:54.990714 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gknrm"] Feb 02 11:05:54 crc kubenswrapper[4782]: I0202 11:05:54.994257 4782 scope.go:117] "RemoveContainer" containerID="122875593d7d45fb68fdba3a585aab08e15f8cf528d61d09ed40050cbd0d8187" Feb 02 11:05:55 crc kubenswrapper[4782]: I0202 11:05:55.000124 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-gknrm"] Feb 02 11:05:55 crc kubenswrapper[4782]: I0202 11:05:55.014259 4782 scope.go:117] "RemoveContainer" containerID="d41fed06f6a9e80ce0f00d888b7232c8425123e53aed653f5e19d93aadfeb531" Feb 02 11:05:55 crc kubenswrapper[4782]: I0202 11:05:55.055133 4782 scope.go:117] "RemoveContainer" containerID="a27cac3e59a75b1814744efb9a5a123845baad2ac852be800505bbe1204c108d" Feb 02 11:05:55 crc kubenswrapper[4782]: E0202 11:05:55.055784 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a27cac3e59a75b1814744efb9a5a123845baad2ac852be800505bbe1204c108d\": container with ID starting with a27cac3e59a75b1814744efb9a5a123845baad2ac852be800505bbe1204c108d not found: ID does not exist" containerID="a27cac3e59a75b1814744efb9a5a123845baad2ac852be800505bbe1204c108d" Feb 02 11:05:55 crc kubenswrapper[4782]: I0202 11:05:55.055846 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a27cac3e59a75b1814744efb9a5a123845baad2ac852be800505bbe1204c108d"} err="failed to get container status \"a27cac3e59a75b1814744efb9a5a123845baad2ac852be800505bbe1204c108d\": rpc error: code = NotFound desc = could not find container \"a27cac3e59a75b1814744efb9a5a123845baad2ac852be800505bbe1204c108d\": container with ID starting with a27cac3e59a75b1814744efb9a5a123845baad2ac852be800505bbe1204c108d not found: ID does not exist" Feb 02 11:05:55 crc kubenswrapper[4782]: I0202 11:05:55.055896 4782 scope.go:117] "RemoveContainer" containerID="122875593d7d45fb68fdba3a585aab08e15f8cf528d61d09ed40050cbd0d8187" Feb 02 11:05:55 crc kubenswrapper[4782]: E0202 11:05:55.056373 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"122875593d7d45fb68fdba3a585aab08e15f8cf528d61d09ed40050cbd0d8187\": container with ID starting with 122875593d7d45fb68fdba3a585aab08e15f8cf528d61d09ed40050cbd0d8187 not found: ID does not exist" containerID="122875593d7d45fb68fdba3a585aab08e15f8cf528d61d09ed40050cbd0d8187" Feb 02 11:05:55 crc kubenswrapper[4782]: I0202 11:05:55.056413 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"122875593d7d45fb68fdba3a585aab08e15f8cf528d61d09ed40050cbd0d8187"} err="failed to get container status \"122875593d7d45fb68fdba3a585aab08e15f8cf528d61d09ed40050cbd0d8187\": rpc error: code = NotFound desc = could not find container \"122875593d7d45fb68fdba3a585aab08e15f8cf528d61d09ed40050cbd0d8187\": container with ID starting with 122875593d7d45fb68fdba3a585aab08e15f8cf528d61d09ed40050cbd0d8187 not found: ID does not exist" Feb 02 11:05:55 crc kubenswrapper[4782]: I0202 11:05:55.056438 4782 scope.go:117] "RemoveContainer" containerID="d41fed06f6a9e80ce0f00d888b7232c8425123e53aed653f5e19d93aadfeb531" Feb 02 11:05:55 crc kubenswrapper[4782]: E0202 11:05:55.056915 4782 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"d41fed06f6a9e80ce0f00d888b7232c8425123e53aed653f5e19d93aadfeb531\": container with ID starting with d41fed06f6a9e80ce0f00d888b7232c8425123e53aed653f5e19d93aadfeb531 not found: ID does not exist" containerID="d41fed06f6a9e80ce0f00d888b7232c8425123e53aed653f5e19d93aadfeb531" Feb 02 11:05:55 crc kubenswrapper[4782]: I0202 11:05:55.056951 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d41fed06f6a9e80ce0f00d888b7232c8425123e53aed653f5e19d93aadfeb531"} err="failed to get container status \"d41fed06f6a9e80ce0f00d888b7232c8425123e53aed653f5e19d93aadfeb531\": rpc error: code = NotFound desc = could not find container \"d41fed06f6a9e80ce0f00d888b7232c8425123e53aed653f5e19d93aadfeb531\": container with ID starting with d41fed06f6a9e80ce0f00d888b7232c8425123e53aed653f5e19d93aadfeb531 not found: ID does not exist" Feb 02 11:05:56 crc kubenswrapper[4782]: I0202 11:05:56.835589 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e4d69bec-195f-4b80-95b7-8e69a4259cc7" path="/var/lib/kubelet/pods/e4d69bec-195f-4b80-95b7-8e69a4259cc7/volumes" Feb 02 11:06:02 crc kubenswrapper[4782]: I0202 11:06:02.613168 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-rxp2h"] Feb 02 11:06:02 crc kubenswrapper[4782]: E0202 11:06:02.614273 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4d69bec-195f-4b80-95b7-8e69a4259cc7" containerName="extract-content" Feb 02 11:06:02 crc kubenswrapper[4782]: I0202 11:06:02.614291 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4d69bec-195f-4b80-95b7-8e69a4259cc7" containerName="extract-content" Feb 02 11:06:02 crc kubenswrapper[4782]: E0202 11:06:02.614317 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4d69bec-195f-4b80-95b7-8e69a4259cc7" containerName="extract-utilities" Feb 02 11:06:02 crc kubenswrapper[4782]: I0202 11:06:02.614325 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4d69bec-195f-4b80-95b7-8e69a4259cc7" containerName="extract-utilities" Feb 02 11:06:02 crc kubenswrapper[4782]: E0202 11:06:02.614341 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4d69bec-195f-4b80-95b7-8e69a4259cc7" containerName="registry-server" Feb 02 11:06:02 crc kubenswrapper[4782]: I0202 11:06:02.614349 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4d69bec-195f-4b80-95b7-8e69a4259cc7" containerName="registry-server" Feb 02 11:06:02 crc kubenswrapper[4782]: I0202 11:06:02.615303 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4d69bec-195f-4b80-95b7-8e69a4259cc7" containerName="registry-server" Feb 02 11:06:02 crc kubenswrapper[4782]: I0202 11:06:02.617029 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rxp2h" Feb 02 11:06:02 crc kubenswrapper[4782]: I0202 11:06:02.627831 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rxp2h"] Feb 02 11:06:02 crc kubenswrapper[4782]: I0202 11:06:02.681528 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8f72158-3325-454c-a8e2-64301e578f90-catalog-content\") pod \"community-operators-rxp2h\" (UID: \"a8f72158-3325-454c-a8e2-64301e578f90\") " pod="openshift-marketplace/community-operators-rxp2h" Feb 02 11:06:02 crc kubenswrapper[4782]: I0202 11:06:02.682133 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8f72158-3325-454c-a8e2-64301e578f90-utilities\") pod \"community-operators-rxp2h\" (UID: \"a8f72158-3325-454c-a8e2-64301e578f90\") " pod="openshift-marketplace/community-operators-rxp2h" Feb 02 11:06:02 crc kubenswrapper[4782]: I0202 11:06:02.682299 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vn28\" (UniqueName: \"kubernetes.io/projected/a8f72158-3325-454c-a8e2-64301e578f90-kube-api-access-6vn28\") pod \"community-operators-rxp2h\" (UID: \"a8f72158-3325-454c-a8e2-64301e578f90\") " pod="openshift-marketplace/community-operators-rxp2h" Feb 02 11:06:02 crc kubenswrapper[4782]: I0202 11:06:02.786388 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8f72158-3325-454c-a8e2-64301e578f90-utilities\") pod \"community-operators-rxp2h\" (UID: \"a8f72158-3325-454c-a8e2-64301e578f90\") " pod="openshift-marketplace/community-operators-rxp2h" Feb 02 11:06:02 crc kubenswrapper[4782]: I0202 11:06:02.786595 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6vn28\" (UniqueName: \"kubernetes.io/projected/a8f72158-3325-454c-a8e2-64301e578f90-kube-api-access-6vn28\") pod \"community-operators-rxp2h\" (UID: \"a8f72158-3325-454c-a8e2-64301e578f90\") " pod="openshift-marketplace/community-operators-rxp2h" Feb 02 11:06:02 crc kubenswrapper[4782]: I0202 11:06:02.786729 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8f72158-3325-454c-a8e2-64301e578f90-catalog-content\") pod \"community-operators-rxp2h\" (UID: \"a8f72158-3325-454c-a8e2-64301e578f90\") " pod="openshift-marketplace/community-operators-rxp2h" Feb 02 11:06:02 crc kubenswrapper[4782]: I0202 11:06:02.787181 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8f72158-3325-454c-a8e2-64301e578f90-catalog-content\") pod \"community-operators-rxp2h\" (UID: \"a8f72158-3325-454c-a8e2-64301e578f90\") " pod="openshift-marketplace/community-operators-rxp2h" Feb 02 11:06:02 crc kubenswrapper[4782]: I0202 11:06:02.788223 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8f72158-3325-454c-a8e2-64301e578f90-utilities\") pod \"community-operators-rxp2h\" (UID: \"a8f72158-3325-454c-a8e2-64301e578f90\") " pod="openshift-marketplace/community-operators-rxp2h" Feb 02 11:06:02 crc kubenswrapper[4782]: I0202 11:06:02.813376 4782 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-6vn28\" (UniqueName: \"kubernetes.io/projected/a8f72158-3325-454c-a8e2-64301e578f90-kube-api-access-6vn28\") pod \"community-operators-rxp2h\" (UID: \"a8f72158-3325-454c-a8e2-64301e578f90\") " pod="openshift-marketplace/community-operators-rxp2h" Feb 02 11:06:02 crc kubenswrapper[4782]: I0202 11:06:02.937112 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rxp2h" Feb 02 11:06:03 crc kubenswrapper[4782]: I0202 11:06:03.400428 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rxp2h"] Feb 02 11:06:04 crc kubenswrapper[4782]: I0202 11:06:04.060432 4782 generic.go:334] "Generic (PLEG): container finished" podID="a8f72158-3325-454c-a8e2-64301e578f90" containerID="0414ec1229053ce0ea4aeb70c202aa0b2584239a3b8f883ea41c9a4b097bec09" exitCode=0 Feb 02 11:06:04 crc kubenswrapper[4782]: I0202 11:06:04.060486 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rxp2h" event={"ID":"a8f72158-3325-454c-a8e2-64301e578f90","Type":"ContainerDied","Data":"0414ec1229053ce0ea4aeb70c202aa0b2584239a3b8f883ea41c9a4b097bec09"} Feb 02 11:06:04 crc kubenswrapper[4782]: I0202 11:06:04.060521 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rxp2h" event={"ID":"a8f72158-3325-454c-a8e2-64301e578f90","Type":"ContainerStarted","Data":"25e7c10e9704ca146ecae8522d451a174f5be6f5c2a9cbbfbede8c6d8d3ac3b8"} Feb 02 11:06:06 crc kubenswrapper[4782]: I0202 11:06:06.080537 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rxp2h" event={"ID":"a8f72158-3325-454c-a8e2-64301e578f90","Type":"ContainerStarted","Data":"8d0fe44c1ed9a863a0224b7a213e0da13f567d2d53c6c420c4981a180bd144eb"} Feb 02 11:06:07 crc kubenswrapper[4782]: I0202 11:06:07.092331 4782 generic.go:334] "Generic (PLEG): container finished" podID="a8f72158-3325-454c-a8e2-64301e578f90" containerID="8d0fe44c1ed9a863a0224b7a213e0da13f567d2d53c6c420c4981a180bd144eb" exitCode=0 Feb 02 11:06:07 crc kubenswrapper[4782]: I0202 11:06:07.092390 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rxp2h" event={"ID":"a8f72158-3325-454c-a8e2-64301e578f90","Type":"ContainerDied","Data":"8d0fe44c1ed9a863a0224b7a213e0da13f567d2d53c6c420c4981a180bd144eb"} Feb 02 11:06:08 crc kubenswrapper[4782]: I0202 11:06:08.104454 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rxp2h" event={"ID":"a8f72158-3325-454c-a8e2-64301e578f90","Type":"ContainerStarted","Data":"e8f8c21f278bba7eae234af2efe08b2d53849c5afe65a4655e72d857b2c73f0a"} Feb 02 11:06:08 crc kubenswrapper[4782]: I0202 11:06:08.130251 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-rxp2h" podStartSLOduration=2.680014686 podStartE2EDuration="6.13022871s" podCreationTimestamp="2026-02-02 11:06:02 +0000 UTC" firstStartedPulling="2026-02-02 11:06:04.063314766 +0000 UTC m=+1643.947507472" lastFinishedPulling="2026-02-02 11:06:07.51352878 +0000 UTC m=+1647.397721496" observedRunningTime="2026-02-02 11:06:08.120587283 +0000 UTC m=+1648.004780009" watchObservedRunningTime="2026-02-02 11:06:08.13022871 +0000 UTC m=+1648.014421426" Feb 02 11:06:12 crc kubenswrapper[4782]: I0202 11:06:12.937940 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-marketplace/community-operators-rxp2h" Feb 02 11:06:12 crc kubenswrapper[4782]: I0202 11:06:12.938427 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-rxp2h" Feb 02 11:06:12 crc kubenswrapper[4782]: I0202 11:06:12.984474 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-rxp2h" Feb 02 11:06:13 crc kubenswrapper[4782]: I0202 11:06:13.210302 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-rxp2h" Feb 02 11:06:13 crc kubenswrapper[4782]: I0202 11:06:13.282715 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rxp2h"] Feb 02 11:06:15 crc kubenswrapper[4782]: I0202 11:06:15.160319 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-rxp2h" podUID="a8f72158-3325-454c-a8e2-64301e578f90" containerName="registry-server" containerID="cri-o://e8f8c21f278bba7eae234af2efe08b2d53849c5afe65a4655e72d857b2c73f0a" gracePeriod=2 Feb 02 11:06:15 crc kubenswrapper[4782]: I0202 11:06:15.629895 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-crk2q"] Feb 02 11:06:15 crc kubenswrapper[4782]: I0202 11:06:15.631147 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rxp2h" Feb 02 11:06:15 crc kubenswrapper[4782]: I0202 11:06:15.631763 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-crk2q" Feb 02 11:06:15 crc kubenswrapper[4782]: I0202 11:06:15.657157 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-crk2q"] Feb 02 11:06:15 crc kubenswrapper[4782]: I0202 11:06:15.731071 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8f72158-3325-454c-a8e2-64301e578f90-catalog-content\") pod \"a8f72158-3325-454c-a8e2-64301e578f90\" (UID: \"a8f72158-3325-454c-a8e2-64301e578f90\") " Feb 02 11:06:15 crc kubenswrapper[4782]: I0202 11:06:15.731394 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8f72158-3325-454c-a8e2-64301e578f90-utilities\") pod \"a8f72158-3325-454c-a8e2-64301e578f90\" (UID: \"a8f72158-3325-454c-a8e2-64301e578f90\") " Feb 02 11:06:15 crc kubenswrapper[4782]: I0202 11:06:15.731426 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6vn28\" (UniqueName: \"kubernetes.io/projected/a8f72158-3325-454c-a8e2-64301e578f90-kube-api-access-6vn28\") pod \"a8f72158-3325-454c-a8e2-64301e578f90\" (UID: \"a8f72158-3325-454c-a8e2-64301e578f90\") " Feb 02 11:06:15 crc kubenswrapper[4782]: I0202 11:06:15.731660 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/16fe0977-c663-4e1a-97e3-7de4ae38df03-utilities\") pod \"certified-operators-crk2q\" (UID: \"16fe0977-c663-4e1a-97e3-7de4ae38df03\") " pod="openshift-marketplace/certified-operators-crk2q" Feb 02 11:06:15 crc kubenswrapper[4782]: I0202 11:06:15.731789 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-9dq2c\" (UniqueName: \"kubernetes.io/projected/16fe0977-c663-4e1a-97e3-7de4ae38df03-kube-api-access-9dq2c\") pod \"certified-operators-crk2q\" (UID: \"16fe0977-c663-4e1a-97e3-7de4ae38df03\") " pod="openshift-marketplace/certified-operators-crk2q" Feb 02 11:06:15 crc kubenswrapper[4782]: I0202 11:06:15.731879 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/16fe0977-c663-4e1a-97e3-7de4ae38df03-catalog-content\") pod \"certified-operators-crk2q\" (UID: \"16fe0977-c663-4e1a-97e3-7de4ae38df03\") " pod="openshift-marketplace/certified-operators-crk2q" Feb 02 11:06:15 crc kubenswrapper[4782]: I0202 11:06:15.732351 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a8f72158-3325-454c-a8e2-64301e578f90-utilities" (OuterVolumeSpecName: "utilities") pod "a8f72158-3325-454c-a8e2-64301e578f90" (UID: "a8f72158-3325-454c-a8e2-64301e578f90"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:06:15 crc kubenswrapper[4782]: I0202 11:06:15.753703 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8f72158-3325-454c-a8e2-64301e578f90-kube-api-access-6vn28" (OuterVolumeSpecName: "kube-api-access-6vn28") pod "a8f72158-3325-454c-a8e2-64301e578f90" (UID: "a8f72158-3325-454c-a8e2-64301e578f90"). InnerVolumeSpecName "kube-api-access-6vn28". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:06:15 crc kubenswrapper[4782]: I0202 11:06:15.794924 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a8f72158-3325-454c-a8e2-64301e578f90-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a8f72158-3325-454c-a8e2-64301e578f90" (UID: "a8f72158-3325-454c-a8e2-64301e578f90"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:06:15 crc kubenswrapper[4782]: I0202 11:06:15.834128 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/16fe0977-c663-4e1a-97e3-7de4ae38df03-catalog-content\") pod \"certified-operators-crk2q\" (UID: \"16fe0977-c663-4e1a-97e3-7de4ae38df03\") " pod="openshift-marketplace/certified-operators-crk2q" Feb 02 11:06:15 crc kubenswrapper[4782]: I0202 11:06:15.833623 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/16fe0977-c663-4e1a-97e3-7de4ae38df03-catalog-content\") pod \"certified-operators-crk2q\" (UID: \"16fe0977-c663-4e1a-97e3-7de4ae38df03\") " pod="openshift-marketplace/certified-operators-crk2q" Feb 02 11:06:15 crc kubenswrapper[4782]: I0202 11:06:15.834607 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/16fe0977-c663-4e1a-97e3-7de4ae38df03-utilities\") pod \"certified-operators-crk2q\" (UID: \"16fe0977-c663-4e1a-97e3-7de4ae38df03\") " pod="openshift-marketplace/certified-operators-crk2q" Feb 02 11:06:15 crc kubenswrapper[4782]: I0202 11:06:15.835035 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/16fe0977-c663-4e1a-97e3-7de4ae38df03-utilities\") pod \"certified-operators-crk2q\" (UID: \"16fe0977-c663-4e1a-97e3-7de4ae38df03\") " pod="openshift-marketplace/certified-operators-crk2q" Feb 02 11:06:15 crc kubenswrapper[4782]: I0202 11:06:15.835381 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9dq2c\" (UniqueName: \"kubernetes.io/projected/16fe0977-c663-4e1a-97e3-7de4ae38df03-kube-api-access-9dq2c\") pod \"certified-operators-crk2q\" (UID: \"16fe0977-c663-4e1a-97e3-7de4ae38df03\") " pod="openshift-marketplace/certified-operators-crk2q" Feb 02 11:06:15 crc kubenswrapper[4782]: I0202 11:06:15.835955 4782 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8f72158-3325-454c-a8e2-64301e578f90-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 11:06:15 crc kubenswrapper[4782]: I0202 11:06:15.836071 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6vn28\" (UniqueName: \"kubernetes.io/projected/a8f72158-3325-454c-a8e2-64301e578f90-kube-api-access-6vn28\") on node \"crc\" DevicePath \"\"" Feb 02 11:06:15 crc kubenswrapper[4782]: I0202 11:06:15.836166 4782 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8f72158-3325-454c-a8e2-64301e578f90-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 11:06:15 crc kubenswrapper[4782]: I0202 11:06:15.859190 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9dq2c\" (UniqueName: \"kubernetes.io/projected/16fe0977-c663-4e1a-97e3-7de4ae38df03-kube-api-access-9dq2c\") pod \"certified-operators-crk2q\" (UID: \"16fe0977-c663-4e1a-97e3-7de4ae38df03\") " pod="openshift-marketplace/certified-operators-crk2q" Feb 02 11:06:15 crc kubenswrapper[4782]: I0202 11:06:15.953140 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-crk2q" Feb 02 11:06:16 crc kubenswrapper[4782]: I0202 11:06:16.174591 4782 generic.go:334] "Generic (PLEG): container finished" podID="a8f72158-3325-454c-a8e2-64301e578f90" containerID="e8f8c21f278bba7eae234af2efe08b2d53849c5afe65a4655e72d857b2c73f0a" exitCode=0 Feb 02 11:06:16 crc kubenswrapper[4782]: I0202 11:06:16.174839 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rxp2h" Feb 02 11:06:16 crc kubenswrapper[4782]: I0202 11:06:16.174816 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rxp2h" event={"ID":"a8f72158-3325-454c-a8e2-64301e578f90","Type":"ContainerDied","Data":"e8f8c21f278bba7eae234af2efe08b2d53849c5afe65a4655e72d857b2c73f0a"} Feb 02 11:06:16 crc kubenswrapper[4782]: I0202 11:06:16.176492 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rxp2h" event={"ID":"a8f72158-3325-454c-a8e2-64301e578f90","Type":"ContainerDied","Data":"25e7c10e9704ca146ecae8522d451a174f5be6f5c2a9cbbfbede8c6d8d3ac3b8"} Feb 02 11:06:16 crc kubenswrapper[4782]: I0202 11:06:16.176523 4782 scope.go:117] "RemoveContainer" containerID="e8f8c21f278bba7eae234af2efe08b2d53849c5afe65a4655e72d857b2c73f0a" Feb 02 11:06:16 crc kubenswrapper[4782]: I0202 11:06:16.235692 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rxp2h"] Feb 02 11:06:16 crc kubenswrapper[4782]: I0202 11:06:16.242524 4782 scope.go:117] "RemoveContainer" containerID="8d0fe44c1ed9a863a0224b7a213e0da13f567d2d53c6c420c4981a180bd144eb" Feb 02 11:06:16 crc kubenswrapper[4782]: I0202 11:06:16.242729 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-rxp2h"] Feb 02 11:06:16 crc kubenswrapper[4782]: I0202 11:06:16.281795 4782 scope.go:117] "RemoveContainer" containerID="0414ec1229053ce0ea4aeb70c202aa0b2584239a3b8f883ea41c9a4b097bec09" Feb 02 11:06:16 crc kubenswrapper[4782]: I0202 11:06:16.344414 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-crk2q"] Feb 02 11:06:16 crc kubenswrapper[4782]: I0202 11:06:16.345527 4782 scope.go:117] "RemoveContainer" containerID="e8f8c21f278bba7eae234af2efe08b2d53849c5afe65a4655e72d857b2c73f0a" Feb 02 11:06:16 crc kubenswrapper[4782]: E0202 11:06:16.348971 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e8f8c21f278bba7eae234af2efe08b2d53849c5afe65a4655e72d857b2c73f0a\": container with ID starting with e8f8c21f278bba7eae234af2efe08b2d53849c5afe65a4655e72d857b2c73f0a not found: ID does not exist" containerID="e8f8c21f278bba7eae234af2efe08b2d53849c5afe65a4655e72d857b2c73f0a" Feb 02 11:06:16 crc kubenswrapper[4782]: I0202 11:06:16.349014 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e8f8c21f278bba7eae234af2efe08b2d53849c5afe65a4655e72d857b2c73f0a"} err="failed to get container status \"e8f8c21f278bba7eae234af2efe08b2d53849c5afe65a4655e72d857b2c73f0a\": rpc error: code = NotFound desc = could not find container \"e8f8c21f278bba7eae234af2efe08b2d53849c5afe65a4655e72d857b2c73f0a\": container with ID starting with e8f8c21f278bba7eae234af2efe08b2d53849c5afe65a4655e72d857b2c73f0a not found: ID does not exist" Feb 02 11:06:16 crc kubenswrapper[4782]: I0202 11:06:16.349042 4782 scope.go:117] 
"RemoveContainer" containerID="8d0fe44c1ed9a863a0224b7a213e0da13f567d2d53c6c420c4981a180bd144eb" Feb 02 11:06:16 crc kubenswrapper[4782]: E0202 11:06:16.349557 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d0fe44c1ed9a863a0224b7a213e0da13f567d2d53c6c420c4981a180bd144eb\": container with ID starting with 8d0fe44c1ed9a863a0224b7a213e0da13f567d2d53c6c420c4981a180bd144eb not found: ID does not exist" containerID="8d0fe44c1ed9a863a0224b7a213e0da13f567d2d53c6c420c4981a180bd144eb" Feb 02 11:06:16 crc kubenswrapper[4782]: I0202 11:06:16.349586 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d0fe44c1ed9a863a0224b7a213e0da13f567d2d53c6c420c4981a180bd144eb"} err="failed to get container status \"8d0fe44c1ed9a863a0224b7a213e0da13f567d2d53c6c420c4981a180bd144eb\": rpc error: code = NotFound desc = could not find container \"8d0fe44c1ed9a863a0224b7a213e0da13f567d2d53c6c420c4981a180bd144eb\": container with ID starting with 8d0fe44c1ed9a863a0224b7a213e0da13f567d2d53c6c420c4981a180bd144eb not found: ID does not exist" Feb 02 11:06:16 crc kubenswrapper[4782]: I0202 11:06:16.349603 4782 scope.go:117] "RemoveContainer" containerID="0414ec1229053ce0ea4aeb70c202aa0b2584239a3b8f883ea41c9a4b097bec09" Feb 02 11:06:16 crc kubenswrapper[4782]: E0202 11:06:16.351087 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0414ec1229053ce0ea4aeb70c202aa0b2584239a3b8f883ea41c9a4b097bec09\": container with ID starting with 0414ec1229053ce0ea4aeb70c202aa0b2584239a3b8f883ea41c9a4b097bec09 not found: ID does not exist" containerID="0414ec1229053ce0ea4aeb70c202aa0b2584239a3b8f883ea41c9a4b097bec09" Feb 02 11:06:16 crc kubenswrapper[4782]: I0202 11:06:16.351118 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0414ec1229053ce0ea4aeb70c202aa0b2584239a3b8f883ea41c9a4b097bec09"} err="failed to get container status \"0414ec1229053ce0ea4aeb70c202aa0b2584239a3b8f883ea41c9a4b097bec09\": rpc error: code = NotFound desc = could not find container \"0414ec1229053ce0ea4aeb70c202aa0b2584239a3b8f883ea41c9a4b097bec09\": container with ID starting with 0414ec1229053ce0ea4aeb70c202aa0b2584239a3b8f883ea41c9a4b097bec09 not found: ID does not exist" Feb 02 11:06:16 crc kubenswrapper[4782]: I0202 11:06:16.832192 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8f72158-3325-454c-a8e2-64301e578f90" path="/var/lib/kubelet/pods/a8f72158-3325-454c-a8e2-64301e578f90/volumes" Feb 02 11:06:17 crc kubenswrapper[4782]: I0202 11:06:17.188445 4782 generic.go:334] "Generic (PLEG): container finished" podID="16fe0977-c663-4e1a-97e3-7de4ae38df03" containerID="222c64410339019605c5e21195c90a0b177f4724cf97bf68b952ddaec4937593" exitCode=0 Feb 02 11:06:17 crc kubenswrapper[4782]: I0202 11:06:17.188509 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-crk2q" event={"ID":"16fe0977-c663-4e1a-97e3-7de4ae38df03","Type":"ContainerDied","Data":"222c64410339019605c5e21195c90a0b177f4724cf97bf68b952ddaec4937593"} Feb 02 11:06:17 crc kubenswrapper[4782]: I0202 11:06:17.188833 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-crk2q" event={"ID":"16fe0977-c663-4e1a-97e3-7de4ae38df03","Type":"ContainerStarted","Data":"4d2bdf4c1b73d8cd1ae616d797d6ab67314db7247425c880cb4fa4702b118dc7"} Feb 02 
Feb 02 11:06:20 crc kubenswrapper[4782]: I0202 11:06:20.232502 4782 generic.go:334] "Generic (PLEG): container finished" podID="16fe0977-c663-4e1a-97e3-7de4ae38df03" containerID="c07ec2ab3fb8ef51275c4463d6702d15bd7bb331a0318256c1dec850a769ee61" exitCode=0
Feb 02 11:06:20 crc kubenswrapper[4782]: I0202 11:06:20.232758 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-crk2q" event={"ID":"16fe0977-c663-4e1a-97e3-7de4ae38df03","Type":"ContainerDied","Data":"c07ec2ab3fb8ef51275c4463d6702d15bd7bb331a0318256c1dec850a769ee61"}
Feb 02 11:06:21 crc kubenswrapper[4782]: I0202 11:06:21.249505 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-crk2q" event={"ID":"16fe0977-c663-4e1a-97e3-7de4ae38df03","Type":"ContainerStarted","Data":"f5cb45ed1926a315ccb4bbf3ddd298c7df40239a1e8081bc33ddee23c2b9970e"}
Feb 02 11:06:21 crc kubenswrapper[4782]: I0202 11:06:21.274975 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-crk2q" podStartSLOduration=2.7276333409999998 podStartE2EDuration="6.274955046s" podCreationTimestamp="2026-02-02 11:06:15 +0000 UTC" firstStartedPulling="2026-02-02 11:06:17.191037963 +0000 UTC m=+1657.075230679" lastFinishedPulling="2026-02-02 11:06:20.738359668 +0000 UTC m=+1660.622552384" observedRunningTime="2026-02-02 11:06:21.273026861 +0000 UTC m=+1661.157219577" watchObservedRunningTime="2026-02-02 11:06:21.274955046 +0000 UTC m=+1661.159147782"
Feb 02 11:06:22 crc kubenswrapper[4782]: I0202 11:06:22.951786 4782 patch_prober.go:28] interesting pod/machine-config-daemon-bhdgk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 02 11:06:22 crc kubenswrapper[4782]: I0202 11:06:22.952206 4782 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 02 11:06:25 crc kubenswrapper[4782]: I0202 11:06:25.954826 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-crk2q"
Feb 02 11:06:25 crc kubenswrapper[4782]: I0202 11:06:25.955249 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-crk2q"
Feb 02 11:06:26 crc kubenswrapper[4782]: I0202 11:06:26.008242 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-crk2q"
Feb 02 11:06:26 crc kubenswrapper[4782]: I0202 11:06:26.347813 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-crk2q"
Feb 02 11:06:26 crc kubenswrapper[4782]: I0202 11:06:26.398604 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-crk2q"]
Feb 02 11:06:28 crc kubenswrapper[4782]: I0202 11:06:28.315971 4782 generic.go:334] "Generic (PLEG): container finished" podID="5a24fab5-51cc-4f0a-a823-c9748efd8410" containerID="65e9d4460cda578d85c98c8eacb6e70446a4235a9df02ce23f87a954cc50ea96" exitCode=0
Feb 02 11:06:28 crc kubenswrapper[4782]: I0202 11:06:28.316035 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-9xz78" event={"ID":"5a24fab5-51cc-4f0a-a823-c9748efd8410","Type":"ContainerDied","Data":"65e9d4460cda578d85c98c8eacb6e70446a4235a9df02ce23f87a954cc50ea96"}
Feb 02 11:06:28 crc kubenswrapper[4782]: I0202 11:06:28.316888 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-crk2q" podUID="16fe0977-c663-4e1a-97e3-7de4ae38df03" containerName="registry-server" containerID="cri-o://f5cb45ed1926a315ccb4bbf3ddd298c7df40239a1e8081bc33ddee23c2b9970e" gracePeriod=2
Feb 02 11:06:28 crc kubenswrapper[4782]: I0202 11:06:28.763831 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-crk2q"
Feb 02 11:06:28 crc kubenswrapper[4782]: I0202 11:06:28.834132 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/16fe0977-c663-4e1a-97e3-7de4ae38df03-catalog-content\") pod \"16fe0977-c663-4e1a-97e3-7de4ae38df03\" (UID: \"16fe0977-c663-4e1a-97e3-7de4ae38df03\") "
Feb 02 11:06:28 crc kubenswrapper[4782]: I0202 11:06:28.834210 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/16fe0977-c663-4e1a-97e3-7de4ae38df03-utilities\") pod \"16fe0977-c663-4e1a-97e3-7de4ae38df03\" (UID: \"16fe0977-c663-4e1a-97e3-7de4ae38df03\") "
Feb 02 11:06:28 crc kubenswrapper[4782]: I0202 11:06:28.834903 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/16fe0977-c663-4e1a-97e3-7de4ae38df03-utilities" (OuterVolumeSpecName: "utilities") pod "16fe0977-c663-4e1a-97e3-7de4ae38df03" (UID: "16fe0977-c663-4e1a-97e3-7de4ae38df03"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 11:06:28 crc kubenswrapper[4782]: I0202 11:06:28.835029 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9dq2c\" (UniqueName: \"kubernetes.io/projected/16fe0977-c663-4e1a-97e3-7de4ae38df03-kube-api-access-9dq2c\") pod \"16fe0977-c663-4e1a-97e3-7de4ae38df03\" (UID: \"16fe0977-c663-4e1a-97e3-7de4ae38df03\") "
Feb 02 11:06:28 crc kubenswrapper[4782]: I0202 11:06:28.836022 4782 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/16fe0977-c663-4e1a-97e3-7de4ae38df03-utilities\") on node \"crc\" DevicePath \"\""
Feb 02 11:06:28 crc kubenswrapper[4782]: I0202 11:06:28.845874 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16fe0977-c663-4e1a-97e3-7de4ae38df03-kube-api-access-9dq2c" (OuterVolumeSpecName: "kube-api-access-9dq2c") pod "16fe0977-c663-4e1a-97e3-7de4ae38df03" (UID: "16fe0977-c663-4e1a-97e3-7de4ae38df03"). InnerVolumeSpecName "kube-api-access-9dq2c". PluginName "kubernetes.io/projected", VolumeGidValue ""
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:06:28 crc kubenswrapper[4782]: I0202 11:06:28.878725 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/16fe0977-c663-4e1a-97e3-7de4ae38df03-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "16fe0977-c663-4e1a-97e3-7de4ae38df03" (UID: "16fe0977-c663-4e1a-97e3-7de4ae38df03"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:06:28 crc kubenswrapper[4782]: I0202 11:06:28.938095 4782 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/16fe0977-c663-4e1a-97e3-7de4ae38df03-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 11:06:28 crc kubenswrapper[4782]: I0202 11:06:28.938146 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9dq2c\" (UniqueName: \"kubernetes.io/projected/16fe0977-c663-4e1a-97e3-7de4ae38df03-kube-api-access-9dq2c\") on node \"crc\" DevicePath \"\"" Feb 02 11:06:29 crc kubenswrapper[4782]: I0202 11:06:29.333245 4782 generic.go:334] "Generic (PLEG): container finished" podID="16fe0977-c663-4e1a-97e3-7de4ae38df03" containerID="f5cb45ed1926a315ccb4bbf3ddd298c7df40239a1e8081bc33ddee23c2b9970e" exitCode=0 Feb 02 11:06:29 crc kubenswrapper[4782]: I0202 11:06:29.334194 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-crk2q" Feb 02 11:06:29 crc kubenswrapper[4782]: I0202 11:06:29.335714 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-crk2q" event={"ID":"16fe0977-c663-4e1a-97e3-7de4ae38df03","Type":"ContainerDied","Data":"f5cb45ed1926a315ccb4bbf3ddd298c7df40239a1e8081bc33ddee23c2b9970e"} Feb 02 11:06:29 crc kubenswrapper[4782]: I0202 11:06:29.335811 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-crk2q" event={"ID":"16fe0977-c663-4e1a-97e3-7de4ae38df03","Type":"ContainerDied","Data":"4d2bdf4c1b73d8cd1ae616d797d6ab67314db7247425c880cb4fa4702b118dc7"} Feb 02 11:06:29 crc kubenswrapper[4782]: I0202 11:06:29.335836 4782 scope.go:117] "RemoveContainer" containerID="f5cb45ed1926a315ccb4bbf3ddd298c7df40239a1e8081bc33ddee23c2b9970e" Feb 02 11:06:29 crc kubenswrapper[4782]: I0202 11:06:29.414624 4782 scope.go:117] "RemoveContainer" containerID="c07ec2ab3fb8ef51275c4463d6702d15bd7bb331a0318256c1dec850a769ee61" Feb 02 11:06:29 crc kubenswrapper[4782]: I0202 11:06:29.421268 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-crk2q"] Feb 02 11:06:29 crc kubenswrapper[4782]: I0202 11:06:29.437748 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-crk2q"] Feb 02 11:06:29 crc kubenswrapper[4782]: I0202 11:06:29.490843 4782 scope.go:117] "RemoveContainer" containerID="222c64410339019605c5e21195c90a0b177f4724cf97bf68b952ddaec4937593" Feb 02 11:06:29 crc kubenswrapper[4782]: I0202 11:06:29.549602 4782 scope.go:117] "RemoveContainer" containerID="f5cb45ed1926a315ccb4bbf3ddd298c7df40239a1e8081bc33ddee23c2b9970e" Feb 02 11:06:29 crc kubenswrapper[4782]: E0202 11:06:29.551495 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f5cb45ed1926a315ccb4bbf3ddd298c7df40239a1e8081bc33ddee23c2b9970e\": container with ID starting with 
f5cb45ed1926a315ccb4bbf3ddd298c7df40239a1e8081bc33ddee23c2b9970e not found: ID does not exist" containerID="f5cb45ed1926a315ccb4bbf3ddd298c7df40239a1e8081bc33ddee23c2b9970e" Feb 02 11:06:29 crc kubenswrapper[4782]: I0202 11:06:29.551533 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f5cb45ed1926a315ccb4bbf3ddd298c7df40239a1e8081bc33ddee23c2b9970e"} err="failed to get container status \"f5cb45ed1926a315ccb4bbf3ddd298c7df40239a1e8081bc33ddee23c2b9970e\": rpc error: code = NotFound desc = could not find container \"f5cb45ed1926a315ccb4bbf3ddd298c7df40239a1e8081bc33ddee23c2b9970e\": container with ID starting with f5cb45ed1926a315ccb4bbf3ddd298c7df40239a1e8081bc33ddee23c2b9970e not found: ID does not exist" Feb 02 11:06:29 crc kubenswrapper[4782]: I0202 11:06:29.551574 4782 scope.go:117] "RemoveContainer" containerID="c07ec2ab3fb8ef51275c4463d6702d15bd7bb331a0318256c1dec850a769ee61" Feb 02 11:06:29 crc kubenswrapper[4782]: E0202 11:06:29.552067 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c07ec2ab3fb8ef51275c4463d6702d15bd7bb331a0318256c1dec850a769ee61\": container with ID starting with c07ec2ab3fb8ef51275c4463d6702d15bd7bb331a0318256c1dec850a769ee61 not found: ID does not exist" containerID="c07ec2ab3fb8ef51275c4463d6702d15bd7bb331a0318256c1dec850a769ee61" Feb 02 11:06:29 crc kubenswrapper[4782]: I0202 11:06:29.552094 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c07ec2ab3fb8ef51275c4463d6702d15bd7bb331a0318256c1dec850a769ee61"} err="failed to get container status \"c07ec2ab3fb8ef51275c4463d6702d15bd7bb331a0318256c1dec850a769ee61\": rpc error: code = NotFound desc = could not find container \"c07ec2ab3fb8ef51275c4463d6702d15bd7bb331a0318256c1dec850a769ee61\": container with ID starting with c07ec2ab3fb8ef51275c4463d6702d15bd7bb331a0318256c1dec850a769ee61 not found: ID does not exist" Feb 02 11:06:29 crc kubenswrapper[4782]: I0202 11:06:29.552111 4782 scope.go:117] "RemoveContainer" containerID="222c64410339019605c5e21195c90a0b177f4724cf97bf68b952ddaec4937593" Feb 02 11:06:29 crc kubenswrapper[4782]: E0202 11:06:29.556372 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"222c64410339019605c5e21195c90a0b177f4724cf97bf68b952ddaec4937593\": container with ID starting with 222c64410339019605c5e21195c90a0b177f4724cf97bf68b952ddaec4937593 not found: ID does not exist" containerID="222c64410339019605c5e21195c90a0b177f4724cf97bf68b952ddaec4937593" Feb 02 11:06:29 crc kubenswrapper[4782]: I0202 11:06:29.556410 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"222c64410339019605c5e21195c90a0b177f4724cf97bf68b952ddaec4937593"} err="failed to get container status \"222c64410339019605c5e21195c90a0b177f4724cf97bf68b952ddaec4937593\": rpc error: code = NotFound desc = could not find container \"222c64410339019605c5e21195c90a0b177f4724cf97bf68b952ddaec4937593\": container with ID starting with 222c64410339019605c5e21195c90a0b177f4724cf97bf68b952ddaec4937593 not found: ID does not exist" Feb 02 11:06:29 crc kubenswrapper[4782]: I0202 11:06:29.938816 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-9xz78" Feb 02 11:06:29 crc kubenswrapper[4782]: I0202 11:06:29.989868 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5a24fab5-51cc-4f0a-a823-c9748efd8410-inventory\") pod \"5a24fab5-51cc-4f0a-a823-c9748efd8410\" (UID: \"5a24fab5-51cc-4f0a-a823-c9748efd8410\") " Feb 02 11:06:29 crc kubenswrapper[4782]: I0202 11:06:29.990533 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qdg7q\" (UniqueName: \"kubernetes.io/projected/5a24fab5-51cc-4f0a-a823-c9748efd8410-kube-api-access-qdg7q\") pod \"5a24fab5-51cc-4f0a-a823-c9748efd8410\" (UID: \"5a24fab5-51cc-4f0a-a823-c9748efd8410\") " Feb 02 11:06:29 crc kubenswrapper[4782]: I0202 11:06:29.990723 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5a24fab5-51cc-4f0a-a823-c9748efd8410-ssh-key-openstack-edpm-ipam\") pod \"5a24fab5-51cc-4f0a-a823-c9748efd8410\" (UID: \"5a24fab5-51cc-4f0a-a823-c9748efd8410\") " Feb 02 11:06:30 crc kubenswrapper[4782]: I0202 11:06:30.011758 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a24fab5-51cc-4f0a-a823-c9748efd8410-kube-api-access-qdg7q" (OuterVolumeSpecName: "kube-api-access-qdg7q") pod "5a24fab5-51cc-4f0a-a823-c9748efd8410" (UID: "5a24fab5-51cc-4f0a-a823-c9748efd8410"). InnerVolumeSpecName "kube-api-access-qdg7q". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:06:30 crc kubenswrapper[4782]: I0202 11:06:30.029411 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a24fab5-51cc-4f0a-a823-c9748efd8410-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "5a24fab5-51cc-4f0a-a823-c9748efd8410" (UID: "5a24fab5-51cc-4f0a-a823-c9748efd8410"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:06:30 crc kubenswrapper[4782]: I0202 11:06:30.040534 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a24fab5-51cc-4f0a-a823-c9748efd8410-inventory" (OuterVolumeSpecName: "inventory") pod "5a24fab5-51cc-4f0a-a823-c9748efd8410" (UID: "5a24fab5-51cc-4f0a-a823-c9748efd8410"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:06:30 crc kubenswrapper[4782]: I0202 11:06:30.093420 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qdg7q\" (UniqueName: \"kubernetes.io/projected/5a24fab5-51cc-4f0a-a823-c9748efd8410-kube-api-access-qdg7q\") on node \"crc\" DevicePath \"\"" Feb 02 11:06:30 crc kubenswrapper[4782]: I0202 11:06:30.093460 4782 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5a24fab5-51cc-4f0a-a823-c9748efd8410-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 02 11:06:30 crc kubenswrapper[4782]: I0202 11:06:30.093499 4782 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5a24fab5-51cc-4f0a-a823-c9748efd8410-inventory\") on node \"crc\" DevicePath \"\"" Feb 02 11:06:30 crc kubenswrapper[4782]: I0202 11:06:30.344598 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-9xz78" Feb 02 11:06:30 crc kubenswrapper[4782]: I0202 11:06:30.344619 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-9xz78" event={"ID":"5a24fab5-51cc-4f0a-a823-c9748efd8410","Type":"ContainerDied","Data":"be7ce3ccb69a5745054007321e53120f9506050fe2a04ddb2bd3dfef26a90754"} Feb 02 11:06:30 crc kubenswrapper[4782]: I0202 11:06:30.344671 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="be7ce3ccb69a5745054007321e53120f9506050fe2a04ddb2bd3dfef26a90754" Feb 02 11:06:30 crc kubenswrapper[4782]: I0202 11:06:30.454422 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jjvwd"] Feb 02 11:06:30 crc kubenswrapper[4782]: E0202 11:06:30.454838 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16fe0977-c663-4e1a-97e3-7de4ae38df03" containerName="extract-content" Feb 02 11:06:30 crc kubenswrapper[4782]: I0202 11:06:30.454860 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="16fe0977-c663-4e1a-97e3-7de4ae38df03" containerName="extract-content" Feb 02 11:06:30 crc kubenswrapper[4782]: E0202 11:06:30.454876 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a24fab5-51cc-4f0a-a823-c9748efd8410" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 02 11:06:30 crc kubenswrapper[4782]: I0202 11:06:30.454885 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a24fab5-51cc-4f0a-a823-c9748efd8410" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 02 11:06:30 crc kubenswrapper[4782]: E0202 11:06:30.454905 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8f72158-3325-454c-a8e2-64301e578f90" containerName="extract-utilities" Feb 02 11:06:30 crc kubenswrapper[4782]: I0202 11:06:30.454913 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8f72158-3325-454c-a8e2-64301e578f90" containerName="extract-utilities" Feb 02 11:06:30 crc kubenswrapper[4782]: E0202 11:06:30.454925 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16fe0977-c663-4e1a-97e3-7de4ae38df03" containerName="extract-utilities" Feb 02 11:06:30 crc kubenswrapper[4782]: I0202 11:06:30.454933 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="16fe0977-c663-4e1a-97e3-7de4ae38df03" containerName="extract-utilities" Feb 02 11:06:30 crc kubenswrapper[4782]: E0202 11:06:30.454947 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8f72158-3325-454c-a8e2-64301e578f90" containerName="registry-server" Feb 02 11:06:30 crc kubenswrapper[4782]: I0202 11:06:30.454957 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8f72158-3325-454c-a8e2-64301e578f90" containerName="registry-server" Feb 02 11:06:30 crc kubenswrapper[4782]: E0202 11:06:30.454973 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8f72158-3325-454c-a8e2-64301e578f90" containerName="extract-content" Feb 02 11:06:30 crc kubenswrapper[4782]: I0202 11:06:30.454979 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8f72158-3325-454c-a8e2-64301e578f90" containerName="extract-content" Feb 02 11:06:30 crc kubenswrapper[4782]: E0202 11:06:30.454992 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16fe0977-c663-4e1a-97e3-7de4ae38df03" containerName="registry-server" Feb 02 11:06:30 crc 
Feb 02 11:06:30 crc kubenswrapper[4782]: I0202 11:06:30.455158 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8f72158-3325-454c-a8e2-64301e578f90" containerName="registry-server"
Feb 02 11:06:30 crc kubenswrapper[4782]: I0202 11:06:30.455177 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a24fab5-51cc-4f0a-a823-c9748efd8410" containerName="configure-network-edpm-deployment-openstack-edpm-ipam"
Feb 02 11:06:30 crc kubenswrapper[4782]: I0202 11:06:30.455186 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="16fe0977-c663-4e1a-97e3-7de4ae38df03" containerName="registry-server"
Feb 02 11:06:30 crc kubenswrapper[4782]: I0202 11:06:30.455815 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jjvwd"
Feb 02 11:06:30 crc kubenswrapper[4782]: I0202 11:06:30.458460 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Feb 02 11:06:30 crc kubenswrapper[4782]: I0202 11:06:30.458989 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jhgxt"
Feb 02 11:06:30 crc kubenswrapper[4782]: I0202 11:06:30.459290 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Feb 02 11:06:30 crc kubenswrapper[4782]: I0202 11:06:30.459539 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Feb 02 11:06:30 crc kubenswrapper[4782]: I0202 11:06:30.477203 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jjvwd"]
Feb 02 11:06:30 crc kubenswrapper[4782]: I0202 11:06:30.515387 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/79fcc9ce-0ed4-4b7d-9b23-9a55d25349f9-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-jjvwd\" (UID: \"79fcc9ce-0ed4-4b7d-9b23-9a55d25349f9\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jjvwd"
Feb 02 11:06:30 crc kubenswrapper[4782]: I0202 11:06:30.515505 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64tmc\" (UniqueName: \"kubernetes.io/projected/79fcc9ce-0ed4-4b7d-9b23-9a55d25349f9-kube-api-access-64tmc\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-jjvwd\" (UID: \"79fcc9ce-0ed4-4b7d-9b23-9a55d25349f9\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jjvwd"
Feb 02 11:06:30 crc kubenswrapper[4782]: I0202 11:06:30.515600 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/79fcc9ce-0ed4-4b7d-9b23-9a55d25349f9-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-jjvwd\" (UID: \"79fcc9ce-0ed4-4b7d-9b23-9a55d25349f9\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jjvwd"
Feb 02 11:06:30 crc kubenswrapper[4782]: I0202 11:06:30.617562 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-64tmc\" (UniqueName: \"kubernetes.io/projected/79fcc9ce-0ed4-4b7d-9b23-9a55d25349f9-kube-api-access-64tmc\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-jjvwd\" (UID: \"79fcc9ce-0ed4-4b7d-9b23-9a55d25349f9\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jjvwd"
Feb 02 11:06:30 crc kubenswrapper[4782]: I0202 11:06:30.618001 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/79fcc9ce-0ed4-4b7d-9b23-9a55d25349f9-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-jjvwd\" (UID: \"79fcc9ce-0ed4-4b7d-9b23-9a55d25349f9\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jjvwd"
Feb 02 11:06:30 crc kubenswrapper[4782]: I0202 11:06:30.618066 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/79fcc9ce-0ed4-4b7d-9b23-9a55d25349f9-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-jjvwd\" (UID: \"79fcc9ce-0ed4-4b7d-9b23-9a55d25349f9\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jjvwd"
Feb 02 11:06:30 crc kubenswrapper[4782]: I0202 11:06:30.622245 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/79fcc9ce-0ed4-4b7d-9b23-9a55d25349f9-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-jjvwd\" (UID: \"79fcc9ce-0ed4-4b7d-9b23-9a55d25349f9\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jjvwd"
Feb 02 11:06:30 crc kubenswrapper[4782]: I0202 11:06:30.623306 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/79fcc9ce-0ed4-4b7d-9b23-9a55d25349f9-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-jjvwd\" (UID: \"79fcc9ce-0ed4-4b7d-9b23-9a55d25349f9\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jjvwd"
Feb 02 11:06:30 crc kubenswrapper[4782]: I0202 11:06:30.633754 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-64tmc\" (UniqueName: \"kubernetes.io/projected/79fcc9ce-0ed4-4b7d-9b23-9a55d25349f9-kube-api-access-64tmc\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-jjvwd\" (UID: \"79fcc9ce-0ed4-4b7d-9b23-9a55d25349f9\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jjvwd"
Feb 02 11:06:30 crc kubenswrapper[4782]: I0202 11:06:30.825031 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jjvwd"
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jjvwd" Feb 02 11:06:30 crc kubenswrapper[4782]: I0202 11:06:30.835339 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16fe0977-c663-4e1a-97e3-7de4ae38df03" path="/var/lib/kubelet/pods/16fe0977-c663-4e1a-97e3-7de4ae38df03/volumes" Feb 02 11:06:31 crc kubenswrapper[4782]: I0202 11:06:31.378695 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jjvwd"] Feb 02 11:06:32 crc kubenswrapper[4782]: I0202 11:06:32.370533 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jjvwd" event={"ID":"79fcc9ce-0ed4-4b7d-9b23-9a55d25349f9","Type":"ContainerStarted","Data":"f4762256f4358dc203e66c5c913257fa22830b06b1398f9270a2496bb4594c32"} Feb 02 11:06:32 crc kubenswrapper[4782]: I0202 11:06:32.370904 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jjvwd" event={"ID":"79fcc9ce-0ed4-4b7d-9b23-9a55d25349f9","Type":"ContainerStarted","Data":"580d1d9a60156605fa15df0bef8c76e57ebca1e18461ff40d3d6df8b19f55d8c"} Feb 02 11:06:32 crc kubenswrapper[4782]: I0202 11:06:32.389476 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jjvwd" podStartSLOduration=1.9400604970000002 podStartE2EDuration="2.389455689s" podCreationTimestamp="2026-02-02 11:06:30 +0000 UTC" firstStartedPulling="2026-02-02 11:06:31.385398249 +0000 UTC m=+1671.269590965" lastFinishedPulling="2026-02-02 11:06:31.834793441 +0000 UTC m=+1671.718986157" observedRunningTime="2026-02-02 11:06:32.387218204 +0000 UTC m=+1672.271410940" watchObservedRunningTime="2026-02-02 11:06:32.389455689 +0000 UTC m=+1672.273648405" Feb 02 11:06:37 crc kubenswrapper[4782]: I0202 11:06:37.410658 4782 generic.go:334] "Generic (PLEG): container finished" podID="79fcc9ce-0ed4-4b7d-9b23-9a55d25349f9" containerID="f4762256f4358dc203e66c5c913257fa22830b06b1398f9270a2496bb4594c32" exitCode=0 Feb 02 11:06:37 crc kubenswrapper[4782]: I0202 11:06:37.410725 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jjvwd" event={"ID":"79fcc9ce-0ed4-4b7d-9b23-9a55d25349f9","Type":"ContainerDied","Data":"f4762256f4358dc203e66c5c913257fa22830b06b1398f9270a2496bb4594c32"} Feb 02 11:06:38 crc kubenswrapper[4782]: I0202 11:06:38.875435 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jjvwd" Feb 02 11:06:38 crc kubenswrapper[4782]: I0202 11:06:38.973446 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/79fcc9ce-0ed4-4b7d-9b23-9a55d25349f9-ssh-key-openstack-edpm-ipam\") pod \"79fcc9ce-0ed4-4b7d-9b23-9a55d25349f9\" (UID: \"79fcc9ce-0ed4-4b7d-9b23-9a55d25349f9\") " Feb 02 11:06:38 crc kubenswrapper[4782]: I0202 11:06:38.974236 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-64tmc\" (UniqueName: \"kubernetes.io/projected/79fcc9ce-0ed4-4b7d-9b23-9a55d25349f9-kube-api-access-64tmc\") pod \"79fcc9ce-0ed4-4b7d-9b23-9a55d25349f9\" (UID: \"79fcc9ce-0ed4-4b7d-9b23-9a55d25349f9\") " Feb 02 11:06:38 crc kubenswrapper[4782]: I0202 11:06:38.974378 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/79fcc9ce-0ed4-4b7d-9b23-9a55d25349f9-inventory\") pod \"79fcc9ce-0ed4-4b7d-9b23-9a55d25349f9\" (UID: \"79fcc9ce-0ed4-4b7d-9b23-9a55d25349f9\") " Feb 02 11:06:38 crc kubenswrapper[4782]: I0202 11:06:38.978894 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79fcc9ce-0ed4-4b7d-9b23-9a55d25349f9-kube-api-access-64tmc" (OuterVolumeSpecName: "kube-api-access-64tmc") pod "79fcc9ce-0ed4-4b7d-9b23-9a55d25349f9" (UID: "79fcc9ce-0ed4-4b7d-9b23-9a55d25349f9"). InnerVolumeSpecName "kube-api-access-64tmc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:06:38 crc kubenswrapper[4782]: I0202 11:06:38.999272 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79fcc9ce-0ed4-4b7d-9b23-9a55d25349f9-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "79fcc9ce-0ed4-4b7d-9b23-9a55d25349f9" (UID: "79fcc9ce-0ed4-4b7d-9b23-9a55d25349f9"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:06:39 crc kubenswrapper[4782]: I0202 11:06:39.001146 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79fcc9ce-0ed4-4b7d-9b23-9a55d25349f9-inventory" (OuterVolumeSpecName: "inventory") pod "79fcc9ce-0ed4-4b7d-9b23-9a55d25349f9" (UID: "79fcc9ce-0ed4-4b7d-9b23-9a55d25349f9"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:06:39 crc kubenswrapper[4782]: I0202 11:06:39.078632 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-64tmc\" (UniqueName: \"kubernetes.io/projected/79fcc9ce-0ed4-4b7d-9b23-9a55d25349f9-kube-api-access-64tmc\") on node \"crc\" DevicePath \"\"" Feb 02 11:06:39 crc kubenswrapper[4782]: I0202 11:06:39.078917 4782 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/79fcc9ce-0ed4-4b7d-9b23-9a55d25349f9-inventory\") on node \"crc\" DevicePath \"\"" Feb 02 11:06:39 crc kubenswrapper[4782]: I0202 11:06:39.079023 4782 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/79fcc9ce-0ed4-4b7d-9b23-9a55d25349f9-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 02 11:06:39 crc kubenswrapper[4782]: I0202 11:06:39.448439 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jjvwd" event={"ID":"79fcc9ce-0ed4-4b7d-9b23-9a55d25349f9","Type":"ContainerDied","Data":"580d1d9a60156605fa15df0bef8c76e57ebca1e18461ff40d3d6df8b19f55d8c"} Feb 02 11:06:39 crc kubenswrapper[4782]: I0202 11:06:39.448475 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="580d1d9a60156605fa15df0bef8c76e57ebca1e18461ff40d3d6df8b19f55d8c" Feb 02 11:06:39 crc kubenswrapper[4782]: I0202 11:06:39.448594 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jjvwd" Feb 02 11:06:39 crc kubenswrapper[4782]: I0202 11:06:39.529095 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-c5lr4"] Feb 02 11:06:39 crc kubenswrapper[4782]: E0202 11:06:39.529513 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79fcc9ce-0ed4-4b7d-9b23-9a55d25349f9" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 02 11:06:39 crc kubenswrapper[4782]: I0202 11:06:39.529530 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="79fcc9ce-0ed4-4b7d-9b23-9a55d25349f9" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 02 11:06:39 crc kubenswrapper[4782]: I0202 11:06:39.529738 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="79fcc9ce-0ed4-4b7d-9b23-9a55d25349f9" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 02 11:06:39 crc kubenswrapper[4782]: I0202 11:06:39.530328 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-c5lr4" Feb 02 11:06:39 crc kubenswrapper[4782]: I0202 11:06:39.533157 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 02 11:06:39 crc kubenswrapper[4782]: I0202 11:06:39.534076 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jhgxt" Feb 02 11:06:39 crc kubenswrapper[4782]: I0202 11:06:39.534263 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 02 11:06:39 crc kubenswrapper[4782]: I0202 11:06:39.534372 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 02 11:06:39 crc kubenswrapper[4782]: I0202 11:06:39.550758 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-c5lr4"] Feb 02 11:06:39 crc kubenswrapper[4782]: I0202 11:06:39.591006 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ckvj5\" (UniqueName: \"kubernetes.io/projected/425704dd-e289-42f7-8b10-bd817b279099-kube-api-access-ckvj5\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-c5lr4\" (UID: \"425704dd-e289-42f7-8b10-bd817b279099\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-c5lr4" Feb 02 11:06:39 crc kubenswrapper[4782]: I0202 11:06:39.591127 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/425704dd-e289-42f7-8b10-bd817b279099-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-c5lr4\" (UID: \"425704dd-e289-42f7-8b10-bd817b279099\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-c5lr4" Feb 02 11:06:39 crc kubenswrapper[4782]: I0202 11:06:39.591204 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/425704dd-e289-42f7-8b10-bd817b279099-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-c5lr4\" (UID: \"425704dd-e289-42f7-8b10-bd817b279099\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-c5lr4" Feb 02 11:06:39 crc kubenswrapper[4782]: E0202 11:06:39.647103 4782 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod79fcc9ce_0ed4_4b7d_9b23_9a55d25349f9.slice/crio-580d1d9a60156605fa15df0bef8c76e57ebca1e18461ff40d3d6df8b19f55d8c\": RecentStats: unable to find data in memory cache]" Feb 02 11:06:39 crc kubenswrapper[4782]: I0202 11:06:39.692973 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ckvj5\" (UniqueName: \"kubernetes.io/projected/425704dd-e289-42f7-8b10-bd817b279099-kube-api-access-ckvj5\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-c5lr4\" (UID: \"425704dd-e289-42f7-8b10-bd817b279099\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-c5lr4" Feb 02 11:06:39 crc kubenswrapper[4782]: I0202 11:06:39.693459 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/425704dd-e289-42f7-8b10-bd817b279099-inventory\") pod 
\"install-os-edpm-deployment-openstack-edpm-ipam-c5lr4\" (UID: \"425704dd-e289-42f7-8b10-bd817b279099\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-c5lr4" Feb 02 11:06:39 crc kubenswrapper[4782]: I0202 11:06:39.693546 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/425704dd-e289-42f7-8b10-bd817b279099-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-c5lr4\" (UID: \"425704dd-e289-42f7-8b10-bd817b279099\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-c5lr4" Feb 02 11:06:39 crc kubenswrapper[4782]: I0202 11:06:39.699171 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/425704dd-e289-42f7-8b10-bd817b279099-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-c5lr4\" (UID: \"425704dd-e289-42f7-8b10-bd817b279099\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-c5lr4" Feb 02 11:06:39 crc kubenswrapper[4782]: I0202 11:06:39.699676 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/425704dd-e289-42f7-8b10-bd817b279099-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-c5lr4\" (UID: \"425704dd-e289-42f7-8b10-bd817b279099\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-c5lr4" Feb 02 11:06:39 crc kubenswrapper[4782]: I0202 11:06:39.708801 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ckvj5\" (UniqueName: \"kubernetes.io/projected/425704dd-e289-42f7-8b10-bd817b279099-kube-api-access-ckvj5\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-c5lr4\" (UID: \"425704dd-e289-42f7-8b10-bd817b279099\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-c5lr4" Feb 02 11:06:39 crc kubenswrapper[4782]: I0202 11:06:39.856940 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-c5lr4" Feb 02 11:06:41 crc kubenswrapper[4782]: I0202 11:06:41.070565 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-c5lr4"] Feb 02 11:06:41 crc kubenswrapper[4782]: I0202 11:06:41.468342 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-c5lr4" event={"ID":"425704dd-e289-42f7-8b10-bd817b279099","Type":"ContainerStarted","Data":"603626e15d1c77272fcba0456289f5f80378cd91e749b2e5abd486589571d46a"} Feb 02 11:06:41 crc kubenswrapper[4782]: I0202 11:06:41.499611 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 02 11:06:42 crc kubenswrapper[4782]: I0202 11:06:42.481509 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-c5lr4" event={"ID":"425704dd-e289-42f7-8b10-bd817b279099","Type":"ContainerStarted","Data":"d704a337ba153cb759a9029666c65419beb8b579a0125a3a01b8036bcaa12955"} Feb 02 11:06:42 crc kubenswrapper[4782]: I0202 11:06:42.527346 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-c5lr4" podStartSLOduration=3.101860264 podStartE2EDuration="3.527322609s" podCreationTimestamp="2026-02-02 11:06:39 +0000 UTC" firstStartedPulling="2026-02-02 11:06:41.071436777 +0000 UTC m=+1680.955629483" lastFinishedPulling="2026-02-02 11:06:41.496899112 +0000 UTC m=+1681.381091828" observedRunningTime="2026-02-02 11:06:42.517053664 +0000 UTC m=+1682.401246390" watchObservedRunningTime="2026-02-02 11:06:42.527322609 +0000 UTC m=+1682.411515325" Feb 02 11:06:43 crc kubenswrapper[4782]: I0202 11:06:43.074185 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-377c-account-create-update-4zm4s"] Feb 02 11:06:43 crc kubenswrapper[4782]: I0202 11:06:43.082804 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-l6d9n"] Feb 02 11:06:43 crc kubenswrapper[4782]: I0202 11:06:43.094693 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-377c-account-create-update-4zm4s"] Feb 02 11:06:43 crc kubenswrapper[4782]: I0202 11:06:43.103542 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-l6d9n"] Feb 02 11:06:44 crc kubenswrapper[4782]: I0202 11:06:44.830989 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bfde9ba3-fda5-496b-8ee5-52430e61f02a" path="/var/lib/kubelet/pods/bfde9ba3-fda5-496b-8ee5-52430e61f02a/volumes" Feb 02 11:06:44 crc kubenswrapper[4782]: I0202 11:06:44.831534 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce57fffc-4d75-495f-b7ed-28676054f90e" path="/var/lib/kubelet/pods/ce57fffc-4d75-495f-b7ed-28676054f90e/volumes" Feb 02 11:06:47 crc kubenswrapper[4782]: I0202 11:06:47.034352 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-77ps5"] Feb 02 11:06:47 crc kubenswrapper[4782]: I0202 11:06:47.050669 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-77ps5"] Feb 02 11:06:48 crc kubenswrapper[4782]: I0202 11:06:48.097497 4782 scope.go:117] "RemoveContainer" containerID="2a0dfecd12eefed7e04fa4bd8706afbc2a21a95326fc9fb0c721694048febe14" Feb 02 11:06:48 crc kubenswrapper[4782]: I0202 11:06:48.118776 4782 scope.go:117] "RemoveContainer" 
containerID="922a5052c537ca60debaeb30c310ad62b9d6cc2296c5f5cb93deeef6d784a0c2" Feb 02 11:06:48 crc kubenswrapper[4782]: I0202 11:06:48.832280 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d561a4a7-bb99-43c6-859e-e3269a35a073" path="/var/lib/kubelet/pods/d561a4a7-bb99-43c6-859e-e3269a35a073/volumes" Feb 02 11:06:49 crc kubenswrapper[4782]: I0202 11:06:49.029097 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-2124-account-create-update-npd9h"] Feb 02 11:06:49 crc kubenswrapper[4782]: I0202 11:06:49.040737 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-0259-account-create-update-n5p89"] Feb 02 11:06:49 crc kubenswrapper[4782]: I0202 11:06:49.054424 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-6cg8m"] Feb 02 11:06:49 crc kubenswrapper[4782]: I0202 11:06:49.064223 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-6cg8m"] Feb 02 11:06:49 crc kubenswrapper[4782]: I0202 11:06:49.072284 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-0259-account-create-update-n5p89"] Feb 02 11:06:49 crc kubenswrapper[4782]: I0202 11:06:49.080842 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-2124-account-create-update-npd9h"] Feb 02 11:06:50 crc kubenswrapper[4782]: I0202 11:06:50.851033 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1db12436-a377-40c9-bc4e-9fe301b0b4cb" path="/var/lib/kubelet/pods/1db12436-a377-40c9-bc4e-9fe301b0b4cb/volumes" Feb 02 11:06:50 crc kubenswrapper[4782]: I0202 11:06:50.859873 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="80dad8de-560e-4ff5-b196-aa0bbbc2be15" path="/var/lib/kubelet/pods/80dad8de-560e-4ff5-b196-aa0bbbc2be15/volumes" Feb 02 11:06:50 crc kubenswrapper[4782]: I0202 11:06:50.860877 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b358cda4-3c47-4270-ada7-f7653d5da96f" path="/var/lib/kubelet/pods/b358cda4-3c47-4270-ada7-f7653d5da96f/volumes" Feb 02 11:06:52 crc kubenswrapper[4782]: I0202 11:06:52.951712 4782 patch_prober.go:28] interesting pod/machine-config-daemon-bhdgk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 11:06:52 crc kubenswrapper[4782]: I0202 11:06:52.951783 4782 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 11:06:52 crc kubenswrapper[4782]: I0202 11:06:52.951837 4782 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" Feb 02 11:06:52 crc kubenswrapper[4782]: I0202 11:06:52.952689 4782 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5bd9469df7c42cfd147763cb8f1b67e82d85e708d8dde6eea1a93320f7dbc9c8"} pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 11:06:52 crc kubenswrapper[4782]: I0202 
11:06:52.952747 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" containerName="machine-config-daemon" containerID="cri-o://5bd9469df7c42cfd147763cb8f1b67e82d85e708d8dde6eea1a93320f7dbc9c8" gracePeriod=600 Feb 02 11:06:53 crc kubenswrapper[4782]: E0202 11:06:53.082406 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:06:53 crc kubenswrapper[4782]: I0202 11:06:53.569279 4782 generic.go:334] "Generic (PLEG): container finished" podID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" containerID="5bd9469df7c42cfd147763cb8f1b67e82d85e708d8dde6eea1a93320f7dbc9c8" exitCode=0 Feb 02 11:06:53 crc kubenswrapper[4782]: I0202 11:06:53.569328 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" event={"ID":"7919e98f-cc47-4f3c-9c53-6313850ea7b8","Type":"ContainerDied","Data":"5bd9469df7c42cfd147763cb8f1b67e82d85e708d8dde6eea1a93320f7dbc9c8"} Feb 02 11:06:53 crc kubenswrapper[4782]: I0202 11:06:53.569366 4782 scope.go:117] "RemoveContainer" containerID="9e7f3d9f7d6457b5c614828f06a2a5456dc06adf6cf2e31e022d381663249dca" Feb 02 11:06:53 crc kubenswrapper[4782]: I0202 11:06:53.569964 4782 scope.go:117] "RemoveContainer" containerID="5bd9469df7c42cfd147763cb8f1b67e82d85e708d8dde6eea1a93320f7dbc9c8" Feb 02 11:06:53 crc kubenswrapper[4782]: E0202 11:06:53.570206 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:07:04 crc kubenswrapper[4782]: I0202 11:07:04.821218 4782 scope.go:117] "RemoveContainer" containerID="5bd9469df7c42cfd147763cb8f1b67e82d85e708d8dde6eea1a93320f7dbc9c8" Feb 02 11:07:04 crc kubenswrapper[4782]: E0202 11:07:04.821834 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:07:07 crc kubenswrapper[4782]: I0202 11:07:07.051543 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-xzm82"] Feb 02 11:07:07 crc kubenswrapper[4782]: I0202 11:07:07.069942 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-q97pt"] Feb 02 11:07:07 crc kubenswrapper[4782]: I0202 11:07:07.078151 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-6jdgj"] Feb 02 11:07:07 crc kubenswrapper[4782]: I0202 11:07:07.089502 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/neutron-63a1-account-create-update-4kn5m"] Feb 02 11:07:07 crc kubenswrapper[4782]: I0202 11:07:07.098286 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-xzm82"] Feb 02 11:07:07 crc kubenswrapper[4782]: I0202 11:07:07.106137 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-q97pt"] Feb 02 11:07:07 crc kubenswrapper[4782]: I0202 11:07:07.114459 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-63a1-account-create-update-4kn5m"] Feb 02 11:07:07 crc kubenswrapper[4782]: I0202 11:07:07.121951 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-6jdgj"] Feb 02 11:07:08 crc kubenswrapper[4782]: I0202 11:07:08.046911 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-0e36-account-create-update-f5556"] Feb 02 11:07:08 crc kubenswrapper[4782]: I0202 11:07:08.080071 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-0e36-account-create-update-f5556"] Feb 02 11:07:08 crc kubenswrapper[4782]: I0202 11:07:08.091284 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-7dbcc"] Feb 02 11:07:08 crc kubenswrapper[4782]: I0202 11:07:08.104326 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-7dbcc"] Feb 02 11:07:08 crc kubenswrapper[4782]: I0202 11:07:08.111921 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-8017-account-create-update-t6d9m"] Feb 02 11:07:08 crc kubenswrapper[4782]: I0202 11:07:08.119341 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-8017-account-create-update-t6d9m"] Feb 02 11:07:08 crc kubenswrapper[4782]: I0202 11:07:08.833482 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="29024188-b374-45b7-ad85-b2d4ca88b485" path="/var/lib/kubelet/pods/29024188-b374-45b7-ad85-b2d4ca88b485/volumes" Feb 02 11:07:08 crc kubenswrapper[4782]: I0202 11:07:08.834156 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="53ddb047-8931-415b-8d0f-d0f73b72c8b3" path="/var/lib/kubelet/pods/53ddb047-8931-415b-8d0f-d0f73b72c8b3/volumes" Feb 02 11:07:08 crc kubenswrapper[4782]: I0202 11:07:08.834870 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="68e5ac2b-72a8-46be-839a-fe639916a32e" path="/var/lib/kubelet/pods/68e5ac2b-72a8-46be-839a-fe639916a32e/volumes" Feb 02 11:07:08 crc kubenswrapper[4782]: I0202 11:07:08.835506 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f8a5cce-1311-4cb0-9a7b-d636e27d6e69" path="/var/lib/kubelet/pods/7f8a5cce-1311-4cb0-9a7b-d636e27d6e69/volumes" Feb 02 11:07:08 crc kubenswrapper[4782]: I0202 11:07:08.836760 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="821635c8-3cf1-408b-8949-81dbc48b07b6" path="/var/lib/kubelet/pods/821635c8-3cf1-408b-8949-81dbc48b07b6/volumes" Feb 02 11:07:08 crc kubenswrapper[4782]: I0202 11:07:08.837360 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b78c9d8b-0793-4e57-8a3d-ba7303f12d37" path="/var/lib/kubelet/pods/b78c9d8b-0793-4e57-8a3d-ba7303f12d37/volumes" Feb 02 11:07:08 crc kubenswrapper[4782]: I0202 11:07:08.837978 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c3c77267-9133-440d-9f4e-536b2a021fdc" path="/var/lib/kubelet/pods/c3c77267-9133-440d-9f4e-536b2a021fdc/volumes" Feb 02 11:07:15 crc kubenswrapper[4782]: I0202 
11:07:15.821456 4782 scope.go:117] "RemoveContainer" containerID="5bd9469df7c42cfd147763cb8f1b67e82d85e708d8dde6eea1a93320f7dbc9c8" Feb 02 11:07:15 crc kubenswrapper[4782]: E0202 11:07:15.822163 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:07:17 crc kubenswrapper[4782]: I0202 11:07:17.033040 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-v4g2v"] Feb 02 11:07:17 crc kubenswrapper[4782]: I0202 11:07:17.040995 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-v4g2v"] Feb 02 11:07:18 crc kubenswrapper[4782]: I0202 11:07:18.808087 4782 generic.go:334] "Generic (PLEG): container finished" podID="425704dd-e289-42f7-8b10-bd817b279099" containerID="d704a337ba153cb759a9029666c65419beb8b579a0125a3a01b8036bcaa12955" exitCode=0 Feb 02 11:07:18 crc kubenswrapper[4782]: I0202 11:07:18.808144 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-c5lr4" event={"ID":"425704dd-e289-42f7-8b10-bd817b279099","Type":"ContainerDied","Data":"d704a337ba153cb759a9029666c65419beb8b579a0125a3a01b8036bcaa12955"} Feb 02 11:07:18 crc kubenswrapper[4782]: I0202 11:07:18.832377 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="843d8da2-ab8c-4938-be4b-aa67af531e1e" path="/var/lib/kubelet/pods/843d8da2-ab8c-4938-be4b-aa67af531e1e/volumes" Feb 02 11:07:20 crc kubenswrapper[4782]: I0202 11:07:20.256377 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-c5lr4" Feb 02 11:07:20 crc kubenswrapper[4782]: I0202 11:07:20.352671 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/425704dd-e289-42f7-8b10-bd817b279099-ssh-key-openstack-edpm-ipam\") pod \"425704dd-e289-42f7-8b10-bd817b279099\" (UID: \"425704dd-e289-42f7-8b10-bd817b279099\") " Feb 02 11:07:20 crc kubenswrapper[4782]: I0202 11:07:20.352903 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ckvj5\" (UniqueName: \"kubernetes.io/projected/425704dd-e289-42f7-8b10-bd817b279099-kube-api-access-ckvj5\") pod \"425704dd-e289-42f7-8b10-bd817b279099\" (UID: \"425704dd-e289-42f7-8b10-bd817b279099\") " Feb 02 11:07:20 crc kubenswrapper[4782]: I0202 11:07:20.352936 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/425704dd-e289-42f7-8b10-bd817b279099-inventory\") pod \"425704dd-e289-42f7-8b10-bd817b279099\" (UID: \"425704dd-e289-42f7-8b10-bd817b279099\") " Feb 02 11:07:20 crc kubenswrapper[4782]: I0202 11:07:20.370909 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/425704dd-e289-42f7-8b10-bd817b279099-kube-api-access-ckvj5" (OuterVolumeSpecName: "kube-api-access-ckvj5") pod "425704dd-e289-42f7-8b10-bd817b279099" (UID: "425704dd-e289-42f7-8b10-bd817b279099"). InnerVolumeSpecName "kube-api-access-ckvj5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:07:20 crc kubenswrapper[4782]: I0202 11:07:20.383229 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/425704dd-e289-42f7-8b10-bd817b279099-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "425704dd-e289-42f7-8b10-bd817b279099" (UID: "425704dd-e289-42f7-8b10-bd817b279099"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:07:20 crc kubenswrapper[4782]: I0202 11:07:20.383685 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/425704dd-e289-42f7-8b10-bd817b279099-inventory" (OuterVolumeSpecName: "inventory") pod "425704dd-e289-42f7-8b10-bd817b279099" (UID: "425704dd-e289-42f7-8b10-bd817b279099"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:07:20 crc kubenswrapper[4782]: I0202 11:07:20.455542 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ckvj5\" (UniqueName: \"kubernetes.io/projected/425704dd-e289-42f7-8b10-bd817b279099-kube-api-access-ckvj5\") on node \"crc\" DevicePath \"\"" Feb 02 11:07:20 crc kubenswrapper[4782]: I0202 11:07:20.455587 4782 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/425704dd-e289-42f7-8b10-bd817b279099-inventory\") on node \"crc\" DevicePath \"\"" Feb 02 11:07:20 crc kubenswrapper[4782]: I0202 11:07:20.455600 4782 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/425704dd-e289-42f7-8b10-bd817b279099-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 02 11:07:20 crc kubenswrapper[4782]: I0202 11:07:20.832209 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-c5lr4" Feb 02 11:07:20 crc kubenswrapper[4782]: I0202 11:07:20.832727 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-c5lr4" event={"ID":"425704dd-e289-42f7-8b10-bd817b279099","Type":"ContainerDied","Data":"603626e15d1c77272fcba0456289f5f80378cd91e749b2e5abd486589571d46a"} Feb 02 11:07:20 crc kubenswrapper[4782]: I0202 11:07:20.832778 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="603626e15d1c77272fcba0456289f5f80378cd91e749b2e5abd486589571d46a" Feb 02 11:07:20 crc kubenswrapper[4782]: I0202 11:07:20.917505 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-g6qb2"] Feb 02 11:07:20 crc kubenswrapper[4782]: E0202 11:07:20.917938 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="425704dd-e289-42f7-8b10-bd817b279099" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 02 11:07:20 crc kubenswrapper[4782]: I0202 11:07:20.917956 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="425704dd-e289-42f7-8b10-bd817b279099" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 02 11:07:20 crc kubenswrapper[4782]: I0202 11:07:20.918150 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="425704dd-e289-42f7-8b10-bd817b279099" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 02 11:07:20 crc kubenswrapper[4782]: I0202 11:07:20.918865 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-g6qb2" Feb 02 11:07:20 crc kubenswrapper[4782]: I0202 11:07:20.920782 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jhgxt" Feb 02 11:07:20 crc kubenswrapper[4782]: I0202 11:07:20.921212 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 02 11:07:20 crc kubenswrapper[4782]: I0202 11:07:20.922290 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 02 11:07:20 crc kubenswrapper[4782]: I0202 11:07:20.923179 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 02 11:07:20 crc kubenswrapper[4782]: I0202 11:07:20.936278 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-g6qb2"] Feb 02 11:07:20 crc kubenswrapper[4782]: I0202 11:07:20.964454 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdjr5\" (UniqueName: \"kubernetes.io/projected/9b7bc661-fee9-41a6-a62e-0af1fc669e85-kube-api-access-cdjr5\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-g6qb2\" (UID: \"9b7bc661-fee9-41a6-a62e-0af1fc669e85\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-g6qb2" Feb 02 11:07:20 crc kubenswrapper[4782]: I0202 11:07:20.964545 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9b7bc661-fee9-41a6-a62e-0af1fc669e85-ssh-key-openstack-edpm-ipam\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-g6qb2\" (UID: \"9b7bc661-fee9-41a6-a62e-0af1fc669e85\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-g6qb2" Feb 02 11:07:20 crc kubenswrapper[4782]: I0202 11:07:20.964666 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9b7bc661-fee9-41a6-a62e-0af1fc669e85-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-g6qb2\" (UID: \"9b7bc661-fee9-41a6-a62e-0af1fc669e85\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-g6qb2" Feb 02 11:07:21 crc kubenswrapper[4782]: I0202 11:07:21.066000 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cdjr5\" (UniqueName: \"kubernetes.io/projected/9b7bc661-fee9-41a6-a62e-0af1fc669e85-kube-api-access-cdjr5\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-g6qb2\" (UID: \"9b7bc661-fee9-41a6-a62e-0af1fc669e85\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-g6qb2" Feb 02 11:07:21 crc kubenswrapper[4782]: I0202 11:07:21.066336 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9b7bc661-fee9-41a6-a62e-0af1fc669e85-ssh-key-openstack-edpm-ipam\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-g6qb2\" (UID: \"9b7bc661-fee9-41a6-a62e-0af1fc669e85\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-g6qb2" Feb 02 11:07:21 crc kubenswrapper[4782]: I0202 11:07:21.066422 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/9b7bc661-fee9-41a6-a62e-0af1fc669e85-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-g6qb2\" (UID: \"9b7bc661-fee9-41a6-a62e-0af1fc669e85\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-g6qb2" Feb 02 11:07:21 crc kubenswrapper[4782]: I0202 11:07:21.071277 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9b7bc661-fee9-41a6-a62e-0af1fc669e85-ssh-key-openstack-edpm-ipam\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-g6qb2\" (UID: \"9b7bc661-fee9-41a6-a62e-0af1fc669e85\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-g6qb2" Feb 02 11:07:21 crc kubenswrapper[4782]: I0202 11:07:21.072236 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9b7bc661-fee9-41a6-a62e-0af1fc669e85-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-g6qb2\" (UID: \"9b7bc661-fee9-41a6-a62e-0af1fc669e85\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-g6qb2" Feb 02 11:07:21 crc kubenswrapper[4782]: I0202 11:07:21.083983 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cdjr5\" (UniqueName: \"kubernetes.io/projected/9b7bc661-fee9-41a6-a62e-0af1fc669e85-kube-api-access-cdjr5\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-g6qb2\" (UID: \"9b7bc661-fee9-41a6-a62e-0af1fc669e85\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-g6qb2" Feb 02 11:07:21 crc kubenswrapper[4782]: I0202 11:07:21.238422 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-g6qb2" Feb 02 11:07:21 crc kubenswrapper[4782]: W0202 11:07:21.810155 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9b7bc661_fee9_41a6_a62e_0af1fc669e85.slice/crio-f9e31a0009614d333d71e62fb83ac024e15efeda63ce3a0a13183461b0e8b18c WatchSource:0}: Error finding container f9e31a0009614d333d71e62fb83ac024e15efeda63ce3a0a13183461b0e8b18c: Status 404 returned error can't find the container with id f9e31a0009614d333d71e62fb83ac024e15efeda63ce3a0a13183461b0e8b18c Feb 02 11:07:21 crc kubenswrapper[4782]: I0202 11:07:21.810320 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-g6qb2"] Feb 02 11:07:21 crc kubenswrapper[4782]: I0202 11:07:21.843625 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-g6qb2" event={"ID":"9b7bc661-fee9-41a6-a62e-0af1fc669e85","Type":"ContainerStarted","Data":"f9e31a0009614d333d71e62fb83ac024e15efeda63ce3a0a13183461b0e8b18c"} Feb 02 11:07:22 crc kubenswrapper[4782]: I0202 11:07:22.850627 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-g6qb2" event={"ID":"9b7bc661-fee9-41a6-a62e-0af1fc669e85","Type":"ContainerStarted","Data":"f5d5891e49acba50900ea7dad6534db383a08dbcaa564c467b692ffae7d6b80a"} Feb 02 11:07:26 crc kubenswrapper[4782]: I0202 11:07:26.821217 4782 scope.go:117] "RemoveContainer" containerID="5bd9469df7c42cfd147763cb8f1b67e82d85e708d8dde6eea1a93320f7dbc9c8" Feb 02 11:07:26 crc kubenswrapper[4782]: E0202 11:07:26.822031 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:07:26 crc kubenswrapper[4782]: I0202 11:07:26.884427 4782 generic.go:334] "Generic (PLEG): container finished" podID="9b7bc661-fee9-41a6-a62e-0af1fc669e85" containerID="f5d5891e49acba50900ea7dad6534db383a08dbcaa564c467b692ffae7d6b80a" exitCode=0 Feb 02 11:07:26 crc kubenswrapper[4782]: I0202 11:07:26.884509 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-g6qb2" event={"ID":"9b7bc661-fee9-41a6-a62e-0af1fc669e85","Type":"ContainerDied","Data":"f5d5891e49acba50900ea7dad6534db383a08dbcaa564c467b692ffae7d6b80a"} Feb 02 11:07:28 crc kubenswrapper[4782]: I0202 11:07:28.377445 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-g6qb2" Feb 02 11:07:28 crc kubenswrapper[4782]: I0202 11:07:28.416390 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9b7bc661-fee9-41a6-a62e-0af1fc669e85-inventory\") pod \"9b7bc661-fee9-41a6-a62e-0af1fc669e85\" (UID: \"9b7bc661-fee9-41a6-a62e-0af1fc669e85\") " Feb 02 11:07:28 crc kubenswrapper[4782]: I0202 11:07:28.416483 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cdjr5\" (UniqueName: \"kubernetes.io/projected/9b7bc661-fee9-41a6-a62e-0af1fc669e85-kube-api-access-cdjr5\") pod \"9b7bc661-fee9-41a6-a62e-0af1fc669e85\" (UID: \"9b7bc661-fee9-41a6-a62e-0af1fc669e85\") " Feb 02 11:07:28 crc kubenswrapper[4782]: I0202 11:07:28.416582 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9b7bc661-fee9-41a6-a62e-0af1fc669e85-ssh-key-openstack-edpm-ipam\") pod \"9b7bc661-fee9-41a6-a62e-0af1fc669e85\" (UID: \"9b7bc661-fee9-41a6-a62e-0af1fc669e85\") " Feb 02 11:07:28 crc kubenswrapper[4782]: I0202 11:07:28.451855 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b7bc661-fee9-41a6-a62e-0af1fc669e85-kube-api-access-cdjr5" (OuterVolumeSpecName: "kube-api-access-cdjr5") pod "9b7bc661-fee9-41a6-a62e-0af1fc669e85" (UID: "9b7bc661-fee9-41a6-a62e-0af1fc669e85"). InnerVolumeSpecName "kube-api-access-cdjr5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:07:28 crc kubenswrapper[4782]: I0202 11:07:28.491990 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b7bc661-fee9-41a6-a62e-0af1fc669e85-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "9b7bc661-fee9-41a6-a62e-0af1fc669e85" (UID: "9b7bc661-fee9-41a6-a62e-0af1fc669e85"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:07:28 crc kubenswrapper[4782]: I0202 11:07:28.517808 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b7bc661-fee9-41a6-a62e-0af1fc669e85-inventory" (OuterVolumeSpecName: "inventory") pod "9b7bc661-fee9-41a6-a62e-0af1fc669e85" (UID: "9b7bc661-fee9-41a6-a62e-0af1fc669e85"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:07:28 crc kubenswrapper[4782]: I0202 11:07:28.519235 4782 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9b7bc661-fee9-41a6-a62e-0af1fc669e85-inventory\") on node \"crc\" DevicePath \"\"" Feb 02 11:07:28 crc kubenswrapper[4782]: I0202 11:07:28.519258 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cdjr5\" (UniqueName: \"kubernetes.io/projected/9b7bc661-fee9-41a6-a62e-0af1fc669e85-kube-api-access-cdjr5\") on node \"crc\" DevicePath \"\"" Feb 02 11:07:28 crc kubenswrapper[4782]: I0202 11:07:28.519274 4782 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9b7bc661-fee9-41a6-a62e-0af1fc669e85-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 02 11:07:28 crc kubenswrapper[4782]: I0202 11:07:28.903702 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-g6qb2" event={"ID":"9b7bc661-fee9-41a6-a62e-0af1fc669e85","Type":"ContainerDied","Data":"f9e31a0009614d333d71e62fb83ac024e15efeda63ce3a0a13183461b0e8b18c"} Feb 02 11:07:28 crc kubenswrapper[4782]: I0202 11:07:28.903749 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f9e31a0009614d333d71e62fb83ac024e15efeda63ce3a0a13183461b0e8b18c" Feb 02 11:07:28 crc kubenswrapper[4782]: I0202 11:07:28.903833 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-g6qb2" Feb 02 11:07:29 crc kubenswrapper[4782]: I0202 11:07:28.990087 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-w4c7x"] Feb 02 11:07:29 crc kubenswrapper[4782]: E0202 11:07:28.990944 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b7bc661-fee9-41a6-a62e-0af1fc669e85" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Feb 02 11:07:29 crc kubenswrapper[4782]: I0202 11:07:28.990963 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b7bc661-fee9-41a6-a62e-0af1fc669e85" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Feb 02 11:07:29 crc kubenswrapper[4782]: I0202 11:07:28.991240 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b7bc661-fee9-41a6-a62e-0af1fc669e85" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Feb 02 11:07:29 crc kubenswrapper[4782]: I0202 11:07:28.992133 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-w4c7x" Feb 02 11:07:29 crc kubenswrapper[4782]: I0202 11:07:28.995222 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 02 11:07:29 crc kubenswrapper[4782]: I0202 11:07:28.999431 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 02 11:07:29 crc kubenswrapper[4782]: I0202 11:07:28.999768 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 02 11:07:29 crc kubenswrapper[4782]: I0202 11:07:28.999919 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jhgxt" Feb 02 11:07:29 crc kubenswrapper[4782]: I0202 11:07:29.035887 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fd4eb6e0-afff-43a6-af04-0193fa711a9a-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-w4c7x\" (UID: \"fd4eb6e0-afff-43a6-af04-0193fa711a9a\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-w4c7x" Feb 02 11:07:29 crc kubenswrapper[4782]: I0202 11:07:29.035980 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fd4eb6e0-afff-43a6-af04-0193fa711a9a-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-w4c7x\" (UID: \"fd4eb6e0-afff-43a6-af04-0193fa711a9a\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-w4c7x" Feb 02 11:07:29 crc kubenswrapper[4782]: I0202 11:07:29.036058 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4hj4\" (UniqueName: \"kubernetes.io/projected/fd4eb6e0-afff-43a6-af04-0193fa711a9a-kube-api-access-j4hj4\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-w4c7x\" (UID: \"fd4eb6e0-afff-43a6-af04-0193fa711a9a\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-w4c7x" Feb 02 11:07:29 crc kubenswrapper[4782]: I0202 11:07:29.044614 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-w4c7x"] Feb 02 11:07:29 crc kubenswrapper[4782]: I0202 11:07:29.137237 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fd4eb6e0-afff-43a6-af04-0193fa711a9a-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-w4c7x\" (UID: \"fd4eb6e0-afff-43a6-af04-0193fa711a9a\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-w4c7x" Feb 02 11:07:29 crc kubenswrapper[4782]: I0202 11:07:29.137321 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fd4eb6e0-afff-43a6-af04-0193fa711a9a-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-w4c7x\" (UID: \"fd4eb6e0-afff-43a6-af04-0193fa711a9a\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-w4c7x" Feb 02 11:07:29 crc kubenswrapper[4782]: I0202 11:07:29.137393 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j4hj4\" (UniqueName: 
\"kubernetes.io/projected/fd4eb6e0-afff-43a6-af04-0193fa711a9a-kube-api-access-j4hj4\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-w4c7x\" (UID: \"fd4eb6e0-afff-43a6-af04-0193fa711a9a\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-w4c7x" Feb 02 11:07:29 crc kubenswrapper[4782]: I0202 11:07:29.142336 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fd4eb6e0-afff-43a6-af04-0193fa711a9a-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-w4c7x\" (UID: \"fd4eb6e0-afff-43a6-af04-0193fa711a9a\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-w4c7x" Feb 02 11:07:29 crc kubenswrapper[4782]: I0202 11:07:29.142886 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fd4eb6e0-afff-43a6-af04-0193fa711a9a-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-w4c7x\" (UID: \"fd4eb6e0-afff-43a6-af04-0193fa711a9a\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-w4c7x" Feb 02 11:07:29 crc kubenswrapper[4782]: I0202 11:07:29.154734 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j4hj4\" (UniqueName: \"kubernetes.io/projected/fd4eb6e0-afff-43a6-af04-0193fa711a9a-kube-api-access-j4hj4\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-w4c7x\" (UID: \"fd4eb6e0-afff-43a6-af04-0193fa711a9a\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-w4c7x" Feb 02 11:07:29 crc kubenswrapper[4782]: I0202 11:07:29.349073 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-w4c7x" Feb 02 11:07:29 crc kubenswrapper[4782]: I0202 11:07:29.881670 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-w4c7x"] Feb 02 11:07:29 crc kubenswrapper[4782]: W0202 11:07:29.895810 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfd4eb6e0_afff_43a6_af04_0193fa711a9a.slice/crio-388e841c194dbe71d449278b9129eaafcd23cad96bde46e007a82f77fbdd76bf WatchSource:0}: Error finding container 388e841c194dbe71d449278b9129eaafcd23cad96bde46e007a82f77fbdd76bf: Status 404 returned error can't find the container with id 388e841c194dbe71d449278b9129eaafcd23cad96bde46e007a82f77fbdd76bf Feb 02 11:07:29 crc kubenswrapper[4782]: I0202 11:07:29.915676 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-w4c7x" event={"ID":"fd4eb6e0-afff-43a6-af04-0193fa711a9a","Type":"ContainerStarted","Data":"388e841c194dbe71d449278b9129eaafcd23cad96bde46e007a82f77fbdd76bf"} Feb 02 11:07:30 crc kubenswrapper[4782]: I0202 11:07:30.925808 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-w4c7x" event={"ID":"fd4eb6e0-afff-43a6-af04-0193fa711a9a","Type":"ContainerStarted","Data":"0e893f699ea19753909fb2dc54c9a946d6efd297f534f8a2fd10b438cd438ecd"} Feb 02 11:07:39 crc kubenswrapper[4782]: I0202 11:07:39.822172 4782 scope.go:117] "RemoveContainer" containerID="5bd9469df7c42cfd147763cb8f1b67e82d85e708d8dde6eea1a93320f7dbc9c8" Feb 02 11:07:39 crc kubenswrapper[4782]: E0202 11:07:39.822983 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:07:43 crc kubenswrapper[4782]: I0202 11:07:43.047268 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-w4c7x" podStartSLOduration=14.631260372 podStartE2EDuration="15.047231944s" podCreationTimestamp="2026-02-02 11:07:28 +0000 UTC" firstStartedPulling="2026-02-02 11:07:29.899276035 +0000 UTC m=+1729.783468751" lastFinishedPulling="2026-02-02 11:07:30.315247607 +0000 UTC m=+1730.199440323" observedRunningTime="2026-02-02 11:07:30.942789707 +0000 UTC m=+1730.826982433" watchObservedRunningTime="2026-02-02 11:07:43.047231944 +0000 UTC m=+1742.931424660" Feb 02 11:07:43 crc kubenswrapper[4782]: I0202 11:07:43.053049 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-bwx58"] Feb 02 11:07:43 crc kubenswrapper[4782]: I0202 11:07:43.063690 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-bwx58"] Feb 02 11:07:44 crc kubenswrapper[4782]: I0202 11:07:44.834861 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f885e8a-3dc8-4c07-ae3c-4c8ab072abc0" path="/var/lib/kubelet/pods/1f885e8a-3dc8-4c07-ae3c-4c8ab072abc0/volumes" Feb 02 11:07:47 crc kubenswrapper[4782]: I0202 11:07:47.033849 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-ztmll"] Feb 02 11:07:47 crc kubenswrapper[4782]: I0202 11:07:47.043407 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-ztmll"] Feb 02 11:07:48 crc kubenswrapper[4782]: I0202 11:07:48.266476 4782 scope.go:117] "RemoveContainer" containerID="86c67676caca480b43ace8b3b556dc1c7777a8a4b569eb0de34ba6545c1ccf6c" Feb 02 11:07:48 crc kubenswrapper[4782]: I0202 11:07:48.296768 4782 scope.go:117] "RemoveContainer" containerID="d918711ae10925784d0ab83a02dc8d40b553f98643dc5469d54bc38912d8020e" Feb 02 11:07:48 crc kubenswrapper[4782]: I0202 11:07:48.331381 4782 scope.go:117] "RemoveContainer" containerID="266185adfe7e4eb354941537aab95c70eb532acbac93a799d1b437d19b25b6c7" Feb 02 11:07:48 crc kubenswrapper[4782]: I0202 11:07:48.404553 4782 scope.go:117] "RemoveContainer" containerID="6d8d47213c18788507ca77e5f6162eb6c017b157cfec70f1dfb0ba7075187097" Feb 02 11:07:48 crc kubenswrapper[4782]: I0202 11:07:48.423883 4782 scope.go:117] "RemoveContainer" containerID="8e0d398b0286ba353cd173b897d449b2563fd8596bcbf1161ae3a708c88b87ef" Feb 02 11:07:48 crc kubenswrapper[4782]: I0202 11:07:48.465696 4782 scope.go:117] "RemoveContainer" containerID="9024b9d44f2a9c5bd7aaa4dc9abd2a12f77d4a6bdfae488ee552a49cc6449554" Feb 02 11:07:48 crc kubenswrapper[4782]: I0202 11:07:48.515984 4782 scope.go:117] "RemoveContainer" containerID="f0e0c9ec29176a7805dbb55ca0554bf08656a1bb5da6a9295d6c51196f8c9acf" Feb 02 11:07:48 crc kubenswrapper[4782]: I0202 11:07:48.535789 4782 scope.go:117] "RemoveContainer" containerID="91ff00aa29fb6af4c20c4ab6c7010da35390db314e4a9d0dc6101bd74c8cfe7c" Feb 02 11:07:48 crc kubenswrapper[4782]: I0202 11:07:48.556464 4782 scope.go:117] "RemoveContainer" containerID="a2c467e584e5732352c3aaba01db962a0b1958e32d2c79a6365d1b8fe2d96e2c" Feb 02 11:07:48 crc kubenswrapper[4782]: I0202 11:07:48.611556 4782 
scope.go:117] "RemoveContainer" containerID="469ae18dd42598dba552dffdd5607faf35c16e63cd9d3d0f900d45cc0954f86f" Feb 02 11:07:48 crc kubenswrapper[4782]: I0202 11:07:48.628358 4782 scope.go:117] "RemoveContainer" containerID="a7e9a4e8ac03aa75d7d2867e0b6e6e12cc8a9019e7c6d838c9869a17f5c4688b" Feb 02 11:07:48 crc kubenswrapper[4782]: I0202 11:07:48.645780 4782 scope.go:117] "RemoveContainer" containerID="489df32e7e5f6a2407566ee0433e9eb8f24a84a3bc401deba1b69bf5b52b02e2" Feb 02 11:07:48 crc kubenswrapper[4782]: I0202 11:07:48.664624 4782 scope.go:117] "RemoveContainer" containerID="642f3a52732b34bccf9c9fbc304bd2cfce8dc967c11a5c31acc742832089e402" Feb 02 11:07:48 crc kubenswrapper[4782]: I0202 11:07:48.837945 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f8943d8a-337b-4852-9c11-55191a08a850" path="/var/lib/kubelet/pods/f8943d8a-337b-4852-9c11-55191a08a850/volumes" Feb 02 11:07:51 crc kubenswrapper[4782]: I0202 11:07:51.094630 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-9zhdd"] Feb 02 11:07:51 crc kubenswrapper[4782]: I0202 11:07:51.107235 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-9zhdd"] Feb 02 11:07:52 crc kubenswrapper[4782]: I0202 11:07:52.833674 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="173458b2-9a63-4456-9bc9-698d1414a679" path="/var/lib/kubelet/pods/173458b2-9a63-4456-9bc9-698d1414a679/volumes" Feb 02 11:07:54 crc kubenswrapper[4782]: I0202 11:07:54.822370 4782 scope.go:117] "RemoveContainer" containerID="5bd9469df7c42cfd147763cb8f1b67e82d85e708d8dde6eea1a93320f7dbc9c8" Feb 02 11:07:54 crc kubenswrapper[4782]: E0202 11:07:54.822924 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:07:55 crc kubenswrapper[4782]: I0202 11:07:55.039546 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-t58qc"] Feb 02 11:07:55 crc kubenswrapper[4782]: I0202 11:07:55.051397 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-t58qc"] Feb 02 11:07:56 crc kubenswrapper[4782]: I0202 11:07:56.834199 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f45d6513-2de0-4ece-bbbc-26c6780cd145" path="/var/lib/kubelet/pods/f45d6513-2de0-4ece-bbbc-26c6780cd145/volumes" Feb 02 11:08:07 crc kubenswrapper[4782]: I0202 11:08:07.034441 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-qjtml"] Feb 02 11:08:07 crc kubenswrapper[4782]: I0202 11:08:07.045898 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-qjtml"] Feb 02 11:08:07 crc kubenswrapper[4782]: I0202 11:08:07.821840 4782 scope.go:117] "RemoveContainer" containerID="5bd9469df7c42cfd147763cb8f1b67e82d85e708d8dde6eea1a93320f7dbc9c8" Feb 02 11:08:07 crc kubenswrapper[4782]: E0202 11:08:07.822169 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:08:08 crc kubenswrapper[4782]: I0202 11:08:08.834525 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="14e3fab7-be93-409c-a88e-85c8d0ca533c" path="/var/lib/kubelet/pods/14e3fab7-be93-409c-a88e-85c8d0ca533c/volumes" Feb 02 11:08:12 crc kubenswrapper[4782]: I0202 11:08:12.047095 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-rvrqj"] Feb 02 11:08:12 crc kubenswrapper[4782]: I0202 11:08:12.061030 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-rvrqj"] Feb 02 11:08:12 crc kubenswrapper[4782]: I0202 11:08:12.833277 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf4fe919-15fe-4478-be0f-8e3bf00147b4" path="/var/lib/kubelet/pods/bf4fe919-15fe-4478-be0f-8e3bf00147b4/volumes" Feb 02 11:08:18 crc kubenswrapper[4782]: I0202 11:08:18.362623 4782 generic.go:334] "Generic (PLEG): container finished" podID="fd4eb6e0-afff-43a6-af04-0193fa711a9a" containerID="0e893f699ea19753909fb2dc54c9a946d6efd297f534f8a2fd10b438cd438ecd" exitCode=0 Feb 02 11:08:18 crc kubenswrapper[4782]: I0202 11:08:18.363265 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-w4c7x" event={"ID":"fd4eb6e0-afff-43a6-af04-0193fa711a9a","Type":"ContainerDied","Data":"0e893f699ea19753909fb2dc54c9a946d6efd297f534f8a2fd10b438cd438ecd"} Feb 02 11:08:19 crc kubenswrapper[4782]: I0202 11:08:19.816218 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-w4c7x" Feb 02 11:08:19 crc kubenswrapper[4782]: I0202 11:08:19.901818 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j4hj4\" (UniqueName: \"kubernetes.io/projected/fd4eb6e0-afff-43a6-af04-0193fa711a9a-kube-api-access-j4hj4\") pod \"fd4eb6e0-afff-43a6-af04-0193fa711a9a\" (UID: \"fd4eb6e0-afff-43a6-af04-0193fa711a9a\") " Feb 02 11:08:19 crc kubenswrapper[4782]: I0202 11:08:19.902050 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fd4eb6e0-afff-43a6-af04-0193fa711a9a-ssh-key-openstack-edpm-ipam\") pod \"fd4eb6e0-afff-43a6-af04-0193fa711a9a\" (UID: \"fd4eb6e0-afff-43a6-af04-0193fa711a9a\") " Feb 02 11:08:19 crc kubenswrapper[4782]: I0202 11:08:19.902104 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fd4eb6e0-afff-43a6-af04-0193fa711a9a-inventory\") pod \"fd4eb6e0-afff-43a6-af04-0193fa711a9a\" (UID: \"fd4eb6e0-afff-43a6-af04-0193fa711a9a\") " Feb 02 11:08:19 crc kubenswrapper[4782]: I0202 11:08:19.913981 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd4eb6e0-afff-43a6-af04-0193fa711a9a-kube-api-access-j4hj4" (OuterVolumeSpecName: "kube-api-access-j4hj4") pod "fd4eb6e0-afff-43a6-af04-0193fa711a9a" (UID: "fd4eb6e0-afff-43a6-af04-0193fa711a9a"). InnerVolumeSpecName "kube-api-access-j4hj4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:08:19 crc kubenswrapper[4782]: I0202 11:08:19.929878 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd4eb6e0-afff-43a6-af04-0193fa711a9a-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "fd4eb6e0-afff-43a6-af04-0193fa711a9a" (UID: "fd4eb6e0-afff-43a6-af04-0193fa711a9a"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:08:19 crc kubenswrapper[4782]: I0202 11:08:19.935633 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd4eb6e0-afff-43a6-af04-0193fa711a9a-inventory" (OuterVolumeSpecName: "inventory") pod "fd4eb6e0-afff-43a6-af04-0193fa711a9a" (UID: "fd4eb6e0-afff-43a6-af04-0193fa711a9a"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:08:20 crc kubenswrapper[4782]: I0202 11:08:20.009081 4782 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fd4eb6e0-afff-43a6-af04-0193fa711a9a-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 02 11:08:20 crc kubenswrapper[4782]: I0202 11:08:20.009138 4782 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fd4eb6e0-afff-43a6-af04-0193fa711a9a-inventory\") on node \"crc\" DevicePath \"\"" Feb 02 11:08:20 crc kubenswrapper[4782]: I0202 11:08:20.009153 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j4hj4\" (UniqueName: \"kubernetes.io/projected/fd4eb6e0-afff-43a6-af04-0193fa711a9a-kube-api-access-j4hj4\") on node \"crc\" DevicePath \"\"" Feb 02 11:08:20 crc kubenswrapper[4782]: I0202 11:08:20.382038 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-w4c7x" event={"ID":"fd4eb6e0-afff-43a6-af04-0193fa711a9a","Type":"ContainerDied","Data":"388e841c194dbe71d449278b9129eaafcd23cad96bde46e007a82f77fbdd76bf"} Feb 02 11:08:20 crc kubenswrapper[4782]: I0202 11:08:20.382080 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="388e841c194dbe71d449278b9129eaafcd23cad96bde46e007a82f77fbdd76bf" Feb 02 11:08:20 crc kubenswrapper[4782]: I0202 11:08:20.382095 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-w4c7x" Feb 02 11:08:20 crc kubenswrapper[4782]: I0202 11:08:20.475967 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-fdzds"] Feb 02 11:08:20 crc kubenswrapper[4782]: E0202 11:08:20.476324 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd4eb6e0-afff-43a6-af04-0193fa711a9a" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 02 11:08:20 crc kubenswrapper[4782]: I0202 11:08:20.476338 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd4eb6e0-afff-43a6-af04-0193fa711a9a" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 02 11:08:20 crc kubenswrapper[4782]: I0202 11:08:20.476479 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd4eb6e0-afff-43a6-af04-0193fa711a9a" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 02 11:08:20 crc kubenswrapper[4782]: I0202 11:08:20.477052 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-fdzds" Feb 02 11:08:20 crc kubenswrapper[4782]: I0202 11:08:20.480029 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 02 11:08:20 crc kubenswrapper[4782]: I0202 11:08:20.481914 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jhgxt" Feb 02 11:08:20 crc kubenswrapper[4782]: I0202 11:08:20.482162 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 02 11:08:20 crc kubenswrapper[4782]: I0202 11:08:20.482282 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 02 11:08:20 crc kubenswrapper[4782]: I0202 11:08:20.495925 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-fdzds"] Feb 02 11:08:20 crc kubenswrapper[4782]: I0202 11:08:20.621781 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/96961a4d-2144-4ca9-852f-f624c591bf50-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-fdzds\" (UID: \"96961a4d-2144-4ca9-852f-f624c591bf50\") " pod="openstack/ssh-known-hosts-edpm-deployment-fdzds" Feb 02 11:08:20 crc kubenswrapper[4782]: I0202 11:08:20.622480 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/96961a4d-2144-4ca9-852f-f624c591bf50-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-fdzds\" (UID: \"96961a4d-2144-4ca9-852f-f624c591bf50\") " pod="openstack/ssh-known-hosts-edpm-deployment-fdzds" Feb 02 11:08:20 crc kubenswrapper[4782]: I0202 11:08:20.622744 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2wvs\" (UniqueName: \"kubernetes.io/projected/96961a4d-2144-4ca9-852f-f624c591bf50-kube-api-access-k2wvs\") pod \"ssh-known-hosts-edpm-deployment-fdzds\" (UID: \"96961a4d-2144-4ca9-852f-f624c591bf50\") " pod="openstack/ssh-known-hosts-edpm-deployment-fdzds" Feb 02 11:08:20 crc kubenswrapper[4782]: I0202 11:08:20.725127 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k2wvs\" (UniqueName: \"kubernetes.io/projected/96961a4d-2144-4ca9-852f-f624c591bf50-kube-api-access-k2wvs\") pod \"ssh-known-hosts-edpm-deployment-fdzds\" (UID: \"96961a4d-2144-4ca9-852f-f624c591bf50\") " pod="openstack/ssh-known-hosts-edpm-deployment-fdzds" Feb 02 11:08:20 crc kubenswrapper[4782]: I0202 11:08:20.725311 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/96961a4d-2144-4ca9-852f-f624c591bf50-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-fdzds\" (UID: \"96961a4d-2144-4ca9-852f-f624c591bf50\") " pod="openstack/ssh-known-hosts-edpm-deployment-fdzds" Feb 02 11:08:20 crc kubenswrapper[4782]: I0202 11:08:20.726508 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/96961a4d-2144-4ca9-852f-f624c591bf50-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-fdzds\" (UID: \"96961a4d-2144-4ca9-852f-f624c591bf50\") " pod="openstack/ssh-known-hosts-edpm-deployment-fdzds" Feb 02 11:08:20 crc 
kubenswrapper[4782]: I0202 11:08:20.735607 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/96961a4d-2144-4ca9-852f-f624c591bf50-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-fdzds\" (UID: \"96961a4d-2144-4ca9-852f-f624c591bf50\") " pod="openstack/ssh-known-hosts-edpm-deployment-fdzds" Feb 02 11:08:20 crc kubenswrapper[4782]: I0202 11:08:20.735694 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/96961a4d-2144-4ca9-852f-f624c591bf50-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-fdzds\" (UID: \"96961a4d-2144-4ca9-852f-f624c591bf50\") " pod="openstack/ssh-known-hosts-edpm-deployment-fdzds" Feb 02 11:08:20 crc kubenswrapper[4782]: I0202 11:08:20.749416 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k2wvs\" (UniqueName: \"kubernetes.io/projected/96961a4d-2144-4ca9-852f-f624c591bf50-kube-api-access-k2wvs\") pod \"ssh-known-hosts-edpm-deployment-fdzds\" (UID: \"96961a4d-2144-4ca9-852f-f624c591bf50\") " pod="openstack/ssh-known-hosts-edpm-deployment-fdzds" Feb 02 11:08:20 crc kubenswrapper[4782]: I0202 11:08:20.803177 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-fdzds" Feb 02 11:08:21 crc kubenswrapper[4782]: I0202 11:08:21.362624 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-fdzds"] Feb 02 11:08:21 crc kubenswrapper[4782]: I0202 11:08:21.395881 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-fdzds" event={"ID":"96961a4d-2144-4ca9-852f-f624c591bf50","Type":"ContainerStarted","Data":"dda7b8319db5e2d132b70457bf315d27e55157a4d078744fcda722fc4a503506"} Feb 02 11:08:21 crc kubenswrapper[4782]: I0202 11:08:21.822251 4782 scope.go:117] "RemoveContainer" containerID="5bd9469df7c42cfd147763cb8f1b67e82d85e708d8dde6eea1a93320f7dbc9c8" Feb 02 11:08:21 crc kubenswrapper[4782]: E0202 11:08:21.822605 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:08:22 crc kubenswrapper[4782]: I0202 11:08:22.412840 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-fdzds" event={"ID":"96961a4d-2144-4ca9-852f-f624c591bf50","Type":"ContainerStarted","Data":"a3652484620aa178a685846c99f7de4b05cf1ea0f50bee5c8828a07d27f3b419"} Feb 02 11:08:22 crc kubenswrapper[4782]: I0202 11:08:22.437933 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-fdzds" podStartSLOduration=2.027112531 podStartE2EDuration="2.437910265s" podCreationTimestamp="2026-02-02 11:08:20 +0000 UTC" firstStartedPulling="2026-02-02 11:08:21.376930119 +0000 UTC m=+1781.261122835" lastFinishedPulling="2026-02-02 11:08:21.787727853 +0000 UTC m=+1781.671920569" observedRunningTime="2026-02-02 11:08:22.426101675 +0000 UTC m=+1782.310294391" watchObservedRunningTime="2026-02-02 11:08:22.437910265 +0000 UTC m=+1782.322102981" Feb 02 11:08:29 crc 
kubenswrapper[4782]: I0202 11:08:29.474811 4782 generic.go:334] "Generic (PLEG): container finished" podID="96961a4d-2144-4ca9-852f-f624c591bf50" containerID="a3652484620aa178a685846c99f7de4b05cf1ea0f50bee5c8828a07d27f3b419" exitCode=0 Feb 02 11:08:29 crc kubenswrapper[4782]: I0202 11:08:29.474971 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-fdzds" event={"ID":"96961a4d-2144-4ca9-852f-f624c591bf50","Type":"ContainerDied","Data":"a3652484620aa178a685846c99f7de4b05cf1ea0f50bee5c8828a07d27f3b419"} Feb 02 11:08:30 crc kubenswrapper[4782]: I0202 11:08:30.958747 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-fdzds" Feb 02 11:08:31 crc kubenswrapper[4782]: I0202 11:08:31.020136 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/96961a4d-2144-4ca9-852f-f624c591bf50-inventory-0\") pod \"96961a4d-2144-4ca9-852f-f624c591bf50\" (UID: \"96961a4d-2144-4ca9-852f-f624c591bf50\") " Feb 02 11:08:31 crc kubenswrapper[4782]: I0202 11:08:31.020244 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/96961a4d-2144-4ca9-852f-f624c591bf50-ssh-key-openstack-edpm-ipam\") pod \"96961a4d-2144-4ca9-852f-f624c591bf50\" (UID: \"96961a4d-2144-4ca9-852f-f624c591bf50\") " Feb 02 11:08:31 crc kubenswrapper[4782]: I0202 11:08:31.020295 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k2wvs\" (UniqueName: \"kubernetes.io/projected/96961a4d-2144-4ca9-852f-f624c591bf50-kube-api-access-k2wvs\") pod \"96961a4d-2144-4ca9-852f-f624c591bf50\" (UID: \"96961a4d-2144-4ca9-852f-f624c591bf50\") " Feb 02 11:08:31 crc kubenswrapper[4782]: I0202 11:08:31.030841 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96961a4d-2144-4ca9-852f-f624c591bf50-kube-api-access-k2wvs" (OuterVolumeSpecName: "kube-api-access-k2wvs") pod "96961a4d-2144-4ca9-852f-f624c591bf50" (UID: "96961a4d-2144-4ca9-852f-f624c591bf50"). InnerVolumeSpecName "kube-api-access-k2wvs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:08:31 crc kubenswrapper[4782]: I0202 11:08:31.054560 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96961a4d-2144-4ca9-852f-f624c591bf50-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "96961a4d-2144-4ca9-852f-f624c591bf50" (UID: "96961a4d-2144-4ca9-852f-f624c591bf50"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:08:31 crc kubenswrapper[4782]: I0202 11:08:31.094088 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96961a4d-2144-4ca9-852f-f624c591bf50-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "96961a4d-2144-4ca9-852f-f624c591bf50" (UID: "96961a4d-2144-4ca9-852f-f624c591bf50"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:08:31 crc kubenswrapper[4782]: I0202 11:08:31.122862 4782 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/96961a4d-2144-4ca9-852f-f624c591bf50-inventory-0\") on node \"crc\" DevicePath \"\"" Feb 02 11:08:31 crc kubenswrapper[4782]: I0202 11:08:31.122907 4782 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/96961a4d-2144-4ca9-852f-f624c591bf50-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 02 11:08:31 crc kubenswrapper[4782]: I0202 11:08:31.122929 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k2wvs\" (UniqueName: \"kubernetes.io/projected/96961a4d-2144-4ca9-852f-f624c591bf50-kube-api-access-k2wvs\") on node \"crc\" DevicePath \"\"" Feb 02 11:08:31 crc kubenswrapper[4782]: I0202 11:08:31.492488 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-fdzds" event={"ID":"96961a4d-2144-4ca9-852f-f624c591bf50","Type":"ContainerDied","Data":"dda7b8319db5e2d132b70457bf315d27e55157a4d078744fcda722fc4a503506"} Feb 02 11:08:31 crc kubenswrapper[4782]: I0202 11:08:31.492529 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dda7b8319db5e2d132b70457bf315d27e55157a4d078744fcda722fc4a503506" Feb 02 11:08:31 crc kubenswrapper[4782]: I0202 11:08:31.492597 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-fdzds" Feb 02 11:08:31 crc kubenswrapper[4782]: I0202 11:08:31.576501 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-fnhqj"] Feb 02 11:08:31 crc kubenswrapper[4782]: E0202 11:08:31.576998 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96961a4d-2144-4ca9-852f-f624c591bf50" containerName="ssh-known-hosts-edpm-deployment" Feb 02 11:08:31 crc kubenswrapper[4782]: I0202 11:08:31.577024 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="96961a4d-2144-4ca9-852f-f624c591bf50" containerName="ssh-known-hosts-edpm-deployment" Feb 02 11:08:31 crc kubenswrapper[4782]: I0202 11:08:31.577240 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="96961a4d-2144-4ca9-852f-f624c591bf50" containerName="ssh-known-hosts-edpm-deployment" Feb 02 11:08:31 crc kubenswrapper[4782]: I0202 11:08:31.578034 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-fnhqj" Feb 02 11:08:31 crc kubenswrapper[4782]: I0202 11:08:31.580981 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 02 11:08:31 crc kubenswrapper[4782]: I0202 11:08:31.584040 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jhgxt" Feb 02 11:08:31 crc kubenswrapper[4782]: I0202 11:08:31.584255 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 02 11:08:31 crc kubenswrapper[4782]: I0202 11:08:31.585817 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-fnhqj"] Feb 02 11:08:31 crc kubenswrapper[4782]: I0202 11:08:31.604260 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 02 11:08:31 crc kubenswrapper[4782]: I0202 11:08:31.631705 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cdbaecb3-a52c-45c2-aa69-a9eac6ffea63-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-fnhqj\" (UID: \"cdbaecb3-a52c-45c2-aa69-a9eac6ffea63\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-fnhqj" Feb 02 11:08:31 crc kubenswrapper[4782]: I0202 11:08:31.631776 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t98jz\" (UniqueName: \"kubernetes.io/projected/cdbaecb3-a52c-45c2-aa69-a9eac6ffea63-kube-api-access-t98jz\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-fnhqj\" (UID: \"cdbaecb3-a52c-45c2-aa69-a9eac6ffea63\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-fnhqj" Feb 02 11:08:31 crc kubenswrapper[4782]: I0202 11:08:31.632130 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cdbaecb3-a52c-45c2-aa69-a9eac6ffea63-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-fnhqj\" (UID: \"cdbaecb3-a52c-45c2-aa69-a9eac6ffea63\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-fnhqj" Feb 02 11:08:31 crc kubenswrapper[4782]: I0202 11:08:31.733453 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cdbaecb3-a52c-45c2-aa69-a9eac6ffea63-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-fnhqj\" (UID: \"cdbaecb3-a52c-45c2-aa69-a9eac6ffea63\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-fnhqj" Feb 02 11:08:31 crc kubenswrapper[4782]: I0202 11:08:31.733516 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t98jz\" (UniqueName: \"kubernetes.io/projected/cdbaecb3-a52c-45c2-aa69-a9eac6ffea63-kube-api-access-t98jz\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-fnhqj\" (UID: \"cdbaecb3-a52c-45c2-aa69-a9eac6ffea63\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-fnhqj" Feb 02 11:08:31 crc kubenswrapper[4782]: I0202 11:08:31.733684 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cdbaecb3-a52c-45c2-aa69-a9eac6ffea63-ssh-key-openstack-edpm-ipam\") pod 
\"run-os-edpm-deployment-openstack-edpm-ipam-fnhqj\" (UID: \"cdbaecb3-a52c-45c2-aa69-a9eac6ffea63\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-fnhqj" Feb 02 11:08:31 crc kubenswrapper[4782]: I0202 11:08:31.738109 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cdbaecb3-a52c-45c2-aa69-a9eac6ffea63-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-fnhqj\" (UID: \"cdbaecb3-a52c-45c2-aa69-a9eac6ffea63\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-fnhqj" Feb 02 11:08:31 crc kubenswrapper[4782]: I0202 11:08:31.739257 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cdbaecb3-a52c-45c2-aa69-a9eac6ffea63-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-fnhqj\" (UID: \"cdbaecb3-a52c-45c2-aa69-a9eac6ffea63\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-fnhqj" Feb 02 11:08:31 crc kubenswrapper[4782]: I0202 11:08:31.754136 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t98jz\" (UniqueName: \"kubernetes.io/projected/cdbaecb3-a52c-45c2-aa69-a9eac6ffea63-kube-api-access-t98jz\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-fnhqj\" (UID: \"cdbaecb3-a52c-45c2-aa69-a9eac6ffea63\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-fnhqj" Feb 02 11:08:31 crc kubenswrapper[4782]: I0202 11:08:31.908247 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-fnhqj" Feb 02 11:08:32 crc kubenswrapper[4782]: I0202 11:08:32.472168 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-fnhqj"] Feb 02 11:08:32 crc kubenswrapper[4782]: I0202 11:08:32.504022 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-fnhqj" event={"ID":"cdbaecb3-a52c-45c2-aa69-a9eac6ffea63","Type":"ContainerStarted","Data":"6f5e81442e7a3835f18b2d3c1891db6eb302fa70bd8b3812f6c59e9ba5490a3c"} Feb 02 11:08:33 crc kubenswrapper[4782]: I0202 11:08:33.530145 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-fnhqj" event={"ID":"cdbaecb3-a52c-45c2-aa69-a9eac6ffea63","Type":"ContainerStarted","Data":"602d6c4af1ac8198091f7c68ae19fa3196c9ffcd0620c4b0def39e668a4ec792"} Feb 02 11:08:33 crc kubenswrapper[4782]: I0202 11:08:33.821509 4782 scope.go:117] "RemoveContainer" containerID="5bd9469df7c42cfd147763cb8f1b67e82d85e708d8dde6eea1a93320f7dbc9c8" Feb 02 11:08:33 crc kubenswrapper[4782]: E0202 11:08:33.821770 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:08:41 crc kubenswrapper[4782]: I0202 11:08:41.597062 4782 generic.go:334] "Generic (PLEG): container finished" podID="cdbaecb3-a52c-45c2-aa69-a9eac6ffea63" containerID="602d6c4af1ac8198091f7c68ae19fa3196c9ffcd0620c4b0def39e668a4ec792" exitCode=0 Feb 02 11:08:41 crc kubenswrapper[4782]: I0202 11:08:41.597113 4782 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-fnhqj" event={"ID":"cdbaecb3-a52c-45c2-aa69-a9eac6ffea63","Type":"ContainerDied","Data":"602d6c4af1ac8198091f7c68ae19fa3196c9ffcd0620c4b0def39e668a4ec792"} Feb 02 11:08:42 crc kubenswrapper[4782]: I0202 11:08:42.991736 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-fnhqj" Feb 02 11:08:43 crc kubenswrapper[4782]: I0202 11:08:43.030171 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cdbaecb3-a52c-45c2-aa69-a9eac6ffea63-inventory\") pod \"cdbaecb3-a52c-45c2-aa69-a9eac6ffea63\" (UID: \"cdbaecb3-a52c-45c2-aa69-a9eac6ffea63\") " Feb 02 11:08:43 crc kubenswrapper[4782]: I0202 11:08:43.030404 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cdbaecb3-a52c-45c2-aa69-a9eac6ffea63-ssh-key-openstack-edpm-ipam\") pod \"cdbaecb3-a52c-45c2-aa69-a9eac6ffea63\" (UID: \"cdbaecb3-a52c-45c2-aa69-a9eac6ffea63\") " Feb 02 11:08:43 crc kubenswrapper[4782]: I0202 11:08:43.030470 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t98jz\" (UniqueName: \"kubernetes.io/projected/cdbaecb3-a52c-45c2-aa69-a9eac6ffea63-kube-api-access-t98jz\") pod \"cdbaecb3-a52c-45c2-aa69-a9eac6ffea63\" (UID: \"cdbaecb3-a52c-45c2-aa69-a9eac6ffea63\") " Feb 02 11:08:43 crc kubenswrapper[4782]: I0202 11:08:43.038206 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cdbaecb3-a52c-45c2-aa69-a9eac6ffea63-kube-api-access-t98jz" (OuterVolumeSpecName: "kube-api-access-t98jz") pod "cdbaecb3-a52c-45c2-aa69-a9eac6ffea63" (UID: "cdbaecb3-a52c-45c2-aa69-a9eac6ffea63"). InnerVolumeSpecName "kube-api-access-t98jz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:08:43 crc kubenswrapper[4782]: I0202 11:08:43.056793 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cdbaecb3-a52c-45c2-aa69-a9eac6ffea63-inventory" (OuterVolumeSpecName: "inventory") pod "cdbaecb3-a52c-45c2-aa69-a9eac6ffea63" (UID: "cdbaecb3-a52c-45c2-aa69-a9eac6ffea63"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:08:43 crc kubenswrapper[4782]: I0202 11:08:43.056835 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cdbaecb3-a52c-45c2-aa69-a9eac6ffea63-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "cdbaecb3-a52c-45c2-aa69-a9eac6ffea63" (UID: "cdbaecb3-a52c-45c2-aa69-a9eac6ffea63"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:08:43 crc kubenswrapper[4782]: I0202 11:08:43.132767 4782 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cdbaecb3-a52c-45c2-aa69-a9eac6ffea63-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 02 11:08:43 crc kubenswrapper[4782]: I0202 11:08:43.132832 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t98jz\" (UniqueName: \"kubernetes.io/projected/cdbaecb3-a52c-45c2-aa69-a9eac6ffea63-kube-api-access-t98jz\") on node \"crc\" DevicePath \"\"" Feb 02 11:08:43 crc kubenswrapper[4782]: I0202 11:08:43.132845 4782 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cdbaecb3-a52c-45c2-aa69-a9eac6ffea63-inventory\") on node \"crc\" DevicePath \"\"" Feb 02 11:08:43 crc kubenswrapper[4782]: I0202 11:08:43.614335 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-fnhqj" event={"ID":"cdbaecb3-a52c-45c2-aa69-a9eac6ffea63","Type":"ContainerDied","Data":"6f5e81442e7a3835f18b2d3c1891db6eb302fa70bd8b3812f6c59e9ba5490a3c"} Feb 02 11:08:43 crc kubenswrapper[4782]: I0202 11:08:43.614383 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6f5e81442e7a3835f18b2d3c1891db6eb302fa70bd8b3812f6c59e9ba5490a3c" Feb 02 11:08:43 crc kubenswrapper[4782]: I0202 11:08:43.614394 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-fnhqj" Feb 02 11:08:43 crc kubenswrapper[4782]: I0202 11:08:43.708127 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-45hfx"] Feb 02 11:08:43 crc kubenswrapper[4782]: E0202 11:08:43.708509 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cdbaecb3-a52c-45c2-aa69-a9eac6ffea63" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 02 11:08:43 crc kubenswrapper[4782]: I0202 11:08:43.708535 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="cdbaecb3-a52c-45c2-aa69-a9eac6ffea63" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 02 11:08:43 crc kubenswrapper[4782]: I0202 11:08:43.708779 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="cdbaecb3-a52c-45c2-aa69-a9eac6ffea63" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 02 11:08:43 crc kubenswrapper[4782]: I0202 11:08:43.716061 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-45hfx" Feb 02 11:08:43 crc kubenswrapper[4782]: I0202 11:08:43.717764 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jhgxt" Feb 02 11:08:43 crc kubenswrapper[4782]: I0202 11:08:43.717850 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-45hfx"] Feb 02 11:08:43 crc kubenswrapper[4782]: I0202 11:08:43.718577 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 02 11:08:43 crc kubenswrapper[4782]: I0202 11:08:43.718813 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 02 11:08:43 crc kubenswrapper[4782]: I0202 11:08:43.719284 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 02 11:08:43 crc kubenswrapper[4782]: I0202 11:08:43.845461 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djdgq\" (UniqueName: \"kubernetes.io/projected/ce2c78bc-99b3-4deb-871f-923a3a42d5ff-kube-api-access-djdgq\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-45hfx\" (UID: \"ce2c78bc-99b3-4deb-871f-923a3a42d5ff\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-45hfx" Feb 02 11:08:43 crc kubenswrapper[4782]: I0202 11:08:43.845529 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ce2c78bc-99b3-4deb-871f-923a3a42d5ff-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-45hfx\" (UID: \"ce2c78bc-99b3-4deb-871f-923a3a42d5ff\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-45hfx" Feb 02 11:08:43 crc kubenswrapper[4782]: I0202 11:08:43.845611 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ce2c78bc-99b3-4deb-871f-923a3a42d5ff-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-45hfx\" (UID: \"ce2c78bc-99b3-4deb-871f-923a3a42d5ff\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-45hfx" Feb 02 11:08:43 crc kubenswrapper[4782]: I0202 11:08:43.947259 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ce2c78bc-99b3-4deb-871f-923a3a42d5ff-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-45hfx\" (UID: \"ce2c78bc-99b3-4deb-871f-923a3a42d5ff\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-45hfx" Feb 02 11:08:43 crc kubenswrapper[4782]: I0202 11:08:43.947455 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djdgq\" (UniqueName: \"kubernetes.io/projected/ce2c78bc-99b3-4deb-871f-923a3a42d5ff-kube-api-access-djdgq\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-45hfx\" (UID: \"ce2c78bc-99b3-4deb-871f-923a3a42d5ff\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-45hfx" Feb 02 11:08:43 crc kubenswrapper[4782]: I0202 11:08:43.947514 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ce2c78bc-99b3-4deb-871f-923a3a42d5ff-ssh-key-openstack-edpm-ipam\") pod 
\"reboot-os-edpm-deployment-openstack-edpm-ipam-45hfx\" (UID: \"ce2c78bc-99b3-4deb-871f-923a3a42d5ff\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-45hfx" Feb 02 11:08:43 crc kubenswrapper[4782]: I0202 11:08:43.958321 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ce2c78bc-99b3-4deb-871f-923a3a42d5ff-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-45hfx\" (UID: \"ce2c78bc-99b3-4deb-871f-923a3a42d5ff\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-45hfx" Feb 02 11:08:43 crc kubenswrapper[4782]: I0202 11:08:43.965117 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ce2c78bc-99b3-4deb-871f-923a3a42d5ff-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-45hfx\" (UID: \"ce2c78bc-99b3-4deb-871f-923a3a42d5ff\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-45hfx" Feb 02 11:08:43 crc kubenswrapper[4782]: I0202 11:08:43.982254 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-djdgq\" (UniqueName: \"kubernetes.io/projected/ce2c78bc-99b3-4deb-871f-923a3a42d5ff-kube-api-access-djdgq\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-45hfx\" (UID: \"ce2c78bc-99b3-4deb-871f-923a3a42d5ff\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-45hfx" Feb 02 11:08:44 crc kubenswrapper[4782]: I0202 11:08:44.040455 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-45hfx" Feb 02 11:08:44 crc kubenswrapper[4782]: I0202 11:08:44.567966 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-45hfx"] Feb 02 11:08:44 crc kubenswrapper[4782]: I0202 11:08:44.624943 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-45hfx" event={"ID":"ce2c78bc-99b3-4deb-871f-923a3a42d5ff","Type":"ContainerStarted","Data":"89f6caa577019f7d8bf7fd27767a38e45ca905d949b88e53e2d3fdab397bb35f"} Feb 02 11:08:45 crc kubenswrapper[4782]: I0202 11:08:45.637634 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-45hfx" event={"ID":"ce2c78bc-99b3-4deb-871f-923a3a42d5ff","Type":"ContainerStarted","Data":"1d498ad8d1bfb2287778416cd5cee7768c1031fd2150b0b80bdee8cb15dfaffd"} Feb 02 11:08:45 crc kubenswrapper[4782]: I0202 11:08:45.667264 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-45hfx" podStartSLOduration=2.201148819 podStartE2EDuration="2.667247342s" podCreationTimestamp="2026-02-02 11:08:43 +0000 UTC" firstStartedPulling="2026-02-02 11:08:44.568166782 +0000 UTC m=+1804.452359518" lastFinishedPulling="2026-02-02 11:08:45.034265315 +0000 UTC m=+1804.918458041" observedRunningTime="2026-02-02 11:08:45.658835461 +0000 UTC m=+1805.543028197" watchObservedRunningTime="2026-02-02 11:08:45.667247342 +0000 UTC m=+1805.551440048" Feb 02 11:08:46 crc kubenswrapper[4782]: I0202 11:08:46.821337 4782 scope.go:117] "RemoveContainer" containerID="5bd9469df7c42cfd147763cb8f1b67e82d85e708d8dde6eea1a93320f7dbc9c8" Feb 02 11:08:46 crc kubenswrapper[4782]: E0202 11:08:46.821592 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:08:48 crc kubenswrapper[4782]: I0202 11:08:48.916038 4782 scope.go:117] "RemoveContainer" containerID="882286d92ef94b177095925f1761989436448214282f382f07a04e273ec62549" Feb 02 11:08:48 crc kubenswrapper[4782]: I0202 11:08:48.952768 4782 scope.go:117] "RemoveContainer" containerID="f96dc9d1eca03acac5731eacf624fbd7091513cfed0cc461bda4976a5d7b4254" Feb 02 11:08:49 crc kubenswrapper[4782]: I0202 11:08:49.002759 4782 scope.go:117] "RemoveContainer" containerID="86ae63a42dd213a82d90c920d379402488562da05112fd3a36da50fdfc632f7d" Feb 02 11:08:49 crc kubenswrapper[4782]: I0202 11:08:49.033167 4782 scope.go:117] "RemoveContainer" containerID="bb8bee75583f03091be99a3eb7b070a749409afcb16ccfe4ae7f61a996ce78c5" Feb 02 11:08:49 crc kubenswrapper[4782]: I0202 11:08:49.089742 4782 scope.go:117] "RemoveContainer" containerID="e47203a1a44b3b88fecb28ffdf42000d4b85a4d8f915c7dc05cd21438f5304c4" Feb 02 11:08:54 crc kubenswrapper[4782]: I0202 11:08:54.714476 4782 generic.go:334] "Generic (PLEG): container finished" podID="ce2c78bc-99b3-4deb-871f-923a3a42d5ff" containerID="1d498ad8d1bfb2287778416cd5cee7768c1031fd2150b0b80bdee8cb15dfaffd" exitCode=0 Feb 02 11:08:54 crc kubenswrapper[4782]: I0202 11:08:54.714670 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-45hfx" event={"ID":"ce2c78bc-99b3-4deb-871f-923a3a42d5ff","Type":"ContainerDied","Data":"1d498ad8d1bfb2287778416cd5cee7768c1031fd2150b0b80bdee8cb15dfaffd"} Feb 02 11:08:56 crc kubenswrapper[4782]: I0202 11:08:56.063679 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-j8z8n"] Feb 02 11:08:56 crc kubenswrapper[4782]: I0202 11:08:56.092812 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-964hl"] Feb 02 11:08:56 crc kubenswrapper[4782]: I0202 11:08:56.098844 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-45hfx" Feb 02 11:08:56 crc kubenswrapper[4782]: I0202 11:08:56.110921 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-9147-account-create-update-qcs9t"] Feb 02 11:08:56 crc kubenswrapper[4782]: I0202 11:08:56.122670 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-3e7e-account-create-update-n4kct"] Feb 02 11:08:56 crc kubenswrapper[4782]: I0202 11:08:56.130082 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-3e7e-account-create-update-n4kct"] Feb 02 11:08:56 crc kubenswrapper[4782]: I0202 11:08:56.143443 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-9147-account-create-update-qcs9t"] Feb 02 11:08:56 crc kubenswrapper[4782]: I0202 11:08:56.150984 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-964hl"] Feb 02 11:08:56 crc kubenswrapper[4782]: I0202 11:08:56.160776 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-j8z8n"] Feb 02 11:08:56 crc kubenswrapper[4782]: I0202 11:08:56.162185 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-djdgq\" (UniqueName: \"kubernetes.io/projected/ce2c78bc-99b3-4deb-871f-923a3a42d5ff-kube-api-access-djdgq\") pod \"ce2c78bc-99b3-4deb-871f-923a3a42d5ff\" (UID: \"ce2c78bc-99b3-4deb-871f-923a3a42d5ff\") " Feb 02 11:08:56 crc kubenswrapper[4782]: I0202 11:08:56.162331 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ce2c78bc-99b3-4deb-871f-923a3a42d5ff-inventory\") pod \"ce2c78bc-99b3-4deb-871f-923a3a42d5ff\" (UID: \"ce2c78bc-99b3-4deb-871f-923a3a42d5ff\") " Feb 02 11:08:56 crc kubenswrapper[4782]: I0202 11:08:56.162547 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ce2c78bc-99b3-4deb-871f-923a3a42d5ff-ssh-key-openstack-edpm-ipam\") pod \"ce2c78bc-99b3-4deb-871f-923a3a42d5ff\" (UID: \"ce2c78bc-99b3-4deb-871f-923a3a42d5ff\") " Feb 02 11:08:56 crc kubenswrapper[4782]: I0202 11:08:56.174230 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce2c78bc-99b3-4deb-871f-923a3a42d5ff-kube-api-access-djdgq" (OuterVolumeSpecName: "kube-api-access-djdgq") pod "ce2c78bc-99b3-4deb-871f-923a3a42d5ff" (UID: "ce2c78bc-99b3-4deb-871f-923a3a42d5ff"). InnerVolumeSpecName "kube-api-access-djdgq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:08:56 crc kubenswrapper[4782]: I0202 11:08:56.204379 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce2c78bc-99b3-4deb-871f-923a3a42d5ff-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "ce2c78bc-99b3-4deb-871f-923a3a42d5ff" (UID: "ce2c78bc-99b3-4deb-871f-923a3a42d5ff"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:08:56 crc kubenswrapper[4782]: I0202 11:08:56.218481 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce2c78bc-99b3-4deb-871f-923a3a42d5ff-inventory" (OuterVolumeSpecName: "inventory") pod "ce2c78bc-99b3-4deb-871f-923a3a42d5ff" (UID: "ce2c78bc-99b3-4deb-871f-923a3a42d5ff"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:08:56 crc kubenswrapper[4782]: I0202 11:08:56.265177 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-djdgq\" (UniqueName: \"kubernetes.io/projected/ce2c78bc-99b3-4deb-871f-923a3a42d5ff-kube-api-access-djdgq\") on node \"crc\" DevicePath \"\"" Feb 02 11:08:56 crc kubenswrapper[4782]: I0202 11:08:56.265208 4782 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ce2c78bc-99b3-4deb-871f-923a3a42d5ff-inventory\") on node \"crc\" DevicePath \"\"" Feb 02 11:08:56 crc kubenswrapper[4782]: I0202 11:08:56.265220 4782 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ce2c78bc-99b3-4deb-871f-923a3a42d5ff-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 02 11:08:56 crc kubenswrapper[4782]: I0202 11:08:56.730998 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-45hfx" event={"ID":"ce2c78bc-99b3-4deb-871f-923a3a42d5ff","Type":"ContainerDied","Data":"89f6caa577019f7d8bf7fd27767a38e45ca905d949b88e53e2d3fdab397bb35f"} Feb 02 11:08:56 crc kubenswrapper[4782]: I0202 11:08:56.731042 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="89f6caa577019f7d8bf7fd27767a38e45ca905d949b88e53e2d3fdab397bb35f" Feb 02 11:08:56 crc kubenswrapper[4782]: I0202 11:08:56.731108 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-45hfx" Feb 02 11:08:56 crc kubenswrapper[4782]: I0202 11:08:56.831444 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="07bbffca-46a4-4693-ae3f-011a5ee0e317" path="/var/lib/kubelet/pods/07bbffca-46a4-4693-ae3f-011a5ee0e317/volumes" Feb 02 11:08:56 crc kubenswrapper[4782]: I0202 11:08:56.832395 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0abc6f3c-1f7d-4f48-8beb-205307984cdc" path="/var/lib/kubelet/pods/0abc6f3c-1f7d-4f48-8beb-205307984cdc/volumes" Feb 02 11:08:56 crc kubenswrapper[4782]: I0202 11:08:56.833121 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a9a0fe2-4862-47e1-91d0-553d95235f39" path="/var/lib/kubelet/pods/6a9a0fe2-4862-47e1-91d0-553d95235f39/volumes" Feb 02 11:08:56 crc kubenswrapper[4782]: I0202 11:08:56.833802 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b55df6c-8971-415a-a934-0ec48a149b81" path="/var/lib/kubelet/pods/8b55df6c-8971-415a-a934-0ec48a149b81/volumes" Feb 02 11:08:57 crc kubenswrapper[4782]: I0202 11:08:57.039993 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-627f-account-create-update-h6hdk"] Feb 02 11:08:57 crc kubenswrapper[4782]: I0202 11:08:57.051048 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-jnw6j"] Feb 02 11:08:57 crc kubenswrapper[4782]: I0202 11:08:57.061086 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-627f-account-create-update-h6hdk"] Feb 02 11:08:57 crc kubenswrapper[4782]: I0202 11:08:57.068782 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-jnw6j"] Feb 02 11:08:58 crc kubenswrapper[4782]: I0202 11:08:58.831954 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a9b75d8c-9435-483f-8e95-97690314cfb5" 
path="/var/lib/kubelet/pods/a9b75d8c-9435-483f-8e95-97690314cfb5/volumes" Feb 02 11:08:58 crc kubenswrapper[4782]: I0202 11:08:58.832516 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5eccd3e-f895-4c2f-a1e5-c337a89d2439" path="/var/lib/kubelet/pods/c5eccd3e-f895-4c2f-a1e5-c337a89d2439/volumes" Feb 02 11:09:01 crc kubenswrapper[4782]: I0202 11:09:01.821149 4782 scope.go:117] "RemoveContainer" containerID="5bd9469df7c42cfd147763cb8f1b67e82d85e708d8dde6eea1a93320f7dbc9c8" Feb 02 11:09:01 crc kubenswrapper[4782]: E0202 11:09:01.821992 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:09:16 crc kubenswrapper[4782]: I0202 11:09:16.828621 4782 scope.go:117] "RemoveContainer" containerID="5bd9469df7c42cfd147763cb8f1b67e82d85e708d8dde6eea1a93320f7dbc9c8" Feb 02 11:09:16 crc kubenswrapper[4782]: E0202 11:09:16.829505 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:09:25 crc kubenswrapper[4782]: I0202 11:09:25.069854 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-lcdcm"] Feb 02 11:09:25 crc kubenswrapper[4782]: I0202 11:09:25.080205 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-lcdcm"] Feb 02 11:09:26 crc kubenswrapper[4782]: I0202 11:09:26.833954 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f0b52751-0177-4fa7-8d87-fca1cab9a096" path="/var/lib/kubelet/pods/f0b52751-0177-4fa7-8d87-fca1cab9a096/volumes" Feb 02 11:09:30 crc kubenswrapper[4782]: I0202 11:09:30.826306 4782 scope.go:117] "RemoveContainer" containerID="5bd9469df7c42cfd147763cb8f1b67e82d85e708d8dde6eea1a93320f7dbc9c8" Feb 02 11:09:30 crc kubenswrapper[4782]: E0202 11:09:30.826760 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:09:44 crc kubenswrapper[4782]: I0202 11:09:44.821949 4782 scope.go:117] "RemoveContainer" containerID="5bd9469df7c42cfd147763cb8f1b67e82d85e708d8dde6eea1a93320f7dbc9c8" Feb 02 11:09:44 crc kubenswrapper[4782]: E0202 11:09:44.822786 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:09:49 crc kubenswrapper[4782]: I0202 11:09:49.242610 4782 scope.go:117] "RemoveContainer" containerID="59303b0d2ccdb82f829b283e200498f7a3b29c09b53180da767f3025ed87821c" Feb 02 11:09:49 crc kubenswrapper[4782]: I0202 11:09:49.275563 4782 scope.go:117] "RemoveContainer" containerID="938ebd6bbe46ccc6431b3d92e3b6f8803ade372fd58b9fe07b9f065675fc25c4" Feb 02 11:09:49 crc kubenswrapper[4782]: I0202 11:09:49.307441 4782 scope.go:117] "RemoveContainer" containerID="ec0e250135ad643a0376384574da7a7800b3dd64125badf008a16dd100e20d1b" Feb 02 11:09:49 crc kubenswrapper[4782]: I0202 11:09:49.344043 4782 scope.go:117] "RemoveContainer" containerID="0e49e7a8577a45cc63b62f6e59ab36faf5118b4383cec84de5d8b281d39fd041" Feb 02 11:09:49 crc kubenswrapper[4782]: I0202 11:09:49.385936 4782 scope.go:117] "RemoveContainer" containerID="23b00d976eb1b671e56fad532c67af4f3f0fb48695bf8d78a64de1654d16975f" Feb 02 11:09:49 crc kubenswrapper[4782]: I0202 11:09:49.429780 4782 scope.go:117] "RemoveContainer" containerID="7930f426b6752d1ae7cd1f189cd59a47f3d0e3b099f35ef79f6b78e86ed5ab0d" Feb 02 11:09:49 crc kubenswrapper[4782]: I0202 11:09:49.475382 4782 scope.go:117] "RemoveContainer" containerID="c9a22a15fdf9c10f8fa6ebae4f0ac6052d277f6b5e54ac311112c99327d4ce45" Feb 02 11:09:50 crc kubenswrapper[4782]: I0202 11:09:50.032869 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-5wtv6"] Feb 02 11:09:50 crc kubenswrapper[4782]: I0202 11:09:50.041315 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-5wtv6"] Feb 02 11:09:50 crc kubenswrapper[4782]: I0202 11:09:50.832442 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="baa0ea9b-5d59-4094-a259-2f841d40db2c" path="/var/lib/kubelet/pods/baa0ea9b-5d59-4094-a259-2f841d40db2c/volumes" Feb 02 11:09:51 crc kubenswrapper[4782]: I0202 11:09:51.051086 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-fb5lz"] Feb 02 11:09:51 crc kubenswrapper[4782]: I0202 11:09:51.059496 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-fb5lz"] Feb 02 11:09:52 crc kubenswrapper[4782]: I0202 11:09:52.832300 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d87918f-7c3d-4932-a4bd-18a2cf9fc199" path="/var/lib/kubelet/pods/5d87918f-7c3d-4932-a4bd-18a2cf9fc199/volumes" Feb 02 11:09:59 crc kubenswrapper[4782]: I0202 11:09:59.821449 4782 scope.go:117] "RemoveContainer" containerID="5bd9469df7c42cfd147763cb8f1b67e82d85e708d8dde6eea1a93320f7dbc9c8" Feb 02 11:09:59 crc kubenswrapper[4782]: E0202 11:09:59.822085 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:10:11 crc kubenswrapper[4782]: I0202 11:10:11.821266 4782 scope.go:117] "RemoveContainer" containerID="5bd9469df7c42cfd147763cb8f1b67e82d85e708d8dde6eea1a93320f7dbc9c8" Feb 02 11:10:11 crc kubenswrapper[4782]: E0202 11:10:11.822121 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:10:26 crc kubenswrapper[4782]: I0202 11:10:26.821629 4782 scope.go:117] "RemoveContainer" containerID="5bd9469df7c42cfd147763cb8f1b67e82d85e708d8dde6eea1a93320f7dbc9c8" Feb 02 11:10:26 crc kubenswrapper[4782]: E0202 11:10:26.822939 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:10:33 crc kubenswrapper[4782]: I0202 11:10:33.042497 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-lxwch"] Feb 02 11:10:33 crc kubenswrapper[4782]: I0202 11:10:33.049877 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-lxwch"] Feb 02 11:10:34 crc kubenswrapper[4782]: I0202 11:10:34.832355 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d921bd77-679d-4722-8238-a75dc4f3b6b5" path="/var/lib/kubelet/pods/d921bd77-679d-4722-8238-a75dc4f3b6b5/volumes" Feb 02 11:10:41 crc kubenswrapper[4782]: I0202 11:10:41.820711 4782 scope.go:117] "RemoveContainer" containerID="5bd9469df7c42cfd147763cb8f1b67e82d85e708d8dde6eea1a93320f7dbc9c8" Feb 02 11:10:41 crc kubenswrapper[4782]: E0202 11:10:41.821502 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:10:49 crc kubenswrapper[4782]: I0202 11:10:49.657346 4782 scope.go:117] "RemoveContainer" containerID="730902e09b299cdd00a01ece9539dce44aec0c2aaecd122d8a4c41d8be4117fb" Feb 02 11:10:49 crc kubenswrapper[4782]: I0202 11:10:49.715870 4782 scope.go:117] "RemoveContainer" containerID="31af4ce695c2a4475fc8775213cd57460451e43f7c30b86186b9592d2359448f" Feb 02 11:10:49 crc kubenswrapper[4782]: I0202 11:10:49.762484 4782 scope.go:117] "RemoveContainer" containerID="8185fbc7b3d30cf6bb76bc01518fb63e05726e26ac97fb50e13e8ad1440798ce" Feb 02 11:10:56 crc kubenswrapper[4782]: I0202 11:10:56.822028 4782 scope.go:117] "RemoveContainer" containerID="5bd9469df7c42cfd147763cb8f1b67e82d85e708d8dde6eea1a93320f7dbc9c8" Feb 02 11:10:56 crc kubenswrapper[4782]: E0202 11:10:56.822787 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:11:07 crc kubenswrapper[4782]: I0202 11:11:07.821835 4782 scope.go:117] 
"RemoveContainer" containerID="5bd9469df7c42cfd147763cb8f1b67e82d85e708d8dde6eea1a93320f7dbc9c8" Feb 02 11:11:07 crc kubenswrapper[4782]: E0202 11:11:07.823819 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:11:18 crc kubenswrapper[4782]: I0202 11:11:18.821247 4782 scope.go:117] "RemoveContainer" containerID="5bd9469df7c42cfd147763cb8f1b67e82d85e708d8dde6eea1a93320f7dbc9c8" Feb 02 11:11:18 crc kubenswrapper[4782]: E0202 11:11:18.822005 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:11:30 crc kubenswrapper[4782]: I0202 11:11:30.825803 4782 scope.go:117] "RemoveContainer" containerID="5bd9469df7c42cfd147763cb8f1b67e82d85e708d8dde6eea1a93320f7dbc9c8" Feb 02 11:11:30 crc kubenswrapper[4782]: E0202 11:11:30.826542 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:11:41 crc kubenswrapper[4782]: I0202 11:11:41.821077 4782 scope.go:117] "RemoveContainer" containerID="5bd9469df7c42cfd147763cb8f1b67e82d85e708d8dde6eea1a93320f7dbc9c8" Feb 02 11:11:41 crc kubenswrapper[4782]: E0202 11:11:41.822024 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:11:56 crc kubenswrapper[4782]: I0202 11:11:56.825233 4782 scope.go:117] "RemoveContainer" containerID="5bd9469df7c42cfd147763cb8f1b67e82d85e708d8dde6eea1a93320f7dbc9c8" Feb 02 11:11:57 crc kubenswrapper[4782]: I0202 11:11:57.182384 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" event={"ID":"7919e98f-cc47-4f3c-9c53-6313850ea7b8","Type":"ContainerStarted","Data":"f12380752e6de4f8dedc92e062f8cb6f3d5a16260278e7b8b47bff7dc97ca296"} Feb 02 11:13:50 crc kubenswrapper[4782]: I0202 11:13:50.769735 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-bbdvz"] Feb 02 11:13:50 crc kubenswrapper[4782]: E0202 11:13:50.770711 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce2c78bc-99b3-4deb-871f-923a3a42d5ff" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 02 11:13:50 crc 
kubenswrapper[4782]: I0202 11:13:50.770729 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce2c78bc-99b3-4deb-871f-923a3a42d5ff" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 02 11:13:50 crc kubenswrapper[4782]: I0202 11:13:50.770925 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce2c78bc-99b3-4deb-871f-923a3a42d5ff" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 02 11:13:50 crc kubenswrapper[4782]: I0202 11:13:50.779588 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bbdvz" Feb 02 11:13:50 crc kubenswrapper[4782]: I0202 11:13:50.792866 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bbdvz"] Feb 02 11:13:50 crc kubenswrapper[4782]: I0202 11:13:50.882850 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e98159ff-6334-40e3-8649-a4b880e9dcca-catalog-content\") pod \"redhat-operators-bbdvz\" (UID: \"e98159ff-6334-40e3-8649-a4b880e9dcca\") " pod="openshift-marketplace/redhat-operators-bbdvz" Feb 02 11:13:50 crc kubenswrapper[4782]: I0202 11:13:50.882899 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e98159ff-6334-40e3-8649-a4b880e9dcca-utilities\") pod \"redhat-operators-bbdvz\" (UID: \"e98159ff-6334-40e3-8649-a4b880e9dcca\") " pod="openshift-marketplace/redhat-operators-bbdvz" Feb 02 11:13:50 crc kubenswrapper[4782]: I0202 11:13:50.883057 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7qd8w\" (UniqueName: \"kubernetes.io/projected/e98159ff-6334-40e3-8649-a4b880e9dcca-kube-api-access-7qd8w\") pod \"redhat-operators-bbdvz\" (UID: \"e98159ff-6334-40e3-8649-a4b880e9dcca\") " pod="openshift-marketplace/redhat-operators-bbdvz" Feb 02 11:13:50 crc kubenswrapper[4782]: I0202 11:13:50.984391 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7qd8w\" (UniqueName: \"kubernetes.io/projected/e98159ff-6334-40e3-8649-a4b880e9dcca-kube-api-access-7qd8w\") pod \"redhat-operators-bbdvz\" (UID: \"e98159ff-6334-40e3-8649-a4b880e9dcca\") " pod="openshift-marketplace/redhat-operators-bbdvz" Feb 02 11:13:50 crc kubenswrapper[4782]: I0202 11:13:50.984508 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e98159ff-6334-40e3-8649-a4b880e9dcca-catalog-content\") pod \"redhat-operators-bbdvz\" (UID: \"e98159ff-6334-40e3-8649-a4b880e9dcca\") " pod="openshift-marketplace/redhat-operators-bbdvz" Feb 02 11:13:50 crc kubenswrapper[4782]: I0202 11:13:50.984527 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e98159ff-6334-40e3-8649-a4b880e9dcca-utilities\") pod \"redhat-operators-bbdvz\" (UID: \"e98159ff-6334-40e3-8649-a4b880e9dcca\") " pod="openshift-marketplace/redhat-operators-bbdvz" Feb 02 11:13:50 crc kubenswrapper[4782]: I0202 11:13:50.985065 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e98159ff-6334-40e3-8649-a4b880e9dcca-utilities\") pod \"redhat-operators-bbdvz\" (UID: \"e98159ff-6334-40e3-8649-a4b880e9dcca\") " 
pod="openshift-marketplace/redhat-operators-bbdvz" Feb 02 11:13:50 crc kubenswrapper[4782]: I0202 11:13:50.985101 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e98159ff-6334-40e3-8649-a4b880e9dcca-catalog-content\") pod \"redhat-operators-bbdvz\" (UID: \"e98159ff-6334-40e3-8649-a4b880e9dcca\") " pod="openshift-marketplace/redhat-operators-bbdvz" Feb 02 11:13:51 crc kubenswrapper[4782]: I0202 11:13:51.007033 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7qd8w\" (UniqueName: \"kubernetes.io/projected/e98159ff-6334-40e3-8649-a4b880e9dcca-kube-api-access-7qd8w\") pod \"redhat-operators-bbdvz\" (UID: \"e98159ff-6334-40e3-8649-a4b880e9dcca\") " pod="openshift-marketplace/redhat-operators-bbdvz" Feb 02 11:13:51 crc kubenswrapper[4782]: I0202 11:13:51.101885 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bbdvz" Feb 02 11:13:51 crc kubenswrapper[4782]: I0202 11:13:51.617608 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bbdvz"] Feb 02 11:13:52 crc kubenswrapper[4782]: I0202 11:13:52.094141 4782 generic.go:334] "Generic (PLEG): container finished" podID="e98159ff-6334-40e3-8649-a4b880e9dcca" containerID="48dac4dfbec7f2af762af26655f35646e50d40b09c2012b4833b8fc561ea5b69" exitCode=0 Feb 02 11:13:52 crc kubenswrapper[4782]: I0202 11:13:52.094190 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bbdvz" event={"ID":"e98159ff-6334-40e3-8649-a4b880e9dcca","Type":"ContainerDied","Data":"48dac4dfbec7f2af762af26655f35646e50d40b09c2012b4833b8fc561ea5b69"} Feb 02 11:13:52 crc kubenswrapper[4782]: I0202 11:13:52.094219 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bbdvz" event={"ID":"e98159ff-6334-40e3-8649-a4b880e9dcca","Type":"ContainerStarted","Data":"5a67a52e538a73e40ce3e5ef726731d2f0ea703d21e46a76a2b3a75ad88b7041"} Feb 02 11:13:52 crc kubenswrapper[4782]: I0202 11:13:52.096453 4782 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 02 11:13:54 crc kubenswrapper[4782]: I0202 11:13:54.112528 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bbdvz" event={"ID":"e98159ff-6334-40e3-8649-a4b880e9dcca","Type":"ContainerStarted","Data":"ecaf794f8e093ef8de66729c1255dd84fbf9b321699a90723e6c1972d952c0ed"} Feb 02 11:13:59 crc kubenswrapper[4782]: E0202 11:13:59.611125 4782 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode98159ff_6334_40e3_8649_a4b880e9dcca.slice/crio-conmon-ecaf794f8e093ef8de66729c1255dd84fbf9b321699a90723e6c1972d952c0ed.scope\": RecentStats: unable to find data in memory cache]" Feb 02 11:14:00 crc kubenswrapper[4782]: I0202 11:14:00.166916 4782 generic.go:334] "Generic (PLEG): container finished" podID="e98159ff-6334-40e3-8649-a4b880e9dcca" containerID="ecaf794f8e093ef8de66729c1255dd84fbf9b321699a90723e6c1972d952c0ed" exitCode=0 Feb 02 11:14:00 crc kubenswrapper[4782]: I0202 11:14:00.166958 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bbdvz" 
event={"ID":"e98159ff-6334-40e3-8649-a4b880e9dcca","Type":"ContainerDied","Data":"ecaf794f8e093ef8de66729c1255dd84fbf9b321699a90723e6c1972d952c0ed"} Feb 02 11:14:01 crc kubenswrapper[4782]: I0202 11:14:01.179358 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bbdvz" event={"ID":"e98159ff-6334-40e3-8649-a4b880e9dcca","Type":"ContainerStarted","Data":"98ab1db2db1be405e293ab994528ac490b719e551cb6bb7fed162290cd7688cd"} Feb 02 11:14:01 crc kubenswrapper[4782]: I0202 11:14:01.197953 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-bbdvz" podStartSLOduration=2.66942593 podStartE2EDuration="11.197934565s" podCreationTimestamp="2026-02-02 11:13:50 +0000 UTC" firstStartedPulling="2026-02-02 11:13:52.096191069 +0000 UTC m=+2111.980383785" lastFinishedPulling="2026-02-02 11:14:00.624699704 +0000 UTC m=+2120.508892420" observedRunningTime="2026-02-02 11:14:01.196404031 +0000 UTC m=+2121.080596747" watchObservedRunningTime="2026-02-02 11:14:01.197934565 +0000 UTC m=+2121.082127281" Feb 02 11:14:11 crc kubenswrapper[4782]: I0202 11:14:11.102847 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-bbdvz" Feb 02 11:14:11 crc kubenswrapper[4782]: I0202 11:14:11.103461 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-bbdvz" Feb 02 11:14:11 crc kubenswrapper[4782]: I0202 11:14:11.158384 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-bbdvz" Feb 02 11:14:11 crc kubenswrapper[4782]: I0202 11:14:11.298171 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-bbdvz" Feb 02 11:14:11 crc kubenswrapper[4782]: I0202 11:14:11.403282 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bbdvz"] Feb 02 11:14:13 crc kubenswrapper[4782]: I0202 11:14:13.267687 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-bbdvz" podUID="e98159ff-6334-40e3-8649-a4b880e9dcca" containerName="registry-server" containerID="cri-o://98ab1db2db1be405e293ab994528ac490b719e551cb6bb7fed162290cd7688cd" gracePeriod=2 Feb 02 11:14:13 crc kubenswrapper[4782]: I0202 11:14:13.701181 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-bbdvz" Feb 02 11:14:13 crc kubenswrapper[4782]: I0202 11:14:13.809498 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e98159ff-6334-40e3-8649-a4b880e9dcca-catalog-content\") pod \"e98159ff-6334-40e3-8649-a4b880e9dcca\" (UID: \"e98159ff-6334-40e3-8649-a4b880e9dcca\") " Feb 02 11:14:13 crc kubenswrapper[4782]: I0202 11:14:13.809578 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7qd8w\" (UniqueName: \"kubernetes.io/projected/e98159ff-6334-40e3-8649-a4b880e9dcca-kube-api-access-7qd8w\") pod \"e98159ff-6334-40e3-8649-a4b880e9dcca\" (UID: \"e98159ff-6334-40e3-8649-a4b880e9dcca\") " Feb 02 11:14:13 crc kubenswrapper[4782]: I0202 11:14:13.809890 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e98159ff-6334-40e3-8649-a4b880e9dcca-utilities\") pod \"e98159ff-6334-40e3-8649-a4b880e9dcca\" (UID: \"e98159ff-6334-40e3-8649-a4b880e9dcca\") " Feb 02 11:14:13 crc kubenswrapper[4782]: I0202 11:14:13.810712 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e98159ff-6334-40e3-8649-a4b880e9dcca-utilities" (OuterVolumeSpecName: "utilities") pod "e98159ff-6334-40e3-8649-a4b880e9dcca" (UID: "e98159ff-6334-40e3-8649-a4b880e9dcca"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:14:13 crc kubenswrapper[4782]: I0202 11:14:13.810983 4782 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e98159ff-6334-40e3-8649-a4b880e9dcca-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 11:14:13 crc kubenswrapper[4782]: I0202 11:14:13.826381 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e98159ff-6334-40e3-8649-a4b880e9dcca-kube-api-access-7qd8w" (OuterVolumeSpecName: "kube-api-access-7qd8w") pod "e98159ff-6334-40e3-8649-a4b880e9dcca" (UID: "e98159ff-6334-40e3-8649-a4b880e9dcca"). InnerVolumeSpecName "kube-api-access-7qd8w". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:14:13 crc kubenswrapper[4782]: I0202 11:14:13.912720 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7qd8w\" (UniqueName: \"kubernetes.io/projected/e98159ff-6334-40e3-8649-a4b880e9dcca-kube-api-access-7qd8w\") on node \"crc\" DevicePath \"\"" Feb 02 11:14:13 crc kubenswrapper[4782]: I0202 11:14:13.950032 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e98159ff-6334-40e3-8649-a4b880e9dcca-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e98159ff-6334-40e3-8649-a4b880e9dcca" (UID: "e98159ff-6334-40e3-8649-a4b880e9dcca"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:14:14 crc kubenswrapper[4782]: I0202 11:14:14.014605 4782 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e98159ff-6334-40e3-8649-a4b880e9dcca-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 11:14:14 crc kubenswrapper[4782]: I0202 11:14:14.276944 4782 generic.go:334] "Generic (PLEG): container finished" podID="e98159ff-6334-40e3-8649-a4b880e9dcca" containerID="98ab1db2db1be405e293ab994528ac490b719e551cb6bb7fed162290cd7688cd" exitCode=0 Feb 02 11:14:14 crc kubenswrapper[4782]: I0202 11:14:14.276991 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bbdvz" event={"ID":"e98159ff-6334-40e3-8649-a4b880e9dcca","Type":"ContainerDied","Data":"98ab1db2db1be405e293ab994528ac490b719e551cb6bb7fed162290cd7688cd"} Feb 02 11:14:14 crc kubenswrapper[4782]: I0202 11:14:14.277058 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bbdvz" event={"ID":"e98159ff-6334-40e3-8649-a4b880e9dcca","Type":"ContainerDied","Data":"5a67a52e538a73e40ce3e5ef726731d2f0ea703d21e46a76a2b3a75ad88b7041"} Feb 02 11:14:14 crc kubenswrapper[4782]: I0202 11:14:14.277080 4782 scope.go:117] "RemoveContainer" containerID="98ab1db2db1be405e293ab994528ac490b719e551cb6bb7fed162290cd7688cd" Feb 02 11:14:14 crc kubenswrapper[4782]: I0202 11:14:14.277744 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bbdvz" Feb 02 11:14:14 crc kubenswrapper[4782]: I0202 11:14:14.294978 4782 scope.go:117] "RemoveContainer" containerID="ecaf794f8e093ef8de66729c1255dd84fbf9b321699a90723e6c1972d952c0ed" Feb 02 11:14:14 crc kubenswrapper[4782]: I0202 11:14:14.308914 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bbdvz"] Feb 02 11:14:14 crc kubenswrapper[4782]: I0202 11:14:14.321385 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-bbdvz"] Feb 02 11:14:14 crc kubenswrapper[4782]: I0202 11:14:14.330327 4782 scope.go:117] "RemoveContainer" containerID="48dac4dfbec7f2af762af26655f35646e50d40b09c2012b4833b8fc561ea5b69" Feb 02 11:14:14 crc kubenswrapper[4782]: I0202 11:14:14.365597 4782 scope.go:117] "RemoveContainer" containerID="98ab1db2db1be405e293ab994528ac490b719e551cb6bb7fed162290cd7688cd" Feb 02 11:14:14 crc kubenswrapper[4782]: E0202 11:14:14.366540 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"98ab1db2db1be405e293ab994528ac490b719e551cb6bb7fed162290cd7688cd\": container with ID starting with 98ab1db2db1be405e293ab994528ac490b719e551cb6bb7fed162290cd7688cd not found: ID does not exist" containerID="98ab1db2db1be405e293ab994528ac490b719e551cb6bb7fed162290cd7688cd" Feb 02 11:14:14 crc kubenswrapper[4782]: I0202 11:14:14.366615 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98ab1db2db1be405e293ab994528ac490b719e551cb6bb7fed162290cd7688cd"} err="failed to get container status \"98ab1db2db1be405e293ab994528ac490b719e551cb6bb7fed162290cd7688cd\": rpc error: code = NotFound desc = could not find container \"98ab1db2db1be405e293ab994528ac490b719e551cb6bb7fed162290cd7688cd\": container with ID starting with 98ab1db2db1be405e293ab994528ac490b719e551cb6bb7fed162290cd7688cd not found: ID does not exist" Feb 02 11:14:14 crc 
kubenswrapper[4782]: I0202 11:14:14.366664 4782 scope.go:117] "RemoveContainer" containerID="ecaf794f8e093ef8de66729c1255dd84fbf9b321699a90723e6c1972d952c0ed" Feb 02 11:14:14 crc kubenswrapper[4782]: E0202 11:14:14.367166 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ecaf794f8e093ef8de66729c1255dd84fbf9b321699a90723e6c1972d952c0ed\": container with ID starting with ecaf794f8e093ef8de66729c1255dd84fbf9b321699a90723e6c1972d952c0ed not found: ID does not exist" containerID="ecaf794f8e093ef8de66729c1255dd84fbf9b321699a90723e6c1972d952c0ed" Feb 02 11:14:14 crc kubenswrapper[4782]: I0202 11:14:14.367191 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ecaf794f8e093ef8de66729c1255dd84fbf9b321699a90723e6c1972d952c0ed"} err="failed to get container status \"ecaf794f8e093ef8de66729c1255dd84fbf9b321699a90723e6c1972d952c0ed\": rpc error: code = NotFound desc = could not find container \"ecaf794f8e093ef8de66729c1255dd84fbf9b321699a90723e6c1972d952c0ed\": container with ID starting with ecaf794f8e093ef8de66729c1255dd84fbf9b321699a90723e6c1972d952c0ed not found: ID does not exist" Feb 02 11:14:14 crc kubenswrapper[4782]: I0202 11:14:14.367205 4782 scope.go:117] "RemoveContainer" containerID="48dac4dfbec7f2af762af26655f35646e50d40b09c2012b4833b8fc561ea5b69" Feb 02 11:14:14 crc kubenswrapper[4782]: E0202 11:14:14.367979 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"48dac4dfbec7f2af762af26655f35646e50d40b09c2012b4833b8fc561ea5b69\": container with ID starting with 48dac4dfbec7f2af762af26655f35646e50d40b09c2012b4833b8fc561ea5b69 not found: ID does not exist" containerID="48dac4dfbec7f2af762af26655f35646e50d40b09c2012b4833b8fc561ea5b69" Feb 02 11:14:14 crc kubenswrapper[4782]: I0202 11:14:14.368003 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48dac4dfbec7f2af762af26655f35646e50d40b09c2012b4833b8fc561ea5b69"} err="failed to get container status \"48dac4dfbec7f2af762af26655f35646e50d40b09c2012b4833b8fc561ea5b69\": rpc error: code = NotFound desc = could not find container \"48dac4dfbec7f2af762af26655f35646e50d40b09c2012b4833b8fc561ea5b69\": container with ID starting with 48dac4dfbec7f2af762af26655f35646e50d40b09c2012b4833b8fc561ea5b69 not found: ID does not exist" Feb 02 11:14:14 crc kubenswrapper[4782]: I0202 11:14:14.833477 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e98159ff-6334-40e3-8649-a4b880e9dcca" path="/var/lib/kubelet/pods/e98159ff-6334-40e3-8649-a4b880e9dcca/volumes" Feb 02 11:14:22 crc kubenswrapper[4782]: I0202 11:14:22.087827 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-g6qb2"] Feb 02 11:14:22 crc kubenswrapper[4782]: I0202 11:14:22.101485 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-45hfx"] Feb 02 11:14:22 crc kubenswrapper[4782]: I0202 11:14:22.112497 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-9xz78"] Feb 02 11:14:22 crc kubenswrapper[4782]: I0202 11:14:22.124402 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-g6qb2"] Feb 02 11:14:22 crc kubenswrapper[4782]: I0202 11:14:22.132393 4782 kubelet.go:2431] 
"SyncLoop REMOVE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-45hfx"] Feb 02 11:14:22 crc kubenswrapper[4782]: I0202 11:14:22.140113 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-9xz78"] Feb 02 11:14:22 crc kubenswrapper[4782]: I0202 11:14:22.147898 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-clr4m"] Feb 02 11:14:22 crc kubenswrapper[4782]: I0202 11:14:22.155705 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-c5lr4"] Feb 02 11:14:22 crc kubenswrapper[4782]: I0202 11:14:22.163257 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-clr4m"] Feb 02 11:14:22 crc kubenswrapper[4782]: I0202 11:14:22.170873 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-c5lr4"] Feb 02 11:14:22 crc kubenswrapper[4782]: I0202 11:14:22.178860 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-fdzds"] Feb 02 11:14:22 crc kubenswrapper[4782]: I0202 11:14:22.185686 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-w4c7x"] Feb 02 11:14:22 crc kubenswrapper[4782]: I0202 11:14:22.192490 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-w4c7x"] Feb 02 11:14:22 crc kubenswrapper[4782]: I0202 11:14:22.197550 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-fdzds"] Feb 02 11:14:22 crc kubenswrapper[4782]: I0202 11:14:22.203741 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-cldvc"] Feb 02 11:14:22 crc kubenswrapper[4782]: I0202 11:14:22.211683 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jjvwd"] Feb 02 11:14:22 crc kubenswrapper[4782]: I0202 11:14:22.219429 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-fnhqj"] Feb 02 11:14:22 crc kubenswrapper[4782]: I0202 11:14:22.228526 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jjvwd"] Feb 02 11:14:22 crc kubenswrapper[4782]: I0202 11:14:22.236655 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-cldvc"] Feb 02 11:14:22 crc kubenswrapper[4782]: I0202 11:14:22.243817 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-fnhqj"] Feb 02 11:14:22 crc kubenswrapper[4782]: I0202 11:14:22.835443 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37960174-d26b-460f-abd9-934dee1ecc8c" path="/var/lib/kubelet/pods/37960174-d26b-460f-abd9-934dee1ecc8c/volumes" Feb 02 11:14:22 crc kubenswrapper[4782]: I0202 11:14:22.837288 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="425704dd-e289-42f7-8b10-bd817b279099" path="/var/lib/kubelet/pods/425704dd-e289-42f7-8b10-bd817b279099/volumes" Feb 02 11:14:22 crc kubenswrapper[4782]: I0202 11:14:22.838045 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="5a24fab5-51cc-4f0a-a823-c9748efd8410" path="/var/lib/kubelet/pods/5a24fab5-51cc-4f0a-a823-c9748efd8410/volumes" Feb 02 11:14:22 crc kubenswrapper[4782]: I0202 11:14:22.838749 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="79fcc9ce-0ed4-4b7d-9b23-9a55d25349f9" path="/var/lib/kubelet/pods/79fcc9ce-0ed4-4b7d-9b23-9a55d25349f9/volumes" Feb 02 11:14:22 crc kubenswrapper[4782]: I0202 11:14:22.844612 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96961a4d-2144-4ca9-852f-f624c591bf50" path="/var/lib/kubelet/pods/96961a4d-2144-4ca9-852f-f624c591bf50/volumes" Feb 02 11:14:22 crc kubenswrapper[4782]: I0202 11:14:22.845588 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="99553aeb-f0fe-47e8-9d2a-64f4b49be76c" path="/var/lib/kubelet/pods/99553aeb-f0fe-47e8-9d2a-64f4b49be76c/volumes" Feb 02 11:14:22 crc kubenswrapper[4782]: I0202 11:14:22.846433 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b7bc661-fee9-41a6-a62e-0af1fc669e85" path="/var/lib/kubelet/pods/9b7bc661-fee9-41a6-a62e-0af1fc669e85/volumes" Feb 02 11:14:22 crc kubenswrapper[4782]: I0202 11:14:22.848011 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cdbaecb3-a52c-45c2-aa69-a9eac6ffea63" path="/var/lib/kubelet/pods/cdbaecb3-a52c-45c2-aa69-a9eac6ffea63/volumes" Feb 02 11:14:22 crc kubenswrapper[4782]: I0202 11:14:22.848690 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce2c78bc-99b3-4deb-871f-923a3a42d5ff" path="/var/lib/kubelet/pods/ce2c78bc-99b3-4deb-871f-923a3a42d5ff/volumes" Feb 02 11:14:22 crc kubenswrapper[4782]: I0202 11:14:22.849409 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd4eb6e0-afff-43a6-af04-0193fa711a9a" path="/var/lib/kubelet/pods/fd4eb6e0-afff-43a6-af04-0193fa711a9a/volumes" Feb 02 11:14:22 crc kubenswrapper[4782]: I0202 11:14:22.952783 4782 patch_prober.go:28] interesting pod/machine-config-daemon-bhdgk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 11:14:22 crc kubenswrapper[4782]: I0202 11:14:22.953300 4782 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 11:14:35 crc kubenswrapper[4782]: I0202 11:14:35.513158 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-59ngw"] Feb 02 11:14:35 crc kubenswrapper[4782]: E0202 11:14:35.513990 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e98159ff-6334-40e3-8649-a4b880e9dcca" containerName="extract-utilities" Feb 02 11:14:35 crc kubenswrapper[4782]: I0202 11:14:35.514002 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="e98159ff-6334-40e3-8649-a4b880e9dcca" containerName="extract-utilities" Feb 02 11:14:35 crc kubenswrapper[4782]: E0202 11:14:35.514014 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e98159ff-6334-40e3-8649-a4b880e9dcca" containerName="extract-content" Feb 02 11:14:35 crc kubenswrapper[4782]: I0202 11:14:35.514021 4782 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="e98159ff-6334-40e3-8649-a4b880e9dcca" containerName="extract-content" Feb 02 11:14:35 crc kubenswrapper[4782]: E0202 11:14:35.514033 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e98159ff-6334-40e3-8649-a4b880e9dcca" containerName="registry-server" Feb 02 11:14:35 crc kubenswrapper[4782]: I0202 11:14:35.514039 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="e98159ff-6334-40e3-8649-a4b880e9dcca" containerName="registry-server" Feb 02 11:14:35 crc kubenswrapper[4782]: I0202 11:14:35.514179 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="e98159ff-6334-40e3-8649-a4b880e9dcca" containerName="registry-server" Feb 02 11:14:35 crc kubenswrapper[4782]: I0202 11:14:35.514904 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-59ngw" Feb 02 11:14:35 crc kubenswrapper[4782]: I0202 11:14:35.518859 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 02 11:14:35 crc kubenswrapper[4782]: I0202 11:14:35.519069 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Feb 02 11:14:35 crc kubenswrapper[4782]: I0202 11:14:35.519917 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 02 11:14:35 crc kubenswrapper[4782]: I0202 11:14:35.520172 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 02 11:14:35 crc kubenswrapper[4782]: I0202 11:14:35.520385 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jhgxt" Feb 02 11:14:35 crc kubenswrapper[4782]: I0202 11:14:35.531818 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-59ngw"] Feb 02 11:14:35 crc kubenswrapper[4782]: I0202 11:14:35.571362 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6cede59e-7f51-455a-8405-3ae76f40e348-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-59ngw\" (UID: \"6cede59e-7f51-455a-8405-3ae76f40e348\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-59ngw" Feb 02 11:14:35 crc kubenswrapper[4782]: I0202 11:14:35.571467 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6cede59e-7f51-455a-8405-3ae76f40e348-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-59ngw\" (UID: \"6cede59e-7f51-455a-8405-3ae76f40e348\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-59ngw" Feb 02 11:14:35 crc kubenswrapper[4782]: I0202 11:14:35.571517 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/6cede59e-7f51-455a-8405-3ae76f40e348-ceph\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-59ngw\" (UID: \"6cede59e-7f51-455a-8405-3ae76f40e348\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-59ngw" Feb 02 11:14:35 crc kubenswrapper[4782]: I0202 11:14:35.571686 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wblkh\" (UniqueName: 
\"kubernetes.io/projected/6cede59e-7f51-455a-8405-3ae76f40e348-kube-api-access-wblkh\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-59ngw\" (UID: \"6cede59e-7f51-455a-8405-3ae76f40e348\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-59ngw" Feb 02 11:14:35 crc kubenswrapper[4782]: I0202 11:14:35.571880 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6cede59e-7f51-455a-8405-3ae76f40e348-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-59ngw\" (UID: \"6cede59e-7f51-455a-8405-3ae76f40e348\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-59ngw" Feb 02 11:14:35 crc kubenswrapper[4782]: I0202 11:14:35.674491 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6cede59e-7f51-455a-8405-3ae76f40e348-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-59ngw\" (UID: \"6cede59e-7f51-455a-8405-3ae76f40e348\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-59ngw" Feb 02 11:14:35 crc kubenswrapper[4782]: I0202 11:14:35.674568 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6cede59e-7f51-455a-8405-3ae76f40e348-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-59ngw\" (UID: \"6cede59e-7f51-455a-8405-3ae76f40e348\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-59ngw" Feb 02 11:14:35 crc kubenswrapper[4782]: I0202 11:14:35.674748 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/6cede59e-7f51-455a-8405-3ae76f40e348-ceph\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-59ngw\" (UID: \"6cede59e-7f51-455a-8405-3ae76f40e348\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-59ngw" Feb 02 11:14:35 crc kubenswrapper[4782]: I0202 11:14:35.674963 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wblkh\" (UniqueName: \"kubernetes.io/projected/6cede59e-7f51-455a-8405-3ae76f40e348-kube-api-access-wblkh\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-59ngw\" (UID: \"6cede59e-7f51-455a-8405-3ae76f40e348\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-59ngw" Feb 02 11:14:35 crc kubenswrapper[4782]: I0202 11:14:35.675265 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6cede59e-7f51-455a-8405-3ae76f40e348-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-59ngw\" (UID: \"6cede59e-7f51-455a-8405-3ae76f40e348\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-59ngw" Feb 02 11:14:35 crc kubenswrapper[4782]: I0202 11:14:35.683223 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6cede59e-7f51-455a-8405-3ae76f40e348-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-59ngw\" (UID: \"6cede59e-7f51-455a-8405-3ae76f40e348\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-59ngw" Feb 02 11:14:35 crc kubenswrapper[4782]: I0202 11:14:35.683252 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ceph\" (UniqueName: \"kubernetes.io/secret/6cede59e-7f51-455a-8405-3ae76f40e348-ceph\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-59ngw\" (UID: \"6cede59e-7f51-455a-8405-3ae76f40e348\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-59ngw" Feb 02 11:14:35 crc kubenswrapper[4782]: I0202 11:14:35.684321 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6cede59e-7f51-455a-8405-3ae76f40e348-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-59ngw\" (UID: \"6cede59e-7f51-455a-8405-3ae76f40e348\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-59ngw" Feb 02 11:14:35 crc kubenswrapper[4782]: I0202 11:14:35.692758 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6cede59e-7f51-455a-8405-3ae76f40e348-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-59ngw\" (UID: \"6cede59e-7f51-455a-8405-3ae76f40e348\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-59ngw" Feb 02 11:14:35 crc kubenswrapper[4782]: I0202 11:14:35.696825 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wblkh\" (UniqueName: \"kubernetes.io/projected/6cede59e-7f51-455a-8405-3ae76f40e348-kube-api-access-wblkh\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-59ngw\" (UID: \"6cede59e-7f51-455a-8405-3ae76f40e348\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-59ngw" Feb 02 11:14:35 crc kubenswrapper[4782]: I0202 11:14:35.836483 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-59ngw" Feb 02 11:14:36 crc kubenswrapper[4782]: I0202 11:14:36.440316 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-59ngw"] Feb 02 11:14:36 crc kubenswrapper[4782]: W0202 11:14:36.441180 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6cede59e_7f51_455a_8405_3ae76f40e348.slice/crio-d79af58da3c22d1d3f6d27d39e7835f54ff655baa8bc9bcada1e76f2efcc1c82 WatchSource:0}: Error finding container d79af58da3c22d1d3f6d27d39e7835f54ff655baa8bc9bcada1e76f2efcc1c82: Status 404 returned error can't find the container with id d79af58da3c22d1d3f6d27d39e7835f54ff655baa8bc9bcada1e76f2efcc1c82 Feb 02 11:14:36 crc kubenswrapper[4782]: I0202 11:14:36.453918 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-59ngw" event={"ID":"6cede59e-7f51-455a-8405-3ae76f40e348","Type":"ContainerStarted","Data":"d79af58da3c22d1d3f6d27d39e7835f54ff655baa8bc9bcada1e76f2efcc1c82"} Feb 02 11:14:37 crc kubenswrapper[4782]: I0202 11:14:37.463479 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-59ngw" event={"ID":"6cede59e-7f51-455a-8405-3ae76f40e348","Type":"ContainerStarted","Data":"9121cc1487cd3f8f2a666f0332657834074163a1fe3c409b901a968edf1ab0b2"} Feb 02 11:14:37 crc kubenswrapper[4782]: I0202 11:14:37.492093 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-59ngw" podStartSLOduration=1.913352739 podStartE2EDuration="2.492067734s" podCreationTimestamp="2026-02-02 11:14:35 +0000 UTC" 
firstStartedPulling="2026-02-02 11:14:36.443071696 +0000 UTC m=+2156.327264412" lastFinishedPulling="2026-02-02 11:14:37.021786651 +0000 UTC m=+2156.905979407" observedRunningTime="2026-02-02 11:14:37.476615029 +0000 UTC m=+2157.360807745" watchObservedRunningTime="2026-02-02 11:14:37.492067734 +0000 UTC m=+2157.376260490" Feb 02 11:14:48 crc kubenswrapper[4782]: I0202 11:14:48.545759 4782 generic.go:334] "Generic (PLEG): container finished" podID="6cede59e-7f51-455a-8405-3ae76f40e348" containerID="9121cc1487cd3f8f2a666f0332657834074163a1fe3c409b901a968edf1ab0b2" exitCode=0 Feb 02 11:14:48 crc kubenswrapper[4782]: I0202 11:14:48.545869 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-59ngw" event={"ID":"6cede59e-7f51-455a-8405-3ae76f40e348","Type":"ContainerDied","Data":"9121cc1487cd3f8f2a666f0332657834074163a1fe3c409b901a968edf1ab0b2"} Feb 02 11:14:49 crc kubenswrapper[4782]: I0202 11:14:49.908479 4782 scope.go:117] "RemoveContainer" containerID="f4762256f4358dc203e66c5c913257fa22830b06b1398f9270a2496bb4594c32" Feb 02 11:14:49 crc kubenswrapper[4782]: I0202 11:14:49.990094 4782 scope.go:117] "RemoveContainer" containerID="a3652484620aa178a685846c99f7de4b05cf1ea0f50bee5c8828a07d27f3b419" Feb 02 11:14:50 crc kubenswrapper[4782]: I0202 11:14:50.020382 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-59ngw" Feb 02 11:14:50 crc kubenswrapper[4782]: I0202 11:14:50.056623 4782 scope.go:117] "RemoveContainer" containerID="47940927ded7a9aac258b8c6a3364ef69283f34e697e95ad52e93cc9f65a9e0c" Feb 02 11:14:50 crc kubenswrapper[4782]: I0202 11:14:50.153623 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wblkh\" (UniqueName: \"kubernetes.io/projected/6cede59e-7f51-455a-8405-3ae76f40e348-kube-api-access-wblkh\") pod \"6cede59e-7f51-455a-8405-3ae76f40e348\" (UID: \"6cede59e-7f51-455a-8405-3ae76f40e348\") " Feb 02 11:14:50 crc kubenswrapper[4782]: I0202 11:14:50.153786 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/6cede59e-7f51-455a-8405-3ae76f40e348-ceph\") pod \"6cede59e-7f51-455a-8405-3ae76f40e348\" (UID: \"6cede59e-7f51-455a-8405-3ae76f40e348\") " Feb 02 11:14:50 crc kubenswrapper[4782]: I0202 11:14:50.153834 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6cede59e-7f51-455a-8405-3ae76f40e348-repo-setup-combined-ca-bundle\") pod \"6cede59e-7f51-455a-8405-3ae76f40e348\" (UID: \"6cede59e-7f51-455a-8405-3ae76f40e348\") " Feb 02 11:14:50 crc kubenswrapper[4782]: I0202 11:14:50.153882 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6cede59e-7f51-455a-8405-3ae76f40e348-ssh-key-openstack-edpm-ipam\") pod \"6cede59e-7f51-455a-8405-3ae76f40e348\" (UID: \"6cede59e-7f51-455a-8405-3ae76f40e348\") " Feb 02 11:14:50 crc kubenswrapper[4782]: I0202 11:14:50.153918 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6cede59e-7f51-455a-8405-3ae76f40e348-inventory\") pod \"6cede59e-7f51-455a-8405-3ae76f40e348\" (UID: \"6cede59e-7f51-455a-8405-3ae76f40e348\") " Feb 02 11:14:50 crc kubenswrapper[4782]: I0202 11:14:50.162591 4782 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6cede59e-7f51-455a-8405-3ae76f40e348-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "6cede59e-7f51-455a-8405-3ae76f40e348" (UID: "6cede59e-7f51-455a-8405-3ae76f40e348"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:14:50 crc kubenswrapper[4782]: I0202 11:14:50.164933 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6cede59e-7f51-455a-8405-3ae76f40e348-kube-api-access-wblkh" (OuterVolumeSpecName: "kube-api-access-wblkh") pod "6cede59e-7f51-455a-8405-3ae76f40e348" (UID: "6cede59e-7f51-455a-8405-3ae76f40e348"). InnerVolumeSpecName "kube-api-access-wblkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:14:50 crc kubenswrapper[4782]: I0202 11:14:50.166975 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6cede59e-7f51-455a-8405-3ae76f40e348-ceph" (OuterVolumeSpecName: "ceph") pod "6cede59e-7f51-455a-8405-3ae76f40e348" (UID: "6cede59e-7f51-455a-8405-3ae76f40e348"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:14:50 crc kubenswrapper[4782]: I0202 11:14:50.184876 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6cede59e-7f51-455a-8405-3ae76f40e348-inventory" (OuterVolumeSpecName: "inventory") pod "6cede59e-7f51-455a-8405-3ae76f40e348" (UID: "6cede59e-7f51-455a-8405-3ae76f40e348"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:14:50 crc kubenswrapper[4782]: I0202 11:14:50.201781 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6cede59e-7f51-455a-8405-3ae76f40e348-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "6cede59e-7f51-455a-8405-3ae76f40e348" (UID: "6cede59e-7f51-455a-8405-3ae76f40e348"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:14:50 crc kubenswrapper[4782]: I0202 11:14:50.215435 4782 scope.go:117] "RemoveContainer" containerID="1918681cf715e6155198ca866454b6e4a0c53baf344cbc2db5089e48edd2cc36" Feb 02 11:14:50 crc kubenswrapper[4782]: I0202 11:14:50.246365 4782 scope.go:117] "RemoveContainer" containerID="602d6c4af1ac8198091f7c68ae19fa3196c9ffcd0620c4b0def39e668a4ec792" Feb 02 11:14:50 crc kubenswrapper[4782]: I0202 11:14:50.255705 4782 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/6cede59e-7f51-455a-8405-3ae76f40e348-ceph\") on node \"crc\" DevicePath \"\"" Feb 02 11:14:50 crc kubenswrapper[4782]: I0202 11:14:50.255738 4782 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6cede59e-7f51-455a-8405-3ae76f40e348-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 11:14:50 crc kubenswrapper[4782]: I0202 11:14:50.255759 4782 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6cede59e-7f51-455a-8405-3ae76f40e348-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 02 11:14:50 crc kubenswrapper[4782]: I0202 11:14:50.255780 4782 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6cede59e-7f51-455a-8405-3ae76f40e348-inventory\") on node \"crc\" DevicePath \"\"" Feb 02 11:14:50 crc kubenswrapper[4782]: I0202 11:14:50.255790 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wblkh\" (UniqueName: \"kubernetes.io/projected/6cede59e-7f51-455a-8405-3ae76f40e348-kube-api-access-wblkh\") on node \"crc\" DevicePath \"\"" Feb 02 11:14:50 crc kubenswrapper[4782]: I0202 11:14:50.277364 4782 scope.go:117] "RemoveContainer" containerID="65e9d4460cda578d85c98c8eacb6e70446a4235a9df02ce23f87a954cc50ea96" Feb 02 11:14:50 crc kubenswrapper[4782]: I0202 11:14:50.338597 4782 scope.go:117] "RemoveContainer" containerID="d704a337ba153cb759a9029666c65419beb8b579a0125a3a01b8036bcaa12955" Feb 02 11:14:50 crc kubenswrapper[4782]: I0202 11:14:50.405541 4782 scope.go:117] "RemoveContainer" containerID="0e893f699ea19753909fb2dc54c9a946d6efd297f534f8a2fd10b438cd438ecd" Feb 02 11:14:50 crc kubenswrapper[4782]: I0202 11:14:50.468467 4782 scope.go:117] "RemoveContainer" containerID="f5d5891e49acba50900ea7dad6534db383a08dbcaa564c467b692ffae7d6b80a" Feb 02 11:14:50 crc kubenswrapper[4782]: I0202 11:14:50.499187 4782 scope.go:117] "RemoveContainer" containerID="1d498ad8d1bfb2287778416cd5cee7768c1031fd2150b0b80bdee8cb15dfaffd" Feb 02 11:14:50 crc kubenswrapper[4782]: I0202 11:14:50.566687 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-59ngw" event={"ID":"6cede59e-7f51-455a-8405-3ae76f40e348","Type":"ContainerDied","Data":"d79af58da3c22d1d3f6d27d39e7835f54ff655baa8bc9bcada1e76f2efcc1c82"} Feb 02 11:14:50 crc kubenswrapper[4782]: I0202 11:14:50.566874 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d79af58da3c22d1d3f6d27d39e7835f54ff655baa8bc9bcada1e76f2efcc1c82" Feb 02 11:14:50 crc kubenswrapper[4782]: I0202 11:14:50.566987 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-59ngw" Feb 02 11:14:50 crc kubenswrapper[4782]: I0202 11:14:50.655874 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pdnch"] Feb 02 11:14:50 crc kubenswrapper[4782]: E0202 11:14:50.656277 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6cede59e-7f51-455a-8405-3ae76f40e348" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 02 11:14:50 crc kubenswrapper[4782]: I0202 11:14:50.656297 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="6cede59e-7f51-455a-8405-3ae76f40e348" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 02 11:14:50 crc kubenswrapper[4782]: I0202 11:14:50.656479 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="6cede59e-7f51-455a-8405-3ae76f40e348" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 02 11:14:50 crc kubenswrapper[4782]: I0202 11:14:50.658932 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pdnch" Feb 02 11:14:50 crc kubenswrapper[4782]: I0202 11:14:50.662318 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 02 11:14:50 crc kubenswrapper[4782]: I0202 11:14:50.662473 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 02 11:14:50 crc kubenswrapper[4782]: I0202 11:14:50.662539 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Feb 02 11:14:50 crc kubenswrapper[4782]: I0202 11:14:50.662675 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 02 11:14:50 crc kubenswrapper[4782]: I0202 11:14:50.662782 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jhgxt" Feb 02 11:14:50 crc kubenswrapper[4782]: I0202 11:14:50.668592 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pdnch"] Feb 02 11:14:50 crc kubenswrapper[4782]: E0202 11:14:50.736817 4782 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6cede59e_7f51_455a_8405_3ae76f40e348.slice/crio-d79af58da3c22d1d3f6d27d39e7835f54ff655baa8bc9bcada1e76f2efcc1c82\": RecentStats: unable to find data in memory cache]" Feb 02 11:14:50 crc kubenswrapper[4782]: I0202 11:14:50.763570 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/14dddbe2-21a7-417a-8d21-ab97f18aef5d-ceph\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-pdnch\" (UID: \"14dddbe2-21a7-417a-8d21-ab97f18aef5d\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pdnch" Feb 02 11:14:50 crc kubenswrapper[4782]: I0202 11:14:50.763629 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hggwq\" (UniqueName: \"kubernetes.io/projected/14dddbe2-21a7-417a-8d21-ab97f18aef5d-kube-api-access-hggwq\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-pdnch\" (UID: \"14dddbe2-21a7-417a-8d21-ab97f18aef5d\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pdnch" 
Feb 02 11:14:50 crc kubenswrapper[4782]: I0202 11:14:50.763722 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/14dddbe2-21a7-417a-8d21-ab97f18aef5d-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-pdnch\" (UID: \"14dddbe2-21a7-417a-8d21-ab97f18aef5d\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pdnch" Feb 02 11:14:50 crc kubenswrapper[4782]: I0202 11:14:50.763751 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14dddbe2-21a7-417a-8d21-ab97f18aef5d-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-pdnch\" (UID: \"14dddbe2-21a7-417a-8d21-ab97f18aef5d\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pdnch" Feb 02 11:14:50 crc kubenswrapper[4782]: I0202 11:14:50.763784 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/14dddbe2-21a7-417a-8d21-ab97f18aef5d-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-pdnch\" (UID: \"14dddbe2-21a7-417a-8d21-ab97f18aef5d\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pdnch" Feb 02 11:14:50 crc kubenswrapper[4782]: I0202 11:14:50.865967 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/14dddbe2-21a7-417a-8d21-ab97f18aef5d-ceph\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-pdnch\" (UID: \"14dddbe2-21a7-417a-8d21-ab97f18aef5d\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pdnch" Feb 02 11:14:50 crc kubenswrapper[4782]: I0202 11:14:50.866046 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hggwq\" (UniqueName: \"kubernetes.io/projected/14dddbe2-21a7-417a-8d21-ab97f18aef5d-kube-api-access-hggwq\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-pdnch\" (UID: \"14dddbe2-21a7-417a-8d21-ab97f18aef5d\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pdnch" Feb 02 11:14:50 crc kubenswrapper[4782]: I0202 11:14:50.866172 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/14dddbe2-21a7-417a-8d21-ab97f18aef5d-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-pdnch\" (UID: \"14dddbe2-21a7-417a-8d21-ab97f18aef5d\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pdnch" Feb 02 11:14:50 crc kubenswrapper[4782]: I0202 11:14:50.866229 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14dddbe2-21a7-417a-8d21-ab97f18aef5d-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-pdnch\" (UID: \"14dddbe2-21a7-417a-8d21-ab97f18aef5d\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pdnch" Feb 02 11:14:50 crc kubenswrapper[4782]: I0202 11:14:50.866271 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/14dddbe2-21a7-417a-8d21-ab97f18aef5d-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-pdnch\" (UID: \"14dddbe2-21a7-417a-8d21-ab97f18aef5d\") " 
pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pdnch" Feb 02 11:14:50 crc kubenswrapper[4782]: I0202 11:14:50.872438 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/14dddbe2-21a7-417a-8d21-ab97f18aef5d-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-pdnch\" (UID: \"14dddbe2-21a7-417a-8d21-ab97f18aef5d\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pdnch" Feb 02 11:14:50 crc kubenswrapper[4782]: I0202 11:14:50.873144 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14dddbe2-21a7-417a-8d21-ab97f18aef5d-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-pdnch\" (UID: \"14dddbe2-21a7-417a-8d21-ab97f18aef5d\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pdnch" Feb 02 11:14:50 crc kubenswrapper[4782]: I0202 11:14:50.882671 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/14dddbe2-21a7-417a-8d21-ab97f18aef5d-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-pdnch\" (UID: \"14dddbe2-21a7-417a-8d21-ab97f18aef5d\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pdnch" Feb 02 11:14:50 crc kubenswrapper[4782]: I0202 11:14:50.884966 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/14dddbe2-21a7-417a-8d21-ab97f18aef5d-ceph\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-pdnch\" (UID: \"14dddbe2-21a7-417a-8d21-ab97f18aef5d\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pdnch" Feb 02 11:14:50 crc kubenswrapper[4782]: I0202 11:14:50.885724 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hggwq\" (UniqueName: \"kubernetes.io/projected/14dddbe2-21a7-417a-8d21-ab97f18aef5d-kube-api-access-hggwq\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-pdnch\" (UID: \"14dddbe2-21a7-417a-8d21-ab97f18aef5d\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pdnch" Feb 02 11:14:51 crc kubenswrapper[4782]: I0202 11:14:51.064282 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pdnch" Feb 02 11:14:51 crc kubenswrapper[4782]: I0202 11:14:51.623791 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pdnch"] Feb 02 11:14:52 crc kubenswrapper[4782]: I0202 11:14:52.595008 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pdnch" event={"ID":"14dddbe2-21a7-417a-8d21-ab97f18aef5d","Type":"ContainerStarted","Data":"69de4465f107269ffd74eebaf2f980c3701cfb9aad1cfbb6b352c4678c7d6844"} Feb 02 11:14:52 crc kubenswrapper[4782]: I0202 11:14:52.595351 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pdnch" event={"ID":"14dddbe2-21a7-417a-8d21-ab97f18aef5d","Type":"ContainerStarted","Data":"05f3f00be72c9cbb26f985a3b0b234e1373a612190801d2ad37e870a76ce2098"} Feb 02 11:14:52 crc kubenswrapper[4782]: I0202 11:14:52.613225 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pdnch" podStartSLOduration=2.090350802 podStartE2EDuration="2.61320697s" podCreationTimestamp="2026-02-02 11:14:50 +0000 UTC" firstStartedPulling="2026-02-02 11:14:51.630504489 +0000 UTC m=+2171.514697205" lastFinishedPulling="2026-02-02 11:14:52.153360657 +0000 UTC m=+2172.037553373" observedRunningTime="2026-02-02 11:14:52.610507392 +0000 UTC m=+2172.494700108" watchObservedRunningTime="2026-02-02 11:14:52.61320697 +0000 UTC m=+2172.497399686" Feb 02 11:14:52 crc kubenswrapper[4782]: I0202 11:14:52.952197 4782 patch_prober.go:28] interesting pod/machine-config-daemon-bhdgk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 11:14:52 crc kubenswrapper[4782]: I0202 11:14:52.952255 4782 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 11:15:00 crc kubenswrapper[4782]: I0202 11:15:00.134554 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500515-svbzd"] Feb 02 11:15:00 crc kubenswrapper[4782]: I0202 11:15:00.141054 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500515-svbzd" Feb 02 11:15:00 crc kubenswrapper[4782]: I0202 11:15:00.145244 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 02 11:15:00 crc kubenswrapper[4782]: I0202 11:15:00.145750 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 02 11:15:00 crc kubenswrapper[4782]: I0202 11:15:00.152415 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500515-svbzd"] Feb 02 11:15:00 crc kubenswrapper[4782]: I0202 11:15:00.241055 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/49267abf-7f15-4460-bbc4-d7b0cc162817-secret-volume\") pod \"collect-profiles-29500515-svbzd\" (UID: \"49267abf-7f15-4460-bbc4-d7b0cc162817\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500515-svbzd" Feb 02 11:15:00 crc kubenswrapper[4782]: I0202 11:15:00.241129 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/49267abf-7f15-4460-bbc4-d7b0cc162817-config-volume\") pod \"collect-profiles-29500515-svbzd\" (UID: \"49267abf-7f15-4460-bbc4-d7b0cc162817\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500515-svbzd" Feb 02 11:15:00 crc kubenswrapper[4782]: I0202 11:15:00.241393 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxdvc\" (UniqueName: \"kubernetes.io/projected/49267abf-7f15-4460-bbc4-d7b0cc162817-kube-api-access-sxdvc\") pod \"collect-profiles-29500515-svbzd\" (UID: \"49267abf-7f15-4460-bbc4-d7b0cc162817\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500515-svbzd" Feb 02 11:15:00 crc kubenswrapper[4782]: I0202 11:15:00.343885 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sxdvc\" (UniqueName: \"kubernetes.io/projected/49267abf-7f15-4460-bbc4-d7b0cc162817-kube-api-access-sxdvc\") pod \"collect-profiles-29500515-svbzd\" (UID: \"49267abf-7f15-4460-bbc4-d7b0cc162817\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500515-svbzd" Feb 02 11:15:00 crc kubenswrapper[4782]: I0202 11:15:00.344079 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/49267abf-7f15-4460-bbc4-d7b0cc162817-secret-volume\") pod \"collect-profiles-29500515-svbzd\" (UID: \"49267abf-7f15-4460-bbc4-d7b0cc162817\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500515-svbzd" Feb 02 11:15:00 crc kubenswrapper[4782]: I0202 11:15:00.344122 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/49267abf-7f15-4460-bbc4-d7b0cc162817-config-volume\") pod \"collect-profiles-29500515-svbzd\" (UID: \"49267abf-7f15-4460-bbc4-d7b0cc162817\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500515-svbzd" Feb 02 11:15:00 crc kubenswrapper[4782]: I0202 11:15:00.345297 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/49267abf-7f15-4460-bbc4-d7b0cc162817-config-volume\") pod 
\"collect-profiles-29500515-svbzd\" (UID: \"49267abf-7f15-4460-bbc4-d7b0cc162817\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500515-svbzd" Feb 02 11:15:00 crc kubenswrapper[4782]: I0202 11:15:00.352431 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/49267abf-7f15-4460-bbc4-d7b0cc162817-secret-volume\") pod \"collect-profiles-29500515-svbzd\" (UID: \"49267abf-7f15-4460-bbc4-d7b0cc162817\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500515-svbzd" Feb 02 11:15:00 crc kubenswrapper[4782]: I0202 11:15:00.366500 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sxdvc\" (UniqueName: \"kubernetes.io/projected/49267abf-7f15-4460-bbc4-d7b0cc162817-kube-api-access-sxdvc\") pod \"collect-profiles-29500515-svbzd\" (UID: \"49267abf-7f15-4460-bbc4-d7b0cc162817\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500515-svbzd" Feb 02 11:15:00 crc kubenswrapper[4782]: I0202 11:15:00.476106 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500515-svbzd" Feb 02 11:15:00 crc kubenswrapper[4782]: I0202 11:15:00.922029 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500515-svbzd"] Feb 02 11:15:01 crc kubenswrapper[4782]: I0202 11:15:01.670518 4782 generic.go:334] "Generic (PLEG): container finished" podID="49267abf-7f15-4460-bbc4-d7b0cc162817" containerID="f26205a7a090662d7627013616952d20f36db3d708e8b6aa67a214bacd583878" exitCode=0 Feb 02 11:15:01 crc kubenswrapper[4782]: I0202 11:15:01.670675 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500515-svbzd" event={"ID":"49267abf-7f15-4460-bbc4-d7b0cc162817","Type":"ContainerDied","Data":"f26205a7a090662d7627013616952d20f36db3d708e8b6aa67a214bacd583878"} Feb 02 11:15:01 crc kubenswrapper[4782]: I0202 11:15:01.671152 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500515-svbzd" event={"ID":"49267abf-7f15-4460-bbc4-d7b0cc162817","Type":"ContainerStarted","Data":"e9a23527cd890dbeb14872b2ce5e6d5c6107846f37cf16e9c7d090e1b97c491f"} Feb 02 11:15:03 crc kubenswrapper[4782]: I0202 11:15:03.048756 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500515-svbzd" Feb 02 11:15:03 crc kubenswrapper[4782]: I0202 11:15:03.099979 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/49267abf-7f15-4460-bbc4-d7b0cc162817-secret-volume\") pod \"49267abf-7f15-4460-bbc4-d7b0cc162817\" (UID: \"49267abf-7f15-4460-bbc4-d7b0cc162817\") " Feb 02 11:15:03 crc kubenswrapper[4782]: I0202 11:15:03.100349 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sxdvc\" (UniqueName: \"kubernetes.io/projected/49267abf-7f15-4460-bbc4-d7b0cc162817-kube-api-access-sxdvc\") pod \"49267abf-7f15-4460-bbc4-d7b0cc162817\" (UID: \"49267abf-7f15-4460-bbc4-d7b0cc162817\") " Feb 02 11:15:03 crc kubenswrapper[4782]: I0202 11:15:03.100978 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/49267abf-7f15-4460-bbc4-d7b0cc162817-config-volume\") pod \"49267abf-7f15-4460-bbc4-d7b0cc162817\" (UID: \"49267abf-7f15-4460-bbc4-d7b0cc162817\") " Feb 02 11:15:03 crc kubenswrapper[4782]: I0202 11:15:03.101633 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49267abf-7f15-4460-bbc4-d7b0cc162817-config-volume" (OuterVolumeSpecName: "config-volume") pod "49267abf-7f15-4460-bbc4-d7b0cc162817" (UID: "49267abf-7f15-4460-bbc4-d7b0cc162817"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:15:03 crc kubenswrapper[4782]: I0202 11:15:03.102018 4782 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/49267abf-7f15-4460-bbc4-d7b0cc162817-config-volume\") on node \"crc\" DevicePath \"\"" Feb 02 11:15:03 crc kubenswrapper[4782]: I0202 11:15:03.117452 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49267abf-7f15-4460-bbc4-d7b0cc162817-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "49267abf-7f15-4460-bbc4-d7b0cc162817" (UID: "49267abf-7f15-4460-bbc4-d7b0cc162817"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:15:03 crc kubenswrapper[4782]: I0202 11:15:03.117543 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49267abf-7f15-4460-bbc4-d7b0cc162817-kube-api-access-sxdvc" (OuterVolumeSpecName: "kube-api-access-sxdvc") pod "49267abf-7f15-4460-bbc4-d7b0cc162817" (UID: "49267abf-7f15-4460-bbc4-d7b0cc162817"). InnerVolumeSpecName "kube-api-access-sxdvc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:15:03 crc kubenswrapper[4782]: I0202 11:15:03.203441 4782 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/49267abf-7f15-4460-bbc4-d7b0cc162817-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 02 11:15:03 crc kubenswrapper[4782]: I0202 11:15:03.203493 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sxdvc\" (UniqueName: \"kubernetes.io/projected/49267abf-7f15-4460-bbc4-d7b0cc162817-kube-api-access-sxdvc\") on node \"crc\" DevicePath \"\"" Feb 02 11:15:03 crc kubenswrapper[4782]: I0202 11:15:03.686431 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500515-svbzd" event={"ID":"49267abf-7f15-4460-bbc4-d7b0cc162817","Type":"ContainerDied","Data":"e9a23527cd890dbeb14872b2ce5e6d5c6107846f37cf16e9c7d090e1b97c491f"} Feb 02 11:15:03 crc kubenswrapper[4782]: I0202 11:15:03.686465 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500515-svbzd" Feb 02 11:15:03 crc kubenswrapper[4782]: I0202 11:15:03.686475 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e9a23527cd890dbeb14872b2ce5e6d5c6107846f37cf16e9c7d090e1b97c491f" Feb 02 11:15:04 crc kubenswrapper[4782]: I0202 11:15:04.143223 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500470-wxc6r"] Feb 02 11:15:04 crc kubenswrapper[4782]: I0202 11:15:04.152312 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500470-wxc6r"] Feb 02 11:15:04 crc kubenswrapper[4782]: I0202 11:15:04.831974 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9832aa65-d498-4a21-b53a-ebc591328a00" path="/var/lib/kubelet/pods/9832aa65-d498-4a21-b53a-ebc591328a00/volumes" Feb 02 11:15:22 crc kubenswrapper[4782]: I0202 11:15:22.951541 4782 patch_prober.go:28] interesting pod/machine-config-daemon-bhdgk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 11:15:22 crc kubenswrapper[4782]: I0202 11:15:22.952238 4782 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 11:15:22 crc kubenswrapper[4782]: I0202 11:15:22.952293 4782 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" Feb 02 11:15:22 crc kubenswrapper[4782]: I0202 11:15:22.953657 4782 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f12380752e6de4f8dedc92e062f8cb6f3d5a16260278e7b8b47bff7dc97ca296"} pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 11:15:22 crc kubenswrapper[4782]: I0202 11:15:22.953725 4782 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" containerName="machine-config-daemon" containerID="cri-o://f12380752e6de4f8dedc92e062f8cb6f3d5a16260278e7b8b47bff7dc97ca296" gracePeriod=600 Feb 02 11:15:23 crc kubenswrapper[4782]: I0202 11:15:23.837960 4782 generic.go:334] "Generic (PLEG): container finished" podID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" containerID="f12380752e6de4f8dedc92e062f8cb6f3d5a16260278e7b8b47bff7dc97ca296" exitCode=0 Feb 02 11:15:23 crc kubenswrapper[4782]: I0202 11:15:23.838036 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" event={"ID":"7919e98f-cc47-4f3c-9c53-6313850ea7b8","Type":"ContainerDied","Data":"f12380752e6de4f8dedc92e062f8cb6f3d5a16260278e7b8b47bff7dc97ca296"} Feb 02 11:15:23 crc kubenswrapper[4782]: I0202 11:15:23.838304 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" event={"ID":"7919e98f-cc47-4f3c-9c53-6313850ea7b8","Type":"ContainerStarted","Data":"5d4753fce570617e864276d34772208f83d3fd6766212b5ad5f002f122bc2ca9"} Feb 02 11:15:23 crc kubenswrapper[4782]: I0202 11:15:23.838325 4782 scope.go:117] "RemoveContainer" containerID="5bd9469df7c42cfd147763cb8f1b67e82d85e708d8dde6eea1a93320f7dbc9c8" Feb 02 11:15:50 crc kubenswrapper[4782]: I0202 11:15:50.813801 4782 scope.go:117] "RemoveContainer" containerID="b85748eff3923d08bc6d620f725d7b018256a0e4610871950b9aeb66eccc2539" Feb 02 11:16:18 crc kubenswrapper[4782]: I0202 11:16:18.866247 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-dkcrj"] Feb 02 11:16:18 crc kubenswrapper[4782]: E0202 11:16:18.867242 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49267abf-7f15-4460-bbc4-d7b0cc162817" containerName="collect-profiles" Feb 02 11:16:18 crc kubenswrapper[4782]: I0202 11:16:18.867258 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="49267abf-7f15-4460-bbc4-d7b0cc162817" containerName="collect-profiles" Feb 02 11:16:18 crc kubenswrapper[4782]: I0202 11:16:18.867500 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="49267abf-7f15-4460-bbc4-d7b0cc162817" containerName="collect-profiles" Feb 02 11:16:18 crc kubenswrapper[4782]: I0202 11:16:18.869042 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-dkcrj" Feb 02 11:16:18 crc kubenswrapper[4782]: I0202 11:16:18.882934 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dkcrj"] Feb 02 11:16:18 crc kubenswrapper[4782]: I0202 11:16:18.909752 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vftck\" (UniqueName: \"kubernetes.io/projected/f4ca49fd-6a65-44d2-9733-5dd64b0c0552-kube-api-access-vftck\") pod \"certified-operators-dkcrj\" (UID: \"f4ca49fd-6a65-44d2-9733-5dd64b0c0552\") " pod="openshift-marketplace/certified-operators-dkcrj" Feb 02 11:16:18 crc kubenswrapper[4782]: I0202 11:16:18.910085 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4ca49fd-6a65-44d2-9733-5dd64b0c0552-catalog-content\") pod \"certified-operators-dkcrj\" (UID: \"f4ca49fd-6a65-44d2-9733-5dd64b0c0552\") " pod="openshift-marketplace/certified-operators-dkcrj" Feb 02 11:16:18 crc kubenswrapper[4782]: I0202 11:16:18.910290 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4ca49fd-6a65-44d2-9733-5dd64b0c0552-utilities\") pod \"certified-operators-dkcrj\" (UID: \"f4ca49fd-6a65-44d2-9733-5dd64b0c0552\") " pod="openshift-marketplace/certified-operators-dkcrj" Feb 02 11:16:19 crc kubenswrapper[4782]: I0202 11:16:19.011723 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4ca49fd-6a65-44d2-9733-5dd64b0c0552-utilities\") pod \"certified-operators-dkcrj\" (UID: \"f4ca49fd-6a65-44d2-9733-5dd64b0c0552\") " pod="openshift-marketplace/certified-operators-dkcrj" Feb 02 11:16:19 crc kubenswrapper[4782]: I0202 11:16:19.011812 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vftck\" (UniqueName: \"kubernetes.io/projected/f4ca49fd-6a65-44d2-9733-5dd64b0c0552-kube-api-access-vftck\") pod \"certified-operators-dkcrj\" (UID: \"f4ca49fd-6a65-44d2-9733-5dd64b0c0552\") " pod="openshift-marketplace/certified-operators-dkcrj" Feb 02 11:16:19 crc kubenswrapper[4782]: I0202 11:16:19.011915 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4ca49fd-6a65-44d2-9733-5dd64b0c0552-catalog-content\") pod \"certified-operators-dkcrj\" (UID: \"f4ca49fd-6a65-44d2-9733-5dd64b0c0552\") " pod="openshift-marketplace/certified-operators-dkcrj" Feb 02 11:16:19 crc kubenswrapper[4782]: I0202 11:16:19.012290 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4ca49fd-6a65-44d2-9733-5dd64b0c0552-catalog-content\") pod \"certified-operators-dkcrj\" (UID: \"f4ca49fd-6a65-44d2-9733-5dd64b0c0552\") " pod="openshift-marketplace/certified-operators-dkcrj" Feb 02 11:16:19 crc kubenswrapper[4782]: I0202 11:16:19.012487 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4ca49fd-6a65-44d2-9733-5dd64b0c0552-utilities\") pod \"certified-operators-dkcrj\" (UID: \"f4ca49fd-6a65-44d2-9733-5dd64b0c0552\") " pod="openshift-marketplace/certified-operators-dkcrj" Feb 02 11:16:19 crc kubenswrapper[4782]: I0202 11:16:19.031764 4782 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-vftck\" (UniqueName: \"kubernetes.io/projected/f4ca49fd-6a65-44d2-9733-5dd64b0c0552-kube-api-access-vftck\") pod \"certified-operators-dkcrj\" (UID: \"f4ca49fd-6a65-44d2-9733-5dd64b0c0552\") " pod="openshift-marketplace/certified-operators-dkcrj" Feb 02 11:16:19 crc kubenswrapper[4782]: I0202 11:16:19.186839 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dkcrj" Feb 02 11:16:19 crc kubenswrapper[4782]: I0202 11:16:19.562663 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dkcrj"] Feb 02 11:16:20 crc kubenswrapper[4782]: I0202 11:16:20.305459 4782 generic.go:334] "Generic (PLEG): container finished" podID="f4ca49fd-6a65-44d2-9733-5dd64b0c0552" containerID="4f355a8d92860281a248871c0f6b7abf61144baddbf5b5f94fa5faf5e7f80687" exitCode=0 Feb 02 11:16:20 crc kubenswrapper[4782]: I0202 11:16:20.305520 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dkcrj" event={"ID":"f4ca49fd-6a65-44d2-9733-5dd64b0c0552","Type":"ContainerDied","Data":"4f355a8d92860281a248871c0f6b7abf61144baddbf5b5f94fa5faf5e7f80687"} Feb 02 11:16:20 crc kubenswrapper[4782]: I0202 11:16:20.306125 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dkcrj" event={"ID":"f4ca49fd-6a65-44d2-9733-5dd64b0c0552","Type":"ContainerStarted","Data":"ecf6ff51b42e9000b6e16920f1a53fa2716549df5b3034607af134a5eda026c3"} Feb 02 11:16:21 crc kubenswrapper[4782]: I0202 11:16:21.315266 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dkcrj" event={"ID":"f4ca49fd-6a65-44d2-9733-5dd64b0c0552","Type":"ContainerStarted","Data":"ad99de29ccd5018e67fbf1330282bc47fc6012014c1c2488c5771de79ec85206"} Feb 02 11:16:22 crc kubenswrapper[4782]: I0202 11:16:22.323814 4782 generic.go:334] "Generic (PLEG): container finished" podID="f4ca49fd-6a65-44d2-9733-5dd64b0c0552" containerID="ad99de29ccd5018e67fbf1330282bc47fc6012014c1c2488c5771de79ec85206" exitCode=0 Feb 02 11:16:22 crc kubenswrapper[4782]: I0202 11:16:22.323852 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dkcrj" event={"ID":"f4ca49fd-6a65-44d2-9733-5dd64b0c0552","Type":"ContainerDied","Data":"ad99de29ccd5018e67fbf1330282bc47fc6012014c1c2488c5771de79ec85206"} Feb 02 11:16:23 crc kubenswrapper[4782]: I0202 11:16:23.333426 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dkcrj" event={"ID":"f4ca49fd-6a65-44d2-9733-5dd64b0c0552","Type":"ContainerStarted","Data":"49b5f2559cd342427760f279a69c091b47cd7b6f480df8aa413c3c83e47da387"} Feb 02 11:16:23 crc kubenswrapper[4782]: I0202 11:16:23.356078 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-dkcrj" podStartSLOduration=2.938981054 podStartE2EDuration="5.356059331s" podCreationTimestamp="2026-02-02 11:16:18 +0000 UTC" firstStartedPulling="2026-02-02 11:16:20.307665073 +0000 UTC m=+2260.191857789" lastFinishedPulling="2026-02-02 11:16:22.72474335 +0000 UTC m=+2262.608936066" observedRunningTime="2026-02-02 11:16:23.354842456 +0000 UTC m=+2263.239035192" watchObservedRunningTime="2026-02-02 11:16:23.356059331 +0000 UTC m=+2263.240252057" Feb 02 11:16:24 crc kubenswrapper[4782]: I0202 11:16:24.250889 4782 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/redhat-marketplace-kb292"] Feb 02 11:16:24 crc kubenswrapper[4782]: I0202 11:16:24.252875 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kb292" Feb 02 11:16:24 crc kubenswrapper[4782]: I0202 11:16:24.275341 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-kb292"] Feb 02 11:16:24 crc kubenswrapper[4782]: I0202 11:16:24.308224 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhd8g\" (UniqueName: \"kubernetes.io/projected/b8fb06dc-1bfc-4b37-a62e-9ebe2b22ae27-kube-api-access-nhd8g\") pod \"redhat-marketplace-kb292\" (UID: \"b8fb06dc-1bfc-4b37-a62e-9ebe2b22ae27\") " pod="openshift-marketplace/redhat-marketplace-kb292" Feb 02 11:16:24 crc kubenswrapper[4782]: I0202 11:16:24.308350 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b8fb06dc-1bfc-4b37-a62e-9ebe2b22ae27-catalog-content\") pod \"redhat-marketplace-kb292\" (UID: \"b8fb06dc-1bfc-4b37-a62e-9ebe2b22ae27\") " pod="openshift-marketplace/redhat-marketplace-kb292" Feb 02 11:16:24 crc kubenswrapper[4782]: I0202 11:16:24.308460 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b8fb06dc-1bfc-4b37-a62e-9ebe2b22ae27-utilities\") pod \"redhat-marketplace-kb292\" (UID: \"b8fb06dc-1bfc-4b37-a62e-9ebe2b22ae27\") " pod="openshift-marketplace/redhat-marketplace-kb292" Feb 02 11:16:24 crc kubenswrapper[4782]: I0202 11:16:24.409961 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b8fb06dc-1bfc-4b37-a62e-9ebe2b22ae27-utilities\") pod \"redhat-marketplace-kb292\" (UID: \"b8fb06dc-1bfc-4b37-a62e-9ebe2b22ae27\") " pod="openshift-marketplace/redhat-marketplace-kb292" Feb 02 11:16:24 crc kubenswrapper[4782]: I0202 11:16:24.410062 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nhd8g\" (UniqueName: \"kubernetes.io/projected/b8fb06dc-1bfc-4b37-a62e-9ebe2b22ae27-kube-api-access-nhd8g\") pod \"redhat-marketplace-kb292\" (UID: \"b8fb06dc-1bfc-4b37-a62e-9ebe2b22ae27\") " pod="openshift-marketplace/redhat-marketplace-kb292" Feb 02 11:16:24 crc kubenswrapper[4782]: I0202 11:16:24.410121 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b8fb06dc-1bfc-4b37-a62e-9ebe2b22ae27-catalog-content\") pod \"redhat-marketplace-kb292\" (UID: \"b8fb06dc-1bfc-4b37-a62e-9ebe2b22ae27\") " pod="openshift-marketplace/redhat-marketplace-kb292" Feb 02 11:16:24 crc kubenswrapper[4782]: I0202 11:16:24.410506 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b8fb06dc-1bfc-4b37-a62e-9ebe2b22ae27-utilities\") pod \"redhat-marketplace-kb292\" (UID: \"b8fb06dc-1bfc-4b37-a62e-9ebe2b22ae27\") " pod="openshift-marketplace/redhat-marketplace-kb292" Feb 02 11:16:24 crc kubenswrapper[4782]: I0202 11:16:24.410566 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b8fb06dc-1bfc-4b37-a62e-9ebe2b22ae27-catalog-content\") pod \"redhat-marketplace-kb292\" (UID: \"b8fb06dc-1bfc-4b37-a62e-9ebe2b22ae27\") 
" pod="openshift-marketplace/redhat-marketplace-kb292" Feb 02 11:16:24 crc kubenswrapper[4782]: I0202 11:16:24.439286 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nhd8g\" (UniqueName: \"kubernetes.io/projected/b8fb06dc-1bfc-4b37-a62e-9ebe2b22ae27-kube-api-access-nhd8g\") pod \"redhat-marketplace-kb292\" (UID: \"b8fb06dc-1bfc-4b37-a62e-9ebe2b22ae27\") " pod="openshift-marketplace/redhat-marketplace-kb292" Feb 02 11:16:24 crc kubenswrapper[4782]: I0202 11:16:24.578248 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kb292" Feb 02 11:16:25 crc kubenswrapper[4782]: I0202 11:16:25.067342 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-kb292"] Feb 02 11:16:25 crc kubenswrapper[4782]: W0202 11:16:25.080290 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb8fb06dc_1bfc_4b37_a62e_9ebe2b22ae27.slice/crio-211d97d65ac8659a00ee63732b70972188a676dc2daaeef83138d1cd8953071a WatchSource:0}: Error finding container 211d97d65ac8659a00ee63732b70972188a676dc2daaeef83138d1cd8953071a: Status 404 returned error can't find the container with id 211d97d65ac8659a00ee63732b70972188a676dc2daaeef83138d1cd8953071a Feb 02 11:16:25 crc kubenswrapper[4782]: I0202 11:16:25.352046 4782 generic.go:334] "Generic (PLEG): container finished" podID="b8fb06dc-1bfc-4b37-a62e-9ebe2b22ae27" containerID="1f51c3c59acdbb660f654f6af48489b0faf167956c07ddeabe9fd2585482b2e3" exitCode=0 Feb 02 11:16:25 crc kubenswrapper[4782]: I0202 11:16:25.352119 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kb292" event={"ID":"b8fb06dc-1bfc-4b37-a62e-9ebe2b22ae27","Type":"ContainerDied","Data":"1f51c3c59acdbb660f654f6af48489b0faf167956c07ddeabe9fd2585482b2e3"} Feb 02 11:16:25 crc kubenswrapper[4782]: I0202 11:16:25.352324 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kb292" event={"ID":"b8fb06dc-1bfc-4b37-a62e-9ebe2b22ae27","Type":"ContainerStarted","Data":"211d97d65ac8659a00ee63732b70972188a676dc2daaeef83138d1cd8953071a"} Feb 02 11:16:27 crc kubenswrapper[4782]: I0202 11:16:27.381405 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kb292" event={"ID":"b8fb06dc-1bfc-4b37-a62e-9ebe2b22ae27","Type":"ContainerStarted","Data":"10d257af7ef5633fca1f0130f935da3e13f4f6e84f94aa3b870cda01974c2dd1"} Feb 02 11:16:29 crc kubenswrapper[4782]: I0202 11:16:29.189870 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-dkcrj" Feb 02 11:16:29 crc kubenswrapper[4782]: I0202 11:16:29.190171 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-dkcrj" Feb 02 11:16:29 crc kubenswrapper[4782]: I0202 11:16:29.237240 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-dkcrj" Feb 02 11:16:29 crc kubenswrapper[4782]: I0202 11:16:29.397863 4782 generic.go:334] "Generic (PLEG): container finished" podID="b8fb06dc-1bfc-4b37-a62e-9ebe2b22ae27" containerID="10d257af7ef5633fca1f0130f935da3e13f4f6e84f94aa3b870cda01974c2dd1" exitCode=0 Feb 02 11:16:29 crc kubenswrapper[4782]: I0202 11:16:29.397929 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-kb292" event={"ID":"b8fb06dc-1bfc-4b37-a62e-9ebe2b22ae27","Type":"ContainerDied","Data":"10d257af7ef5633fca1f0130f935da3e13f4f6e84f94aa3b870cda01974c2dd1"} Feb 02 11:16:29 crc kubenswrapper[4782]: I0202 11:16:29.443726 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-dkcrj" Feb 02 11:16:30 crc kubenswrapper[4782]: I0202 11:16:30.408305 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kb292" event={"ID":"b8fb06dc-1bfc-4b37-a62e-9ebe2b22ae27","Type":"ContainerStarted","Data":"d4e0726a5e92507bf8913353b75228f8f0ac8a4ed09f855b53071bd84be71776"} Feb 02 11:16:30 crc kubenswrapper[4782]: I0202 11:16:30.426529 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-kb292" podStartSLOduration=1.764434789 podStartE2EDuration="6.426504239s" podCreationTimestamp="2026-02-02 11:16:24 +0000 UTC" firstStartedPulling="2026-02-02 11:16:25.353979078 +0000 UTC m=+2265.238171794" lastFinishedPulling="2026-02-02 11:16:30.016048528 +0000 UTC m=+2269.900241244" observedRunningTime="2026-02-02 11:16:30.42410551 +0000 UTC m=+2270.308298226" watchObservedRunningTime="2026-02-02 11:16:30.426504239 +0000 UTC m=+2270.310696955" Feb 02 11:16:31 crc kubenswrapper[4782]: I0202 11:16:31.838882 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dkcrj"] Feb 02 11:16:31 crc kubenswrapper[4782]: I0202 11:16:31.839123 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-dkcrj" podUID="f4ca49fd-6a65-44d2-9733-5dd64b0c0552" containerName="registry-server" containerID="cri-o://49b5f2559cd342427760f279a69c091b47cd7b6f480df8aa413c3c83e47da387" gracePeriod=2 Feb 02 11:16:32 crc kubenswrapper[4782]: I0202 11:16:32.297673 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dkcrj" Feb 02 11:16:32 crc kubenswrapper[4782]: I0202 11:16:32.310345 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4ca49fd-6a65-44d2-9733-5dd64b0c0552-catalog-content\") pod \"f4ca49fd-6a65-44d2-9733-5dd64b0c0552\" (UID: \"f4ca49fd-6a65-44d2-9733-5dd64b0c0552\") " Feb 02 11:16:32 crc kubenswrapper[4782]: I0202 11:16:32.310573 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vftck\" (UniqueName: \"kubernetes.io/projected/f4ca49fd-6a65-44d2-9733-5dd64b0c0552-kube-api-access-vftck\") pod \"f4ca49fd-6a65-44d2-9733-5dd64b0c0552\" (UID: \"f4ca49fd-6a65-44d2-9733-5dd64b0c0552\") " Feb 02 11:16:32 crc kubenswrapper[4782]: I0202 11:16:32.310621 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4ca49fd-6a65-44d2-9733-5dd64b0c0552-utilities\") pod \"f4ca49fd-6a65-44d2-9733-5dd64b0c0552\" (UID: \"f4ca49fd-6a65-44d2-9733-5dd64b0c0552\") " Feb 02 11:16:32 crc kubenswrapper[4782]: I0202 11:16:32.311287 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f4ca49fd-6a65-44d2-9733-5dd64b0c0552-utilities" (OuterVolumeSpecName: "utilities") pod "f4ca49fd-6a65-44d2-9733-5dd64b0c0552" (UID: "f4ca49fd-6a65-44d2-9733-5dd64b0c0552"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:16:32 crc kubenswrapper[4782]: I0202 11:16:32.311837 4782 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4ca49fd-6a65-44d2-9733-5dd64b0c0552-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 11:16:32 crc kubenswrapper[4782]: I0202 11:16:32.317337 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4ca49fd-6a65-44d2-9733-5dd64b0c0552-kube-api-access-vftck" (OuterVolumeSpecName: "kube-api-access-vftck") pod "f4ca49fd-6a65-44d2-9733-5dd64b0c0552" (UID: "f4ca49fd-6a65-44d2-9733-5dd64b0c0552"). InnerVolumeSpecName "kube-api-access-vftck". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:16:32 crc kubenswrapper[4782]: I0202 11:16:32.361737 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f4ca49fd-6a65-44d2-9733-5dd64b0c0552-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f4ca49fd-6a65-44d2-9733-5dd64b0c0552" (UID: "f4ca49fd-6a65-44d2-9733-5dd64b0c0552"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:16:32 crc kubenswrapper[4782]: I0202 11:16:32.413835 4782 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4ca49fd-6a65-44d2-9733-5dd64b0c0552-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 11:16:32 crc kubenswrapper[4782]: I0202 11:16:32.413874 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vftck\" (UniqueName: \"kubernetes.io/projected/f4ca49fd-6a65-44d2-9733-5dd64b0c0552-kube-api-access-vftck\") on node \"crc\" DevicePath \"\"" Feb 02 11:16:32 crc kubenswrapper[4782]: I0202 11:16:32.424843 4782 generic.go:334] "Generic (PLEG): container finished" podID="f4ca49fd-6a65-44d2-9733-5dd64b0c0552" containerID="49b5f2559cd342427760f279a69c091b47cd7b6f480df8aa413c3c83e47da387" exitCode=0 Feb 02 11:16:32 crc kubenswrapper[4782]: I0202 11:16:32.424892 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dkcrj" event={"ID":"f4ca49fd-6a65-44d2-9733-5dd64b0c0552","Type":"ContainerDied","Data":"49b5f2559cd342427760f279a69c091b47cd7b6f480df8aa413c3c83e47da387"} Feb 02 11:16:32 crc kubenswrapper[4782]: I0202 11:16:32.424917 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dkcrj" event={"ID":"f4ca49fd-6a65-44d2-9733-5dd64b0c0552","Type":"ContainerDied","Data":"ecf6ff51b42e9000b6e16920f1a53fa2716549df5b3034607af134a5eda026c3"} Feb 02 11:16:32 crc kubenswrapper[4782]: I0202 11:16:32.424936 4782 scope.go:117] "RemoveContainer" containerID="49b5f2559cd342427760f279a69c091b47cd7b6f480df8aa413c3c83e47da387" Feb 02 11:16:32 crc kubenswrapper[4782]: I0202 11:16:32.425053 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-dkcrj" Feb 02 11:16:32 crc kubenswrapper[4782]: I0202 11:16:32.448147 4782 scope.go:117] "RemoveContainer" containerID="ad99de29ccd5018e67fbf1330282bc47fc6012014c1c2488c5771de79ec85206" Feb 02 11:16:32 crc kubenswrapper[4782]: I0202 11:16:32.472743 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dkcrj"] Feb 02 11:16:32 crc kubenswrapper[4782]: I0202 11:16:32.473656 4782 scope.go:117] "RemoveContainer" containerID="4f355a8d92860281a248871c0f6b7abf61144baddbf5b5f94fa5faf5e7f80687" Feb 02 11:16:32 crc kubenswrapper[4782]: I0202 11:16:32.480821 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-dkcrj"] Feb 02 11:16:32 crc kubenswrapper[4782]: I0202 11:16:32.507945 4782 scope.go:117] "RemoveContainer" containerID="49b5f2559cd342427760f279a69c091b47cd7b6f480df8aa413c3c83e47da387" Feb 02 11:16:32 crc kubenswrapper[4782]: E0202 11:16:32.509071 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"49b5f2559cd342427760f279a69c091b47cd7b6f480df8aa413c3c83e47da387\": container with ID starting with 49b5f2559cd342427760f279a69c091b47cd7b6f480df8aa413c3c83e47da387 not found: ID does not exist" containerID="49b5f2559cd342427760f279a69c091b47cd7b6f480df8aa413c3c83e47da387" Feb 02 11:16:32 crc kubenswrapper[4782]: I0202 11:16:32.509101 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"49b5f2559cd342427760f279a69c091b47cd7b6f480df8aa413c3c83e47da387"} err="failed to get container status \"49b5f2559cd342427760f279a69c091b47cd7b6f480df8aa413c3c83e47da387\": rpc error: code = NotFound desc = could not find container \"49b5f2559cd342427760f279a69c091b47cd7b6f480df8aa413c3c83e47da387\": container with ID starting with 49b5f2559cd342427760f279a69c091b47cd7b6f480df8aa413c3c83e47da387 not found: ID does not exist" Feb 02 11:16:32 crc kubenswrapper[4782]: I0202 11:16:32.509123 4782 scope.go:117] "RemoveContainer" containerID="ad99de29ccd5018e67fbf1330282bc47fc6012014c1c2488c5771de79ec85206" Feb 02 11:16:32 crc kubenswrapper[4782]: E0202 11:16:32.509549 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ad99de29ccd5018e67fbf1330282bc47fc6012014c1c2488c5771de79ec85206\": container with ID starting with ad99de29ccd5018e67fbf1330282bc47fc6012014c1c2488c5771de79ec85206 not found: ID does not exist" containerID="ad99de29ccd5018e67fbf1330282bc47fc6012014c1c2488c5771de79ec85206" Feb 02 11:16:32 crc kubenswrapper[4782]: I0202 11:16:32.509680 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad99de29ccd5018e67fbf1330282bc47fc6012014c1c2488c5771de79ec85206"} err="failed to get container status \"ad99de29ccd5018e67fbf1330282bc47fc6012014c1c2488c5771de79ec85206\": rpc error: code = NotFound desc = could not find container \"ad99de29ccd5018e67fbf1330282bc47fc6012014c1c2488c5771de79ec85206\": container with ID starting with ad99de29ccd5018e67fbf1330282bc47fc6012014c1c2488c5771de79ec85206 not found: ID does not exist" Feb 02 11:16:32 crc kubenswrapper[4782]: I0202 11:16:32.509790 4782 scope.go:117] "RemoveContainer" containerID="4f355a8d92860281a248871c0f6b7abf61144baddbf5b5f94fa5faf5e7f80687" Feb 02 11:16:32 crc kubenswrapper[4782]: E0202 11:16:32.511266 4782 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"4f355a8d92860281a248871c0f6b7abf61144baddbf5b5f94fa5faf5e7f80687\": container with ID starting with 4f355a8d92860281a248871c0f6b7abf61144baddbf5b5f94fa5faf5e7f80687 not found: ID does not exist" containerID="4f355a8d92860281a248871c0f6b7abf61144baddbf5b5f94fa5faf5e7f80687" Feb 02 11:16:32 crc kubenswrapper[4782]: I0202 11:16:32.511366 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f355a8d92860281a248871c0f6b7abf61144baddbf5b5f94fa5faf5e7f80687"} err="failed to get container status \"4f355a8d92860281a248871c0f6b7abf61144baddbf5b5f94fa5faf5e7f80687\": rpc error: code = NotFound desc = could not find container \"4f355a8d92860281a248871c0f6b7abf61144baddbf5b5f94fa5faf5e7f80687\": container with ID starting with 4f355a8d92860281a248871c0f6b7abf61144baddbf5b5f94fa5faf5e7f80687 not found: ID does not exist" Feb 02 11:16:32 crc kubenswrapper[4782]: I0202 11:16:32.830402 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4ca49fd-6a65-44d2-9733-5dd64b0c0552" path="/var/lib/kubelet/pods/f4ca49fd-6a65-44d2-9733-5dd64b0c0552/volumes" Feb 02 11:16:33 crc kubenswrapper[4782]: I0202 11:16:33.438879 4782 generic.go:334] "Generic (PLEG): container finished" podID="14dddbe2-21a7-417a-8d21-ab97f18aef5d" containerID="69de4465f107269ffd74eebaf2f980c3701cfb9aad1cfbb6b352c4678c7d6844" exitCode=0 Feb 02 11:16:33 crc kubenswrapper[4782]: I0202 11:16:33.438955 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pdnch" event={"ID":"14dddbe2-21a7-417a-8d21-ab97f18aef5d","Type":"ContainerDied","Data":"69de4465f107269ffd74eebaf2f980c3701cfb9aad1cfbb6b352c4678c7d6844"} Feb 02 11:16:34 crc kubenswrapper[4782]: I0202 11:16:34.579846 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-kb292" Feb 02 11:16:34 crc kubenswrapper[4782]: I0202 11:16:34.580118 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-kb292" Feb 02 11:16:34 crc kubenswrapper[4782]: I0202 11:16:34.628228 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-kb292" Feb 02 11:16:34 crc kubenswrapper[4782]: I0202 11:16:34.822089 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pdnch" Feb 02 11:16:34 crc kubenswrapper[4782]: I0202 11:16:34.874874 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14dddbe2-21a7-417a-8d21-ab97f18aef5d-bootstrap-combined-ca-bundle\") pod \"14dddbe2-21a7-417a-8d21-ab97f18aef5d\" (UID: \"14dddbe2-21a7-417a-8d21-ab97f18aef5d\") " Feb 02 11:16:34 crc kubenswrapper[4782]: I0202 11:16:34.875056 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/14dddbe2-21a7-417a-8d21-ab97f18aef5d-ssh-key-openstack-edpm-ipam\") pod \"14dddbe2-21a7-417a-8d21-ab97f18aef5d\" (UID: \"14dddbe2-21a7-417a-8d21-ab97f18aef5d\") " Feb 02 11:16:34 crc kubenswrapper[4782]: I0202 11:16:34.875432 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/14dddbe2-21a7-417a-8d21-ab97f18aef5d-inventory\") pod \"14dddbe2-21a7-417a-8d21-ab97f18aef5d\" (UID: \"14dddbe2-21a7-417a-8d21-ab97f18aef5d\") " Feb 02 11:16:34 crc kubenswrapper[4782]: I0202 11:16:34.875463 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/14dddbe2-21a7-417a-8d21-ab97f18aef5d-ceph\") pod \"14dddbe2-21a7-417a-8d21-ab97f18aef5d\" (UID: \"14dddbe2-21a7-417a-8d21-ab97f18aef5d\") " Feb 02 11:16:34 crc kubenswrapper[4782]: I0202 11:16:34.875489 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hggwq\" (UniqueName: \"kubernetes.io/projected/14dddbe2-21a7-417a-8d21-ab97f18aef5d-kube-api-access-hggwq\") pod \"14dddbe2-21a7-417a-8d21-ab97f18aef5d\" (UID: \"14dddbe2-21a7-417a-8d21-ab97f18aef5d\") " Feb 02 11:16:34 crc kubenswrapper[4782]: I0202 11:16:34.881843 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14dddbe2-21a7-417a-8d21-ab97f18aef5d-kube-api-access-hggwq" (OuterVolumeSpecName: "kube-api-access-hggwq") pod "14dddbe2-21a7-417a-8d21-ab97f18aef5d" (UID: "14dddbe2-21a7-417a-8d21-ab97f18aef5d"). InnerVolumeSpecName "kube-api-access-hggwq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:16:34 crc kubenswrapper[4782]: I0202 11:16:34.882097 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14dddbe2-21a7-417a-8d21-ab97f18aef5d-ceph" (OuterVolumeSpecName: "ceph") pod "14dddbe2-21a7-417a-8d21-ab97f18aef5d" (UID: "14dddbe2-21a7-417a-8d21-ab97f18aef5d"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:16:34 crc kubenswrapper[4782]: I0202 11:16:34.885831 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14dddbe2-21a7-417a-8d21-ab97f18aef5d-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "14dddbe2-21a7-417a-8d21-ab97f18aef5d" (UID: "14dddbe2-21a7-417a-8d21-ab97f18aef5d"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:16:34 crc kubenswrapper[4782]: I0202 11:16:34.900738 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14dddbe2-21a7-417a-8d21-ab97f18aef5d-inventory" (OuterVolumeSpecName: "inventory") pod "14dddbe2-21a7-417a-8d21-ab97f18aef5d" (UID: "14dddbe2-21a7-417a-8d21-ab97f18aef5d"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:16:34 crc kubenswrapper[4782]: I0202 11:16:34.906256 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14dddbe2-21a7-417a-8d21-ab97f18aef5d-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "14dddbe2-21a7-417a-8d21-ab97f18aef5d" (UID: "14dddbe2-21a7-417a-8d21-ab97f18aef5d"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:16:34 crc kubenswrapper[4782]: I0202 11:16:34.977415 4782 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14dddbe2-21a7-417a-8d21-ab97f18aef5d-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 11:16:34 crc kubenswrapper[4782]: I0202 11:16:34.977457 4782 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/14dddbe2-21a7-417a-8d21-ab97f18aef5d-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 02 11:16:34 crc kubenswrapper[4782]: I0202 11:16:34.977470 4782 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/14dddbe2-21a7-417a-8d21-ab97f18aef5d-inventory\") on node \"crc\" DevicePath \"\"" Feb 02 11:16:34 crc kubenswrapper[4782]: I0202 11:16:34.977483 4782 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/14dddbe2-21a7-417a-8d21-ab97f18aef5d-ceph\") on node \"crc\" DevicePath \"\"" Feb 02 11:16:34 crc kubenswrapper[4782]: I0202 11:16:34.977494 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hggwq\" (UniqueName: \"kubernetes.io/projected/14dddbe2-21a7-417a-8d21-ab97f18aef5d-kube-api-access-hggwq\") on node \"crc\" DevicePath \"\"" Feb 02 11:16:35 crc kubenswrapper[4782]: I0202 11:16:35.455236 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pdnch" event={"ID":"14dddbe2-21a7-417a-8d21-ab97f18aef5d","Type":"ContainerDied","Data":"05f3f00be72c9cbb26f985a3b0b234e1373a612190801d2ad37e870a76ce2098"} Feb 02 11:16:35 crc kubenswrapper[4782]: I0202 11:16:35.455286 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="05f3f00be72c9cbb26f985a3b0b234e1373a612190801d2ad37e870a76ce2098" Feb 02 11:16:35 crc kubenswrapper[4782]: I0202 11:16:35.455259 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pdnch" Feb 02 11:16:35 crc kubenswrapper[4782]: I0202 11:16:35.503231 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-kb292" Feb 02 11:16:35 crc kubenswrapper[4782]: I0202 11:16:35.559522 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qq7xf"] Feb 02 11:16:35 crc kubenswrapper[4782]: E0202 11:16:35.560530 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14dddbe2-21a7-417a-8d21-ab97f18aef5d" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 02 11:16:35 crc kubenswrapper[4782]: I0202 11:16:35.560628 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="14dddbe2-21a7-417a-8d21-ab97f18aef5d" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 02 11:16:35 crc kubenswrapper[4782]: E0202 11:16:35.560870 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4ca49fd-6a65-44d2-9733-5dd64b0c0552" containerName="extract-content" Feb 02 11:16:35 crc kubenswrapper[4782]: I0202 11:16:35.560945 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4ca49fd-6a65-44d2-9733-5dd64b0c0552" containerName="extract-content" Feb 02 11:16:35 crc kubenswrapper[4782]: E0202 11:16:35.561224 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4ca49fd-6a65-44d2-9733-5dd64b0c0552" containerName="registry-server" Feb 02 11:16:35 crc kubenswrapper[4782]: I0202 11:16:35.561431 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4ca49fd-6a65-44d2-9733-5dd64b0c0552" containerName="registry-server" Feb 02 11:16:35 crc kubenswrapper[4782]: E0202 11:16:35.561521 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4ca49fd-6a65-44d2-9733-5dd64b0c0552" containerName="extract-utilities" Feb 02 11:16:35 crc kubenswrapper[4782]: I0202 11:16:35.561586 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4ca49fd-6a65-44d2-9733-5dd64b0c0552" containerName="extract-utilities" Feb 02 11:16:35 crc kubenswrapper[4782]: I0202 11:16:35.561906 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="14dddbe2-21a7-417a-8d21-ab97f18aef5d" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 02 11:16:35 crc kubenswrapper[4782]: I0202 11:16:35.562012 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4ca49fd-6a65-44d2-9733-5dd64b0c0552" containerName="registry-server" Feb 02 11:16:35 crc kubenswrapper[4782]: I0202 11:16:35.562704 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qq7xf" Feb 02 11:16:35 crc kubenswrapper[4782]: I0202 11:16:35.565047 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jhgxt" Feb 02 11:16:35 crc kubenswrapper[4782]: I0202 11:16:35.565375 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Feb 02 11:16:35 crc kubenswrapper[4782]: I0202 11:16:35.565516 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 02 11:16:35 crc kubenswrapper[4782]: I0202 11:16:35.565690 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 02 11:16:35 crc kubenswrapper[4782]: I0202 11:16:35.567012 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 02 11:16:35 crc kubenswrapper[4782]: I0202 11:16:35.572538 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qq7xf"] Feb 02 11:16:35 crc kubenswrapper[4782]: I0202 11:16:35.595153 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/23a1d5dc-9cfd-4c8a-8534-db3075d99574-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-qq7xf\" (UID: \"23a1d5dc-9cfd-4c8a-8534-db3075d99574\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qq7xf" Feb 02 11:16:35 crc kubenswrapper[4782]: I0202 11:16:35.595241 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/23a1d5dc-9cfd-4c8a-8534-db3075d99574-ceph\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-qq7xf\" (UID: \"23a1d5dc-9cfd-4c8a-8534-db3075d99574\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qq7xf" Feb 02 11:16:35 crc kubenswrapper[4782]: I0202 11:16:35.595268 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4dt85\" (UniqueName: \"kubernetes.io/projected/23a1d5dc-9cfd-4c8a-8534-db3075d99574-kube-api-access-4dt85\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-qq7xf\" (UID: \"23a1d5dc-9cfd-4c8a-8534-db3075d99574\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qq7xf" Feb 02 11:16:35 crc kubenswrapper[4782]: I0202 11:16:35.595305 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/23a1d5dc-9cfd-4c8a-8534-db3075d99574-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-qq7xf\" (UID: \"23a1d5dc-9cfd-4c8a-8534-db3075d99574\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qq7xf" Feb 02 11:16:35 crc kubenswrapper[4782]: I0202 11:16:35.697691 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/23a1d5dc-9cfd-4c8a-8534-db3075d99574-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-qq7xf\" (UID: \"23a1d5dc-9cfd-4c8a-8534-db3075d99574\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qq7xf" Feb 02 11:16:35 crc 
kubenswrapper[4782]: I0202 11:16:35.697858 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/23a1d5dc-9cfd-4c8a-8534-db3075d99574-ceph\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-qq7xf\" (UID: \"23a1d5dc-9cfd-4c8a-8534-db3075d99574\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qq7xf" Feb 02 11:16:35 crc kubenswrapper[4782]: I0202 11:16:35.697909 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4dt85\" (UniqueName: \"kubernetes.io/projected/23a1d5dc-9cfd-4c8a-8534-db3075d99574-kube-api-access-4dt85\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-qq7xf\" (UID: \"23a1d5dc-9cfd-4c8a-8534-db3075d99574\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qq7xf" Feb 02 11:16:35 crc kubenswrapper[4782]: I0202 11:16:35.698039 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/23a1d5dc-9cfd-4c8a-8534-db3075d99574-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-qq7xf\" (UID: \"23a1d5dc-9cfd-4c8a-8534-db3075d99574\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qq7xf" Feb 02 11:16:35 crc kubenswrapper[4782]: I0202 11:16:35.702230 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/23a1d5dc-9cfd-4c8a-8534-db3075d99574-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-qq7xf\" (UID: \"23a1d5dc-9cfd-4c8a-8534-db3075d99574\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qq7xf" Feb 02 11:16:35 crc kubenswrapper[4782]: I0202 11:16:35.702276 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/23a1d5dc-9cfd-4c8a-8534-db3075d99574-ceph\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-qq7xf\" (UID: \"23a1d5dc-9cfd-4c8a-8534-db3075d99574\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qq7xf" Feb 02 11:16:35 crc kubenswrapper[4782]: I0202 11:16:35.703691 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/23a1d5dc-9cfd-4c8a-8534-db3075d99574-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-qq7xf\" (UID: \"23a1d5dc-9cfd-4c8a-8534-db3075d99574\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qq7xf" Feb 02 11:16:35 crc kubenswrapper[4782]: I0202 11:16:35.726968 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4dt85\" (UniqueName: \"kubernetes.io/projected/23a1d5dc-9cfd-4c8a-8534-db3075d99574-kube-api-access-4dt85\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-qq7xf\" (UID: \"23a1d5dc-9cfd-4c8a-8534-db3075d99574\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qq7xf" Feb 02 11:16:35 crc kubenswrapper[4782]: I0202 11:16:35.895415 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qq7xf" Feb 02 11:16:36 crc kubenswrapper[4782]: I0202 11:16:36.406957 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qq7xf"] Feb 02 11:16:36 crc kubenswrapper[4782]: I0202 11:16:36.464870 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qq7xf" event={"ID":"23a1d5dc-9cfd-4c8a-8534-db3075d99574","Type":"ContainerStarted","Data":"197e8b1584d4103966d35029a15423b0fb273cdc870eef66ede372a94ca19367"} Feb 02 11:16:36 crc kubenswrapper[4782]: I0202 11:16:36.642207 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-kb292"] Feb 02 11:16:37 crc kubenswrapper[4782]: I0202 11:16:37.472545 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-kb292" podUID="b8fb06dc-1bfc-4b37-a62e-9ebe2b22ae27" containerName="registry-server" containerID="cri-o://d4e0726a5e92507bf8913353b75228f8f0ac8a4ed09f855b53071bd84be71776" gracePeriod=2 Feb 02 11:16:37 crc kubenswrapper[4782]: I0202 11:16:37.473714 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qq7xf" event={"ID":"23a1d5dc-9cfd-4c8a-8534-db3075d99574","Type":"ContainerStarted","Data":"36de3889878e544ddd04f0431bda4177dd543f991b4c87a9741668b0c02aa32c"} Feb 02 11:16:37 crc kubenswrapper[4782]: I0202 11:16:37.501519 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qq7xf" podStartSLOduration=1.953761903 podStartE2EDuration="2.501496777s" podCreationTimestamp="2026-02-02 11:16:35 +0000 UTC" firstStartedPulling="2026-02-02 11:16:36.417379106 +0000 UTC m=+2276.301571822" lastFinishedPulling="2026-02-02 11:16:36.96511398 +0000 UTC m=+2276.849306696" observedRunningTime="2026-02-02 11:16:37.489185963 +0000 UTC m=+2277.373378689" watchObservedRunningTime="2026-02-02 11:16:37.501496777 +0000 UTC m=+2277.385689513" Feb 02 11:16:37 crc kubenswrapper[4782]: I0202 11:16:37.874245 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kb292" Feb 02 11:16:37 crc kubenswrapper[4782]: I0202 11:16:37.946896 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b8fb06dc-1bfc-4b37-a62e-9ebe2b22ae27-utilities\") pod \"b8fb06dc-1bfc-4b37-a62e-9ebe2b22ae27\" (UID: \"b8fb06dc-1bfc-4b37-a62e-9ebe2b22ae27\") " Feb 02 11:16:37 crc kubenswrapper[4782]: I0202 11:16:37.946942 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nhd8g\" (UniqueName: \"kubernetes.io/projected/b8fb06dc-1bfc-4b37-a62e-9ebe2b22ae27-kube-api-access-nhd8g\") pod \"b8fb06dc-1bfc-4b37-a62e-9ebe2b22ae27\" (UID: \"b8fb06dc-1bfc-4b37-a62e-9ebe2b22ae27\") " Feb 02 11:16:37 crc kubenswrapper[4782]: I0202 11:16:37.947119 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b8fb06dc-1bfc-4b37-a62e-9ebe2b22ae27-catalog-content\") pod \"b8fb06dc-1bfc-4b37-a62e-9ebe2b22ae27\" (UID: \"b8fb06dc-1bfc-4b37-a62e-9ebe2b22ae27\") " Feb 02 11:16:37 crc kubenswrapper[4782]: I0202 11:16:37.948084 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b8fb06dc-1bfc-4b37-a62e-9ebe2b22ae27-utilities" (OuterVolumeSpecName: "utilities") pod "b8fb06dc-1bfc-4b37-a62e-9ebe2b22ae27" (UID: "b8fb06dc-1bfc-4b37-a62e-9ebe2b22ae27"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:16:37 crc kubenswrapper[4782]: I0202 11:16:37.952910 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8fb06dc-1bfc-4b37-a62e-9ebe2b22ae27-kube-api-access-nhd8g" (OuterVolumeSpecName: "kube-api-access-nhd8g") pod "b8fb06dc-1bfc-4b37-a62e-9ebe2b22ae27" (UID: "b8fb06dc-1bfc-4b37-a62e-9ebe2b22ae27"). InnerVolumeSpecName "kube-api-access-nhd8g". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:16:37 crc kubenswrapper[4782]: I0202 11:16:37.972305 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b8fb06dc-1bfc-4b37-a62e-9ebe2b22ae27-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b8fb06dc-1bfc-4b37-a62e-9ebe2b22ae27" (UID: "b8fb06dc-1bfc-4b37-a62e-9ebe2b22ae27"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:16:38 crc kubenswrapper[4782]: I0202 11:16:38.049242 4782 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b8fb06dc-1bfc-4b37-a62e-9ebe2b22ae27-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 11:16:38 crc kubenswrapper[4782]: I0202 11:16:38.049273 4782 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b8fb06dc-1bfc-4b37-a62e-9ebe2b22ae27-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 11:16:38 crc kubenswrapper[4782]: I0202 11:16:38.049284 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nhd8g\" (UniqueName: \"kubernetes.io/projected/b8fb06dc-1bfc-4b37-a62e-9ebe2b22ae27-kube-api-access-nhd8g\") on node \"crc\" DevicePath \"\"" Feb 02 11:16:38 crc kubenswrapper[4782]: I0202 11:16:38.483892 4782 generic.go:334] "Generic (PLEG): container finished" podID="b8fb06dc-1bfc-4b37-a62e-9ebe2b22ae27" containerID="d4e0726a5e92507bf8913353b75228f8f0ac8a4ed09f855b53071bd84be71776" exitCode=0 Feb 02 11:16:38 crc kubenswrapper[4782]: I0202 11:16:38.483977 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kb292" Feb 02 11:16:38 crc kubenswrapper[4782]: I0202 11:16:38.483965 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kb292" event={"ID":"b8fb06dc-1bfc-4b37-a62e-9ebe2b22ae27","Type":"ContainerDied","Data":"d4e0726a5e92507bf8913353b75228f8f0ac8a4ed09f855b53071bd84be71776"} Feb 02 11:16:38 crc kubenswrapper[4782]: I0202 11:16:38.484919 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kb292" event={"ID":"b8fb06dc-1bfc-4b37-a62e-9ebe2b22ae27","Type":"ContainerDied","Data":"211d97d65ac8659a00ee63732b70972188a676dc2daaeef83138d1cd8953071a"} Feb 02 11:16:38 crc kubenswrapper[4782]: I0202 11:16:38.484964 4782 scope.go:117] "RemoveContainer" containerID="d4e0726a5e92507bf8913353b75228f8f0ac8a4ed09f855b53071bd84be71776" Feb 02 11:16:38 crc kubenswrapper[4782]: I0202 11:16:38.514856 4782 scope.go:117] "RemoveContainer" containerID="10d257af7ef5633fca1f0130f935da3e13f4f6e84f94aa3b870cda01974c2dd1" Feb 02 11:16:38 crc kubenswrapper[4782]: I0202 11:16:38.525319 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-kb292"] Feb 02 11:16:38 crc kubenswrapper[4782]: I0202 11:16:38.536919 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-kb292"] Feb 02 11:16:38 crc kubenswrapper[4782]: I0202 11:16:38.547745 4782 scope.go:117] "RemoveContainer" containerID="1f51c3c59acdbb660f654f6af48489b0faf167956c07ddeabe9fd2585482b2e3" Feb 02 11:16:38 crc kubenswrapper[4782]: I0202 11:16:38.581263 4782 scope.go:117] "RemoveContainer" containerID="d4e0726a5e92507bf8913353b75228f8f0ac8a4ed09f855b53071bd84be71776" Feb 02 11:16:38 crc kubenswrapper[4782]: E0202 11:16:38.581872 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d4e0726a5e92507bf8913353b75228f8f0ac8a4ed09f855b53071bd84be71776\": container with ID starting with d4e0726a5e92507bf8913353b75228f8f0ac8a4ed09f855b53071bd84be71776 not found: ID does not exist" containerID="d4e0726a5e92507bf8913353b75228f8f0ac8a4ed09f855b53071bd84be71776" Feb 02 11:16:38 crc kubenswrapper[4782]: I0202 11:16:38.581925 4782 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d4e0726a5e92507bf8913353b75228f8f0ac8a4ed09f855b53071bd84be71776"} err="failed to get container status \"d4e0726a5e92507bf8913353b75228f8f0ac8a4ed09f855b53071bd84be71776\": rpc error: code = NotFound desc = could not find container \"d4e0726a5e92507bf8913353b75228f8f0ac8a4ed09f855b53071bd84be71776\": container with ID starting with d4e0726a5e92507bf8913353b75228f8f0ac8a4ed09f855b53071bd84be71776 not found: ID does not exist" Feb 02 11:16:38 crc kubenswrapper[4782]: I0202 11:16:38.581964 4782 scope.go:117] "RemoveContainer" containerID="10d257af7ef5633fca1f0130f935da3e13f4f6e84f94aa3b870cda01974c2dd1" Feb 02 11:16:38 crc kubenswrapper[4782]: E0202 11:16:38.582257 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"10d257af7ef5633fca1f0130f935da3e13f4f6e84f94aa3b870cda01974c2dd1\": container with ID starting with 10d257af7ef5633fca1f0130f935da3e13f4f6e84f94aa3b870cda01974c2dd1 not found: ID does not exist" containerID="10d257af7ef5633fca1f0130f935da3e13f4f6e84f94aa3b870cda01974c2dd1" Feb 02 11:16:38 crc kubenswrapper[4782]: I0202 11:16:38.582286 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"10d257af7ef5633fca1f0130f935da3e13f4f6e84f94aa3b870cda01974c2dd1"} err="failed to get container status \"10d257af7ef5633fca1f0130f935da3e13f4f6e84f94aa3b870cda01974c2dd1\": rpc error: code = NotFound desc = could not find container \"10d257af7ef5633fca1f0130f935da3e13f4f6e84f94aa3b870cda01974c2dd1\": container with ID starting with 10d257af7ef5633fca1f0130f935da3e13f4f6e84f94aa3b870cda01974c2dd1 not found: ID does not exist" Feb 02 11:16:38 crc kubenswrapper[4782]: I0202 11:16:38.582304 4782 scope.go:117] "RemoveContainer" containerID="1f51c3c59acdbb660f654f6af48489b0faf167956c07ddeabe9fd2585482b2e3" Feb 02 11:16:38 crc kubenswrapper[4782]: E0202 11:16:38.582651 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1f51c3c59acdbb660f654f6af48489b0faf167956c07ddeabe9fd2585482b2e3\": container with ID starting with 1f51c3c59acdbb660f654f6af48489b0faf167956c07ddeabe9fd2585482b2e3 not found: ID does not exist" containerID="1f51c3c59acdbb660f654f6af48489b0faf167956c07ddeabe9fd2585482b2e3" Feb 02 11:16:38 crc kubenswrapper[4782]: I0202 11:16:38.582689 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f51c3c59acdbb660f654f6af48489b0faf167956c07ddeabe9fd2585482b2e3"} err="failed to get container status \"1f51c3c59acdbb660f654f6af48489b0faf167956c07ddeabe9fd2585482b2e3\": rpc error: code = NotFound desc = could not find container \"1f51c3c59acdbb660f654f6af48489b0faf167956c07ddeabe9fd2585482b2e3\": container with ID starting with 1f51c3c59acdbb660f654f6af48489b0faf167956c07ddeabe9fd2585482b2e3 not found: ID does not exist" Feb 02 11:16:38 crc kubenswrapper[4782]: I0202 11:16:38.833092 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b8fb06dc-1bfc-4b37-a62e-9ebe2b22ae27" path="/var/lib/kubelet/pods/b8fb06dc-1bfc-4b37-a62e-9ebe2b22ae27/volumes" Feb 02 11:17:02 crc kubenswrapper[4782]: I0202 11:17:02.702927 4782 generic.go:334] "Generic (PLEG): container finished" podID="23a1d5dc-9cfd-4c8a-8534-db3075d99574" containerID="36de3889878e544ddd04f0431bda4177dd543f991b4c87a9741668b0c02aa32c" exitCode=0 Feb 02 11:17:02 crc kubenswrapper[4782]: I0202 
11:17:02.702998 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qq7xf" event={"ID":"23a1d5dc-9cfd-4c8a-8534-db3075d99574","Type":"ContainerDied","Data":"36de3889878e544ddd04f0431bda4177dd543f991b4c87a9741668b0c02aa32c"} Feb 02 11:17:04 crc kubenswrapper[4782]: I0202 11:17:04.138946 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qq7xf" Feb 02 11:17:04 crc kubenswrapper[4782]: I0202 11:17:04.228983 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4dt85\" (UniqueName: \"kubernetes.io/projected/23a1d5dc-9cfd-4c8a-8534-db3075d99574-kube-api-access-4dt85\") pod \"23a1d5dc-9cfd-4c8a-8534-db3075d99574\" (UID: \"23a1d5dc-9cfd-4c8a-8534-db3075d99574\") " Feb 02 11:17:04 crc kubenswrapper[4782]: I0202 11:17:04.229156 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/23a1d5dc-9cfd-4c8a-8534-db3075d99574-ssh-key-openstack-edpm-ipam\") pod \"23a1d5dc-9cfd-4c8a-8534-db3075d99574\" (UID: \"23a1d5dc-9cfd-4c8a-8534-db3075d99574\") " Feb 02 11:17:04 crc kubenswrapper[4782]: I0202 11:17:04.229180 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/23a1d5dc-9cfd-4c8a-8534-db3075d99574-ceph\") pod \"23a1d5dc-9cfd-4c8a-8534-db3075d99574\" (UID: \"23a1d5dc-9cfd-4c8a-8534-db3075d99574\") " Feb 02 11:17:04 crc kubenswrapper[4782]: I0202 11:17:04.229217 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/23a1d5dc-9cfd-4c8a-8534-db3075d99574-inventory\") pod \"23a1d5dc-9cfd-4c8a-8534-db3075d99574\" (UID: \"23a1d5dc-9cfd-4c8a-8534-db3075d99574\") " Feb 02 11:17:04 crc kubenswrapper[4782]: I0202 11:17:04.235993 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23a1d5dc-9cfd-4c8a-8534-db3075d99574-ceph" (OuterVolumeSpecName: "ceph") pod "23a1d5dc-9cfd-4c8a-8534-db3075d99574" (UID: "23a1d5dc-9cfd-4c8a-8534-db3075d99574"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:17:04 crc kubenswrapper[4782]: I0202 11:17:04.239065 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23a1d5dc-9cfd-4c8a-8534-db3075d99574-kube-api-access-4dt85" (OuterVolumeSpecName: "kube-api-access-4dt85") pod "23a1d5dc-9cfd-4c8a-8534-db3075d99574" (UID: "23a1d5dc-9cfd-4c8a-8534-db3075d99574"). InnerVolumeSpecName "kube-api-access-4dt85". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:17:04 crc kubenswrapper[4782]: I0202 11:17:04.263521 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23a1d5dc-9cfd-4c8a-8534-db3075d99574-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "23a1d5dc-9cfd-4c8a-8534-db3075d99574" (UID: "23a1d5dc-9cfd-4c8a-8534-db3075d99574"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:17:04 crc kubenswrapper[4782]: I0202 11:17:04.277417 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23a1d5dc-9cfd-4c8a-8534-db3075d99574-inventory" (OuterVolumeSpecName: "inventory") pod "23a1d5dc-9cfd-4c8a-8534-db3075d99574" (UID: "23a1d5dc-9cfd-4c8a-8534-db3075d99574"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:17:04 crc kubenswrapper[4782]: I0202 11:17:04.330894 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4dt85\" (UniqueName: \"kubernetes.io/projected/23a1d5dc-9cfd-4c8a-8534-db3075d99574-kube-api-access-4dt85\") on node \"crc\" DevicePath \"\"" Feb 02 11:17:04 crc kubenswrapper[4782]: I0202 11:17:04.331137 4782 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/23a1d5dc-9cfd-4c8a-8534-db3075d99574-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 02 11:17:04 crc kubenswrapper[4782]: I0202 11:17:04.331234 4782 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/23a1d5dc-9cfd-4c8a-8534-db3075d99574-ceph\") on node \"crc\" DevicePath \"\"" Feb 02 11:17:04 crc kubenswrapper[4782]: I0202 11:17:04.331301 4782 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/23a1d5dc-9cfd-4c8a-8534-db3075d99574-inventory\") on node \"crc\" DevicePath \"\"" Feb 02 11:17:04 crc kubenswrapper[4782]: I0202 11:17:04.718770 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qq7xf" event={"ID":"23a1d5dc-9cfd-4c8a-8534-db3075d99574","Type":"ContainerDied","Data":"197e8b1584d4103966d35029a15423b0fb273cdc870eef66ede372a94ca19367"} Feb 02 11:17:04 crc kubenswrapper[4782]: I0202 11:17:04.719047 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="197e8b1584d4103966d35029a15423b0fb273cdc870eef66ede372a94ca19367" Feb 02 11:17:04 crc kubenswrapper[4782]: I0202 11:17:04.718860 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qq7xf" Feb 02 11:17:04 crc kubenswrapper[4782]: I0202 11:17:04.909272 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-dg4qr"] Feb 02 11:17:04 crc kubenswrapper[4782]: E0202 11:17:04.909735 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8fb06dc-1bfc-4b37-a62e-9ebe2b22ae27" containerName="extract-utilities" Feb 02 11:17:04 crc kubenswrapper[4782]: I0202 11:17:04.909756 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8fb06dc-1bfc-4b37-a62e-9ebe2b22ae27" containerName="extract-utilities" Feb 02 11:17:04 crc kubenswrapper[4782]: E0202 11:17:04.909787 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8fb06dc-1bfc-4b37-a62e-9ebe2b22ae27" containerName="extract-content" Feb 02 11:17:04 crc kubenswrapper[4782]: I0202 11:17:04.909797 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8fb06dc-1bfc-4b37-a62e-9ebe2b22ae27" containerName="extract-content" Feb 02 11:17:04 crc kubenswrapper[4782]: E0202 11:17:04.909809 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23a1d5dc-9cfd-4c8a-8534-db3075d99574" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 02 11:17:04 crc kubenswrapper[4782]: I0202 11:17:04.909821 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="23a1d5dc-9cfd-4c8a-8534-db3075d99574" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 02 11:17:04 crc kubenswrapper[4782]: E0202 11:17:04.909841 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8fb06dc-1bfc-4b37-a62e-9ebe2b22ae27" containerName="registry-server" Feb 02 11:17:04 crc kubenswrapper[4782]: I0202 11:17:04.909849 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8fb06dc-1bfc-4b37-a62e-9ebe2b22ae27" containerName="registry-server" Feb 02 11:17:04 crc kubenswrapper[4782]: I0202 11:17:04.910073 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="23a1d5dc-9cfd-4c8a-8534-db3075d99574" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 02 11:17:04 crc kubenswrapper[4782]: I0202 11:17:04.910103 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8fb06dc-1bfc-4b37-a62e-9ebe2b22ae27" containerName="registry-server" Feb 02 11:17:04 crc kubenswrapper[4782]: I0202 11:17:04.910926 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-dg4qr" Feb 02 11:17:04 crc kubenswrapper[4782]: I0202 11:17:04.913980 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 02 11:17:04 crc kubenswrapper[4782]: I0202 11:17:04.915190 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 02 11:17:04 crc kubenswrapper[4782]: I0202 11:17:04.915220 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jhgxt" Feb 02 11:17:04 crc kubenswrapper[4782]: I0202 11:17:04.915441 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Feb 02 11:17:04 crc kubenswrapper[4782]: I0202 11:17:04.915471 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 02 11:17:04 crc kubenswrapper[4782]: I0202 11:17:04.924606 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-dg4qr"] Feb 02 11:17:05 crc kubenswrapper[4782]: I0202 11:17:05.043569 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/03fa384d-760c-4c0a-b58f-91a876eeb3d7-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-dg4qr\" (UID: \"03fa384d-760c-4c0a-b58f-91a876eeb3d7\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-dg4qr" Feb 02 11:17:05 crc kubenswrapper[4782]: I0202 11:17:05.043892 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/03fa384d-760c-4c0a-b58f-91a876eeb3d7-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-dg4qr\" (UID: \"03fa384d-760c-4c0a-b58f-91a876eeb3d7\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-dg4qr" Feb 02 11:17:05 crc kubenswrapper[4782]: I0202 11:17:05.044236 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/03fa384d-760c-4c0a-b58f-91a876eeb3d7-ceph\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-dg4qr\" (UID: \"03fa384d-760c-4c0a-b58f-91a876eeb3d7\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-dg4qr" Feb 02 11:17:05 crc kubenswrapper[4782]: I0202 11:17:05.044465 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7gzng\" (UniqueName: \"kubernetes.io/projected/03fa384d-760c-4c0a-b58f-91a876eeb3d7-kube-api-access-7gzng\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-dg4qr\" (UID: \"03fa384d-760c-4c0a-b58f-91a876eeb3d7\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-dg4qr" Feb 02 11:17:05 crc kubenswrapper[4782]: I0202 11:17:05.145920 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/03fa384d-760c-4c0a-b58f-91a876eeb3d7-ceph\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-dg4qr\" (UID: \"03fa384d-760c-4c0a-b58f-91a876eeb3d7\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-dg4qr" Feb 02 11:17:05 crc kubenswrapper[4782]: I0202 11:17:05.146012 4782 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7gzng\" (UniqueName: \"kubernetes.io/projected/03fa384d-760c-4c0a-b58f-91a876eeb3d7-kube-api-access-7gzng\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-dg4qr\" (UID: \"03fa384d-760c-4c0a-b58f-91a876eeb3d7\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-dg4qr" Feb 02 11:17:05 crc kubenswrapper[4782]: I0202 11:17:05.146081 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/03fa384d-760c-4c0a-b58f-91a876eeb3d7-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-dg4qr\" (UID: \"03fa384d-760c-4c0a-b58f-91a876eeb3d7\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-dg4qr" Feb 02 11:17:05 crc kubenswrapper[4782]: I0202 11:17:05.146139 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/03fa384d-760c-4c0a-b58f-91a876eeb3d7-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-dg4qr\" (UID: \"03fa384d-760c-4c0a-b58f-91a876eeb3d7\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-dg4qr" Feb 02 11:17:05 crc kubenswrapper[4782]: I0202 11:17:05.150262 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/03fa384d-760c-4c0a-b58f-91a876eeb3d7-ceph\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-dg4qr\" (UID: \"03fa384d-760c-4c0a-b58f-91a876eeb3d7\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-dg4qr" Feb 02 11:17:05 crc kubenswrapper[4782]: I0202 11:17:05.150599 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/03fa384d-760c-4c0a-b58f-91a876eeb3d7-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-dg4qr\" (UID: \"03fa384d-760c-4c0a-b58f-91a876eeb3d7\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-dg4qr" Feb 02 11:17:05 crc kubenswrapper[4782]: I0202 11:17:05.153153 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/03fa384d-760c-4c0a-b58f-91a876eeb3d7-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-dg4qr\" (UID: \"03fa384d-760c-4c0a-b58f-91a876eeb3d7\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-dg4qr" Feb 02 11:17:05 crc kubenswrapper[4782]: I0202 11:17:05.166360 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7gzng\" (UniqueName: \"kubernetes.io/projected/03fa384d-760c-4c0a-b58f-91a876eeb3d7-kube-api-access-7gzng\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-dg4qr\" (UID: \"03fa384d-760c-4c0a-b58f-91a876eeb3d7\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-dg4qr" Feb 02 11:17:05 crc kubenswrapper[4782]: I0202 11:17:05.226751 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-dg4qr" Feb 02 11:17:05 crc kubenswrapper[4782]: I0202 11:17:05.749285 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-dg4qr"] Feb 02 11:17:06 crc kubenswrapper[4782]: I0202 11:17:06.733435 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-dg4qr" event={"ID":"03fa384d-760c-4c0a-b58f-91a876eeb3d7","Type":"ContainerStarted","Data":"2235833b85f2f95e8bedc81df9f5556debf3ebc3dc20a3681176ccdf6b9e1069"} Feb 02 11:17:06 crc kubenswrapper[4782]: I0202 11:17:06.734583 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-dg4qr" event={"ID":"03fa384d-760c-4c0a-b58f-91a876eeb3d7","Type":"ContainerStarted","Data":"02f3d80c5161766ab9f2a2d37c095bb90fada2ff3f2d917297117ac92d45b513"} Feb 02 11:17:06 crc kubenswrapper[4782]: I0202 11:17:06.752007 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-dg4qr" podStartSLOduration=2.170299413 podStartE2EDuration="2.751984765s" podCreationTimestamp="2026-02-02 11:17:04 +0000 UTC" firstStartedPulling="2026-02-02 11:17:05.755726954 +0000 UTC m=+2305.639919670" lastFinishedPulling="2026-02-02 11:17:06.337412306 +0000 UTC m=+2306.221605022" observedRunningTime="2026-02-02 11:17:06.749572536 +0000 UTC m=+2306.633765252" watchObservedRunningTime="2026-02-02 11:17:06.751984765 +0000 UTC m=+2306.636177481" Feb 02 11:17:11 crc kubenswrapper[4782]: I0202 11:17:11.770881 4782 generic.go:334] "Generic (PLEG): container finished" podID="03fa384d-760c-4c0a-b58f-91a876eeb3d7" containerID="2235833b85f2f95e8bedc81df9f5556debf3ebc3dc20a3681176ccdf6b9e1069" exitCode=0 Feb 02 11:17:11 crc kubenswrapper[4782]: I0202 11:17:11.771134 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-dg4qr" event={"ID":"03fa384d-760c-4c0a-b58f-91a876eeb3d7","Type":"ContainerDied","Data":"2235833b85f2f95e8bedc81df9f5556debf3ebc3dc20a3681176ccdf6b9e1069"} Feb 02 11:17:13 crc kubenswrapper[4782]: I0202 11:17:13.172722 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-dg4qr" Feb 02 11:17:13 crc kubenswrapper[4782]: I0202 11:17:13.294579 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/03fa384d-760c-4c0a-b58f-91a876eeb3d7-ceph\") pod \"03fa384d-760c-4c0a-b58f-91a876eeb3d7\" (UID: \"03fa384d-760c-4c0a-b58f-91a876eeb3d7\") " Feb 02 11:17:13 crc kubenswrapper[4782]: I0202 11:17:13.294676 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7gzng\" (UniqueName: \"kubernetes.io/projected/03fa384d-760c-4c0a-b58f-91a876eeb3d7-kube-api-access-7gzng\") pod \"03fa384d-760c-4c0a-b58f-91a876eeb3d7\" (UID: \"03fa384d-760c-4c0a-b58f-91a876eeb3d7\") " Feb 02 11:17:13 crc kubenswrapper[4782]: I0202 11:17:13.294774 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/03fa384d-760c-4c0a-b58f-91a876eeb3d7-ssh-key-openstack-edpm-ipam\") pod \"03fa384d-760c-4c0a-b58f-91a876eeb3d7\" (UID: \"03fa384d-760c-4c0a-b58f-91a876eeb3d7\") " Feb 02 11:17:13 crc kubenswrapper[4782]: I0202 11:17:13.294836 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/03fa384d-760c-4c0a-b58f-91a876eeb3d7-inventory\") pod \"03fa384d-760c-4c0a-b58f-91a876eeb3d7\" (UID: \"03fa384d-760c-4c0a-b58f-91a876eeb3d7\") " Feb 02 11:17:13 crc kubenswrapper[4782]: I0202 11:17:13.300428 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03fa384d-760c-4c0a-b58f-91a876eeb3d7-ceph" (OuterVolumeSpecName: "ceph") pod "03fa384d-760c-4c0a-b58f-91a876eeb3d7" (UID: "03fa384d-760c-4c0a-b58f-91a876eeb3d7"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:17:13 crc kubenswrapper[4782]: I0202 11:17:13.300522 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03fa384d-760c-4c0a-b58f-91a876eeb3d7-kube-api-access-7gzng" (OuterVolumeSpecName: "kube-api-access-7gzng") pod "03fa384d-760c-4c0a-b58f-91a876eeb3d7" (UID: "03fa384d-760c-4c0a-b58f-91a876eeb3d7"). InnerVolumeSpecName "kube-api-access-7gzng". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:17:13 crc kubenswrapper[4782]: I0202 11:17:13.321722 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03fa384d-760c-4c0a-b58f-91a876eeb3d7-inventory" (OuterVolumeSpecName: "inventory") pod "03fa384d-760c-4c0a-b58f-91a876eeb3d7" (UID: "03fa384d-760c-4c0a-b58f-91a876eeb3d7"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:17:13 crc kubenswrapper[4782]: I0202 11:17:13.329083 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03fa384d-760c-4c0a-b58f-91a876eeb3d7-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "03fa384d-760c-4c0a-b58f-91a876eeb3d7" (UID: "03fa384d-760c-4c0a-b58f-91a876eeb3d7"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:17:13 crc kubenswrapper[4782]: I0202 11:17:13.398006 4782 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/03fa384d-760c-4c0a-b58f-91a876eeb3d7-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 02 11:17:13 crc kubenswrapper[4782]: I0202 11:17:13.398036 4782 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/03fa384d-760c-4c0a-b58f-91a876eeb3d7-inventory\") on node \"crc\" DevicePath \"\"" Feb 02 11:17:13 crc kubenswrapper[4782]: I0202 11:17:13.398053 4782 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/03fa384d-760c-4c0a-b58f-91a876eeb3d7-ceph\") on node \"crc\" DevicePath \"\"" Feb 02 11:17:13 crc kubenswrapper[4782]: I0202 11:17:13.398065 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7gzng\" (UniqueName: \"kubernetes.io/projected/03fa384d-760c-4c0a-b58f-91a876eeb3d7-kube-api-access-7gzng\") on node \"crc\" DevicePath \"\"" Feb 02 11:17:13 crc kubenswrapper[4782]: I0202 11:17:13.787664 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-dg4qr" event={"ID":"03fa384d-760c-4c0a-b58f-91a876eeb3d7","Type":"ContainerDied","Data":"02f3d80c5161766ab9f2a2d37c095bb90fada2ff3f2d917297117ac92d45b513"} Feb 02 11:17:13 crc kubenswrapper[4782]: I0202 11:17:13.787702 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="02f3d80c5161766ab9f2a2d37c095bb90fada2ff3f2d917297117ac92d45b513" Feb 02 11:17:13 crc kubenswrapper[4782]: I0202 11:17:13.787726 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-dg4qr" Feb 02 11:17:13 crc kubenswrapper[4782]: I0202 11:17:13.912364 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-h4png"] Feb 02 11:17:13 crc kubenswrapper[4782]: E0202 11:17:13.912778 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03fa384d-760c-4c0a-b58f-91a876eeb3d7" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 02 11:17:13 crc kubenswrapper[4782]: I0202 11:17:13.912794 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="03fa384d-760c-4c0a-b58f-91a876eeb3d7" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 02 11:17:13 crc kubenswrapper[4782]: I0202 11:17:13.912945 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="03fa384d-760c-4c0a-b58f-91a876eeb3d7" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 02 11:17:13 crc kubenswrapper[4782]: I0202 11:17:13.915465 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-h4png" Feb 02 11:17:13 crc kubenswrapper[4782]: I0202 11:17:13.919206 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 02 11:17:13 crc kubenswrapper[4782]: I0202 11:17:13.919230 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 02 11:17:13 crc kubenswrapper[4782]: I0202 11:17:13.919353 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 02 11:17:13 crc kubenswrapper[4782]: I0202 11:17:13.919416 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jhgxt" Feb 02 11:17:13 crc kubenswrapper[4782]: I0202 11:17:13.919571 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Feb 02 11:17:13 crc kubenswrapper[4782]: I0202 11:17:13.927096 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-h4png"] Feb 02 11:17:13 crc kubenswrapper[4782]: E0202 11:17:13.975802 4782 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod03fa384d_760c_4c0a_b58f_91a876eeb3d7.slice/crio-02f3d80c5161766ab9f2a2d37c095bb90fada2ff3f2d917297117ac92d45b513\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod03fa384d_760c_4c0a_b58f_91a876eeb3d7.slice\": RecentStats: unable to find data in memory cache]" Feb 02 11:17:14 crc kubenswrapper[4782]: I0202 11:17:14.011257 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fbf31fe9-54a8-4cc8-b0ef-a8076cf87c52-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-h4png\" (UID: \"fbf31fe9-54a8-4cc8-b0ef-a8076cf87c52\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-h4png" Feb 02 11:17:14 crc kubenswrapper[4782]: I0202 11:17:14.012604 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/fbf31fe9-54a8-4cc8-b0ef-a8076cf87c52-ceph\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-h4png\" (UID: \"fbf31fe9-54a8-4cc8-b0ef-a8076cf87c52\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-h4png" Feb 02 11:17:14 crc kubenswrapper[4782]: I0202 11:17:14.012872 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fbf31fe9-54a8-4cc8-b0ef-a8076cf87c52-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-h4png\" (UID: \"fbf31fe9-54a8-4cc8-b0ef-a8076cf87c52\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-h4png" Feb 02 11:17:14 crc kubenswrapper[4782]: I0202 11:17:14.013084 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zlwc\" (UniqueName: \"kubernetes.io/projected/fbf31fe9-54a8-4cc8-b0ef-a8076cf87c52-kube-api-access-5zlwc\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-h4png\" (UID: \"fbf31fe9-54a8-4cc8-b0ef-a8076cf87c52\") " 
pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-h4png" Feb 02 11:17:14 crc kubenswrapper[4782]: I0202 11:17:14.115282 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fbf31fe9-54a8-4cc8-b0ef-a8076cf87c52-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-h4png\" (UID: \"fbf31fe9-54a8-4cc8-b0ef-a8076cf87c52\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-h4png" Feb 02 11:17:14 crc kubenswrapper[4782]: I0202 11:17:14.115668 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5zlwc\" (UniqueName: \"kubernetes.io/projected/fbf31fe9-54a8-4cc8-b0ef-a8076cf87c52-kube-api-access-5zlwc\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-h4png\" (UID: \"fbf31fe9-54a8-4cc8-b0ef-a8076cf87c52\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-h4png" Feb 02 11:17:14 crc kubenswrapper[4782]: I0202 11:17:14.115865 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fbf31fe9-54a8-4cc8-b0ef-a8076cf87c52-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-h4png\" (UID: \"fbf31fe9-54a8-4cc8-b0ef-a8076cf87c52\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-h4png" Feb 02 11:17:14 crc kubenswrapper[4782]: I0202 11:17:14.116108 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/fbf31fe9-54a8-4cc8-b0ef-a8076cf87c52-ceph\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-h4png\" (UID: \"fbf31fe9-54a8-4cc8-b0ef-a8076cf87c52\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-h4png" Feb 02 11:17:14 crc kubenswrapper[4782]: I0202 11:17:14.122566 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/fbf31fe9-54a8-4cc8-b0ef-a8076cf87c52-ceph\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-h4png\" (UID: \"fbf31fe9-54a8-4cc8-b0ef-a8076cf87c52\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-h4png" Feb 02 11:17:14 crc kubenswrapper[4782]: I0202 11:17:14.123161 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fbf31fe9-54a8-4cc8-b0ef-a8076cf87c52-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-h4png\" (UID: \"fbf31fe9-54a8-4cc8-b0ef-a8076cf87c52\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-h4png" Feb 02 11:17:14 crc kubenswrapper[4782]: I0202 11:17:14.125125 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fbf31fe9-54a8-4cc8-b0ef-a8076cf87c52-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-h4png\" (UID: \"fbf31fe9-54a8-4cc8-b0ef-a8076cf87c52\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-h4png" Feb 02 11:17:14 crc kubenswrapper[4782]: I0202 11:17:14.144454 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5zlwc\" (UniqueName: \"kubernetes.io/projected/fbf31fe9-54a8-4cc8-b0ef-a8076cf87c52-kube-api-access-5zlwc\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-h4png\" (UID: \"fbf31fe9-54a8-4cc8-b0ef-a8076cf87c52\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-h4png" Feb 02 11:17:14 crc 
kubenswrapper[4782]: I0202 11:17:14.240151 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-h4png" Feb 02 11:17:14 crc kubenswrapper[4782]: I0202 11:17:14.761619 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-h4png"] Feb 02 11:17:14 crc kubenswrapper[4782]: W0202 11:17:14.769603 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfbf31fe9_54a8_4cc8_b0ef_a8076cf87c52.slice/crio-7f480997ce7a2aabd74347d417056d5c313af8aadf9a41f33fe17fad0ecdde1b WatchSource:0}: Error finding container 7f480997ce7a2aabd74347d417056d5c313af8aadf9a41f33fe17fad0ecdde1b: Status 404 returned error can't find the container with id 7f480997ce7a2aabd74347d417056d5c313af8aadf9a41f33fe17fad0ecdde1b Feb 02 11:17:14 crc kubenswrapper[4782]: I0202 11:17:14.796222 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-h4png" event={"ID":"fbf31fe9-54a8-4cc8-b0ef-a8076cf87c52","Type":"ContainerStarted","Data":"7f480997ce7a2aabd74347d417056d5c313af8aadf9a41f33fe17fad0ecdde1b"} Feb 02 11:17:15 crc kubenswrapper[4782]: I0202 11:17:15.806588 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-h4png" event={"ID":"fbf31fe9-54a8-4cc8-b0ef-a8076cf87c52","Type":"ContainerStarted","Data":"199f5b114440d957ca82b1b3791c3a0c06061529752ed8afc79d07dd12184ea0"} Feb 02 11:17:15 crc kubenswrapper[4782]: I0202 11:17:15.829688 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-h4png" podStartSLOduration=2.045590767 podStartE2EDuration="2.829673258s" podCreationTimestamp="2026-02-02 11:17:13 +0000 UTC" firstStartedPulling="2026-02-02 11:17:14.771810043 +0000 UTC m=+2314.656002759" lastFinishedPulling="2026-02-02 11:17:15.555892534 +0000 UTC m=+2315.440085250" observedRunningTime="2026-02-02 11:17:15.824791528 +0000 UTC m=+2315.708984254" watchObservedRunningTime="2026-02-02 11:17:15.829673258 +0000 UTC m=+2315.713865974" Feb 02 11:17:26 crc kubenswrapper[4782]: I0202 11:17:26.831692 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-p8frt"] Feb 02 11:17:26 crc kubenswrapper[4782]: I0202 11:17:26.833784 4782 util.go:30] "No sandbox for pod can be found. 
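Every record in this log follows the same shape: a journald prefix (timestamp, host, unit and PID), then a klog header (severity letter I/W/E/F, mmdd, wall time, thread id, source file:line), then the structured payload. When auditing a log like this, it is often useful to split records into those fields. A hypothetical Go splitter for this exact format (the regex and field names are assumptions made for illustration, not part of kubelet):

    package main

    import (
    	"fmt"
    	"regexp"
    )

    // One capture group per field of the journald+klog framing used above.
    var klogLine = regexp.MustCompile(
    	`^(\w+ \d+ [\d:]+) (\S+) kubenswrapper\[(\d+)\]: ([IWEF])(\d{4}) ([\d:.]+)\s+(\d+) ([\w./]+:\d+)\] (.*)$`)

    func main() {
    	sample := `Feb 02 11:17:14 crc kubenswrapper[4782]: W0202 11:17:14.769603 4782 manager.go:1169] Failed to process watch event`
    	m := klogLine.FindStringSubmatch(sample)
    	if m == nil {
    		fmt.Println("no match")
    		return
    	}
    	// m[4] = severity, m[8] = source location, m[9] = message payload
    	fmt.Printf("severity=%s source=%s msg=%q\n", m[4], m[8], m[9])
    }

The W (warning) record it parses is the cadvisor watch-event failure above: cadvisor noticed the new crio-… cgroup before CRI-O had registered the container, so the 404 is a transient race, and the very next PLEG event shows the container started normally.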
Need to start a new one" pod="openshift-marketplace/community-operators-p8frt" Feb 02 11:17:26 crc kubenswrapper[4782]: I0202 11:17:26.846392 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-p8frt"] Feb 02 11:17:26 crc kubenswrapper[4782]: I0202 11:17:26.954701 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d567t\" (UniqueName: \"kubernetes.io/projected/d4ca0ce7-81d0-44a7-be69-efa0fde3cffb-kube-api-access-d567t\") pod \"community-operators-p8frt\" (UID: \"d4ca0ce7-81d0-44a7-be69-efa0fde3cffb\") " pod="openshift-marketplace/community-operators-p8frt" Feb 02 11:17:26 crc kubenswrapper[4782]: I0202 11:17:26.954814 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4ca0ce7-81d0-44a7-be69-efa0fde3cffb-utilities\") pod \"community-operators-p8frt\" (UID: \"d4ca0ce7-81d0-44a7-be69-efa0fde3cffb\") " pod="openshift-marketplace/community-operators-p8frt" Feb 02 11:17:26 crc kubenswrapper[4782]: I0202 11:17:26.954927 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4ca0ce7-81d0-44a7-be69-efa0fde3cffb-catalog-content\") pod \"community-operators-p8frt\" (UID: \"d4ca0ce7-81d0-44a7-be69-efa0fde3cffb\") " pod="openshift-marketplace/community-operators-p8frt" Feb 02 11:17:27 crc kubenswrapper[4782]: I0202 11:17:27.056359 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d567t\" (UniqueName: \"kubernetes.io/projected/d4ca0ce7-81d0-44a7-be69-efa0fde3cffb-kube-api-access-d567t\") pod \"community-operators-p8frt\" (UID: \"d4ca0ce7-81d0-44a7-be69-efa0fde3cffb\") " pod="openshift-marketplace/community-operators-p8frt" Feb 02 11:17:27 crc kubenswrapper[4782]: I0202 11:17:27.056428 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4ca0ce7-81d0-44a7-be69-efa0fde3cffb-utilities\") pod \"community-operators-p8frt\" (UID: \"d4ca0ce7-81d0-44a7-be69-efa0fde3cffb\") " pod="openshift-marketplace/community-operators-p8frt" Feb 02 11:17:27 crc kubenswrapper[4782]: I0202 11:17:27.056487 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4ca0ce7-81d0-44a7-be69-efa0fde3cffb-catalog-content\") pod \"community-operators-p8frt\" (UID: \"d4ca0ce7-81d0-44a7-be69-efa0fde3cffb\") " pod="openshift-marketplace/community-operators-p8frt" Feb 02 11:17:27 crc kubenswrapper[4782]: I0202 11:17:27.057148 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4ca0ce7-81d0-44a7-be69-efa0fde3cffb-catalog-content\") pod \"community-operators-p8frt\" (UID: \"d4ca0ce7-81d0-44a7-be69-efa0fde3cffb\") " pod="openshift-marketplace/community-operators-p8frt" Feb 02 11:17:27 crc kubenswrapper[4782]: I0202 11:17:27.057229 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4ca0ce7-81d0-44a7-be69-efa0fde3cffb-utilities\") pod \"community-operators-p8frt\" (UID: \"d4ca0ce7-81d0-44a7-be69-efa0fde3cffb\") " pod="openshift-marketplace/community-operators-p8frt" Feb 02 11:17:27 crc kubenswrapper[4782]: I0202 11:17:27.076775 4782 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-d567t\" (UniqueName: \"kubernetes.io/projected/d4ca0ce7-81d0-44a7-be69-efa0fde3cffb-kube-api-access-d567t\") pod \"community-operators-p8frt\" (UID: \"d4ca0ce7-81d0-44a7-be69-efa0fde3cffb\") " pod="openshift-marketplace/community-operators-p8frt" Feb 02 11:17:27 crc kubenswrapper[4782]: I0202 11:17:27.154480 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-p8frt" Feb 02 11:17:27 crc kubenswrapper[4782]: I0202 11:17:27.806677 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-p8frt"] Feb 02 11:17:27 crc kubenswrapper[4782]: I0202 11:17:27.913889 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p8frt" event={"ID":"d4ca0ce7-81d0-44a7-be69-efa0fde3cffb","Type":"ContainerStarted","Data":"a8fcd57b87696b80c8320fe55d90ef869a6fdc47f8b986887188a39b5ad2620b"} Feb 02 11:17:28 crc kubenswrapper[4782]: I0202 11:17:28.924468 4782 generic.go:334] "Generic (PLEG): container finished" podID="d4ca0ce7-81d0-44a7-be69-efa0fde3cffb" containerID="8c24dbebb463b7ba800800d1c186bd2b5ceb0ff73b1c6bc1bddef1ea6c82e5a3" exitCode=0 Feb 02 11:17:28 crc kubenswrapper[4782]: I0202 11:17:28.924526 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p8frt" event={"ID":"d4ca0ce7-81d0-44a7-be69-efa0fde3cffb","Type":"ContainerDied","Data":"8c24dbebb463b7ba800800d1c186bd2b5ceb0ff73b1c6bc1bddef1ea6c82e5a3"} Feb 02 11:17:29 crc kubenswrapper[4782]: I0202 11:17:29.933810 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p8frt" event={"ID":"d4ca0ce7-81d0-44a7-be69-efa0fde3cffb","Type":"ContainerStarted","Data":"8e8f8bc3a92137809cb963dd4b74adc32f7e3101f4d8dc5e3a8133f538592531"} Feb 02 11:17:31 crc kubenswrapper[4782]: I0202 11:17:31.949118 4782 generic.go:334] "Generic (PLEG): container finished" podID="d4ca0ce7-81d0-44a7-be69-efa0fde3cffb" containerID="8e8f8bc3a92137809cb963dd4b74adc32f7e3101f4d8dc5e3a8133f538592531" exitCode=0 Feb 02 11:17:31 crc kubenswrapper[4782]: I0202 11:17:31.949237 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p8frt" event={"ID":"d4ca0ce7-81d0-44a7-be69-efa0fde3cffb","Type":"ContainerDied","Data":"8e8f8bc3a92137809cb963dd4b74adc32f7e3101f4d8dc5e3a8133f538592531"} Feb 02 11:17:32 crc kubenswrapper[4782]: I0202 11:17:32.962697 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p8frt" event={"ID":"d4ca0ce7-81d0-44a7-be69-efa0fde3cffb","Type":"ContainerStarted","Data":"aac5d07fd3912cd99ae8bc48480861e7b4fb9c59067a3335d51d59103066fe5b"} Feb 02 11:17:32 crc kubenswrapper[4782]: I0202 11:17:32.991737 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-p8frt" podStartSLOduration=3.31381119 podStartE2EDuration="6.991718038s" podCreationTimestamp="2026-02-02 11:17:26 +0000 UTC" firstStartedPulling="2026-02-02 11:17:28.926221917 +0000 UTC m=+2328.810414633" lastFinishedPulling="2026-02-02 11:17:32.604128765 +0000 UTC m=+2332.488321481" observedRunningTime="2026-02-02 11:17:32.98417687 +0000 UTC m=+2332.868369606" watchObservedRunningTime="2026-02-02 11:17:32.991718038 +0000 UTC m=+2332.875910754" Feb 02 11:17:37 crc kubenswrapper[4782]: I0202 11:17:37.154854 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-marketplace/community-operators-p8frt" Feb 02 11:17:37 crc kubenswrapper[4782]: I0202 11:17:37.155463 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-p8frt" Feb 02 11:17:37 crc kubenswrapper[4782]: I0202 11:17:37.198036 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-p8frt" Feb 02 11:17:38 crc kubenswrapper[4782]: I0202 11:17:38.050822 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-p8frt" Feb 02 11:17:38 crc kubenswrapper[4782]: I0202 11:17:38.100290 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-p8frt"] Feb 02 11:17:40 crc kubenswrapper[4782]: I0202 11:17:40.017270 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-p8frt" podUID="d4ca0ce7-81d0-44a7-be69-efa0fde3cffb" containerName="registry-server" containerID="cri-o://aac5d07fd3912cd99ae8bc48480861e7b4fb9c59067a3335d51d59103066fe5b" gracePeriod=2 Feb 02 11:17:40 crc kubenswrapper[4782]: I0202 11:17:40.502161 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-p8frt" Feb 02 11:17:40 crc kubenswrapper[4782]: I0202 11:17:40.611532 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4ca0ce7-81d0-44a7-be69-efa0fde3cffb-catalog-content\") pod \"d4ca0ce7-81d0-44a7-be69-efa0fde3cffb\" (UID: \"d4ca0ce7-81d0-44a7-be69-efa0fde3cffb\") " Feb 02 11:17:40 crc kubenswrapper[4782]: I0202 11:17:40.611729 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4ca0ce7-81d0-44a7-be69-efa0fde3cffb-utilities\") pod \"d4ca0ce7-81d0-44a7-be69-efa0fde3cffb\" (UID: \"d4ca0ce7-81d0-44a7-be69-efa0fde3cffb\") " Feb 02 11:17:40 crc kubenswrapper[4782]: I0202 11:17:40.611772 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d567t\" (UniqueName: \"kubernetes.io/projected/d4ca0ce7-81d0-44a7-be69-efa0fde3cffb-kube-api-access-d567t\") pod \"d4ca0ce7-81d0-44a7-be69-efa0fde3cffb\" (UID: \"d4ca0ce7-81d0-44a7-be69-efa0fde3cffb\") " Feb 02 11:17:40 crc kubenswrapper[4782]: I0202 11:17:40.615285 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d4ca0ce7-81d0-44a7-be69-efa0fde3cffb-utilities" (OuterVolumeSpecName: "utilities") pod "d4ca0ce7-81d0-44a7-be69-efa0fde3cffb" (UID: "d4ca0ce7-81d0-44a7-be69-efa0fde3cffb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:17:40 crc kubenswrapper[4782]: I0202 11:17:40.618788 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4ca0ce7-81d0-44a7-be69-efa0fde3cffb-kube-api-access-d567t" (OuterVolumeSpecName: "kube-api-access-d567t") pod "d4ca0ce7-81d0-44a7-be69-efa0fde3cffb" (UID: "d4ca0ce7-81d0-44a7-be69-efa0fde3cffb"). InnerVolumeSpecName "kube-api-access-d567t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:17:40 crc kubenswrapper[4782]: I0202 11:17:40.678407 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d4ca0ce7-81d0-44a7-be69-efa0fde3cffb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d4ca0ce7-81d0-44a7-be69-efa0fde3cffb" (UID: "d4ca0ce7-81d0-44a7-be69-efa0fde3cffb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:17:40 crc kubenswrapper[4782]: I0202 11:17:40.713878 4782 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4ca0ce7-81d0-44a7-be69-efa0fde3cffb-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 11:17:40 crc kubenswrapper[4782]: I0202 11:17:40.713916 4782 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4ca0ce7-81d0-44a7-be69-efa0fde3cffb-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 11:17:40 crc kubenswrapper[4782]: I0202 11:17:40.713928 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d567t\" (UniqueName: \"kubernetes.io/projected/d4ca0ce7-81d0-44a7-be69-efa0fde3cffb-kube-api-access-d567t\") on node \"crc\" DevicePath \"\"" Feb 02 11:17:41 crc kubenswrapper[4782]: I0202 11:17:41.048936 4782 generic.go:334] "Generic (PLEG): container finished" podID="d4ca0ce7-81d0-44a7-be69-efa0fde3cffb" containerID="aac5d07fd3912cd99ae8bc48480861e7b4fb9c59067a3335d51d59103066fe5b" exitCode=0 Feb 02 11:17:41 crc kubenswrapper[4782]: I0202 11:17:41.049249 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p8frt" event={"ID":"d4ca0ce7-81d0-44a7-be69-efa0fde3cffb","Type":"ContainerDied","Data":"aac5d07fd3912cd99ae8bc48480861e7b4fb9c59067a3335d51d59103066fe5b"} Feb 02 11:17:41 crc kubenswrapper[4782]: I0202 11:17:41.049282 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p8frt" event={"ID":"d4ca0ce7-81d0-44a7-be69-efa0fde3cffb","Type":"ContainerDied","Data":"a8fcd57b87696b80c8320fe55d90ef869a6fdc47f8b986887188a39b5ad2620b"} Feb 02 11:17:41 crc kubenswrapper[4782]: I0202 11:17:41.049302 4782 scope.go:117] "RemoveContainer" containerID="aac5d07fd3912cd99ae8bc48480861e7b4fb9c59067a3335d51d59103066fe5b" Feb 02 11:17:41 crc kubenswrapper[4782]: I0202 11:17:41.049501 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-p8frt" Feb 02 11:17:41 crc kubenswrapper[4782]: I0202 11:17:41.085463 4782 scope.go:117] "RemoveContainer" containerID="8e8f8bc3a92137809cb963dd4b74adc32f7e3101f4d8dc5e3a8133f538592531" Feb 02 11:17:41 crc kubenswrapper[4782]: I0202 11:17:41.087803 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-p8frt"] Feb 02 11:17:41 crc kubenswrapper[4782]: I0202 11:17:41.103694 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-p8frt"] Feb 02 11:17:41 crc kubenswrapper[4782]: I0202 11:17:41.112514 4782 scope.go:117] "RemoveContainer" containerID="8c24dbebb463b7ba800800d1c186bd2b5ceb0ff73b1c6bc1bddef1ea6c82e5a3" Feb 02 11:17:41 crc kubenswrapper[4782]: I0202 11:17:41.152739 4782 scope.go:117] "RemoveContainer" containerID="aac5d07fd3912cd99ae8bc48480861e7b4fb9c59067a3335d51d59103066fe5b" Feb 02 11:17:41 crc kubenswrapper[4782]: E0202 11:17:41.153207 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aac5d07fd3912cd99ae8bc48480861e7b4fb9c59067a3335d51d59103066fe5b\": container with ID starting with aac5d07fd3912cd99ae8bc48480861e7b4fb9c59067a3335d51d59103066fe5b not found: ID does not exist" containerID="aac5d07fd3912cd99ae8bc48480861e7b4fb9c59067a3335d51d59103066fe5b" Feb 02 11:17:41 crc kubenswrapper[4782]: I0202 11:17:41.153238 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aac5d07fd3912cd99ae8bc48480861e7b4fb9c59067a3335d51d59103066fe5b"} err="failed to get container status \"aac5d07fd3912cd99ae8bc48480861e7b4fb9c59067a3335d51d59103066fe5b\": rpc error: code = NotFound desc = could not find container \"aac5d07fd3912cd99ae8bc48480861e7b4fb9c59067a3335d51d59103066fe5b\": container with ID starting with aac5d07fd3912cd99ae8bc48480861e7b4fb9c59067a3335d51d59103066fe5b not found: ID does not exist" Feb 02 11:17:41 crc kubenswrapper[4782]: I0202 11:17:41.153264 4782 scope.go:117] "RemoveContainer" containerID="8e8f8bc3a92137809cb963dd4b74adc32f7e3101f4d8dc5e3a8133f538592531" Feb 02 11:17:41 crc kubenswrapper[4782]: E0202 11:17:41.153683 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8e8f8bc3a92137809cb963dd4b74adc32f7e3101f4d8dc5e3a8133f538592531\": container with ID starting with 8e8f8bc3a92137809cb963dd4b74adc32f7e3101f4d8dc5e3a8133f538592531 not found: ID does not exist" containerID="8e8f8bc3a92137809cb963dd4b74adc32f7e3101f4d8dc5e3a8133f538592531" Feb 02 11:17:41 crc kubenswrapper[4782]: I0202 11:17:41.153702 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e8f8bc3a92137809cb963dd4b74adc32f7e3101f4d8dc5e3a8133f538592531"} err="failed to get container status \"8e8f8bc3a92137809cb963dd4b74adc32f7e3101f4d8dc5e3a8133f538592531\": rpc error: code = NotFound desc = could not find container \"8e8f8bc3a92137809cb963dd4b74adc32f7e3101f4d8dc5e3a8133f538592531\": container with ID starting with 8e8f8bc3a92137809cb963dd4b74adc32f7e3101f4d8dc5e3a8133f538592531 not found: ID does not exist" Feb 02 11:17:41 crc kubenswrapper[4782]: I0202 11:17:41.153718 4782 scope.go:117] "RemoveContainer" containerID="8c24dbebb463b7ba800800d1c186bd2b5ceb0ff73b1c6bc1bddef1ea6c82e5a3" Feb 02 11:17:41 crc kubenswrapper[4782]: E0202 11:17:41.153993 4782 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"8c24dbebb463b7ba800800d1c186bd2b5ceb0ff73b1c6bc1bddef1ea6c82e5a3\": container with ID starting with 8c24dbebb463b7ba800800d1c186bd2b5ceb0ff73b1c6bc1bddef1ea6c82e5a3 not found: ID does not exist" containerID="8c24dbebb463b7ba800800d1c186bd2b5ceb0ff73b1c6bc1bddef1ea6c82e5a3" Feb 02 11:17:41 crc kubenswrapper[4782]: I0202 11:17:41.154012 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c24dbebb463b7ba800800d1c186bd2b5ceb0ff73b1c6bc1bddef1ea6c82e5a3"} err="failed to get container status \"8c24dbebb463b7ba800800d1c186bd2b5ceb0ff73b1c6bc1bddef1ea6c82e5a3\": rpc error: code = NotFound desc = could not find container \"8c24dbebb463b7ba800800d1c186bd2b5ceb0ff73b1c6bc1bddef1ea6c82e5a3\": container with ID starting with 8c24dbebb463b7ba800800d1c186bd2b5ceb0ff73b1c6bc1bddef1ea6c82e5a3 not found: ID does not exist" Feb 02 11:17:42 crc kubenswrapper[4782]: I0202 11:17:42.833996 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4ca0ce7-81d0-44a7-be69-efa0fde3cffb" path="/var/lib/kubelet/pods/d4ca0ce7-81d0-44a7-be69-efa0fde3cffb/volumes" Feb 02 11:17:52 crc kubenswrapper[4782]: I0202 11:17:52.951209 4782 patch_prober.go:28] interesting pod/machine-config-daemon-bhdgk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 11:17:52 crc kubenswrapper[4782]: I0202 11:17:52.951773 4782 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 11:17:54 crc kubenswrapper[4782]: I0202 11:17:54.158587 4782 generic.go:334] "Generic (PLEG): container finished" podID="fbf31fe9-54a8-4cc8-b0ef-a8076cf87c52" containerID="199f5b114440d957ca82b1b3791c3a0c06061529752ed8afc79d07dd12184ea0" exitCode=0 Feb 02 11:17:54 crc kubenswrapper[4782]: I0202 11:17:54.158630 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-h4png" event={"ID":"fbf31fe9-54a8-4cc8-b0ef-a8076cf87c52","Type":"ContainerDied","Data":"199f5b114440d957ca82b1b3791c3a0c06061529752ed8afc79d07dd12184ea0"} Feb 02 11:17:55 crc kubenswrapper[4782]: I0202 11:17:55.555911 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-h4png" Feb 02 11:17:55 crc kubenswrapper[4782]: I0202 11:17:55.738873 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/fbf31fe9-54a8-4cc8-b0ef-a8076cf87c52-ceph\") pod \"fbf31fe9-54a8-4cc8-b0ef-a8076cf87c52\" (UID: \"fbf31fe9-54a8-4cc8-b0ef-a8076cf87c52\") " Feb 02 11:17:55 crc kubenswrapper[4782]: I0202 11:17:55.739250 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fbf31fe9-54a8-4cc8-b0ef-a8076cf87c52-inventory\") pod \"fbf31fe9-54a8-4cc8-b0ef-a8076cf87c52\" (UID: \"fbf31fe9-54a8-4cc8-b0ef-a8076cf87c52\") " Feb 02 11:17:55 crc kubenswrapper[4782]: I0202 11:17:55.739275 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fbf31fe9-54a8-4cc8-b0ef-a8076cf87c52-ssh-key-openstack-edpm-ipam\") pod \"fbf31fe9-54a8-4cc8-b0ef-a8076cf87c52\" (UID: \"fbf31fe9-54a8-4cc8-b0ef-a8076cf87c52\") " Feb 02 11:17:55 crc kubenswrapper[4782]: I0202 11:17:55.739313 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5zlwc\" (UniqueName: \"kubernetes.io/projected/fbf31fe9-54a8-4cc8-b0ef-a8076cf87c52-kube-api-access-5zlwc\") pod \"fbf31fe9-54a8-4cc8-b0ef-a8076cf87c52\" (UID: \"fbf31fe9-54a8-4cc8-b0ef-a8076cf87c52\") " Feb 02 11:17:55 crc kubenswrapper[4782]: I0202 11:17:55.757136 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fbf31fe9-54a8-4cc8-b0ef-a8076cf87c52-ceph" (OuterVolumeSpecName: "ceph") pod "fbf31fe9-54a8-4cc8-b0ef-a8076cf87c52" (UID: "fbf31fe9-54a8-4cc8-b0ef-a8076cf87c52"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:17:55 crc kubenswrapper[4782]: I0202 11:17:55.757184 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fbf31fe9-54a8-4cc8-b0ef-a8076cf87c52-kube-api-access-5zlwc" (OuterVolumeSpecName: "kube-api-access-5zlwc") pod "fbf31fe9-54a8-4cc8-b0ef-a8076cf87c52" (UID: "fbf31fe9-54a8-4cc8-b0ef-a8076cf87c52"). InnerVolumeSpecName "kube-api-access-5zlwc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:17:55 crc kubenswrapper[4782]: I0202 11:17:55.765283 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fbf31fe9-54a8-4cc8-b0ef-a8076cf87c52-inventory" (OuterVolumeSpecName: "inventory") pod "fbf31fe9-54a8-4cc8-b0ef-a8076cf87c52" (UID: "fbf31fe9-54a8-4cc8-b0ef-a8076cf87c52"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:17:55 crc kubenswrapper[4782]: I0202 11:17:55.767999 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fbf31fe9-54a8-4cc8-b0ef-a8076cf87c52-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "fbf31fe9-54a8-4cc8-b0ef-a8076cf87c52" (UID: "fbf31fe9-54a8-4cc8-b0ef-a8076cf87c52"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:17:55 crc kubenswrapper[4782]: I0202 11:17:55.841586 4782 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/fbf31fe9-54a8-4cc8-b0ef-a8076cf87c52-ceph\") on node \"crc\" DevicePath \"\"" Feb 02 11:17:55 crc kubenswrapper[4782]: I0202 11:17:55.841620 4782 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fbf31fe9-54a8-4cc8-b0ef-a8076cf87c52-inventory\") on node \"crc\" DevicePath \"\"" Feb 02 11:17:55 crc kubenswrapper[4782]: I0202 11:17:55.841631 4782 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fbf31fe9-54a8-4cc8-b0ef-a8076cf87c52-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 02 11:17:55 crc kubenswrapper[4782]: I0202 11:17:55.841728 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5zlwc\" (UniqueName: \"kubernetes.io/projected/fbf31fe9-54a8-4cc8-b0ef-a8076cf87c52-kube-api-access-5zlwc\") on node \"crc\" DevicePath \"\"" Feb 02 11:17:56 crc kubenswrapper[4782]: I0202 11:17:56.174745 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-h4png" event={"ID":"fbf31fe9-54a8-4cc8-b0ef-a8076cf87c52","Type":"ContainerDied","Data":"7f480997ce7a2aabd74347d417056d5c313af8aadf9a41f33fe17fad0ecdde1b"} Feb 02 11:17:56 crc kubenswrapper[4782]: I0202 11:17:56.174784 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7f480997ce7a2aabd74347d417056d5c313af8aadf9a41f33fe17fad0ecdde1b" Feb 02 11:17:56 crc kubenswrapper[4782]: I0202 11:17:56.174790 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-h4png" Feb 02 11:17:56 crc kubenswrapper[4782]: I0202 11:17:56.275934 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-tp529"] Feb 02 11:17:56 crc kubenswrapper[4782]: E0202 11:17:56.276288 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbf31fe9-54a8-4cc8-b0ef-a8076cf87c52" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 02 11:17:56 crc kubenswrapper[4782]: I0202 11:17:56.276303 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbf31fe9-54a8-4cc8-b0ef-a8076cf87c52" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 02 11:17:56 crc kubenswrapper[4782]: E0202 11:17:56.276318 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4ca0ce7-81d0-44a7-be69-efa0fde3cffb" containerName="extract-content" Feb 02 11:17:56 crc kubenswrapper[4782]: I0202 11:17:56.276325 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4ca0ce7-81d0-44a7-be69-efa0fde3cffb" containerName="extract-content" Feb 02 11:17:56 crc kubenswrapper[4782]: E0202 11:17:56.276376 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4ca0ce7-81d0-44a7-be69-efa0fde3cffb" containerName="extract-utilities" Feb 02 11:17:56 crc kubenswrapper[4782]: I0202 11:17:56.276383 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4ca0ce7-81d0-44a7-be69-efa0fde3cffb" containerName="extract-utilities" Feb 02 11:17:56 crc kubenswrapper[4782]: E0202 11:17:56.276394 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4ca0ce7-81d0-44a7-be69-efa0fde3cffb" containerName="registry-server" Feb 02 11:17:56 crc kubenswrapper[4782]: I0202 11:17:56.276401 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4ca0ce7-81d0-44a7-be69-efa0fde3cffb" containerName="registry-server" Feb 02 11:17:56 crc kubenswrapper[4782]: I0202 11:17:56.276577 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4ca0ce7-81d0-44a7-be69-efa0fde3cffb" containerName="registry-server" Feb 02 11:17:56 crc kubenswrapper[4782]: I0202 11:17:56.276592 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="fbf31fe9-54a8-4cc8-b0ef-a8076cf87c52" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 02 11:17:56 crc kubenswrapper[4782]: I0202 11:17:56.277214 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-tp529" Feb 02 11:17:56 crc kubenswrapper[4782]: I0202 11:17:56.280156 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Feb 02 11:17:56 crc kubenswrapper[4782]: I0202 11:17:56.280780 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 02 11:17:56 crc kubenswrapper[4782]: I0202 11:17:56.281004 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 02 11:17:56 crc kubenswrapper[4782]: I0202 11:17:56.281075 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 02 11:17:56 crc kubenswrapper[4782]: I0202 11:17:56.281383 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jhgxt" Feb 02 11:17:56 crc kubenswrapper[4782]: I0202 11:17:56.294557 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-tp529"] Feb 02 11:17:56 crc kubenswrapper[4782]: I0202 11:17:56.451663 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/df6c52bb-3b4a-4f78-94d0-edee0f68400c-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-tp529\" (UID: \"df6c52bb-3b4a-4f78-94d0-edee0f68400c\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-tp529" Feb 02 11:17:56 crc kubenswrapper[4782]: I0202 11:17:56.451739 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gcr5b\" (UniqueName: \"kubernetes.io/projected/df6c52bb-3b4a-4f78-94d0-edee0f68400c-kube-api-access-gcr5b\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-tp529\" (UID: \"df6c52bb-3b4a-4f78-94d0-edee0f68400c\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-tp529" Feb 02 11:17:56 crc kubenswrapper[4782]: I0202 11:17:56.451923 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/df6c52bb-3b4a-4f78-94d0-edee0f68400c-ceph\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-tp529\" (UID: \"df6c52bb-3b4a-4f78-94d0-edee0f68400c\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-tp529" Feb 02 11:17:56 crc kubenswrapper[4782]: I0202 11:17:56.451975 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/df6c52bb-3b4a-4f78-94d0-edee0f68400c-ssh-key-openstack-edpm-ipam\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-tp529\" (UID: \"df6c52bb-3b4a-4f78-94d0-edee0f68400c\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-tp529" Feb 02 11:17:56 crc kubenswrapper[4782]: I0202 11:17:56.553718 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/df6c52bb-3b4a-4f78-94d0-edee0f68400c-ceph\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-tp529\" (UID: \"df6c52bb-3b4a-4f78-94d0-edee0f68400c\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-tp529" Feb 02 11:17:56 crc kubenswrapper[4782]: I0202 11:17:56.553787 4782 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/df6c52bb-3b4a-4f78-94d0-edee0f68400c-ssh-key-openstack-edpm-ipam\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-tp529\" (UID: \"df6c52bb-3b4a-4f78-94d0-edee0f68400c\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-tp529" Feb 02 11:17:56 crc kubenswrapper[4782]: I0202 11:17:56.553836 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/df6c52bb-3b4a-4f78-94d0-edee0f68400c-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-tp529\" (UID: \"df6c52bb-3b4a-4f78-94d0-edee0f68400c\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-tp529" Feb 02 11:17:56 crc kubenswrapper[4782]: I0202 11:17:56.553863 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gcr5b\" (UniqueName: \"kubernetes.io/projected/df6c52bb-3b4a-4f78-94d0-edee0f68400c-kube-api-access-gcr5b\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-tp529\" (UID: \"df6c52bb-3b4a-4f78-94d0-edee0f68400c\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-tp529" Feb 02 11:17:56 crc kubenswrapper[4782]: I0202 11:17:56.566300 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/df6c52bb-3b4a-4f78-94d0-edee0f68400c-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-tp529\" (UID: \"df6c52bb-3b4a-4f78-94d0-edee0f68400c\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-tp529" Feb 02 11:17:56 crc kubenswrapper[4782]: I0202 11:17:56.566299 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/df6c52bb-3b4a-4f78-94d0-edee0f68400c-ssh-key-openstack-edpm-ipam\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-tp529\" (UID: \"df6c52bb-3b4a-4f78-94d0-edee0f68400c\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-tp529" Feb 02 11:17:56 crc kubenswrapper[4782]: I0202 11:17:56.566779 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/df6c52bb-3b4a-4f78-94d0-edee0f68400c-ceph\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-tp529\" (UID: \"df6c52bb-3b4a-4f78-94d0-edee0f68400c\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-tp529" Feb 02 11:17:56 crc kubenswrapper[4782]: I0202 11:17:56.572352 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gcr5b\" (UniqueName: \"kubernetes.io/projected/df6c52bb-3b4a-4f78-94d0-edee0f68400c-kube-api-access-gcr5b\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-tp529\" (UID: \"df6c52bb-3b4a-4f78-94d0-edee0f68400c\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-tp529" Feb 02 11:17:56 crc kubenswrapper[4782]: I0202 11:17:56.597344 4782 util.go:30] "No sandbox for pod can be found. 
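Note the two SetUp records just above carry near-identical timestamps logged out of order (…566300 before …566299): per-volume setup runs in parallel, one goroutine per volume, so completion records interleave nondeterministically. A minimal Go sketch of that fan-out shape (mountSecret is a hypothetical stand-in for the per-volume SetUp call, not a kubelet function):

    package main

    import (
    	"fmt"

    	"golang.org/x/sync/errgroup"
    )

    func mountSecret(name string) error {
    	fmt.Println("MountVolume.SetUp succeeded for volume", name)
    	return nil
    }

    func main() {
    	volumes := []string{"inventory", "ssh-key-openstack-edpm-ipam", "ceph", "kube-api-access-gcr5b"}
    	var g errgroup.Group
    	for _, v := range volumes {
    		v := v // pin the loop variable for the goroutine (pre-Go 1.22)
    		g.Go(func() error {
    			return mountSecret(v) // completion order is not deterministic
    		})
    	}
    	if err := g.Wait(); err != nil {
    		fmt.Println("mount failed:", err)
    	}
    }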
Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-tp529" Feb 02 11:17:57 crc kubenswrapper[4782]: I0202 11:17:57.133258 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-tp529"] Feb 02 11:17:57 crc kubenswrapper[4782]: I0202 11:17:57.187381 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-tp529" event={"ID":"df6c52bb-3b4a-4f78-94d0-edee0f68400c","Type":"ContainerStarted","Data":"35c7e59b308ab8245d7d3efeca54f0472cb9852b42fc4b256469fa44da88e114"} Feb 02 11:17:58 crc kubenswrapper[4782]: I0202 11:17:58.208232 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-tp529" event={"ID":"df6c52bb-3b4a-4f78-94d0-edee0f68400c","Type":"ContainerStarted","Data":"3ac45a3cb50d94aec6d4a24305c4a1990039d130ec326129bf072b5fba65c9a1"} Feb 02 11:17:58 crc kubenswrapper[4782]: I0202 11:17:58.232048 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-tp529" podStartSLOduration=1.7286526740000001 podStartE2EDuration="2.23202351s" podCreationTimestamp="2026-02-02 11:17:56 +0000 UTC" firstStartedPulling="2026-02-02 11:17:57.140809115 +0000 UTC m=+2357.025001831" lastFinishedPulling="2026-02-02 11:17:57.644179941 +0000 UTC m=+2357.528372667" observedRunningTime="2026-02-02 11:17:58.224163684 +0000 UTC m=+2358.108356420" watchObservedRunningTime="2026-02-02 11:17:58.23202351 +0000 UTC m=+2358.116216226" Feb 02 11:18:02 crc kubenswrapper[4782]: I0202 11:18:02.239384 4782 generic.go:334] "Generic (PLEG): container finished" podID="df6c52bb-3b4a-4f78-94d0-edee0f68400c" containerID="3ac45a3cb50d94aec6d4a24305c4a1990039d130ec326129bf072b5fba65c9a1" exitCode=0 Feb 02 11:18:02 crc kubenswrapper[4782]: I0202 11:18:02.239461 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-tp529" event={"ID":"df6c52bb-3b4a-4f78-94d0-edee0f68400c","Type":"ContainerDied","Data":"3ac45a3cb50d94aec6d4a24305c4a1990039d130ec326129bf072b5fba65c9a1"} Feb 02 11:18:03 crc kubenswrapper[4782]: I0202 11:18:03.666383 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-tp529" Feb 02 11:18:03 crc kubenswrapper[4782]: I0202 11:18:03.787049 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/df6c52bb-3b4a-4f78-94d0-edee0f68400c-inventory\") pod \"df6c52bb-3b4a-4f78-94d0-edee0f68400c\" (UID: \"df6c52bb-3b4a-4f78-94d0-edee0f68400c\") " Feb 02 11:18:03 crc kubenswrapper[4782]: I0202 11:18:03.787128 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/df6c52bb-3b4a-4f78-94d0-edee0f68400c-ceph\") pod \"df6c52bb-3b4a-4f78-94d0-edee0f68400c\" (UID: \"df6c52bb-3b4a-4f78-94d0-edee0f68400c\") " Feb 02 11:18:03 crc kubenswrapper[4782]: I0202 11:18:03.787351 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gcr5b\" (UniqueName: \"kubernetes.io/projected/df6c52bb-3b4a-4f78-94d0-edee0f68400c-kube-api-access-gcr5b\") pod \"df6c52bb-3b4a-4f78-94d0-edee0f68400c\" (UID: \"df6c52bb-3b4a-4f78-94d0-edee0f68400c\") " Feb 02 11:18:03 crc kubenswrapper[4782]: I0202 11:18:03.787438 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/df6c52bb-3b4a-4f78-94d0-edee0f68400c-ssh-key-openstack-edpm-ipam\") pod \"df6c52bb-3b4a-4f78-94d0-edee0f68400c\" (UID: \"df6c52bb-3b4a-4f78-94d0-edee0f68400c\") " Feb 02 11:18:03 crc kubenswrapper[4782]: I0202 11:18:03.796449 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df6c52bb-3b4a-4f78-94d0-edee0f68400c-ceph" (OuterVolumeSpecName: "ceph") pod "df6c52bb-3b4a-4f78-94d0-edee0f68400c" (UID: "df6c52bb-3b4a-4f78-94d0-edee0f68400c"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:18:03 crc kubenswrapper[4782]: I0202 11:18:03.796738 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df6c52bb-3b4a-4f78-94d0-edee0f68400c-kube-api-access-gcr5b" (OuterVolumeSpecName: "kube-api-access-gcr5b") pod "df6c52bb-3b4a-4f78-94d0-edee0f68400c" (UID: "df6c52bb-3b4a-4f78-94d0-edee0f68400c"). InnerVolumeSpecName "kube-api-access-gcr5b". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:18:03 crc kubenswrapper[4782]: I0202 11:18:03.818039 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df6c52bb-3b4a-4f78-94d0-edee0f68400c-inventory" (OuterVolumeSpecName: "inventory") pod "df6c52bb-3b4a-4f78-94d0-edee0f68400c" (UID: "df6c52bb-3b4a-4f78-94d0-edee0f68400c"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:18:03 crc kubenswrapper[4782]: I0202 11:18:03.820263 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df6c52bb-3b4a-4f78-94d0-edee0f68400c-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "df6c52bb-3b4a-4f78-94d0-edee0f68400c" (UID: "df6c52bb-3b4a-4f78-94d0-edee0f68400c"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:18:03 crc kubenswrapper[4782]: I0202 11:18:03.890294 4782 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/df6c52bb-3b4a-4f78-94d0-edee0f68400c-inventory\") on node \"crc\" DevicePath \"\"" Feb 02 11:18:03 crc kubenswrapper[4782]: I0202 11:18:03.890356 4782 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/df6c52bb-3b4a-4f78-94d0-edee0f68400c-ceph\") on node \"crc\" DevicePath \"\"" Feb 02 11:18:03 crc kubenswrapper[4782]: I0202 11:18:03.890372 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gcr5b\" (UniqueName: \"kubernetes.io/projected/df6c52bb-3b4a-4f78-94d0-edee0f68400c-kube-api-access-gcr5b\") on node \"crc\" DevicePath \"\"" Feb 02 11:18:03 crc kubenswrapper[4782]: I0202 11:18:03.890391 4782 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/df6c52bb-3b4a-4f78-94d0-edee0f68400c-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 02 11:18:04 crc kubenswrapper[4782]: I0202 11:18:04.260856 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-tp529" event={"ID":"df6c52bb-3b4a-4f78-94d0-edee0f68400c","Type":"ContainerDied","Data":"35c7e59b308ab8245d7d3efeca54f0472cb9852b42fc4b256469fa44da88e114"} Feb 02 11:18:04 crc kubenswrapper[4782]: I0202 11:18:04.260936 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="35c7e59b308ab8245d7d3efeca54f0472cb9852b42fc4b256469fa44da88e114" Feb 02 11:18:04 crc kubenswrapper[4782]: I0202 11:18:04.260903 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-tp529" Feb 02 11:18:04 crc kubenswrapper[4782]: I0202 11:18:04.406831 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-v56zg"] Feb 02 11:18:04 crc kubenswrapper[4782]: E0202 11:18:04.407221 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df6c52bb-3b4a-4f78-94d0-edee0f68400c" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Feb 02 11:18:04 crc kubenswrapper[4782]: I0202 11:18:04.407247 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="df6c52bb-3b4a-4f78-94d0-edee0f68400c" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Feb 02 11:18:04 crc kubenswrapper[4782]: I0202 11:18:04.407428 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="df6c52bb-3b4a-4f78-94d0-edee0f68400c" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Feb 02 11:18:04 crc kubenswrapper[4782]: I0202 11:18:04.408048 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-v56zg" Feb 02 11:18:04 crc kubenswrapper[4782]: I0202 11:18:04.411901 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 02 11:18:04 crc kubenswrapper[4782]: I0202 11:18:04.412860 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 02 11:18:04 crc kubenswrapper[4782]: I0202 11:18:04.412880 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Feb 02 11:18:04 crc kubenswrapper[4782]: I0202 11:18:04.412891 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 02 11:18:04 crc kubenswrapper[4782]: I0202 11:18:04.413077 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jhgxt" Feb 02 11:18:04 crc kubenswrapper[4782]: I0202 11:18:04.434758 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-v56zg"] Feb 02 11:18:04 crc kubenswrapper[4782]: I0202 11:18:04.501281 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6dbc340f-2b20-49aa-8358-26223d367e34-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-v56zg\" (UID: \"6dbc340f-2b20-49aa-8358-26223d367e34\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-v56zg" Feb 02 11:18:04 crc kubenswrapper[4782]: I0202 11:18:04.501381 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6dbc340f-2b20-49aa-8358-26223d367e34-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-v56zg\" (UID: \"6dbc340f-2b20-49aa-8358-26223d367e34\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-v56zg" Feb 02 11:18:04 crc kubenswrapper[4782]: I0202 11:18:04.501423 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shscs\" (UniqueName: \"kubernetes.io/projected/6dbc340f-2b20-49aa-8358-26223d367e34-kube-api-access-shscs\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-v56zg\" (UID: \"6dbc340f-2b20-49aa-8358-26223d367e34\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-v56zg" Feb 02 11:18:04 crc kubenswrapper[4782]: I0202 11:18:04.501444 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/6dbc340f-2b20-49aa-8358-26223d367e34-ceph\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-v56zg\" (UID: \"6dbc340f-2b20-49aa-8358-26223d367e34\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-v56zg" Feb 02 11:18:04 crc kubenswrapper[4782]: I0202 11:18:04.602501 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shscs\" (UniqueName: \"kubernetes.io/projected/6dbc340f-2b20-49aa-8358-26223d367e34-kube-api-access-shscs\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-v56zg\" (UID: \"6dbc340f-2b20-49aa-8358-26223d367e34\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-v56zg" Feb 02 11:18:04 crc kubenswrapper[4782]: I0202 11:18:04.602551 4782 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/6dbc340f-2b20-49aa-8358-26223d367e34-ceph\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-v56zg\" (UID: \"6dbc340f-2b20-49aa-8358-26223d367e34\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-v56zg" Feb 02 11:18:04 crc kubenswrapper[4782]: I0202 11:18:04.602673 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6dbc340f-2b20-49aa-8358-26223d367e34-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-v56zg\" (UID: \"6dbc340f-2b20-49aa-8358-26223d367e34\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-v56zg" Feb 02 11:18:04 crc kubenswrapper[4782]: I0202 11:18:04.602724 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6dbc340f-2b20-49aa-8358-26223d367e34-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-v56zg\" (UID: \"6dbc340f-2b20-49aa-8358-26223d367e34\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-v56zg" Feb 02 11:18:04 crc kubenswrapper[4782]: I0202 11:18:04.607540 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6dbc340f-2b20-49aa-8358-26223d367e34-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-v56zg\" (UID: \"6dbc340f-2b20-49aa-8358-26223d367e34\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-v56zg" Feb 02 11:18:04 crc kubenswrapper[4782]: I0202 11:18:04.610132 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/6dbc340f-2b20-49aa-8358-26223d367e34-ceph\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-v56zg\" (UID: \"6dbc340f-2b20-49aa-8358-26223d367e34\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-v56zg" Feb 02 11:18:04 crc kubenswrapper[4782]: I0202 11:18:04.620606 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6dbc340f-2b20-49aa-8358-26223d367e34-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-v56zg\" (UID: \"6dbc340f-2b20-49aa-8358-26223d367e34\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-v56zg" Feb 02 11:18:04 crc kubenswrapper[4782]: I0202 11:18:04.621399 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-shscs\" (UniqueName: \"kubernetes.io/projected/6dbc340f-2b20-49aa-8358-26223d367e34-kube-api-access-shscs\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-v56zg\" (UID: \"6dbc340f-2b20-49aa-8358-26223d367e34\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-v56zg" Feb 02 11:18:04 crc kubenswrapper[4782]: I0202 11:18:04.725439 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-v56zg" Feb 02 11:18:05 crc kubenswrapper[4782]: I0202 11:18:05.254724 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-v56zg"] Feb 02 11:18:05 crc kubenswrapper[4782]: I0202 11:18:05.272278 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-v56zg" event={"ID":"6dbc340f-2b20-49aa-8358-26223d367e34","Type":"ContainerStarted","Data":"6b039f7b5afb87f81b48a9a10a4024e191757e1ef276cd47214bf98018104fea"} Feb 02 11:18:07 crc kubenswrapper[4782]: I0202 11:18:07.295925 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-v56zg" event={"ID":"6dbc340f-2b20-49aa-8358-26223d367e34","Type":"ContainerStarted","Data":"0073044f871dd4a25cbe4d73162049a859f0266fbb62237ebe19cbb7776e276f"} Feb 02 11:18:07 crc kubenswrapper[4782]: I0202 11:18:07.319507 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-v56zg" podStartSLOduration=2.106537535 podStartE2EDuration="3.319478007s" podCreationTimestamp="2026-02-02 11:18:04 +0000 UTC" firstStartedPulling="2026-02-02 11:18:05.263013823 +0000 UTC m=+2365.147206539" lastFinishedPulling="2026-02-02 11:18:06.475954295 +0000 UTC m=+2366.360147011" observedRunningTime="2026-02-02 11:18:07.314781902 +0000 UTC m=+2367.198974618" watchObservedRunningTime="2026-02-02 11:18:07.319478007 +0000 UTC m=+2367.203670723" Feb 02 11:18:22 crc kubenswrapper[4782]: I0202 11:18:22.951493 4782 patch_prober.go:28] interesting pod/machine-config-daemon-bhdgk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 11:18:22 crc kubenswrapper[4782]: I0202 11:18:22.952514 4782 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 11:18:47 crc kubenswrapper[4782]: I0202 11:18:47.622878 4782 generic.go:334] "Generic (PLEG): container finished" podID="6dbc340f-2b20-49aa-8358-26223d367e34" containerID="0073044f871dd4a25cbe4d73162049a859f0266fbb62237ebe19cbb7776e276f" exitCode=0 Feb 02 11:18:47 crc kubenswrapper[4782]: I0202 11:18:47.622971 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-v56zg" event={"ID":"6dbc340f-2b20-49aa-8358-26223d367e34","Type":"ContainerDied","Data":"0073044f871dd4a25cbe4d73162049a859f0266fbb62237ebe19cbb7776e276f"} Feb 02 11:18:49 crc kubenswrapper[4782]: I0202 11:18:49.279141 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-v56zg" Feb 02 11:18:49 crc kubenswrapper[4782]: I0202 11:18:49.359088 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6dbc340f-2b20-49aa-8358-26223d367e34-ssh-key-openstack-edpm-ipam\") pod \"6dbc340f-2b20-49aa-8358-26223d367e34\" (UID: \"6dbc340f-2b20-49aa-8358-26223d367e34\") " Feb 02 11:18:49 crc kubenswrapper[4782]: I0202 11:18:49.359436 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6dbc340f-2b20-49aa-8358-26223d367e34-inventory\") pod \"6dbc340f-2b20-49aa-8358-26223d367e34\" (UID: \"6dbc340f-2b20-49aa-8358-26223d367e34\") " Feb 02 11:18:49 crc kubenswrapper[4782]: I0202 11:18:49.359525 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-shscs\" (UniqueName: \"kubernetes.io/projected/6dbc340f-2b20-49aa-8358-26223d367e34-kube-api-access-shscs\") pod \"6dbc340f-2b20-49aa-8358-26223d367e34\" (UID: \"6dbc340f-2b20-49aa-8358-26223d367e34\") " Feb 02 11:18:49 crc kubenswrapper[4782]: I0202 11:18:49.359578 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/6dbc340f-2b20-49aa-8358-26223d367e34-ceph\") pod \"6dbc340f-2b20-49aa-8358-26223d367e34\" (UID: \"6dbc340f-2b20-49aa-8358-26223d367e34\") " Feb 02 11:18:49 crc kubenswrapper[4782]: I0202 11:18:49.366080 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6dbc340f-2b20-49aa-8358-26223d367e34-kube-api-access-shscs" (OuterVolumeSpecName: "kube-api-access-shscs") pod "6dbc340f-2b20-49aa-8358-26223d367e34" (UID: "6dbc340f-2b20-49aa-8358-26223d367e34"). InnerVolumeSpecName "kube-api-access-shscs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:18:49 crc kubenswrapper[4782]: I0202 11:18:49.382460 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6dbc340f-2b20-49aa-8358-26223d367e34-ceph" (OuterVolumeSpecName: "ceph") pod "6dbc340f-2b20-49aa-8358-26223d367e34" (UID: "6dbc340f-2b20-49aa-8358-26223d367e34"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:18:49 crc kubenswrapper[4782]: I0202 11:18:49.385029 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6dbc340f-2b20-49aa-8358-26223d367e34-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "6dbc340f-2b20-49aa-8358-26223d367e34" (UID: "6dbc340f-2b20-49aa-8358-26223d367e34"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:18:49 crc kubenswrapper[4782]: I0202 11:18:49.390471 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6dbc340f-2b20-49aa-8358-26223d367e34-inventory" (OuterVolumeSpecName: "inventory") pod "6dbc340f-2b20-49aa-8358-26223d367e34" (UID: "6dbc340f-2b20-49aa-8358-26223d367e34"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:18:49 crc kubenswrapper[4782]: I0202 11:18:49.461983 4782 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6dbc340f-2b20-49aa-8358-26223d367e34-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 02 11:18:49 crc kubenswrapper[4782]: I0202 11:18:49.462026 4782 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6dbc340f-2b20-49aa-8358-26223d367e34-inventory\") on node \"crc\" DevicePath \"\"" Feb 02 11:18:49 crc kubenswrapper[4782]: I0202 11:18:49.462038 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-shscs\" (UniqueName: \"kubernetes.io/projected/6dbc340f-2b20-49aa-8358-26223d367e34-kube-api-access-shscs\") on node \"crc\" DevicePath \"\"" Feb 02 11:18:49 crc kubenswrapper[4782]: I0202 11:18:49.462048 4782 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/6dbc340f-2b20-49aa-8358-26223d367e34-ceph\") on node \"crc\" DevicePath \"\"" Feb 02 11:18:49 crc kubenswrapper[4782]: I0202 11:18:49.639236 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-v56zg" event={"ID":"6dbc340f-2b20-49aa-8358-26223d367e34","Type":"ContainerDied","Data":"6b039f7b5afb87f81b48a9a10a4024e191757e1ef276cd47214bf98018104fea"} Feb 02 11:18:49 crc kubenswrapper[4782]: I0202 11:18:49.639280 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6b039f7b5afb87f81b48a9a10a4024e191757e1ef276cd47214bf98018104fea" Feb 02 11:18:49 crc kubenswrapper[4782]: I0202 11:18:49.639334 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-v56zg" Feb 02 11:18:49 crc kubenswrapper[4782]: I0202 11:18:49.728169 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-j5858"] Feb 02 11:18:49 crc kubenswrapper[4782]: E0202 11:18:49.728519 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6dbc340f-2b20-49aa-8358-26223d367e34" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 02 11:18:49 crc kubenswrapper[4782]: I0202 11:18:49.728542 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="6dbc340f-2b20-49aa-8358-26223d367e34" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 02 11:18:49 crc kubenswrapper[4782]: I0202 11:18:49.728702 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="6dbc340f-2b20-49aa-8358-26223d367e34" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 02 11:18:49 crc kubenswrapper[4782]: I0202 11:18:49.729220 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-j5858" Feb 02 11:18:49 crc kubenswrapper[4782]: I0202 11:18:49.731088 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Feb 02 11:18:49 crc kubenswrapper[4782]: I0202 11:18:49.731127 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 02 11:18:49 crc kubenswrapper[4782]: I0202 11:18:49.731441 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jhgxt" Feb 02 11:18:49 crc kubenswrapper[4782]: I0202 11:18:49.732131 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 02 11:18:49 crc kubenswrapper[4782]: I0202 11:18:49.733388 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 02 11:18:49 crc kubenswrapper[4782]: I0202 11:18:49.747753 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-j5858"] Feb 02 11:18:49 crc kubenswrapper[4782]: I0202 11:18:49.892144 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c80c4993-adf6-44f8-a084-21920191de7f-ceph\") pod \"ssh-known-hosts-edpm-deployment-j5858\" (UID: \"c80c4993-adf6-44f8-a084-21920191de7f\") " pod="openstack/ssh-known-hosts-edpm-deployment-j5858" Feb 02 11:18:49 crc kubenswrapper[4782]: I0202 11:18:49.892224 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/c80c4993-adf6-44f8-a084-21920191de7f-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-j5858\" (UID: \"c80c4993-adf6-44f8-a084-21920191de7f\") " pod="openstack/ssh-known-hosts-edpm-deployment-j5858" Feb 02 11:18:49 crc kubenswrapper[4782]: I0202 11:18:49.892344 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrhdg\" (UniqueName: \"kubernetes.io/projected/c80c4993-adf6-44f8-a084-21920191de7f-kube-api-access-jrhdg\") pod \"ssh-known-hosts-edpm-deployment-j5858\" (UID: \"c80c4993-adf6-44f8-a084-21920191de7f\") " pod="openstack/ssh-known-hosts-edpm-deployment-j5858" Feb 02 11:18:49 crc kubenswrapper[4782]: I0202 11:18:49.892378 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c80c4993-adf6-44f8-a084-21920191de7f-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-j5858\" (UID: \"c80c4993-adf6-44f8-a084-21920191de7f\") " pod="openstack/ssh-known-hosts-edpm-deployment-j5858" Feb 02 11:18:49 crc kubenswrapper[4782]: I0202 11:18:49.994197 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/c80c4993-adf6-44f8-a084-21920191de7f-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-j5858\" (UID: \"c80c4993-adf6-44f8-a084-21920191de7f\") " pod="openstack/ssh-known-hosts-edpm-deployment-j5858" Feb 02 11:18:49 crc kubenswrapper[4782]: I0202 11:18:49.994322 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jrhdg\" (UniqueName: \"kubernetes.io/projected/c80c4993-adf6-44f8-a084-21920191de7f-kube-api-access-jrhdg\") pod \"ssh-known-hosts-edpm-deployment-j5858\" 
(UID: \"c80c4993-adf6-44f8-a084-21920191de7f\") " pod="openstack/ssh-known-hosts-edpm-deployment-j5858" Feb 02 11:18:49 crc kubenswrapper[4782]: I0202 11:18:49.994395 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c80c4993-adf6-44f8-a084-21920191de7f-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-j5858\" (UID: \"c80c4993-adf6-44f8-a084-21920191de7f\") " pod="openstack/ssh-known-hosts-edpm-deployment-j5858" Feb 02 11:18:49 crc kubenswrapper[4782]: I0202 11:18:49.994711 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c80c4993-adf6-44f8-a084-21920191de7f-ceph\") pod \"ssh-known-hosts-edpm-deployment-j5858\" (UID: \"c80c4993-adf6-44f8-a084-21920191de7f\") " pod="openstack/ssh-known-hosts-edpm-deployment-j5858" Feb 02 11:18:50 crc kubenswrapper[4782]: I0202 11:18:49.999977 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c80c4993-adf6-44f8-a084-21920191de7f-ceph\") pod \"ssh-known-hosts-edpm-deployment-j5858\" (UID: \"c80c4993-adf6-44f8-a084-21920191de7f\") " pod="openstack/ssh-known-hosts-edpm-deployment-j5858" Feb 02 11:18:50 crc kubenswrapper[4782]: I0202 11:18:50.000527 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/c80c4993-adf6-44f8-a084-21920191de7f-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-j5858\" (UID: \"c80c4993-adf6-44f8-a084-21920191de7f\") " pod="openstack/ssh-known-hosts-edpm-deployment-j5858" Feb 02 11:18:50 crc kubenswrapper[4782]: I0202 11:18:50.018573 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c80c4993-adf6-44f8-a084-21920191de7f-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-j5858\" (UID: \"c80c4993-adf6-44f8-a084-21920191de7f\") " pod="openstack/ssh-known-hosts-edpm-deployment-j5858" Feb 02 11:18:50 crc kubenswrapper[4782]: I0202 11:18:50.020084 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jrhdg\" (UniqueName: \"kubernetes.io/projected/c80c4993-adf6-44f8-a084-21920191de7f-kube-api-access-jrhdg\") pod \"ssh-known-hosts-edpm-deployment-j5858\" (UID: \"c80c4993-adf6-44f8-a084-21920191de7f\") " pod="openstack/ssh-known-hosts-edpm-deployment-j5858" Feb 02 11:18:50 crc kubenswrapper[4782]: I0202 11:18:50.101829 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-j5858" Feb 02 11:18:50 crc kubenswrapper[4782]: I0202 11:18:50.707342 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-j5858"] Feb 02 11:18:51 crc kubenswrapper[4782]: I0202 11:18:51.658159 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-j5858" event={"ID":"c80c4993-adf6-44f8-a084-21920191de7f","Type":"ContainerStarted","Data":"d76ce0556a2fc0cf77f556e5511fd48a3975db4277a83cae1c4bc1a7fcfd2287"} Feb 02 11:18:51 crc kubenswrapper[4782]: I0202 11:18:51.658553 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-j5858" event={"ID":"c80c4993-adf6-44f8-a084-21920191de7f","Type":"ContainerStarted","Data":"559b77b336739e3a20431c61a7983c9be76a44e129b7fd8382612c9058b5762c"} Feb 02 11:18:51 crc kubenswrapper[4782]: I0202 11:18:51.679944 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-j5858" podStartSLOduration=2.239430591 podStartE2EDuration="2.679925949s" podCreationTimestamp="2026-02-02 11:18:49 +0000 UTC" firstStartedPulling="2026-02-02 11:18:50.706798431 +0000 UTC m=+2410.590991147" lastFinishedPulling="2026-02-02 11:18:51.147293789 +0000 UTC m=+2411.031486505" observedRunningTime="2026-02-02 11:18:51.677750526 +0000 UTC m=+2411.561943252" watchObservedRunningTime="2026-02-02 11:18:51.679925949 +0000 UTC m=+2411.564118665" Feb 02 11:18:52 crc kubenswrapper[4782]: I0202 11:18:52.951051 4782 patch_prober.go:28] interesting pod/machine-config-daemon-bhdgk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 11:18:52 crc kubenswrapper[4782]: I0202 11:18:52.951378 4782 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 11:18:52 crc kubenswrapper[4782]: I0202 11:18:52.951427 4782 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" Feb 02 11:18:52 crc kubenswrapper[4782]: I0202 11:18:52.952239 4782 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5d4753fce570617e864276d34772208f83d3fd6766212b5ad5f002f122bc2ca9"} pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 11:18:52 crc kubenswrapper[4782]: I0202 11:18:52.952289 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" containerName="machine-config-daemon" containerID="cri-o://5d4753fce570617e864276d34772208f83d3fd6766212b5ad5f002f122bc2ca9" gracePeriod=600 Feb 02 11:18:53 crc kubenswrapper[4782]: E0202 11:18:53.078470 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting 
failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:18:53 crc kubenswrapper[4782]: I0202 11:18:53.676281 4782 generic.go:334] "Generic (PLEG): container finished" podID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" containerID="5d4753fce570617e864276d34772208f83d3fd6766212b5ad5f002f122bc2ca9" exitCode=0 Feb 02 11:18:53 crc kubenswrapper[4782]: I0202 11:18:53.676333 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" event={"ID":"7919e98f-cc47-4f3c-9c53-6313850ea7b8","Type":"ContainerDied","Data":"5d4753fce570617e864276d34772208f83d3fd6766212b5ad5f002f122bc2ca9"} Feb 02 11:18:53 crc kubenswrapper[4782]: I0202 11:18:53.676370 4782 scope.go:117] "RemoveContainer" containerID="f12380752e6de4f8dedc92e062f8cb6f3d5a16260278e7b8b47bff7dc97ca296" Feb 02 11:18:53 crc kubenswrapper[4782]: I0202 11:18:53.677031 4782 scope.go:117] "RemoveContainer" containerID="5d4753fce570617e864276d34772208f83d3fd6766212b5ad5f002f122bc2ca9" Feb 02 11:18:53 crc kubenswrapper[4782]: E0202 11:18:53.677313 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:18:59 crc kubenswrapper[4782]: I0202 11:18:59.727985 4782 generic.go:334] "Generic (PLEG): container finished" podID="c80c4993-adf6-44f8-a084-21920191de7f" containerID="d76ce0556a2fc0cf77f556e5511fd48a3975db4277a83cae1c4bc1a7fcfd2287" exitCode=0 Feb 02 11:18:59 crc kubenswrapper[4782]: I0202 11:18:59.728084 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-j5858" event={"ID":"c80c4993-adf6-44f8-a084-21920191de7f","Type":"ContainerDied","Data":"d76ce0556a2fc0cf77f556e5511fd48a3975db4277a83cae1c4bc1a7fcfd2287"} Feb 02 11:19:01 crc kubenswrapper[4782]: I0202 11:19:01.107686 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-j5858" Feb 02 11:19:01 crc kubenswrapper[4782]: I0202 11:19:01.194527 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c80c4993-adf6-44f8-a084-21920191de7f-ssh-key-openstack-edpm-ipam\") pod \"c80c4993-adf6-44f8-a084-21920191de7f\" (UID: \"c80c4993-adf6-44f8-a084-21920191de7f\") " Feb 02 11:19:01 crc kubenswrapper[4782]: I0202 11:19:01.194628 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jrhdg\" (UniqueName: \"kubernetes.io/projected/c80c4993-adf6-44f8-a084-21920191de7f-kube-api-access-jrhdg\") pod \"c80c4993-adf6-44f8-a084-21920191de7f\" (UID: \"c80c4993-adf6-44f8-a084-21920191de7f\") " Feb 02 11:19:01 crc kubenswrapper[4782]: I0202 11:19:01.194659 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/c80c4993-adf6-44f8-a084-21920191de7f-inventory-0\") pod \"c80c4993-adf6-44f8-a084-21920191de7f\" (UID: \"c80c4993-adf6-44f8-a084-21920191de7f\") " Feb 02 11:19:01 crc kubenswrapper[4782]: I0202 11:19:01.194702 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c80c4993-adf6-44f8-a084-21920191de7f-ceph\") pod \"c80c4993-adf6-44f8-a084-21920191de7f\" (UID: \"c80c4993-adf6-44f8-a084-21920191de7f\") " Feb 02 11:19:01 crc kubenswrapper[4782]: I0202 11:19:01.199960 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c80c4993-adf6-44f8-a084-21920191de7f-kube-api-access-jrhdg" (OuterVolumeSpecName: "kube-api-access-jrhdg") pod "c80c4993-adf6-44f8-a084-21920191de7f" (UID: "c80c4993-adf6-44f8-a084-21920191de7f"). InnerVolumeSpecName "kube-api-access-jrhdg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:19:01 crc kubenswrapper[4782]: I0202 11:19:01.200247 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c80c4993-adf6-44f8-a084-21920191de7f-ceph" (OuterVolumeSpecName: "ceph") pod "c80c4993-adf6-44f8-a084-21920191de7f" (UID: "c80c4993-adf6-44f8-a084-21920191de7f"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:19:01 crc kubenswrapper[4782]: I0202 11:19:01.224048 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c80c4993-adf6-44f8-a084-21920191de7f-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "c80c4993-adf6-44f8-a084-21920191de7f" (UID: "c80c4993-adf6-44f8-a084-21920191de7f"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:19:01 crc kubenswrapper[4782]: I0202 11:19:01.235815 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c80c4993-adf6-44f8-a084-21920191de7f-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "c80c4993-adf6-44f8-a084-21920191de7f" (UID: "c80c4993-adf6-44f8-a084-21920191de7f"). InnerVolumeSpecName "inventory-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:19:01 crc kubenswrapper[4782]: I0202 11:19:01.297056 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jrhdg\" (UniqueName: \"kubernetes.io/projected/c80c4993-adf6-44f8-a084-21920191de7f-kube-api-access-jrhdg\") on node \"crc\" DevicePath \"\"" Feb 02 11:19:01 crc kubenswrapper[4782]: I0202 11:19:01.297093 4782 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/c80c4993-adf6-44f8-a084-21920191de7f-inventory-0\") on node \"crc\" DevicePath \"\"" Feb 02 11:19:01 crc kubenswrapper[4782]: I0202 11:19:01.297109 4782 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c80c4993-adf6-44f8-a084-21920191de7f-ceph\") on node \"crc\" DevicePath \"\"" Feb 02 11:19:01 crc kubenswrapper[4782]: I0202 11:19:01.297121 4782 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c80c4993-adf6-44f8-a084-21920191de7f-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 02 11:19:01 crc kubenswrapper[4782]: I0202 11:19:01.743522 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-j5858" event={"ID":"c80c4993-adf6-44f8-a084-21920191de7f","Type":"ContainerDied","Data":"559b77b336739e3a20431c61a7983c9be76a44e129b7fd8382612c9058b5762c"} Feb 02 11:19:01 crc kubenswrapper[4782]: I0202 11:19:01.743563 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="559b77b336739e3a20431c61a7983c9be76a44e129b7fd8382612c9058b5762c" Feb 02 11:19:01 crc kubenswrapper[4782]: I0202 11:19:01.743663 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-j5858" Feb 02 11:19:01 crc kubenswrapper[4782]: I0202 11:19:01.821141 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-7pvt6"] Feb 02 11:19:01 crc kubenswrapper[4782]: E0202 11:19:01.821484 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c80c4993-adf6-44f8-a084-21920191de7f" containerName="ssh-known-hosts-edpm-deployment" Feb 02 11:19:01 crc kubenswrapper[4782]: I0202 11:19:01.821500 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="c80c4993-adf6-44f8-a084-21920191de7f" containerName="ssh-known-hosts-edpm-deployment" Feb 02 11:19:01 crc kubenswrapper[4782]: I0202 11:19:01.821696 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="c80c4993-adf6-44f8-a084-21920191de7f" containerName="ssh-known-hosts-edpm-deployment" Feb 02 11:19:01 crc kubenswrapper[4782]: I0202 11:19:01.822236 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7pvt6" Feb 02 11:19:01 crc kubenswrapper[4782]: I0202 11:19:01.824628 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Feb 02 11:19:01 crc kubenswrapper[4782]: I0202 11:19:01.824834 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jhgxt" Feb 02 11:19:01 crc kubenswrapper[4782]: I0202 11:19:01.825080 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 02 11:19:01 crc kubenswrapper[4782]: I0202 11:19:01.825222 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 02 11:19:01 crc kubenswrapper[4782]: I0202 11:19:01.825468 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 02 11:19:01 crc kubenswrapper[4782]: I0202 11:19:01.840508 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-7pvt6"] Feb 02 11:19:01 crc kubenswrapper[4782]: I0202 11:19:01.907322 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e25dd29c-ad04-40c3-a682-352af21186fe-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-7pvt6\" (UID: \"e25dd29c-ad04-40c3-a682-352af21186fe\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7pvt6" Feb 02 11:19:01 crc kubenswrapper[4782]: I0202 11:19:01.907695 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e25dd29c-ad04-40c3-a682-352af21186fe-ceph\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-7pvt6\" (UID: \"e25dd29c-ad04-40c3-a682-352af21186fe\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7pvt6" Feb 02 11:19:01 crc kubenswrapper[4782]: I0202 11:19:01.907812 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46x99\" (UniqueName: \"kubernetes.io/projected/e25dd29c-ad04-40c3-a682-352af21186fe-kube-api-access-46x99\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-7pvt6\" (UID: \"e25dd29c-ad04-40c3-a682-352af21186fe\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7pvt6" Feb 02 11:19:01 crc kubenswrapper[4782]: I0202 11:19:01.907927 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e25dd29c-ad04-40c3-a682-352af21186fe-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-7pvt6\" (UID: \"e25dd29c-ad04-40c3-a682-352af21186fe\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7pvt6" Feb 02 11:19:02 crc kubenswrapper[4782]: I0202 11:19:02.009243 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e25dd29c-ad04-40c3-a682-352af21186fe-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-7pvt6\" (UID: \"e25dd29c-ad04-40c3-a682-352af21186fe\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7pvt6" Feb 02 11:19:02 crc kubenswrapper[4782]: I0202 11:19:02.009617 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/e25dd29c-ad04-40c3-a682-352af21186fe-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-7pvt6\" (UID: \"e25dd29c-ad04-40c3-a682-352af21186fe\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7pvt6" Feb 02 11:19:02 crc kubenswrapper[4782]: I0202 11:19:02.010466 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e25dd29c-ad04-40c3-a682-352af21186fe-ceph\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-7pvt6\" (UID: \"e25dd29c-ad04-40c3-a682-352af21186fe\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7pvt6" Feb 02 11:19:02 crc kubenswrapper[4782]: I0202 11:19:02.010937 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-46x99\" (UniqueName: \"kubernetes.io/projected/e25dd29c-ad04-40c3-a682-352af21186fe-kube-api-access-46x99\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-7pvt6\" (UID: \"e25dd29c-ad04-40c3-a682-352af21186fe\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7pvt6" Feb 02 11:19:02 crc kubenswrapper[4782]: I0202 11:19:02.016601 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e25dd29c-ad04-40c3-a682-352af21186fe-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-7pvt6\" (UID: \"e25dd29c-ad04-40c3-a682-352af21186fe\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7pvt6" Feb 02 11:19:02 crc kubenswrapper[4782]: I0202 11:19:02.027583 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e25dd29c-ad04-40c3-a682-352af21186fe-ceph\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-7pvt6\" (UID: \"e25dd29c-ad04-40c3-a682-352af21186fe\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7pvt6" Feb 02 11:19:02 crc kubenswrapper[4782]: I0202 11:19:02.029381 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e25dd29c-ad04-40c3-a682-352af21186fe-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-7pvt6\" (UID: \"e25dd29c-ad04-40c3-a682-352af21186fe\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7pvt6" Feb 02 11:19:02 crc kubenswrapper[4782]: I0202 11:19:02.029926 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-46x99\" (UniqueName: \"kubernetes.io/projected/e25dd29c-ad04-40c3-a682-352af21186fe-kube-api-access-46x99\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-7pvt6\" (UID: \"e25dd29c-ad04-40c3-a682-352af21186fe\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7pvt6" Feb 02 11:19:02 crc kubenswrapper[4782]: I0202 11:19:02.140803 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7pvt6" Feb 02 11:19:02 crc kubenswrapper[4782]: I0202 11:19:02.666276 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-7pvt6"] Feb 02 11:19:02 crc kubenswrapper[4782]: I0202 11:19:02.671587 4782 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 02 11:19:02 crc kubenswrapper[4782]: I0202 11:19:02.754410 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7pvt6" event={"ID":"e25dd29c-ad04-40c3-a682-352af21186fe","Type":"ContainerStarted","Data":"d21aee011421776520f27241faf1344e53f0713b464c3544cc7018eb872d1635"} Feb 02 11:19:03 crc kubenswrapper[4782]: I0202 11:19:03.762955 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7pvt6" event={"ID":"e25dd29c-ad04-40c3-a682-352af21186fe","Type":"ContainerStarted","Data":"74053c80371c983af25f2d15b67b98314186781dec8007085ebf9e0dec406d75"} Feb 02 11:19:03 crc kubenswrapper[4782]: I0202 11:19:03.781576 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7pvt6" podStartSLOduration=2.230938424 podStartE2EDuration="2.781553751s" podCreationTimestamp="2026-02-02 11:19:01 +0000 UTC" firstStartedPulling="2026-02-02 11:19:02.671318957 +0000 UTC m=+2422.555511673" lastFinishedPulling="2026-02-02 11:19:03.221934284 +0000 UTC m=+2423.106127000" observedRunningTime="2026-02-02 11:19:03.778556935 +0000 UTC m=+2423.662749661" watchObservedRunningTime="2026-02-02 11:19:03.781553751 +0000 UTC m=+2423.665746467" Feb 02 11:19:07 crc kubenswrapper[4782]: I0202 11:19:07.822403 4782 scope.go:117] "RemoveContainer" containerID="5d4753fce570617e864276d34772208f83d3fd6766212b5ad5f002f122bc2ca9" Feb 02 11:19:07 crc kubenswrapper[4782]: E0202 11:19:07.823059 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:19:10 crc kubenswrapper[4782]: I0202 11:19:10.817426 4782 generic.go:334] "Generic (PLEG): container finished" podID="e25dd29c-ad04-40c3-a682-352af21186fe" containerID="74053c80371c983af25f2d15b67b98314186781dec8007085ebf9e0dec406d75" exitCode=0 Feb 02 11:19:10 crc kubenswrapper[4782]: I0202 11:19:10.817598 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7pvt6" event={"ID":"e25dd29c-ad04-40c3-a682-352af21186fe","Type":"ContainerDied","Data":"74053c80371c983af25f2d15b67b98314186781dec8007085ebf9e0dec406d75"} Feb 02 11:19:12 crc kubenswrapper[4782]: I0202 11:19:12.277374 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7pvt6" Feb 02 11:19:12 crc kubenswrapper[4782]: I0202 11:19:12.436950 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e25dd29c-ad04-40c3-a682-352af21186fe-inventory\") pod \"e25dd29c-ad04-40c3-a682-352af21186fe\" (UID: \"e25dd29c-ad04-40c3-a682-352af21186fe\") " Feb 02 11:19:12 crc kubenswrapper[4782]: I0202 11:19:12.437128 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e25dd29c-ad04-40c3-a682-352af21186fe-ssh-key-openstack-edpm-ipam\") pod \"e25dd29c-ad04-40c3-a682-352af21186fe\" (UID: \"e25dd29c-ad04-40c3-a682-352af21186fe\") " Feb 02 11:19:12 crc kubenswrapper[4782]: I0202 11:19:12.437189 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-46x99\" (UniqueName: \"kubernetes.io/projected/e25dd29c-ad04-40c3-a682-352af21186fe-kube-api-access-46x99\") pod \"e25dd29c-ad04-40c3-a682-352af21186fe\" (UID: \"e25dd29c-ad04-40c3-a682-352af21186fe\") " Feb 02 11:19:12 crc kubenswrapper[4782]: I0202 11:19:12.437214 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e25dd29c-ad04-40c3-a682-352af21186fe-ceph\") pod \"e25dd29c-ad04-40c3-a682-352af21186fe\" (UID: \"e25dd29c-ad04-40c3-a682-352af21186fe\") " Feb 02 11:19:12 crc kubenswrapper[4782]: I0202 11:19:12.442698 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e25dd29c-ad04-40c3-a682-352af21186fe-ceph" (OuterVolumeSpecName: "ceph") pod "e25dd29c-ad04-40c3-a682-352af21186fe" (UID: "e25dd29c-ad04-40c3-a682-352af21186fe"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:19:12 crc kubenswrapper[4782]: I0202 11:19:12.443414 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e25dd29c-ad04-40c3-a682-352af21186fe-kube-api-access-46x99" (OuterVolumeSpecName: "kube-api-access-46x99") pod "e25dd29c-ad04-40c3-a682-352af21186fe" (UID: "e25dd29c-ad04-40c3-a682-352af21186fe"). InnerVolumeSpecName "kube-api-access-46x99". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:19:12 crc kubenswrapper[4782]: I0202 11:19:12.473192 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e25dd29c-ad04-40c3-a682-352af21186fe-inventory" (OuterVolumeSpecName: "inventory") pod "e25dd29c-ad04-40c3-a682-352af21186fe" (UID: "e25dd29c-ad04-40c3-a682-352af21186fe"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:19:12 crc kubenswrapper[4782]: I0202 11:19:12.474862 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e25dd29c-ad04-40c3-a682-352af21186fe-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "e25dd29c-ad04-40c3-a682-352af21186fe" (UID: "e25dd29c-ad04-40c3-a682-352af21186fe"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:19:12 crc kubenswrapper[4782]: I0202 11:19:12.539555 4782 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e25dd29c-ad04-40c3-a682-352af21186fe-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 02 11:19:12 crc kubenswrapper[4782]: I0202 11:19:12.539918 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-46x99\" (UniqueName: \"kubernetes.io/projected/e25dd29c-ad04-40c3-a682-352af21186fe-kube-api-access-46x99\") on node \"crc\" DevicePath \"\"" Feb 02 11:19:12 crc kubenswrapper[4782]: I0202 11:19:12.540010 4782 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e25dd29c-ad04-40c3-a682-352af21186fe-ceph\") on node \"crc\" DevicePath \"\"" Feb 02 11:19:12 crc kubenswrapper[4782]: I0202 11:19:12.540093 4782 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e25dd29c-ad04-40c3-a682-352af21186fe-inventory\") on node \"crc\" DevicePath \"\"" Feb 02 11:19:12 crc kubenswrapper[4782]: I0202 11:19:12.844396 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7pvt6" event={"ID":"e25dd29c-ad04-40c3-a682-352af21186fe","Type":"ContainerDied","Data":"d21aee011421776520f27241faf1344e53f0713b464c3544cc7018eb872d1635"} Feb 02 11:19:12 crc kubenswrapper[4782]: I0202 11:19:12.844852 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d21aee011421776520f27241faf1344e53f0713b464c3544cc7018eb872d1635" Feb 02 11:19:12 crc kubenswrapper[4782]: I0202 11:19:12.844887 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7pvt6" Feb 02 11:19:13 crc kubenswrapper[4782]: I0202 11:19:13.005962 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-72nh6"] Feb 02 11:19:13 crc kubenswrapper[4782]: E0202 11:19:13.006317 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e25dd29c-ad04-40c3-a682-352af21186fe" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 02 11:19:13 crc kubenswrapper[4782]: I0202 11:19:13.006333 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="e25dd29c-ad04-40c3-a682-352af21186fe" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 02 11:19:13 crc kubenswrapper[4782]: I0202 11:19:13.006534 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="e25dd29c-ad04-40c3-a682-352af21186fe" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 02 11:19:13 crc kubenswrapper[4782]: I0202 11:19:13.007245 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-72nh6" Feb 02 11:19:13 crc kubenswrapper[4782]: I0202 11:19:13.018158 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 02 11:19:13 crc kubenswrapper[4782]: I0202 11:19:13.018212 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jhgxt" Feb 02 11:19:13 crc kubenswrapper[4782]: I0202 11:19:13.018304 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 02 11:19:13 crc kubenswrapper[4782]: I0202 11:19:13.018489 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 02 11:19:13 crc kubenswrapper[4782]: I0202 11:19:13.019187 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Feb 02 11:19:13 crc kubenswrapper[4782]: I0202 11:19:13.019395 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-72nh6"] Feb 02 11:19:13 crc kubenswrapper[4782]: I0202 11:19:13.155609 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cfbbb165-d7b2-48c8-b778-5c66afa9c34d-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-72nh6\" (UID: \"cfbbb165-d7b2-48c8-b778-5c66afa9c34d\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-72nh6" Feb 02 11:19:13 crc kubenswrapper[4782]: I0202 11:19:13.155675 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/cfbbb165-d7b2-48c8-b778-5c66afa9c34d-ceph\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-72nh6\" (UID: \"cfbbb165-d7b2-48c8-b778-5c66afa9c34d\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-72nh6" Feb 02 11:19:13 crc kubenswrapper[4782]: I0202 11:19:13.156014 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cfbbb165-d7b2-48c8-b778-5c66afa9c34d-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-72nh6\" (UID: \"cfbbb165-d7b2-48c8-b778-5c66afa9c34d\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-72nh6" Feb 02 11:19:13 crc kubenswrapper[4782]: I0202 11:19:13.156091 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-czmtw\" (UniqueName: \"kubernetes.io/projected/cfbbb165-d7b2-48c8-b778-5c66afa9c34d-kube-api-access-czmtw\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-72nh6\" (UID: \"cfbbb165-d7b2-48c8-b778-5c66afa9c34d\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-72nh6" Feb 02 11:19:13 crc kubenswrapper[4782]: I0202 11:19:13.258366 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cfbbb165-d7b2-48c8-b778-5c66afa9c34d-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-72nh6\" (UID: \"cfbbb165-d7b2-48c8-b778-5c66afa9c34d\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-72nh6" Feb 02 11:19:13 crc kubenswrapper[4782]: I0202 11:19:13.258424 4782 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-czmtw\" (UniqueName: \"kubernetes.io/projected/cfbbb165-d7b2-48c8-b778-5c66afa9c34d-kube-api-access-czmtw\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-72nh6\" (UID: \"cfbbb165-d7b2-48c8-b778-5c66afa9c34d\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-72nh6" Feb 02 11:19:13 crc kubenswrapper[4782]: I0202 11:19:13.258487 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cfbbb165-d7b2-48c8-b778-5c66afa9c34d-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-72nh6\" (UID: \"cfbbb165-d7b2-48c8-b778-5c66afa9c34d\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-72nh6" Feb 02 11:19:13 crc kubenswrapper[4782]: I0202 11:19:13.258504 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/cfbbb165-d7b2-48c8-b778-5c66afa9c34d-ceph\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-72nh6\" (UID: \"cfbbb165-d7b2-48c8-b778-5c66afa9c34d\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-72nh6" Feb 02 11:19:13 crc kubenswrapper[4782]: I0202 11:19:13.268384 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cfbbb165-d7b2-48c8-b778-5c66afa9c34d-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-72nh6\" (UID: \"cfbbb165-d7b2-48c8-b778-5c66afa9c34d\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-72nh6" Feb 02 11:19:13 crc kubenswrapper[4782]: I0202 11:19:13.268384 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cfbbb165-d7b2-48c8-b778-5c66afa9c34d-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-72nh6\" (UID: \"cfbbb165-d7b2-48c8-b778-5c66afa9c34d\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-72nh6" Feb 02 11:19:13 crc kubenswrapper[4782]: I0202 11:19:13.268815 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/cfbbb165-d7b2-48c8-b778-5c66afa9c34d-ceph\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-72nh6\" (UID: \"cfbbb165-d7b2-48c8-b778-5c66afa9c34d\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-72nh6" Feb 02 11:19:13 crc kubenswrapper[4782]: I0202 11:19:13.287071 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-czmtw\" (UniqueName: \"kubernetes.io/projected/cfbbb165-d7b2-48c8-b778-5c66afa9c34d-kube-api-access-czmtw\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-72nh6\" (UID: \"cfbbb165-d7b2-48c8-b778-5c66afa9c34d\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-72nh6" Feb 02 11:19:13 crc kubenswrapper[4782]: I0202 11:19:13.331883 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-72nh6" Feb 02 11:19:13 crc kubenswrapper[4782]: I0202 11:19:13.887278 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-72nh6"] Feb 02 11:19:14 crc kubenswrapper[4782]: I0202 11:19:14.865466 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-72nh6" event={"ID":"cfbbb165-d7b2-48c8-b778-5c66afa9c34d","Type":"ContainerStarted","Data":"26b113f966c08a0ed30f4ab74c4b07f22575994e2bd68e638614a034657ae012"} Feb 02 11:19:14 crc kubenswrapper[4782]: I0202 11:19:14.866067 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-72nh6" event={"ID":"cfbbb165-d7b2-48c8-b778-5c66afa9c34d","Type":"ContainerStarted","Data":"8ba29fe52f1f75d05e90d7384ce81ec580523b328891b7732679d01354979b82"} Feb 02 11:19:14 crc kubenswrapper[4782]: I0202 11:19:14.882082 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-72nh6" podStartSLOduration=2.289985738 podStartE2EDuration="2.882065249s" podCreationTimestamp="2026-02-02 11:19:12 +0000 UTC" firstStartedPulling="2026-02-02 11:19:13.894727312 +0000 UTC m=+2433.778920028" lastFinishedPulling="2026-02-02 11:19:14.486806833 +0000 UTC m=+2434.370999539" observedRunningTime="2026-02-02 11:19:14.88138192 +0000 UTC m=+2434.765574636" watchObservedRunningTime="2026-02-02 11:19:14.882065249 +0000 UTC m=+2434.766257965" Feb 02 11:19:20 crc kubenswrapper[4782]: I0202 11:19:20.827787 4782 scope.go:117] "RemoveContainer" containerID="5d4753fce570617e864276d34772208f83d3fd6766212b5ad5f002f122bc2ca9" Feb 02 11:19:20 crc kubenswrapper[4782]: E0202 11:19:20.829004 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:19:23 crc kubenswrapper[4782]: I0202 11:19:23.932605 4782 generic.go:334] "Generic (PLEG): container finished" podID="cfbbb165-d7b2-48c8-b778-5c66afa9c34d" containerID="26b113f966c08a0ed30f4ab74c4b07f22575994e2bd68e638614a034657ae012" exitCode=0 Feb 02 11:19:23 crc kubenswrapper[4782]: I0202 11:19:23.932678 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-72nh6" event={"ID":"cfbbb165-d7b2-48c8-b778-5c66afa9c34d","Type":"ContainerDied","Data":"26b113f966c08a0ed30f4ab74c4b07f22575994e2bd68e638614a034657ae012"} Feb 02 11:19:25 crc kubenswrapper[4782]: I0202 11:19:25.308333 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-72nh6" Feb 02 11:19:25 crc kubenswrapper[4782]: I0202 11:19:25.387625 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cfbbb165-d7b2-48c8-b778-5c66afa9c34d-inventory\") pod \"cfbbb165-d7b2-48c8-b778-5c66afa9c34d\" (UID: \"cfbbb165-d7b2-48c8-b778-5c66afa9c34d\") " Feb 02 11:19:25 crc kubenswrapper[4782]: I0202 11:19:25.387757 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cfbbb165-d7b2-48c8-b778-5c66afa9c34d-ssh-key-openstack-edpm-ipam\") pod \"cfbbb165-d7b2-48c8-b778-5c66afa9c34d\" (UID: \"cfbbb165-d7b2-48c8-b778-5c66afa9c34d\") " Feb 02 11:19:25 crc kubenswrapper[4782]: I0202 11:19:25.387916 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-czmtw\" (UniqueName: \"kubernetes.io/projected/cfbbb165-d7b2-48c8-b778-5c66afa9c34d-kube-api-access-czmtw\") pod \"cfbbb165-d7b2-48c8-b778-5c66afa9c34d\" (UID: \"cfbbb165-d7b2-48c8-b778-5c66afa9c34d\") " Feb 02 11:19:25 crc kubenswrapper[4782]: I0202 11:19:25.387962 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/cfbbb165-d7b2-48c8-b778-5c66afa9c34d-ceph\") pod \"cfbbb165-d7b2-48c8-b778-5c66afa9c34d\" (UID: \"cfbbb165-d7b2-48c8-b778-5c66afa9c34d\") " Feb 02 11:19:25 crc kubenswrapper[4782]: I0202 11:19:25.399845 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cfbbb165-d7b2-48c8-b778-5c66afa9c34d-ceph" (OuterVolumeSpecName: "ceph") pod "cfbbb165-d7b2-48c8-b778-5c66afa9c34d" (UID: "cfbbb165-d7b2-48c8-b778-5c66afa9c34d"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:19:25 crc kubenswrapper[4782]: I0202 11:19:25.400036 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cfbbb165-d7b2-48c8-b778-5c66afa9c34d-kube-api-access-czmtw" (OuterVolumeSpecName: "kube-api-access-czmtw") pod "cfbbb165-d7b2-48c8-b778-5c66afa9c34d" (UID: "cfbbb165-d7b2-48c8-b778-5c66afa9c34d"). InnerVolumeSpecName "kube-api-access-czmtw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:19:25 crc kubenswrapper[4782]: I0202 11:19:25.417150 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cfbbb165-d7b2-48c8-b778-5c66afa9c34d-inventory" (OuterVolumeSpecName: "inventory") pod "cfbbb165-d7b2-48c8-b778-5c66afa9c34d" (UID: "cfbbb165-d7b2-48c8-b778-5c66afa9c34d"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:19:25 crc kubenswrapper[4782]: I0202 11:19:25.422620 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cfbbb165-d7b2-48c8-b778-5c66afa9c34d-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "cfbbb165-d7b2-48c8-b778-5c66afa9c34d" (UID: "cfbbb165-d7b2-48c8-b778-5c66afa9c34d"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:19:25 crc kubenswrapper[4782]: I0202 11:19:25.490376 4782 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cfbbb165-d7b2-48c8-b778-5c66afa9c34d-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 02 11:19:25 crc kubenswrapper[4782]: I0202 11:19:25.490441 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-czmtw\" (UniqueName: \"kubernetes.io/projected/cfbbb165-d7b2-48c8-b778-5c66afa9c34d-kube-api-access-czmtw\") on node \"crc\" DevicePath \"\"" Feb 02 11:19:25 crc kubenswrapper[4782]: I0202 11:19:25.490462 4782 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/cfbbb165-d7b2-48c8-b778-5c66afa9c34d-ceph\") on node \"crc\" DevicePath \"\"" Feb 02 11:19:25 crc kubenswrapper[4782]: I0202 11:19:25.490474 4782 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cfbbb165-d7b2-48c8-b778-5c66afa9c34d-inventory\") on node \"crc\" DevicePath \"\"" Feb 02 11:19:25 crc kubenswrapper[4782]: I0202 11:19:25.948568 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-72nh6" event={"ID":"cfbbb165-d7b2-48c8-b778-5c66afa9c34d","Type":"ContainerDied","Data":"8ba29fe52f1f75d05e90d7384ce81ec580523b328891b7732679d01354979b82"} Feb 02 11:19:25 crc kubenswrapper[4782]: I0202 11:19:25.948877 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8ba29fe52f1f75d05e90d7384ce81ec580523b328891b7732679d01354979b82" Feb 02 11:19:25 crc kubenswrapper[4782]: I0202 11:19:25.948617 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-72nh6" Feb 02 11:19:26 crc kubenswrapper[4782]: I0202 11:19:26.042448 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4jg96"] Feb 02 11:19:26 crc kubenswrapper[4782]: E0202 11:19:26.043944 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfbbb165-d7b2-48c8-b778-5c66afa9c34d" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 02 11:19:26 crc kubenswrapper[4782]: I0202 11:19:26.043969 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfbbb165-d7b2-48c8-b778-5c66afa9c34d" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 02 11:19:26 crc kubenswrapper[4782]: I0202 11:19:26.044155 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="cfbbb165-d7b2-48c8-b778-5c66afa9c34d" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 02 11:19:26 crc kubenswrapper[4782]: I0202 11:19:26.045188 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4jg96" Feb 02 11:19:26 crc kubenswrapper[4782]: I0202 11:19:26.048820 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Feb 02 11:19:26 crc kubenswrapper[4782]: I0202 11:19:26.049518 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Feb 02 11:19:26 crc kubenswrapper[4782]: I0202 11:19:26.051420 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 02 11:19:26 crc kubenswrapper[4782]: I0202 11:19:26.051601 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jhgxt" Feb 02 11:19:26 crc kubenswrapper[4782]: I0202 11:19:26.051725 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Feb 02 11:19:26 crc kubenswrapper[4782]: I0202 11:19:26.051899 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Feb 02 11:19:26 crc kubenswrapper[4782]: I0202 11:19:26.051906 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 02 11:19:26 crc kubenswrapper[4782]: I0202 11:19:26.052380 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 02 11:19:26 crc kubenswrapper[4782]: I0202 11:19:26.076461 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4jg96"] Feb 02 11:19:26 crc kubenswrapper[4782]: I0202 11:19:26.202564 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ae3151c2-1646-4d94-93d0-df34ad53d344-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4jg96\" (UID: \"ae3151c2-1646-4d94-93d0-df34ad53d344\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4jg96" Feb 02 11:19:26 crc kubenswrapper[4782]: I0202 11:19:26.202613 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae3151c2-1646-4d94-93d0-df34ad53d344-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4jg96\" (UID: \"ae3151c2-1646-4d94-93d0-df34ad53d344\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4jg96" Feb 02 11:19:26 crc kubenswrapper[4782]: I0202 11:19:26.202936 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae3151c2-1646-4d94-93d0-df34ad53d344-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4jg96\" (UID: \"ae3151c2-1646-4d94-93d0-df34ad53d344\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4jg96" Feb 02 11:19:26 crc kubenswrapper[4782]: I0202 11:19:26.203120 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae3151c2-1646-4d94-93d0-df34ad53d344-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4jg96\" (UID: \"ae3151c2-1646-4d94-93d0-df34ad53d344\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4jg96" Feb 02 11:19:26 crc kubenswrapper[4782]: I0202 11:19:26.203163 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6npc\" (UniqueName: \"kubernetes.io/projected/ae3151c2-1646-4d94-93d0-df34ad53d344-kube-api-access-b6npc\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4jg96\" (UID: \"ae3151c2-1646-4d94-93d0-df34ad53d344\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4jg96" Feb 02 11:19:26 crc kubenswrapper[4782]: I0202 11:19:26.203264 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ae3151c2-1646-4d94-93d0-df34ad53d344-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4jg96\" (UID: \"ae3151c2-1646-4d94-93d0-df34ad53d344\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4jg96" Feb 02 11:19:26 crc kubenswrapper[4782]: I0202 11:19:26.203367 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae3151c2-1646-4d94-93d0-df34ad53d344-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4jg96\" (UID: \"ae3151c2-1646-4d94-93d0-df34ad53d344\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4jg96" Feb 02 11:19:26 crc kubenswrapper[4782]: I0202 11:19:26.203405 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ae3151c2-1646-4d94-93d0-df34ad53d344-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4jg96\" (UID: \"ae3151c2-1646-4d94-93d0-df34ad53d344\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4jg96" Feb 02 11:19:26 crc kubenswrapper[4782]: I0202 11:19:26.203483 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ae3151c2-1646-4d94-93d0-df34ad53d344-ceph\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4jg96\" (UID: \"ae3151c2-1646-4d94-93d0-df34ad53d344\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4jg96" Feb 02 11:19:26 crc kubenswrapper[4782]: I0202 11:19:26.203518 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae3151c2-1646-4d94-93d0-df34ad53d344-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4jg96\" (UID: \"ae3151c2-1646-4d94-93d0-df34ad53d344\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4jg96" Feb 02 11:19:26 crc kubenswrapper[4782]: I0202 11:19:26.203540 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae3151c2-1646-4d94-93d0-df34ad53d344-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4jg96\" (UID: \"ae3151c2-1646-4d94-93d0-df34ad53d344\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4jg96" Feb 02 11:19:26 crc kubenswrapper[4782]: I0202 
11:19:26.203651 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ae3151c2-1646-4d94-93d0-df34ad53d344-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4jg96\" (UID: \"ae3151c2-1646-4d94-93d0-df34ad53d344\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4jg96" Feb 02 11:19:26 crc kubenswrapper[4782]: I0202 11:19:26.203708 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ae3151c2-1646-4d94-93d0-df34ad53d344-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4jg96\" (UID: \"ae3151c2-1646-4d94-93d0-df34ad53d344\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4jg96" Feb 02 11:19:26 crc kubenswrapper[4782]: I0202 11:19:26.305817 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ae3151c2-1646-4d94-93d0-df34ad53d344-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4jg96\" (UID: \"ae3151c2-1646-4d94-93d0-df34ad53d344\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4jg96" Feb 02 11:19:26 crc kubenswrapper[4782]: I0202 11:19:26.305862 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae3151c2-1646-4d94-93d0-df34ad53d344-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4jg96\" (UID: \"ae3151c2-1646-4d94-93d0-df34ad53d344\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4jg96" Feb 02 11:19:26 crc kubenswrapper[4782]: I0202 11:19:26.305904 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae3151c2-1646-4d94-93d0-df34ad53d344-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4jg96\" (UID: \"ae3151c2-1646-4d94-93d0-df34ad53d344\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4jg96" Feb 02 11:19:26 crc kubenswrapper[4782]: I0202 11:19:26.305948 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae3151c2-1646-4d94-93d0-df34ad53d344-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4jg96\" (UID: \"ae3151c2-1646-4d94-93d0-df34ad53d344\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4jg96" Feb 02 11:19:26 crc kubenswrapper[4782]: I0202 11:19:26.305967 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6npc\" (UniqueName: \"kubernetes.io/projected/ae3151c2-1646-4d94-93d0-df34ad53d344-kube-api-access-b6npc\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4jg96\" (UID: \"ae3151c2-1646-4d94-93d0-df34ad53d344\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4jg96" Feb 02 11:19:26 crc kubenswrapper[4782]: I0202 11:19:26.305995 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/ae3151c2-1646-4d94-93d0-df34ad53d344-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4jg96\" (UID: \"ae3151c2-1646-4d94-93d0-df34ad53d344\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4jg96" Feb 02 11:19:26 crc kubenswrapper[4782]: I0202 11:19:26.306023 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae3151c2-1646-4d94-93d0-df34ad53d344-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4jg96\" (UID: \"ae3151c2-1646-4d94-93d0-df34ad53d344\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4jg96" Feb 02 11:19:26 crc kubenswrapper[4782]: I0202 11:19:26.306048 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ae3151c2-1646-4d94-93d0-df34ad53d344-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4jg96\" (UID: \"ae3151c2-1646-4d94-93d0-df34ad53d344\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4jg96" Feb 02 11:19:26 crc kubenswrapper[4782]: I0202 11:19:26.306079 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ae3151c2-1646-4d94-93d0-df34ad53d344-ceph\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4jg96\" (UID: \"ae3151c2-1646-4d94-93d0-df34ad53d344\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4jg96" Feb 02 11:19:26 crc kubenswrapper[4782]: I0202 11:19:26.306099 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae3151c2-1646-4d94-93d0-df34ad53d344-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4jg96\" (UID: \"ae3151c2-1646-4d94-93d0-df34ad53d344\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4jg96" Feb 02 11:19:26 crc kubenswrapper[4782]: I0202 11:19:26.306125 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae3151c2-1646-4d94-93d0-df34ad53d344-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4jg96\" (UID: \"ae3151c2-1646-4d94-93d0-df34ad53d344\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4jg96" Feb 02 11:19:26 crc kubenswrapper[4782]: I0202 11:19:26.306162 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ae3151c2-1646-4d94-93d0-df34ad53d344-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4jg96\" (UID: \"ae3151c2-1646-4d94-93d0-df34ad53d344\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4jg96" Feb 02 11:19:26 crc kubenswrapper[4782]: I0202 11:19:26.306184 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ae3151c2-1646-4d94-93d0-df34ad53d344-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4jg96\" (UID: \"ae3151c2-1646-4d94-93d0-df34ad53d344\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4jg96" Feb 02 11:19:26 crc kubenswrapper[4782]: I0202 11:19:26.312203 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ae3151c2-1646-4d94-93d0-df34ad53d344-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4jg96\" (UID: \"ae3151c2-1646-4d94-93d0-df34ad53d344\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4jg96" Feb 02 11:19:26 crc kubenswrapper[4782]: I0202 11:19:26.313852 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae3151c2-1646-4d94-93d0-df34ad53d344-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4jg96\" (UID: \"ae3151c2-1646-4d94-93d0-df34ad53d344\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4jg96" Feb 02 11:19:26 crc kubenswrapper[4782]: I0202 11:19:26.316341 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ae3151c2-1646-4d94-93d0-df34ad53d344-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4jg96\" (UID: \"ae3151c2-1646-4d94-93d0-df34ad53d344\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4jg96" Feb 02 11:19:26 crc kubenswrapper[4782]: I0202 11:19:26.318542 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae3151c2-1646-4d94-93d0-df34ad53d344-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4jg96\" (UID: \"ae3151c2-1646-4d94-93d0-df34ad53d344\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4jg96" Feb 02 11:19:26 crc kubenswrapper[4782]: I0202 11:19:26.318808 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae3151c2-1646-4d94-93d0-df34ad53d344-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4jg96\" (UID: \"ae3151c2-1646-4d94-93d0-df34ad53d344\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4jg96" Feb 02 11:19:26 crc kubenswrapper[4782]: I0202 11:19:26.319611 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ae3151c2-1646-4d94-93d0-df34ad53d344-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4jg96\" (UID: \"ae3151c2-1646-4d94-93d0-df34ad53d344\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4jg96" Feb 02 11:19:26 crc kubenswrapper[4782]: I0202 11:19:26.319720 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ae3151c2-1646-4d94-93d0-df34ad53d344-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4jg96\" (UID: \"ae3151c2-1646-4d94-93d0-df34ad53d344\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4jg96" Feb 02 11:19:26 crc kubenswrapper[4782]: I0202 11:19:26.319904 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/ae3151c2-1646-4d94-93d0-df34ad53d344-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4jg96\" (UID: \"ae3151c2-1646-4d94-93d0-df34ad53d344\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4jg96" Feb 02 11:19:26 crc kubenswrapper[4782]: I0202 11:19:26.319909 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae3151c2-1646-4d94-93d0-df34ad53d344-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4jg96\" (UID: \"ae3151c2-1646-4d94-93d0-df34ad53d344\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4jg96" Feb 02 11:19:26 crc kubenswrapper[4782]: I0202 11:19:26.321180 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ae3151c2-1646-4d94-93d0-df34ad53d344-ceph\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4jg96\" (UID: \"ae3151c2-1646-4d94-93d0-df34ad53d344\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4jg96" Feb 02 11:19:26 crc kubenswrapper[4782]: I0202 11:19:26.321188 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ae3151c2-1646-4d94-93d0-df34ad53d344-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4jg96\" (UID: \"ae3151c2-1646-4d94-93d0-df34ad53d344\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4jg96" Feb 02 11:19:26 crc kubenswrapper[4782]: I0202 11:19:26.324729 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6npc\" (UniqueName: \"kubernetes.io/projected/ae3151c2-1646-4d94-93d0-df34ad53d344-kube-api-access-b6npc\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4jg96\" (UID: \"ae3151c2-1646-4d94-93d0-df34ad53d344\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4jg96" Feb 02 11:19:26 crc kubenswrapper[4782]: I0202 11:19:26.325021 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae3151c2-1646-4d94-93d0-df34ad53d344-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4jg96\" (UID: \"ae3151c2-1646-4d94-93d0-df34ad53d344\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4jg96" Feb 02 11:19:26 crc kubenswrapper[4782]: I0202 11:19:26.364493 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4jg96" Feb 02 11:19:26 crc kubenswrapper[4782]: I0202 11:19:26.877325 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4jg96"] Feb 02 11:19:26 crc kubenswrapper[4782]: I0202 11:19:26.959924 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4jg96" event={"ID":"ae3151c2-1646-4d94-93d0-df34ad53d344","Type":"ContainerStarted","Data":"e508ea928e1afeacf63b3b4096d6a89ed38e56b8b7c800b3a69b6eb8ab4fdf4e"} Feb 02 11:19:28 crc kubenswrapper[4782]: I0202 11:19:28.978166 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4jg96" event={"ID":"ae3151c2-1646-4d94-93d0-df34ad53d344","Type":"ContainerStarted","Data":"98ca67596d9a2cdbd4c2ea4268ddb6b30d3462bd3ba6c9c09fa209503619836c"} Feb 02 11:19:29 crc kubenswrapper[4782]: I0202 11:19:29.003118 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4jg96" podStartSLOduration=2.016322773 podStartE2EDuration="3.003093804s" podCreationTimestamp="2026-02-02 11:19:26 +0000 UTC" firstStartedPulling="2026-02-02 11:19:26.885939629 +0000 UTC m=+2446.770132345" lastFinishedPulling="2026-02-02 11:19:27.87271065 +0000 UTC m=+2447.756903376" observedRunningTime="2026-02-02 11:19:28.998827921 +0000 UTC m=+2448.883020637" watchObservedRunningTime="2026-02-02 11:19:29.003093804 +0000 UTC m=+2448.887286540" Feb 02 11:19:35 crc kubenswrapper[4782]: I0202 11:19:35.821285 4782 scope.go:117] "RemoveContainer" containerID="5d4753fce570617e864276d34772208f83d3fd6766212b5ad5f002f122bc2ca9" Feb 02 11:19:35 crc kubenswrapper[4782]: E0202 11:19:35.822100 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:19:47 crc kubenswrapper[4782]: I0202 11:19:47.821724 4782 scope.go:117] "RemoveContainer" containerID="5d4753fce570617e864276d34772208f83d3fd6766212b5ad5f002f122bc2ca9" Feb 02 11:19:47 crc kubenswrapper[4782]: E0202 11:19:47.822541 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:19:57 crc kubenswrapper[4782]: I0202 11:19:57.188843 4782 generic.go:334] "Generic (PLEG): container finished" podID="ae3151c2-1646-4d94-93d0-df34ad53d344" containerID="98ca67596d9a2cdbd4c2ea4268ddb6b30d3462bd3ba6c9c09fa209503619836c" exitCode=0 Feb 02 11:19:57 crc kubenswrapper[4782]: I0202 11:19:57.188922 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4jg96" 
event={"ID":"ae3151c2-1646-4d94-93d0-df34ad53d344","Type":"ContainerDied","Data":"98ca67596d9a2cdbd4c2ea4268ddb6b30d3462bd3ba6c9c09fa209503619836c"} Feb 02 11:19:58 crc kubenswrapper[4782]: I0202 11:19:58.607260 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4jg96" Feb 02 11:19:58 crc kubenswrapper[4782]: I0202 11:19:58.768158 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ae3151c2-1646-4d94-93d0-df34ad53d344-ssh-key-openstack-edpm-ipam\") pod \"ae3151c2-1646-4d94-93d0-df34ad53d344\" (UID: \"ae3151c2-1646-4d94-93d0-df34ad53d344\") " Feb 02 11:19:58 crc kubenswrapper[4782]: I0202 11:19:58.768225 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae3151c2-1646-4d94-93d0-df34ad53d344-repo-setup-combined-ca-bundle\") pod \"ae3151c2-1646-4d94-93d0-df34ad53d344\" (UID: \"ae3151c2-1646-4d94-93d0-df34ad53d344\") " Feb 02 11:19:58 crc kubenswrapper[4782]: I0202 11:19:58.769091 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ae3151c2-1646-4d94-93d0-df34ad53d344-inventory\") pod \"ae3151c2-1646-4d94-93d0-df34ad53d344\" (UID: \"ae3151c2-1646-4d94-93d0-df34ad53d344\") " Feb 02 11:19:58 crc kubenswrapper[4782]: I0202 11:19:58.769136 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae3151c2-1646-4d94-93d0-df34ad53d344-libvirt-combined-ca-bundle\") pod \"ae3151c2-1646-4d94-93d0-df34ad53d344\" (UID: \"ae3151c2-1646-4d94-93d0-df34ad53d344\") " Feb 02 11:19:58 crc kubenswrapper[4782]: I0202 11:19:58.769153 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae3151c2-1646-4d94-93d0-df34ad53d344-nova-combined-ca-bundle\") pod \"ae3151c2-1646-4d94-93d0-df34ad53d344\" (UID: \"ae3151c2-1646-4d94-93d0-df34ad53d344\") " Feb 02 11:19:58 crc kubenswrapper[4782]: I0202 11:19:58.769176 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae3151c2-1646-4d94-93d0-df34ad53d344-bootstrap-combined-ca-bundle\") pod \"ae3151c2-1646-4d94-93d0-df34ad53d344\" (UID: \"ae3151c2-1646-4d94-93d0-df34ad53d344\") " Feb 02 11:19:58 crc kubenswrapper[4782]: I0202 11:19:58.769205 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ae3151c2-1646-4d94-93d0-df34ad53d344-ceph\") pod \"ae3151c2-1646-4d94-93d0-df34ad53d344\" (UID: \"ae3151c2-1646-4d94-93d0-df34ad53d344\") " Feb 02 11:19:58 crc kubenswrapper[4782]: I0202 11:19:58.769329 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ae3151c2-1646-4d94-93d0-df34ad53d344-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"ae3151c2-1646-4d94-93d0-df34ad53d344\" (UID: \"ae3151c2-1646-4d94-93d0-df34ad53d344\") " Feb 02 11:19:58 crc kubenswrapper[4782]: I0202 11:19:58.769350 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/ae3151c2-1646-4d94-93d0-df34ad53d344-openstack-edpm-ipam-ovn-default-certs-0\") pod \"ae3151c2-1646-4d94-93d0-df34ad53d344\" (UID: \"ae3151c2-1646-4d94-93d0-df34ad53d344\") " Feb 02 11:19:58 crc kubenswrapper[4782]: I0202 11:19:58.769373 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae3151c2-1646-4d94-93d0-df34ad53d344-neutron-metadata-combined-ca-bundle\") pod \"ae3151c2-1646-4d94-93d0-df34ad53d344\" (UID: \"ae3151c2-1646-4d94-93d0-df34ad53d344\") " Feb 02 11:19:58 crc kubenswrapper[4782]: I0202 11:19:58.769404 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b6npc\" (UniqueName: \"kubernetes.io/projected/ae3151c2-1646-4d94-93d0-df34ad53d344-kube-api-access-b6npc\") pod \"ae3151c2-1646-4d94-93d0-df34ad53d344\" (UID: \"ae3151c2-1646-4d94-93d0-df34ad53d344\") " Feb 02 11:19:58 crc kubenswrapper[4782]: I0202 11:19:58.769485 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae3151c2-1646-4d94-93d0-df34ad53d344-ovn-combined-ca-bundle\") pod \"ae3151c2-1646-4d94-93d0-df34ad53d344\" (UID: \"ae3151c2-1646-4d94-93d0-df34ad53d344\") " Feb 02 11:19:58 crc kubenswrapper[4782]: I0202 11:19:58.769534 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ae3151c2-1646-4d94-93d0-df34ad53d344-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"ae3151c2-1646-4d94-93d0-df34ad53d344\" (UID: \"ae3151c2-1646-4d94-93d0-df34ad53d344\") " Feb 02 11:19:58 crc kubenswrapper[4782]: I0202 11:19:58.775417 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae3151c2-1646-4d94-93d0-df34ad53d344-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "ae3151c2-1646-4d94-93d0-df34ad53d344" (UID: "ae3151c2-1646-4d94-93d0-df34ad53d344"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:19:58 crc kubenswrapper[4782]: I0202 11:19:58.775565 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae3151c2-1646-4d94-93d0-df34ad53d344-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "ae3151c2-1646-4d94-93d0-df34ad53d344" (UID: "ae3151c2-1646-4d94-93d0-df34ad53d344"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:19:58 crc kubenswrapper[4782]: I0202 11:19:58.776231 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae3151c2-1646-4d94-93d0-df34ad53d344-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "ae3151c2-1646-4d94-93d0-df34ad53d344" (UID: "ae3151c2-1646-4d94-93d0-df34ad53d344"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:19:58 crc kubenswrapper[4782]: I0202 11:19:58.776281 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae3151c2-1646-4d94-93d0-df34ad53d344-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "ae3151c2-1646-4d94-93d0-df34ad53d344" (UID: "ae3151c2-1646-4d94-93d0-df34ad53d344"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:19:58 crc kubenswrapper[4782]: I0202 11:19:58.777851 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae3151c2-1646-4d94-93d0-df34ad53d344-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "ae3151c2-1646-4d94-93d0-df34ad53d344" (UID: "ae3151c2-1646-4d94-93d0-df34ad53d344"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:19:58 crc kubenswrapper[4782]: I0202 11:19:58.778446 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae3151c2-1646-4d94-93d0-df34ad53d344-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "ae3151c2-1646-4d94-93d0-df34ad53d344" (UID: "ae3151c2-1646-4d94-93d0-df34ad53d344"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:19:58 crc kubenswrapper[4782]: I0202 11:19:58.778530 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae3151c2-1646-4d94-93d0-df34ad53d344-kube-api-access-b6npc" (OuterVolumeSpecName: "kube-api-access-b6npc") pod "ae3151c2-1646-4d94-93d0-df34ad53d344" (UID: "ae3151c2-1646-4d94-93d0-df34ad53d344"). InnerVolumeSpecName "kube-api-access-b6npc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:19:58 crc kubenswrapper[4782]: I0202 11:19:58.779160 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae3151c2-1646-4d94-93d0-df34ad53d344-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "ae3151c2-1646-4d94-93d0-df34ad53d344" (UID: "ae3151c2-1646-4d94-93d0-df34ad53d344"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:19:58 crc kubenswrapper[4782]: I0202 11:19:58.780872 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae3151c2-1646-4d94-93d0-df34ad53d344-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "ae3151c2-1646-4d94-93d0-df34ad53d344" (UID: "ae3151c2-1646-4d94-93d0-df34ad53d344"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:19:58 crc kubenswrapper[4782]: I0202 11:19:58.781994 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae3151c2-1646-4d94-93d0-df34ad53d344-ceph" (OuterVolumeSpecName: "ceph") pod "ae3151c2-1646-4d94-93d0-df34ad53d344" (UID: "ae3151c2-1646-4d94-93d0-df34ad53d344"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:19:58 crc kubenswrapper[4782]: I0202 11:19:58.803372 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae3151c2-1646-4d94-93d0-df34ad53d344-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "ae3151c2-1646-4d94-93d0-df34ad53d344" (UID: "ae3151c2-1646-4d94-93d0-df34ad53d344"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:19:58 crc kubenswrapper[4782]: I0202 11:19:58.802604 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae3151c2-1646-4d94-93d0-df34ad53d344-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "ae3151c2-1646-4d94-93d0-df34ad53d344" (UID: "ae3151c2-1646-4d94-93d0-df34ad53d344"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:19:58 crc kubenswrapper[4782]: I0202 11:19:58.827300 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae3151c2-1646-4d94-93d0-df34ad53d344-inventory" (OuterVolumeSpecName: "inventory") pod "ae3151c2-1646-4d94-93d0-df34ad53d344" (UID: "ae3151c2-1646-4d94-93d0-df34ad53d344"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:19:58 crc kubenswrapper[4782]: I0202 11:19:58.871615 4782 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ae3151c2-1646-4d94-93d0-df34ad53d344-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 02 11:19:58 crc kubenswrapper[4782]: I0202 11:19:58.871669 4782 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ae3151c2-1646-4d94-93d0-df34ad53d344-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 02 11:19:58 crc kubenswrapper[4782]: I0202 11:19:58.871680 4782 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae3151c2-1646-4d94-93d0-df34ad53d344-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 11:19:58 crc kubenswrapper[4782]: I0202 11:19:58.871690 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b6npc\" (UniqueName: \"kubernetes.io/projected/ae3151c2-1646-4d94-93d0-df34ad53d344-kube-api-access-b6npc\") on node \"crc\" DevicePath \"\"" Feb 02 11:19:58 crc kubenswrapper[4782]: I0202 11:19:58.871699 4782 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae3151c2-1646-4d94-93d0-df34ad53d344-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 11:19:58 crc kubenswrapper[4782]: I0202 11:19:58.871708 4782 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ae3151c2-1646-4d94-93d0-df34ad53d344-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 02 11:19:58 crc kubenswrapper[4782]: I0202 11:19:58.871718 4782 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/ae3151c2-1646-4d94-93d0-df34ad53d344-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 02 11:19:58 crc kubenswrapper[4782]: I0202 11:19:58.871728 4782 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae3151c2-1646-4d94-93d0-df34ad53d344-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 11:19:58 crc kubenswrapper[4782]: I0202 11:19:58.871738 4782 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ae3151c2-1646-4d94-93d0-df34ad53d344-inventory\") on node \"crc\" DevicePath \"\"" Feb 02 11:19:58 crc kubenswrapper[4782]: I0202 11:19:58.871746 4782 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae3151c2-1646-4d94-93d0-df34ad53d344-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 11:19:58 crc kubenswrapper[4782]: I0202 11:19:58.871755 4782 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae3151c2-1646-4d94-93d0-df34ad53d344-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 11:19:58 crc kubenswrapper[4782]: I0202 11:19:58.871763 4782 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae3151c2-1646-4d94-93d0-df34ad53d344-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 11:19:58 crc kubenswrapper[4782]: I0202 11:19:58.871775 4782 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ae3151c2-1646-4d94-93d0-df34ad53d344-ceph\") on node \"crc\" DevicePath \"\"" Feb 02 11:19:59 crc kubenswrapper[4782]: I0202 11:19:59.207100 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4jg96" event={"ID":"ae3151c2-1646-4d94-93d0-df34ad53d344","Type":"ContainerDied","Data":"e508ea928e1afeacf63b3b4096d6a89ed38e56b8b7c800b3a69b6eb8ab4fdf4e"} Feb 02 11:19:59 crc kubenswrapper[4782]: I0202 11:19:59.207133 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e508ea928e1afeacf63b3b4096d6a89ed38e56b8b7c800b3a69b6eb8ab4fdf4e" Feb 02 11:19:59 crc kubenswrapper[4782]: I0202 11:19:59.207141 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4jg96" Feb 02 11:19:59 crc kubenswrapper[4782]: I0202 11:19:59.449286 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-s65zb"] Feb 02 11:19:59 crc kubenswrapper[4782]: E0202 11:19:59.449749 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae3151c2-1646-4d94-93d0-df34ad53d344" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 02 11:19:59 crc kubenswrapper[4782]: I0202 11:19:59.449773 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae3151c2-1646-4d94-93d0-df34ad53d344" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 02 11:19:59 crc kubenswrapper[4782]: I0202 11:19:59.449968 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae3151c2-1646-4d94-93d0-df34ad53d344" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 02 11:19:59 crc kubenswrapper[4782]: I0202 11:19:59.450741 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-s65zb" Feb 02 11:19:59 crc kubenswrapper[4782]: I0202 11:19:59.454038 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 02 11:19:59 crc kubenswrapper[4782]: I0202 11:19:59.454444 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 02 11:19:59 crc kubenswrapper[4782]: I0202 11:19:59.454733 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 02 11:19:59 crc kubenswrapper[4782]: I0202 11:19:59.454753 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jhgxt" Feb 02 11:19:59 crc kubenswrapper[4782]: I0202 11:19:59.462264 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Feb 02 11:19:59 crc kubenswrapper[4782]: I0202 11:19:59.480896 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-s65zb"] Feb 02 11:19:59 crc kubenswrapper[4782]: I0202 11:19:59.481039 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c0c31114-71d7-4d0b-9ad7-74945ed819e3-ssh-key-openstack-edpm-ipam\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-s65zb\" (UID: \"c0c31114-71d7-4d0b-9ad7-74945ed819e3\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-s65zb" Feb 02 11:19:59 crc kubenswrapper[4782]: I0202 11:19:59.481097 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9flfc\" (UniqueName: \"kubernetes.io/projected/c0c31114-71d7-4d0b-9ad7-74945ed819e3-kube-api-access-9flfc\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-s65zb\" (UID: \"c0c31114-71d7-4d0b-9ad7-74945ed819e3\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-s65zb" Feb 02 11:19:59 crc kubenswrapper[4782]: I0202 11:19:59.481210 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c0c31114-71d7-4d0b-9ad7-74945ed819e3-ceph\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-s65zb\" (UID: 
\"c0c31114-71d7-4d0b-9ad7-74945ed819e3\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-s65zb" Feb 02 11:19:59 crc kubenswrapper[4782]: I0202 11:19:59.481256 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c0c31114-71d7-4d0b-9ad7-74945ed819e3-inventory\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-s65zb\" (UID: \"c0c31114-71d7-4d0b-9ad7-74945ed819e3\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-s65zb" Feb 02 11:19:59 crc kubenswrapper[4782]: I0202 11:19:59.582935 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c0c31114-71d7-4d0b-9ad7-74945ed819e3-inventory\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-s65zb\" (UID: \"c0c31114-71d7-4d0b-9ad7-74945ed819e3\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-s65zb" Feb 02 11:19:59 crc kubenswrapper[4782]: I0202 11:19:59.583191 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c0c31114-71d7-4d0b-9ad7-74945ed819e3-ssh-key-openstack-edpm-ipam\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-s65zb\" (UID: \"c0c31114-71d7-4d0b-9ad7-74945ed819e3\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-s65zb" Feb 02 11:19:59 crc kubenswrapper[4782]: I0202 11:19:59.583306 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9flfc\" (UniqueName: \"kubernetes.io/projected/c0c31114-71d7-4d0b-9ad7-74945ed819e3-kube-api-access-9flfc\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-s65zb\" (UID: \"c0c31114-71d7-4d0b-9ad7-74945ed819e3\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-s65zb" Feb 02 11:19:59 crc kubenswrapper[4782]: I0202 11:19:59.583469 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c0c31114-71d7-4d0b-9ad7-74945ed819e3-ceph\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-s65zb\" (UID: \"c0c31114-71d7-4d0b-9ad7-74945ed819e3\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-s65zb" Feb 02 11:19:59 crc kubenswrapper[4782]: I0202 11:19:59.591415 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c0c31114-71d7-4d0b-9ad7-74945ed819e3-ceph\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-s65zb\" (UID: \"c0c31114-71d7-4d0b-9ad7-74945ed819e3\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-s65zb" Feb 02 11:19:59 crc kubenswrapper[4782]: I0202 11:19:59.591442 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c0c31114-71d7-4d0b-9ad7-74945ed819e3-inventory\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-s65zb\" (UID: \"c0c31114-71d7-4d0b-9ad7-74945ed819e3\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-s65zb" Feb 02 11:19:59 crc kubenswrapper[4782]: I0202 11:19:59.594103 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c0c31114-71d7-4d0b-9ad7-74945ed819e3-ssh-key-openstack-edpm-ipam\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-s65zb\" (UID: \"c0c31114-71d7-4d0b-9ad7-74945ed819e3\") " 
pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-s65zb" Feb 02 11:19:59 crc kubenswrapper[4782]: I0202 11:19:59.617356 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9flfc\" (UniqueName: \"kubernetes.io/projected/c0c31114-71d7-4d0b-9ad7-74945ed819e3-kube-api-access-9flfc\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-s65zb\" (UID: \"c0c31114-71d7-4d0b-9ad7-74945ed819e3\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-s65zb" Feb 02 11:19:59 crc kubenswrapper[4782]: I0202 11:19:59.765662 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-s65zb" Feb 02 11:20:00 crc kubenswrapper[4782]: I0202 11:20:00.307481 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-s65zb"] Feb 02 11:20:01 crc kubenswrapper[4782]: I0202 11:20:01.232292 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-s65zb" event={"ID":"c0c31114-71d7-4d0b-9ad7-74945ed819e3","Type":"ContainerStarted","Data":"b1b912997a84ded9491b2639ec1a478ecd95835ed273174c31ee28a90b396db3"} Feb 02 11:20:02 crc kubenswrapper[4782]: I0202 11:20:02.244227 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-s65zb" event={"ID":"c0c31114-71d7-4d0b-9ad7-74945ed819e3","Type":"ContainerStarted","Data":"f3fb12fed1ebe9e4eb9525d8eeb5e45a8ffb7c55abf9d79d36f5b948b06bec47"} Feb 02 11:20:02 crc kubenswrapper[4782]: I0202 11:20:02.821878 4782 scope.go:117] "RemoveContainer" containerID="5d4753fce570617e864276d34772208f83d3fd6766212b5ad5f002f122bc2ca9" Feb 02 11:20:02 crc kubenswrapper[4782]: E0202 11:20:02.822136 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:20:07 crc kubenswrapper[4782]: I0202 11:20:07.288377 4782 generic.go:334] "Generic (PLEG): container finished" podID="c0c31114-71d7-4d0b-9ad7-74945ed819e3" containerID="f3fb12fed1ebe9e4eb9525d8eeb5e45a8ffb7c55abf9d79d36f5b948b06bec47" exitCode=0 Feb 02 11:20:07 crc kubenswrapper[4782]: I0202 11:20:07.288612 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-s65zb" event={"ID":"c0c31114-71d7-4d0b-9ad7-74945ed819e3","Type":"ContainerDied","Data":"f3fb12fed1ebe9e4eb9525d8eeb5e45a8ffb7c55abf9d79d36f5b948b06bec47"} Feb 02 11:20:08 crc kubenswrapper[4782]: I0202 11:20:08.755762 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-s65zb" Feb 02 11:20:08 crc kubenswrapper[4782]: I0202 11:20:08.874095 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c0c31114-71d7-4d0b-9ad7-74945ed819e3-inventory\") pod \"c0c31114-71d7-4d0b-9ad7-74945ed819e3\" (UID: \"c0c31114-71d7-4d0b-9ad7-74945ed819e3\") " Feb 02 11:20:08 crc kubenswrapper[4782]: I0202 11:20:08.874317 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9flfc\" (UniqueName: \"kubernetes.io/projected/c0c31114-71d7-4d0b-9ad7-74945ed819e3-kube-api-access-9flfc\") pod \"c0c31114-71d7-4d0b-9ad7-74945ed819e3\" (UID: \"c0c31114-71d7-4d0b-9ad7-74945ed819e3\") " Feb 02 11:20:08 crc kubenswrapper[4782]: I0202 11:20:08.874393 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c0c31114-71d7-4d0b-9ad7-74945ed819e3-ceph\") pod \"c0c31114-71d7-4d0b-9ad7-74945ed819e3\" (UID: \"c0c31114-71d7-4d0b-9ad7-74945ed819e3\") " Feb 02 11:20:08 crc kubenswrapper[4782]: I0202 11:20:08.874501 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c0c31114-71d7-4d0b-9ad7-74945ed819e3-ssh-key-openstack-edpm-ipam\") pod \"c0c31114-71d7-4d0b-9ad7-74945ed819e3\" (UID: \"c0c31114-71d7-4d0b-9ad7-74945ed819e3\") " Feb 02 11:20:08 crc kubenswrapper[4782]: I0202 11:20:08.948881 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0c31114-71d7-4d0b-9ad7-74945ed819e3-ceph" (OuterVolumeSpecName: "ceph") pod "c0c31114-71d7-4d0b-9ad7-74945ed819e3" (UID: "c0c31114-71d7-4d0b-9ad7-74945ed819e3"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:20:08 crc kubenswrapper[4782]: I0202 11:20:08.949931 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0c31114-71d7-4d0b-9ad7-74945ed819e3-kube-api-access-9flfc" (OuterVolumeSpecName: "kube-api-access-9flfc") pod "c0c31114-71d7-4d0b-9ad7-74945ed819e3" (UID: "c0c31114-71d7-4d0b-9ad7-74945ed819e3"). InnerVolumeSpecName "kube-api-access-9flfc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:20:08 crc kubenswrapper[4782]: I0202 11:20:08.954495 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0c31114-71d7-4d0b-9ad7-74945ed819e3-inventory" (OuterVolumeSpecName: "inventory") pod "c0c31114-71d7-4d0b-9ad7-74945ed819e3" (UID: "c0c31114-71d7-4d0b-9ad7-74945ed819e3"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:20:08 crc kubenswrapper[4782]: I0202 11:20:08.958065 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0c31114-71d7-4d0b-9ad7-74945ed819e3-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "c0c31114-71d7-4d0b-9ad7-74945ed819e3" (UID: "c0c31114-71d7-4d0b-9ad7-74945ed819e3"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:20:08 crc kubenswrapper[4782]: I0202 11:20:08.996311 4782 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c0c31114-71d7-4d0b-9ad7-74945ed819e3-inventory\") on node \"crc\" DevicePath \"\"" Feb 02 11:20:08 crc kubenswrapper[4782]: I0202 11:20:08.996351 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9flfc\" (UniqueName: \"kubernetes.io/projected/c0c31114-71d7-4d0b-9ad7-74945ed819e3-kube-api-access-9flfc\") on node \"crc\" DevicePath \"\"" Feb 02 11:20:08 crc kubenswrapper[4782]: I0202 11:20:08.996362 4782 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c0c31114-71d7-4d0b-9ad7-74945ed819e3-ceph\") on node \"crc\" DevicePath \"\"" Feb 02 11:20:08 crc kubenswrapper[4782]: I0202 11:20:08.996374 4782 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c0c31114-71d7-4d0b-9ad7-74945ed819e3-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 02 11:20:09 crc kubenswrapper[4782]: I0202 11:20:09.309479 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-s65zb" event={"ID":"c0c31114-71d7-4d0b-9ad7-74945ed819e3","Type":"ContainerDied","Data":"b1b912997a84ded9491b2639ec1a478ecd95835ed273174c31ee28a90b396db3"} Feb 02 11:20:09 crc kubenswrapper[4782]: I0202 11:20:09.309529 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b1b912997a84ded9491b2639ec1a478ecd95835ed273174c31ee28a90b396db3" Feb 02 11:20:09 crc kubenswrapper[4782]: I0202 11:20:09.309583 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-s65zb" Feb 02 11:20:09 crc kubenswrapper[4782]: I0202 11:20:09.416514 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-sffk6"] Feb 02 11:20:09 crc kubenswrapper[4782]: E0202 11:20:09.416952 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0c31114-71d7-4d0b-9ad7-74945ed819e3" containerName="ceph-client-edpm-deployment-openstack-edpm-ipam" Feb 02 11:20:09 crc kubenswrapper[4782]: I0202 11:20:09.416972 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0c31114-71d7-4d0b-9ad7-74945ed819e3" containerName="ceph-client-edpm-deployment-openstack-edpm-ipam" Feb 02 11:20:09 crc kubenswrapper[4782]: I0202 11:20:09.417173 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0c31114-71d7-4d0b-9ad7-74945ed819e3" containerName="ceph-client-edpm-deployment-openstack-edpm-ipam" Feb 02 11:20:09 crc kubenswrapper[4782]: I0202 11:20:09.417910 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-sffk6" Feb 02 11:20:09 crc kubenswrapper[4782]: I0202 11:20:09.419658 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 02 11:20:09 crc kubenswrapper[4782]: I0202 11:20:09.420104 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Feb 02 11:20:09 crc kubenswrapper[4782]: I0202 11:20:09.421085 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Feb 02 11:20:09 crc kubenswrapper[4782]: I0202 11:20:09.421257 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 02 11:20:09 crc kubenswrapper[4782]: I0202 11:20:09.423426 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 02 11:20:09 crc kubenswrapper[4782]: I0202 11:20:09.427254 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jhgxt" Feb 02 11:20:09 crc kubenswrapper[4782]: I0202 11:20:09.434828 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-sffk6"] Feb 02 11:20:09 crc kubenswrapper[4782]: I0202 11:20:09.605934 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/4a473fb4-7a3c-4103-bad5-570b683e6222-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-sffk6\" (UID: \"4a473fb4-7a3c-4103-bad5-570b683e6222\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-sffk6" Feb 02 11:20:09 crc kubenswrapper[4782]: I0202 11:20:09.605999 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a473fb4-7a3c-4103-bad5-570b683e6222-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-sffk6\" (UID: \"4a473fb4-7a3c-4103-bad5-570b683e6222\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-sffk6" Feb 02 11:20:09 crc kubenswrapper[4782]: I0202 11:20:09.606038 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4a473fb4-7a3c-4103-bad5-570b683e6222-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-sffk6\" (UID: \"4a473fb4-7a3c-4103-bad5-570b683e6222\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-sffk6" Feb 02 11:20:09 crc kubenswrapper[4782]: I0202 11:20:09.606099 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98st2\" (UniqueName: \"kubernetes.io/projected/4a473fb4-7a3c-4103-bad5-570b683e6222-kube-api-access-98st2\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-sffk6\" (UID: \"4a473fb4-7a3c-4103-bad5-570b683e6222\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-sffk6" Feb 02 11:20:09 crc kubenswrapper[4782]: I0202 11:20:09.606190 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/4a473fb4-7a3c-4103-bad5-570b683e6222-ceph\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-sffk6\" (UID: \"4a473fb4-7a3c-4103-bad5-570b683e6222\") " 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-sffk6" Feb 02 11:20:09 crc kubenswrapper[4782]: I0202 11:20:09.606209 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4a473fb4-7a3c-4103-bad5-570b683e6222-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-sffk6\" (UID: \"4a473fb4-7a3c-4103-bad5-570b683e6222\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-sffk6" Feb 02 11:20:09 crc kubenswrapper[4782]: I0202 11:20:09.707997 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/4a473fb4-7a3c-4103-bad5-570b683e6222-ceph\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-sffk6\" (UID: \"4a473fb4-7a3c-4103-bad5-570b683e6222\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-sffk6" Feb 02 11:20:09 crc kubenswrapper[4782]: I0202 11:20:09.708068 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4a473fb4-7a3c-4103-bad5-570b683e6222-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-sffk6\" (UID: \"4a473fb4-7a3c-4103-bad5-570b683e6222\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-sffk6" Feb 02 11:20:09 crc kubenswrapper[4782]: I0202 11:20:09.708120 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/4a473fb4-7a3c-4103-bad5-570b683e6222-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-sffk6\" (UID: \"4a473fb4-7a3c-4103-bad5-570b683e6222\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-sffk6" Feb 02 11:20:09 crc kubenswrapper[4782]: I0202 11:20:09.708162 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a473fb4-7a3c-4103-bad5-570b683e6222-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-sffk6\" (UID: \"4a473fb4-7a3c-4103-bad5-570b683e6222\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-sffk6" Feb 02 11:20:09 crc kubenswrapper[4782]: I0202 11:20:09.708198 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4a473fb4-7a3c-4103-bad5-570b683e6222-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-sffk6\" (UID: \"4a473fb4-7a3c-4103-bad5-570b683e6222\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-sffk6" Feb 02 11:20:09 crc kubenswrapper[4782]: I0202 11:20:09.708278 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98st2\" (UniqueName: \"kubernetes.io/projected/4a473fb4-7a3c-4103-bad5-570b683e6222-kube-api-access-98st2\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-sffk6\" (UID: \"4a473fb4-7a3c-4103-bad5-570b683e6222\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-sffk6" Feb 02 11:20:09 crc kubenswrapper[4782]: I0202 11:20:09.709412 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/4a473fb4-7a3c-4103-bad5-570b683e6222-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-sffk6\" (UID: \"4a473fb4-7a3c-4103-bad5-570b683e6222\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-sffk6" Feb 02 11:20:09 crc kubenswrapper[4782]: I0202 11:20:09.715163 
4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4a473fb4-7a3c-4103-bad5-570b683e6222-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-sffk6\" (UID: \"4a473fb4-7a3c-4103-bad5-570b683e6222\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-sffk6" Feb 02 11:20:09 crc kubenswrapper[4782]: I0202 11:20:09.715510 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a473fb4-7a3c-4103-bad5-570b683e6222-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-sffk6\" (UID: \"4a473fb4-7a3c-4103-bad5-570b683e6222\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-sffk6" Feb 02 11:20:09 crc kubenswrapper[4782]: I0202 11:20:09.716383 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4a473fb4-7a3c-4103-bad5-570b683e6222-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-sffk6\" (UID: \"4a473fb4-7a3c-4103-bad5-570b683e6222\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-sffk6" Feb 02 11:20:09 crc kubenswrapper[4782]: I0202 11:20:09.727633 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/4a473fb4-7a3c-4103-bad5-570b683e6222-ceph\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-sffk6\" (UID: \"4a473fb4-7a3c-4103-bad5-570b683e6222\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-sffk6" Feb 02 11:20:09 crc kubenswrapper[4782]: I0202 11:20:09.737621 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-98st2\" (UniqueName: \"kubernetes.io/projected/4a473fb4-7a3c-4103-bad5-570b683e6222-kube-api-access-98st2\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-sffk6\" (UID: \"4a473fb4-7a3c-4103-bad5-570b683e6222\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-sffk6" Feb 02 11:20:09 crc kubenswrapper[4782]: I0202 11:20:09.740945 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-sffk6" Feb 02 11:20:10 crc kubenswrapper[4782]: I0202 11:20:10.161432 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-sffk6"] Feb 02 11:20:10 crc kubenswrapper[4782]: I0202 11:20:10.321792 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-sffk6" event={"ID":"4a473fb4-7a3c-4103-bad5-570b683e6222","Type":"ContainerStarted","Data":"72544a004c2f1da86f062d201fe9dd1dd044fb710909d57929aa10ec1c1eafcb"} Feb 02 11:20:11 crc kubenswrapper[4782]: I0202 11:20:11.333486 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-sffk6" event={"ID":"4a473fb4-7a3c-4103-bad5-570b683e6222","Type":"ContainerStarted","Data":"acc292f462d713c39fcd55399a1ba18c9c5c3f1db54d2761b6d464dccea5645f"} Feb 02 11:20:11 crc kubenswrapper[4782]: I0202 11:20:11.360203 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-sffk6" podStartSLOduration=1.7983493799999999 podStartE2EDuration="2.36016783s" podCreationTimestamp="2026-02-02 11:20:09 +0000 UTC" firstStartedPulling="2026-02-02 11:20:10.174999309 +0000 UTC m=+2490.059192025" lastFinishedPulling="2026-02-02 11:20:10.736817759 +0000 UTC m=+2490.621010475" observedRunningTime="2026-02-02 11:20:11.356835654 +0000 UTC m=+2491.241028370" watchObservedRunningTime="2026-02-02 11:20:11.36016783 +0000 UTC m=+2491.244360536" Feb 02 11:20:14 crc kubenswrapper[4782]: I0202 11:20:14.821418 4782 scope.go:117] "RemoveContainer" containerID="5d4753fce570617e864276d34772208f83d3fd6766212b5ad5f002f122bc2ca9" Feb 02 11:20:14 crc kubenswrapper[4782]: E0202 11:20:14.822428 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:20:26 crc kubenswrapper[4782]: I0202 11:20:26.822236 4782 scope.go:117] "RemoveContainer" containerID="5d4753fce570617e864276d34772208f83d3fd6766212b5ad5f002f122bc2ca9" Feb 02 11:20:26 crc kubenswrapper[4782]: E0202 11:20:26.823463 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:20:41 crc kubenswrapper[4782]: I0202 11:20:41.821798 4782 scope.go:117] "RemoveContainer" containerID="5d4753fce570617e864276d34772208f83d3fd6766212b5ad5f002f122bc2ca9" Feb 02 11:20:41 crc kubenswrapper[4782]: E0202 11:20:41.822863 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" 
podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:20:55 crc kubenswrapper[4782]: I0202 11:20:55.821996 4782 scope.go:117] "RemoveContainer" containerID="5d4753fce570617e864276d34772208f83d3fd6766212b5ad5f002f122bc2ca9" Feb 02 11:20:55 crc kubenswrapper[4782]: E0202 11:20:55.822841 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:21:10 crc kubenswrapper[4782]: I0202 11:21:10.826178 4782 scope.go:117] "RemoveContainer" containerID="5d4753fce570617e864276d34772208f83d3fd6766212b5ad5f002f122bc2ca9" Feb 02 11:21:10 crc kubenswrapper[4782]: E0202 11:21:10.826926 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:21:23 crc kubenswrapper[4782]: I0202 11:21:23.821807 4782 scope.go:117] "RemoveContainer" containerID="5d4753fce570617e864276d34772208f83d3fd6766212b5ad5f002f122bc2ca9" Feb 02 11:21:23 crc kubenswrapper[4782]: E0202 11:21:23.822942 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:21:23 crc kubenswrapper[4782]: I0202 11:21:23.994437 4782 generic.go:334] "Generic (PLEG): container finished" podID="4a473fb4-7a3c-4103-bad5-570b683e6222" containerID="acc292f462d713c39fcd55399a1ba18c9c5c3f1db54d2761b6d464dccea5645f" exitCode=0 Feb 02 11:21:23 crc kubenswrapper[4782]: I0202 11:21:23.994487 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-sffk6" event={"ID":"4a473fb4-7a3c-4103-bad5-570b683e6222","Type":"ContainerDied","Data":"acc292f462d713c39fcd55399a1ba18c9c5c3f1db54d2761b6d464dccea5645f"} Feb 02 11:21:25 crc kubenswrapper[4782]: I0202 11:21:25.374512 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-sffk6" Feb 02 11:21:25 crc kubenswrapper[4782]: I0202 11:21:25.486936 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4a473fb4-7a3c-4103-bad5-570b683e6222-inventory\") pod \"4a473fb4-7a3c-4103-bad5-570b683e6222\" (UID: \"4a473fb4-7a3c-4103-bad5-570b683e6222\") " Feb 02 11:21:25 crc kubenswrapper[4782]: I0202 11:21:25.486999 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a473fb4-7a3c-4103-bad5-570b683e6222-ovn-combined-ca-bundle\") pod \"4a473fb4-7a3c-4103-bad5-570b683e6222\" (UID: \"4a473fb4-7a3c-4103-bad5-570b683e6222\") " Feb 02 11:21:25 crc kubenswrapper[4782]: I0202 11:21:25.487074 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/4a473fb4-7a3c-4103-bad5-570b683e6222-ceph\") pod \"4a473fb4-7a3c-4103-bad5-570b683e6222\" (UID: \"4a473fb4-7a3c-4103-bad5-570b683e6222\") " Feb 02 11:21:25 crc kubenswrapper[4782]: I0202 11:21:25.487113 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4a473fb4-7a3c-4103-bad5-570b683e6222-ssh-key-openstack-edpm-ipam\") pod \"4a473fb4-7a3c-4103-bad5-570b683e6222\" (UID: \"4a473fb4-7a3c-4103-bad5-570b683e6222\") " Feb 02 11:21:25 crc kubenswrapper[4782]: I0202 11:21:25.487139 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/4a473fb4-7a3c-4103-bad5-570b683e6222-ovncontroller-config-0\") pod \"4a473fb4-7a3c-4103-bad5-570b683e6222\" (UID: \"4a473fb4-7a3c-4103-bad5-570b683e6222\") " Feb 02 11:21:25 crc kubenswrapper[4782]: I0202 11:21:25.487248 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-98st2\" (UniqueName: \"kubernetes.io/projected/4a473fb4-7a3c-4103-bad5-570b683e6222-kube-api-access-98st2\") pod \"4a473fb4-7a3c-4103-bad5-570b683e6222\" (UID: \"4a473fb4-7a3c-4103-bad5-570b683e6222\") " Feb 02 11:21:25 crc kubenswrapper[4782]: I0202 11:21:25.492966 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a473fb4-7a3c-4103-bad5-570b683e6222-ceph" (OuterVolumeSpecName: "ceph") pod "4a473fb4-7a3c-4103-bad5-570b683e6222" (UID: "4a473fb4-7a3c-4103-bad5-570b683e6222"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:21:25 crc kubenswrapper[4782]: I0202 11:21:25.493222 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a473fb4-7a3c-4103-bad5-570b683e6222-kube-api-access-98st2" (OuterVolumeSpecName: "kube-api-access-98st2") pod "4a473fb4-7a3c-4103-bad5-570b683e6222" (UID: "4a473fb4-7a3c-4103-bad5-570b683e6222"). InnerVolumeSpecName "kube-api-access-98st2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:21:25 crc kubenswrapper[4782]: I0202 11:21:25.500479 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a473fb4-7a3c-4103-bad5-570b683e6222-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "4a473fb4-7a3c-4103-bad5-570b683e6222" (UID: "4a473fb4-7a3c-4103-bad5-570b683e6222"). InnerVolumeSpecName "ovn-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:21:25 crc kubenswrapper[4782]: I0202 11:21:25.515559 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a473fb4-7a3c-4103-bad5-570b683e6222-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "4a473fb4-7a3c-4103-bad5-570b683e6222" (UID: "4a473fb4-7a3c-4103-bad5-570b683e6222"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:21:25 crc kubenswrapper[4782]: I0202 11:21:25.517696 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a473fb4-7a3c-4103-bad5-570b683e6222-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "4a473fb4-7a3c-4103-bad5-570b683e6222" (UID: "4a473fb4-7a3c-4103-bad5-570b683e6222"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:21:25 crc kubenswrapper[4782]: I0202 11:21:25.518336 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a473fb4-7a3c-4103-bad5-570b683e6222-inventory" (OuterVolumeSpecName: "inventory") pod "4a473fb4-7a3c-4103-bad5-570b683e6222" (UID: "4a473fb4-7a3c-4103-bad5-570b683e6222"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:21:25 crc kubenswrapper[4782]: I0202 11:21:25.589242 4782 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/4a473fb4-7a3c-4103-bad5-570b683e6222-ceph\") on node \"crc\" DevicePath \"\"" Feb 02 11:21:25 crc kubenswrapper[4782]: I0202 11:21:25.589278 4782 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4a473fb4-7a3c-4103-bad5-570b683e6222-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 02 11:21:25 crc kubenswrapper[4782]: I0202 11:21:25.589289 4782 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/4a473fb4-7a3c-4103-bad5-570b683e6222-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Feb 02 11:21:25 crc kubenswrapper[4782]: I0202 11:21:25.589300 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-98st2\" (UniqueName: \"kubernetes.io/projected/4a473fb4-7a3c-4103-bad5-570b683e6222-kube-api-access-98st2\") on node \"crc\" DevicePath \"\"" Feb 02 11:21:25 crc kubenswrapper[4782]: I0202 11:21:25.589308 4782 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4a473fb4-7a3c-4103-bad5-570b683e6222-inventory\") on node \"crc\" DevicePath \"\"" Feb 02 11:21:25 crc kubenswrapper[4782]: I0202 11:21:25.589316 4782 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a473fb4-7a3c-4103-bad5-570b683e6222-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 11:21:26 crc kubenswrapper[4782]: I0202 11:21:26.012890 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-sffk6" event={"ID":"4a473fb4-7a3c-4103-bad5-570b683e6222","Type":"ContainerDied","Data":"72544a004c2f1da86f062d201fe9dd1dd044fb710909d57929aa10ec1c1eafcb"} Feb 02 11:21:26 crc kubenswrapper[4782]: I0202 11:21:26.012928 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-sffk6" Feb 02 11:21:26 crc kubenswrapper[4782]: I0202 11:21:26.012932 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="72544a004c2f1da86f062d201fe9dd1dd044fb710909d57929aa10ec1c1eafcb" Feb 02 11:21:26 crc kubenswrapper[4782]: I0202 11:21:26.219769 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c2sf7"] Feb 02 11:21:26 crc kubenswrapper[4782]: E0202 11:21:26.220201 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a473fb4-7a3c-4103-bad5-570b683e6222" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 02 11:21:26 crc kubenswrapper[4782]: I0202 11:21:26.220221 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a473fb4-7a3c-4103-bad5-570b683e6222" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 02 11:21:26 crc kubenswrapper[4782]: I0202 11:21:26.220420 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a473fb4-7a3c-4103-bad5-570b683e6222" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 02 11:21:26 crc kubenswrapper[4782]: I0202 11:21:26.222438 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c2sf7" Feb 02 11:21:26 crc kubenswrapper[4782]: I0202 11:21:26.224862 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Feb 02 11:21:26 crc kubenswrapper[4782]: I0202 11:21:26.226584 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Feb 02 11:21:26 crc kubenswrapper[4782]: I0202 11:21:26.226615 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 02 11:21:26 crc kubenswrapper[4782]: I0202 11:21:26.226672 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Feb 02 11:21:26 crc kubenswrapper[4782]: I0202 11:21:26.226590 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jhgxt" Feb 02 11:21:26 crc kubenswrapper[4782]: I0202 11:21:26.226865 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 02 11:21:26 crc kubenswrapper[4782]: I0202 11:21:26.235126 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 02 11:21:26 crc kubenswrapper[4782]: I0202 11:21:26.245818 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c2sf7"] Feb 02 11:21:26 crc kubenswrapper[4782]: I0202 11:21:26.308718 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e6849945-28f4-4218-97c1-6047c2d0c368-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-c2sf7\" (UID: \"e6849945-28f4-4218-97c1-6047c2d0c368\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c2sf7" Feb 02 11:21:26 crc kubenswrapper[4782]: I0202 11:21:26.308799 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/e6849945-28f4-4218-97c1-6047c2d0c368-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-c2sf7\" (UID: \"e6849945-28f4-4218-97c1-6047c2d0c368\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c2sf7" Feb 02 11:21:26 crc kubenswrapper[4782]: I0202 11:21:26.308947 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e6849945-28f4-4218-97c1-6047c2d0c368-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-c2sf7\" (UID: \"e6849945-28f4-4218-97c1-6047c2d0c368\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c2sf7" Feb 02 11:21:26 crc kubenswrapper[4782]: I0202 11:21:26.309053 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6849945-28f4-4218-97c1-6047c2d0c368-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-c2sf7\" (UID: \"e6849945-28f4-4218-97c1-6047c2d0c368\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c2sf7" Feb 02 11:21:26 crc kubenswrapper[4782]: I0202 11:21:26.309085 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e6849945-28f4-4218-97c1-6047c2d0c368-ceph\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-c2sf7\" (UID: \"e6849945-28f4-4218-97c1-6047c2d0c368\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c2sf7" Feb 02 11:21:26 crc kubenswrapper[4782]: I0202 11:21:26.309145 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e6849945-28f4-4218-97c1-6047c2d0c368-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-c2sf7\" (UID: \"e6849945-28f4-4218-97c1-6047c2d0c368\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c2sf7" Feb 02 11:21:26 crc kubenswrapper[4782]: I0202 11:21:26.309233 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wfhcm\" (UniqueName: \"kubernetes.io/projected/e6849945-28f4-4218-97c1-6047c2d0c368-kube-api-access-wfhcm\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-c2sf7\" (UID: \"e6849945-28f4-4218-97c1-6047c2d0c368\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c2sf7" Feb 02 11:21:26 crc kubenswrapper[4782]: I0202 11:21:26.410505 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e6849945-28f4-4218-97c1-6047c2d0c368-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-c2sf7\" (UID: \"e6849945-28f4-4218-97c1-6047c2d0c368\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c2sf7" Feb 02 11:21:26 crc kubenswrapper[4782]: I0202 11:21:26.410922 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6849945-28f4-4218-97c1-6047c2d0c368-neutron-metadata-combined-ca-bundle\") pod 
\"neutron-metadata-edpm-deployment-openstack-edpm-ipam-c2sf7\" (UID: \"e6849945-28f4-4218-97c1-6047c2d0c368\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c2sf7" Feb 02 11:21:26 crc kubenswrapper[4782]: I0202 11:21:26.410949 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e6849945-28f4-4218-97c1-6047c2d0c368-ceph\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-c2sf7\" (UID: \"e6849945-28f4-4218-97c1-6047c2d0c368\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c2sf7" Feb 02 11:21:26 crc kubenswrapper[4782]: I0202 11:21:26.410976 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e6849945-28f4-4218-97c1-6047c2d0c368-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-c2sf7\" (UID: \"e6849945-28f4-4218-97c1-6047c2d0c368\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c2sf7" Feb 02 11:21:26 crc kubenswrapper[4782]: I0202 11:21:26.411013 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wfhcm\" (UniqueName: \"kubernetes.io/projected/e6849945-28f4-4218-97c1-6047c2d0c368-kube-api-access-wfhcm\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-c2sf7\" (UID: \"e6849945-28f4-4218-97c1-6047c2d0c368\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c2sf7" Feb 02 11:21:26 crc kubenswrapper[4782]: I0202 11:21:26.411043 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e6849945-28f4-4218-97c1-6047c2d0c368-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-c2sf7\" (UID: \"e6849945-28f4-4218-97c1-6047c2d0c368\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c2sf7" Feb 02 11:21:26 crc kubenswrapper[4782]: I0202 11:21:26.411073 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e6849945-28f4-4218-97c1-6047c2d0c368-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-c2sf7\" (UID: \"e6849945-28f4-4218-97c1-6047c2d0c368\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c2sf7" Feb 02 11:21:26 crc kubenswrapper[4782]: I0202 11:21:26.414371 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e6849945-28f4-4218-97c1-6047c2d0c368-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-c2sf7\" (UID: \"e6849945-28f4-4218-97c1-6047c2d0c368\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c2sf7" Feb 02 11:21:26 crc kubenswrapper[4782]: I0202 11:21:26.414871 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6849945-28f4-4218-97c1-6047c2d0c368-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-c2sf7\" (UID: \"e6849945-28f4-4218-97c1-6047c2d0c368\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c2sf7" Feb 02 11:21:26 crc kubenswrapper[4782]: I0202 11:21:26.415279 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ceph\" (UniqueName: \"kubernetes.io/secret/e6849945-28f4-4218-97c1-6047c2d0c368-ceph\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-c2sf7\" (UID: \"e6849945-28f4-4218-97c1-6047c2d0c368\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c2sf7" Feb 02 11:21:26 crc kubenswrapper[4782]: I0202 11:21:26.417525 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e6849945-28f4-4218-97c1-6047c2d0c368-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-c2sf7\" (UID: \"e6849945-28f4-4218-97c1-6047c2d0c368\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c2sf7" Feb 02 11:21:26 crc kubenswrapper[4782]: I0202 11:21:26.418126 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e6849945-28f4-4218-97c1-6047c2d0c368-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-c2sf7\" (UID: \"e6849945-28f4-4218-97c1-6047c2d0c368\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c2sf7" Feb 02 11:21:26 crc kubenswrapper[4782]: I0202 11:21:26.421556 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e6849945-28f4-4218-97c1-6047c2d0c368-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-c2sf7\" (UID: \"e6849945-28f4-4218-97c1-6047c2d0c368\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c2sf7" Feb 02 11:21:26 crc kubenswrapper[4782]: I0202 11:21:26.433566 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wfhcm\" (UniqueName: \"kubernetes.io/projected/e6849945-28f4-4218-97c1-6047c2d0c368-kube-api-access-wfhcm\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-c2sf7\" (UID: \"e6849945-28f4-4218-97c1-6047c2d0c368\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c2sf7" Feb 02 11:21:26 crc kubenswrapper[4782]: I0202 11:21:26.544271 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c2sf7" Feb 02 11:21:27 crc kubenswrapper[4782]: I0202 11:21:27.116581 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c2sf7"] Feb 02 11:21:28 crc kubenswrapper[4782]: I0202 11:21:28.030773 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c2sf7" event={"ID":"e6849945-28f4-4218-97c1-6047c2d0c368","Type":"ContainerStarted","Data":"5b4f07ff15cbaace5049bb3eff108520caa8f61961e415e66aa5a801b9e5887e"} Feb 02 11:21:29 crc kubenswrapper[4782]: I0202 11:21:29.040432 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c2sf7" event={"ID":"e6849945-28f4-4218-97c1-6047c2d0c368","Type":"ContainerStarted","Data":"c972dc47d6157e350c05293015a8979dd3252340d3b9b01b92f1e1cd1f5ff0df"} Feb 02 11:21:29 crc kubenswrapper[4782]: I0202 11:21:29.060430 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c2sf7" podStartSLOduration=2.122294764 podStartE2EDuration="3.060410494s" podCreationTimestamp="2026-02-02 11:21:26 +0000 UTC" firstStartedPulling="2026-02-02 11:21:27.12290253 +0000 UTC m=+2567.007095246" lastFinishedPulling="2026-02-02 11:21:28.06101826 +0000 UTC m=+2567.945210976" observedRunningTime="2026-02-02 11:21:29.054070241 +0000 UTC m=+2568.938262957" watchObservedRunningTime="2026-02-02 11:21:29.060410494 +0000 UTC m=+2568.944603210" Feb 02 11:21:38 crc kubenswrapper[4782]: I0202 11:21:38.821583 4782 scope.go:117] "RemoveContainer" containerID="5d4753fce570617e864276d34772208f83d3fd6766212b5ad5f002f122bc2ca9" Feb 02 11:21:38 crc kubenswrapper[4782]: E0202 11:21:38.822439 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:21:53 crc kubenswrapper[4782]: I0202 11:21:53.824473 4782 scope.go:117] "RemoveContainer" containerID="5d4753fce570617e864276d34772208f83d3fd6766212b5ad5f002f122bc2ca9" Feb 02 11:21:53 crc kubenswrapper[4782]: E0202 11:21:53.825532 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:22:05 crc kubenswrapper[4782]: I0202 11:22:05.821173 4782 scope.go:117] "RemoveContainer" containerID="5d4753fce570617e864276d34772208f83d3fd6766212b5ad5f002f122bc2ca9" Feb 02 11:22:05 crc kubenswrapper[4782]: E0202 11:22:05.822069 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:22:20 crc kubenswrapper[4782]: I0202 11:22:20.827326 4782 scope.go:117] "RemoveContainer" containerID="5d4753fce570617e864276d34772208f83d3fd6766212b5ad5f002f122bc2ca9" Feb 02 11:22:20 crc kubenswrapper[4782]: E0202 11:22:20.828235 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:22:23 crc kubenswrapper[4782]: I0202 11:22:23.488777 4782 generic.go:334] "Generic (PLEG): container finished" podID="e6849945-28f4-4218-97c1-6047c2d0c368" containerID="c972dc47d6157e350c05293015a8979dd3252340d3b9b01b92f1e1cd1f5ff0df" exitCode=0 Feb 02 11:22:23 crc kubenswrapper[4782]: I0202 11:22:23.488883 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c2sf7" event={"ID":"e6849945-28f4-4218-97c1-6047c2d0c368","Type":"ContainerDied","Data":"c972dc47d6157e350c05293015a8979dd3252340d3b9b01b92f1e1cd1f5ff0df"} Feb 02 11:22:24 crc kubenswrapper[4782]: I0202 11:22:24.901129 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c2sf7" Feb 02 11:22:24 crc kubenswrapper[4782]: I0202 11:22:24.941761 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e6849945-28f4-4218-97c1-6047c2d0c368-ceph\") pod \"e6849945-28f4-4218-97c1-6047c2d0c368\" (UID: \"e6849945-28f4-4218-97c1-6047c2d0c368\") " Feb 02 11:22:24 crc kubenswrapper[4782]: I0202 11:22:24.941832 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e6849945-28f4-4218-97c1-6047c2d0c368-nova-metadata-neutron-config-0\") pod \"e6849945-28f4-4218-97c1-6047c2d0c368\" (UID: \"e6849945-28f4-4218-97c1-6047c2d0c368\") " Feb 02 11:22:24 crc kubenswrapper[4782]: I0202 11:22:24.941899 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e6849945-28f4-4218-97c1-6047c2d0c368-inventory\") pod \"e6849945-28f4-4218-97c1-6047c2d0c368\" (UID: \"e6849945-28f4-4218-97c1-6047c2d0c368\") " Feb 02 11:22:24 crc kubenswrapper[4782]: I0202 11:22:24.941951 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6849945-28f4-4218-97c1-6047c2d0c368-neutron-metadata-combined-ca-bundle\") pod \"e6849945-28f4-4218-97c1-6047c2d0c368\" (UID: \"e6849945-28f4-4218-97c1-6047c2d0c368\") " Feb 02 11:22:24 crc kubenswrapper[4782]: I0202 11:22:24.941994 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e6849945-28f4-4218-97c1-6047c2d0c368-ssh-key-openstack-edpm-ipam\") pod \"e6849945-28f4-4218-97c1-6047c2d0c368\" (UID: \"e6849945-28f4-4218-97c1-6047c2d0c368\") " Feb 02 11:22:24 crc kubenswrapper[4782]: I0202 11:22:24.942026 4782 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-wfhcm\" (UniqueName: \"kubernetes.io/projected/e6849945-28f4-4218-97c1-6047c2d0c368-kube-api-access-wfhcm\") pod \"e6849945-28f4-4218-97c1-6047c2d0c368\" (UID: \"e6849945-28f4-4218-97c1-6047c2d0c368\") " Feb 02 11:22:24 crc kubenswrapper[4782]: I0202 11:22:24.942065 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e6849945-28f4-4218-97c1-6047c2d0c368-neutron-ovn-metadata-agent-neutron-config-0\") pod \"e6849945-28f4-4218-97c1-6047c2d0c368\" (UID: \"e6849945-28f4-4218-97c1-6047c2d0c368\") " Feb 02 11:22:24 crc kubenswrapper[4782]: I0202 11:22:24.956325 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6849945-28f4-4218-97c1-6047c2d0c368-ceph" (OuterVolumeSpecName: "ceph") pod "e6849945-28f4-4218-97c1-6047c2d0c368" (UID: "e6849945-28f4-4218-97c1-6047c2d0c368"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:22:24 crc kubenswrapper[4782]: I0202 11:22:24.956382 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6849945-28f4-4218-97c1-6047c2d0c368-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "e6849945-28f4-4218-97c1-6047c2d0c368" (UID: "e6849945-28f4-4218-97c1-6047c2d0c368"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:22:24 crc kubenswrapper[4782]: I0202 11:22:24.963303 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6849945-28f4-4218-97c1-6047c2d0c368-kube-api-access-wfhcm" (OuterVolumeSpecName: "kube-api-access-wfhcm") pod "e6849945-28f4-4218-97c1-6047c2d0c368" (UID: "e6849945-28f4-4218-97c1-6047c2d0c368"). InnerVolumeSpecName "kube-api-access-wfhcm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:22:24 crc kubenswrapper[4782]: I0202 11:22:24.968171 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6849945-28f4-4218-97c1-6047c2d0c368-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "e6849945-28f4-4218-97c1-6047c2d0c368" (UID: "e6849945-28f4-4218-97c1-6047c2d0c368"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:22:24 crc kubenswrapper[4782]: I0202 11:22:24.970236 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6849945-28f4-4218-97c1-6047c2d0c368-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "e6849945-28f4-4218-97c1-6047c2d0c368" (UID: "e6849945-28f4-4218-97c1-6047c2d0c368"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:22:24 crc kubenswrapper[4782]: I0202 11:22:24.970552 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6849945-28f4-4218-97c1-6047c2d0c368-inventory" (OuterVolumeSpecName: "inventory") pod "e6849945-28f4-4218-97c1-6047c2d0c368" (UID: "e6849945-28f4-4218-97c1-6047c2d0c368"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:22:24 crc kubenswrapper[4782]: I0202 11:22:24.977983 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6849945-28f4-4218-97c1-6047c2d0c368-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "e6849945-28f4-4218-97c1-6047c2d0c368" (UID: "e6849945-28f4-4218-97c1-6047c2d0c368"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:22:25 crc kubenswrapper[4782]: I0202 11:22:25.044860 4782 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e6849945-28f4-4218-97c1-6047c2d0c368-ceph\") on node \"crc\" DevicePath \"\"" Feb 02 11:22:25 crc kubenswrapper[4782]: I0202 11:22:25.044904 4782 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e6849945-28f4-4218-97c1-6047c2d0c368-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Feb 02 11:22:25 crc kubenswrapper[4782]: I0202 11:22:25.044916 4782 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e6849945-28f4-4218-97c1-6047c2d0c368-inventory\") on node \"crc\" DevicePath \"\"" Feb 02 11:22:25 crc kubenswrapper[4782]: I0202 11:22:25.044926 4782 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6849945-28f4-4218-97c1-6047c2d0c368-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 11:22:25 crc kubenswrapper[4782]: I0202 11:22:25.044937 4782 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e6849945-28f4-4218-97c1-6047c2d0c368-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 02 11:22:25 crc kubenswrapper[4782]: I0202 11:22:25.044949 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wfhcm\" (UniqueName: \"kubernetes.io/projected/e6849945-28f4-4218-97c1-6047c2d0c368-kube-api-access-wfhcm\") on node \"crc\" DevicePath \"\"" Feb 02 11:22:25 crc kubenswrapper[4782]: I0202 11:22:25.044958 4782 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e6849945-28f4-4218-97c1-6047c2d0c368-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Feb 02 11:22:25 crc kubenswrapper[4782]: I0202 11:22:25.504304 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c2sf7" event={"ID":"e6849945-28f4-4218-97c1-6047c2d0c368","Type":"ContainerDied","Data":"5b4f07ff15cbaace5049bb3eff108520caa8f61961e415e66aa5a801b9e5887e"} Feb 02 11:22:25 crc kubenswrapper[4782]: I0202 11:22:25.504346 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5b4f07ff15cbaace5049bb3eff108520caa8f61961e415e66aa5a801b9e5887e" Feb 02 11:22:25 crc kubenswrapper[4782]: I0202 11:22:25.504408 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c2sf7" Feb 02 11:22:25 crc kubenswrapper[4782]: I0202 11:22:25.625232 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fjczj"] Feb 02 11:22:25 crc kubenswrapper[4782]: E0202 11:22:25.625632 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6849945-28f4-4218-97c1-6047c2d0c368" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 02 11:22:25 crc kubenswrapper[4782]: I0202 11:22:25.625679 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6849945-28f4-4218-97c1-6047c2d0c368" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 02 11:22:25 crc kubenswrapper[4782]: I0202 11:22:25.625896 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6849945-28f4-4218-97c1-6047c2d0c368" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 02 11:22:25 crc kubenswrapper[4782]: I0202 11:22:25.626571 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fjczj" Feb 02 11:22:25 crc kubenswrapper[4782]: I0202 11:22:25.629598 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jhgxt" Feb 02 11:22:25 crc kubenswrapper[4782]: I0202 11:22:25.629814 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Feb 02 11:22:25 crc kubenswrapper[4782]: I0202 11:22:25.629905 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 02 11:22:25 crc kubenswrapper[4782]: I0202 11:22:25.629907 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 02 11:22:25 crc kubenswrapper[4782]: I0202 11:22:25.632090 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 02 11:22:25 crc kubenswrapper[4782]: I0202 11:22:25.633473 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Feb 02 11:22:25 crc kubenswrapper[4782]: I0202 11:22:25.640030 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fjczj"] Feb 02 11:22:25 crc kubenswrapper[4782]: I0202 11:22:25.657935 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9b66a766-dc87-45dd-a611-d9a30c3f327e-ceph\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-fjczj\" (UID: \"9b66a766-dc87-45dd-a611-d9a30c3f327e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fjczj" Feb 02 11:22:25 crc kubenswrapper[4782]: I0202 11:22:25.658224 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/9b66a766-dc87-45dd-a611-d9a30c3f327e-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-fjczj\" (UID: \"9b66a766-dc87-45dd-a611-d9a30c3f327e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fjczj" Feb 02 11:22:25 crc kubenswrapper[4782]: I0202 11:22:25.658349 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/9b66a766-dc87-45dd-a611-d9a30c3f327e-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-fjczj\" (UID: \"9b66a766-dc87-45dd-a611-d9a30c3f327e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fjczj" Feb 02 11:22:25 crc kubenswrapper[4782]: I0202 11:22:25.658440 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5b7cv\" (UniqueName: \"kubernetes.io/projected/9b66a766-dc87-45dd-a611-d9a30c3f327e-kube-api-access-5b7cv\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-fjczj\" (UID: \"9b66a766-dc87-45dd-a611-d9a30c3f327e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fjczj" Feb 02 11:22:25 crc kubenswrapper[4782]: I0202 11:22:25.658555 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b66a766-dc87-45dd-a611-d9a30c3f327e-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-fjczj\" (UID: \"9b66a766-dc87-45dd-a611-d9a30c3f327e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fjczj" Feb 02 11:22:25 crc kubenswrapper[4782]: I0202 11:22:25.658697 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9b66a766-dc87-45dd-a611-d9a30c3f327e-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-fjczj\" (UID: \"9b66a766-dc87-45dd-a611-d9a30c3f327e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fjczj" Feb 02 11:22:25 crc kubenswrapper[4782]: I0202 11:22:25.760024 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b66a766-dc87-45dd-a611-d9a30c3f327e-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-fjczj\" (UID: \"9b66a766-dc87-45dd-a611-d9a30c3f327e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fjczj" Feb 02 11:22:25 crc kubenswrapper[4782]: I0202 11:22:25.760103 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9b66a766-dc87-45dd-a611-d9a30c3f327e-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-fjczj\" (UID: \"9b66a766-dc87-45dd-a611-d9a30c3f327e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fjczj" Feb 02 11:22:25 crc kubenswrapper[4782]: I0202 11:22:25.760181 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9b66a766-dc87-45dd-a611-d9a30c3f327e-ceph\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-fjczj\" (UID: \"9b66a766-dc87-45dd-a611-d9a30c3f327e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fjczj" Feb 02 11:22:25 crc kubenswrapper[4782]: I0202 11:22:25.760242 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/9b66a766-dc87-45dd-a611-d9a30c3f327e-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-fjczj\" (UID: \"9b66a766-dc87-45dd-a611-d9a30c3f327e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fjczj" Feb 02 11:22:25 crc kubenswrapper[4782]: I0202 11:22:25.760282 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"inventory\" (UniqueName: \"kubernetes.io/secret/9b66a766-dc87-45dd-a611-d9a30c3f327e-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-fjczj\" (UID: \"9b66a766-dc87-45dd-a611-d9a30c3f327e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fjczj" Feb 02 11:22:25 crc kubenswrapper[4782]: I0202 11:22:25.760299 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5b7cv\" (UniqueName: \"kubernetes.io/projected/9b66a766-dc87-45dd-a611-d9a30c3f327e-kube-api-access-5b7cv\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-fjczj\" (UID: \"9b66a766-dc87-45dd-a611-d9a30c3f327e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fjczj" Feb 02 11:22:25 crc kubenswrapper[4782]: I0202 11:22:25.765184 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9b66a766-dc87-45dd-a611-d9a30c3f327e-ceph\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-fjczj\" (UID: \"9b66a766-dc87-45dd-a611-d9a30c3f327e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fjczj" Feb 02 11:22:25 crc kubenswrapper[4782]: I0202 11:22:25.765206 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9b66a766-dc87-45dd-a611-d9a30c3f327e-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-fjczj\" (UID: \"9b66a766-dc87-45dd-a611-d9a30c3f327e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fjczj" Feb 02 11:22:25 crc kubenswrapper[4782]: I0202 11:22:25.765531 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9b66a766-dc87-45dd-a611-d9a30c3f327e-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-fjczj\" (UID: \"9b66a766-dc87-45dd-a611-d9a30c3f327e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fjczj" Feb 02 11:22:25 crc kubenswrapper[4782]: I0202 11:22:25.766972 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b66a766-dc87-45dd-a611-d9a30c3f327e-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-fjczj\" (UID: \"9b66a766-dc87-45dd-a611-d9a30c3f327e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fjczj" Feb 02 11:22:25 crc kubenswrapper[4782]: I0202 11:22:25.772099 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/9b66a766-dc87-45dd-a611-d9a30c3f327e-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-fjczj\" (UID: \"9b66a766-dc87-45dd-a611-d9a30c3f327e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fjczj" Feb 02 11:22:25 crc kubenswrapper[4782]: I0202 11:22:25.782527 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5b7cv\" (UniqueName: \"kubernetes.io/projected/9b66a766-dc87-45dd-a611-d9a30c3f327e-kube-api-access-5b7cv\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-fjczj\" (UID: \"9b66a766-dc87-45dd-a611-d9a30c3f327e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fjczj" Feb 02 11:22:25 crc kubenswrapper[4782]: I0202 11:22:25.944809 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fjczj" Feb 02 11:22:26 crc kubenswrapper[4782]: I0202 11:22:26.465438 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fjczj"] Feb 02 11:22:26 crc kubenswrapper[4782]: W0202 11:22:26.470825 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9b66a766_dc87_45dd_a611_d9a30c3f327e.slice/crio-8ed3893ae70fd1cc2806d2c6231e36eed31c1d7012844cba421ea45199589821 WatchSource:0}: Error finding container 8ed3893ae70fd1cc2806d2c6231e36eed31c1d7012844cba421ea45199589821: Status 404 returned error can't find the container with id 8ed3893ae70fd1cc2806d2c6231e36eed31c1d7012844cba421ea45199589821 Feb 02 11:22:26 crc kubenswrapper[4782]: I0202 11:22:26.515979 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fjczj" event={"ID":"9b66a766-dc87-45dd-a611-d9a30c3f327e","Type":"ContainerStarted","Data":"8ed3893ae70fd1cc2806d2c6231e36eed31c1d7012844cba421ea45199589821"} Feb 02 11:22:27 crc kubenswrapper[4782]: I0202 11:22:27.527759 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fjczj" event={"ID":"9b66a766-dc87-45dd-a611-d9a30c3f327e","Type":"ContainerStarted","Data":"4941491cf1df1bc7280b824efa5aa4ba9575dbca5ae4407e9126b0211ca2c981"} Feb 02 11:22:27 crc kubenswrapper[4782]: I0202 11:22:27.550556 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fjczj" podStartSLOduration=2.042507718 podStartE2EDuration="2.55053748s" podCreationTimestamp="2026-02-02 11:22:25 +0000 UTC" firstStartedPulling="2026-02-02 11:22:26.474130329 +0000 UTC m=+2626.358323045" lastFinishedPulling="2026-02-02 11:22:26.982160091 +0000 UTC m=+2626.866352807" observedRunningTime="2026-02-02 11:22:27.546526945 +0000 UTC m=+2627.430719671" watchObservedRunningTime="2026-02-02 11:22:27.55053748 +0000 UTC m=+2627.434730196" Feb 02 11:22:31 crc kubenswrapper[4782]: I0202 11:22:31.822701 4782 scope.go:117] "RemoveContainer" containerID="5d4753fce570617e864276d34772208f83d3fd6766212b5ad5f002f122bc2ca9" Feb 02 11:22:31 crc kubenswrapper[4782]: E0202 11:22:31.823906 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:22:45 crc kubenswrapper[4782]: I0202 11:22:45.820946 4782 scope.go:117] "RemoveContainer" containerID="5d4753fce570617e864276d34772208f83d3fd6766212b5ad5f002f122bc2ca9" Feb 02 11:22:45 crc kubenswrapper[4782]: E0202 11:22:45.821885 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:22:59 crc kubenswrapper[4782]: I0202 11:22:59.821326 4782 
scope.go:117] "RemoveContainer" containerID="5d4753fce570617e864276d34772208f83d3fd6766212b5ad5f002f122bc2ca9" Feb 02 11:22:59 crc kubenswrapper[4782]: E0202 11:22:59.822088 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:23:13 crc kubenswrapper[4782]: I0202 11:23:13.821819 4782 scope.go:117] "RemoveContainer" containerID="5d4753fce570617e864276d34772208f83d3fd6766212b5ad5f002f122bc2ca9" Feb 02 11:23:13 crc kubenswrapper[4782]: E0202 11:23:13.822827 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:23:27 crc kubenswrapper[4782]: I0202 11:23:27.821962 4782 scope.go:117] "RemoveContainer" containerID="5d4753fce570617e864276d34772208f83d3fd6766212b5ad5f002f122bc2ca9" Feb 02 11:23:27 crc kubenswrapper[4782]: E0202 11:23:27.823040 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:23:38 crc kubenswrapper[4782]: I0202 11:23:38.822856 4782 scope.go:117] "RemoveContainer" containerID="5d4753fce570617e864276d34772208f83d3fd6766212b5ad5f002f122bc2ca9" Feb 02 11:23:38 crc kubenswrapper[4782]: E0202 11:23:38.823685 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:23:50 crc kubenswrapper[4782]: I0202 11:23:50.827971 4782 scope.go:117] "RemoveContainer" containerID="5d4753fce570617e864276d34772208f83d3fd6766212b5ad5f002f122bc2ca9" Feb 02 11:23:50 crc kubenswrapper[4782]: E0202 11:23:50.828784 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:24:05 crc kubenswrapper[4782]: I0202 11:24:05.821836 4782 scope.go:117] "RemoveContainer" containerID="5d4753fce570617e864276d34772208f83d3fd6766212b5ad5f002f122bc2ca9" Feb 02 11:24:06 crc kubenswrapper[4782]: I0202 11:24:06.332277 4782 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" event={"ID":"7919e98f-cc47-4f3c-9c53-6313850ea7b8","Type":"ContainerStarted","Data":"6f3d837b63dfbe34932b87b521d0696398b6ad3538c5af0b35f7849a712f00d7"} Feb 02 11:25:18 crc kubenswrapper[4782]: I0202 11:25:18.624144 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-gtqf4"] Feb 02 11:25:18 crc kubenswrapper[4782]: I0202 11:25:18.647146 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gtqf4" Feb 02 11:25:18 crc kubenswrapper[4782]: I0202 11:25:18.672621 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gtqf4"] Feb 02 11:25:18 crc kubenswrapper[4782]: I0202 11:25:18.730708 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85br2\" (UniqueName: \"kubernetes.io/projected/ade4aa13-eb8f-45b6-930c-278af990ff9f-kube-api-access-85br2\") pod \"redhat-operators-gtqf4\" (UID: \"ade4aa13-eb8f-45b6-930c-278af990ff9f\") " pod="openshift-marketplace/redhat-operators-gtqf4" Feb 02 11:25:18 crc kubenswrapper[4782]: I0202 11:25:18.731270 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ade4aa13-eb8f-45b6-930c-278af990ff9f-catalog-content\") pod \"redhat-operators-gtqf4\" (UID: \"ade4aa13-eb8f-45b6-930c-278af990ff9f\") " pod="openshift-marketplace/redhat-operators-gtqf4" Feb 02 11:25:18 crc kubenswrapper[4782]: I0202 11:25:18.731617 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ade4aa13-eb8f-45b6-930c-278af990ff9f-utilities\") pod \"redhat-operators-gtqf4\" (UID: \"ade4aa13-eb8f-45b6-930c-278af990ff9f\") " pod="openshift-marketplace/redhat-operators-gtqf4" Feb 02 11:25:18 crc kubenswrapper[4782]: I0202 11:25:18.833594 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-85br2\" (UniqueName: \"kubernetes.io/projected/ade4aa13-eb8f-45b6-930c-278af990ff9f-kube-api-access-85br2\") pod \"redhat-operators-gtqf4\" (UID: \"ade4aa13-eb8f-45b6-930c-278af990ff9f\") " pod="openshift-marketplace/redhat-operators-gtqf4" Feb 02 11:25:18 crc kubenswrapper[4782]: I0202 11:25:18.833662 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ade4aa13-eb8f-45b6-930c-278af990ff9f-catalog-content\") pod \"redhat-operators-gtqf4\" (UID: \"ade4aa13-eb8f-45b6-930c-278af990ff9f\") " pod="openshift-marketplace/redhat-operators-gtqf4" Feb 02 11:25:18 crc kubenswrapper[4782]: I0202 11:25:18.833724 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ade4aa13-eb8f-45b6-930c-278af990ff9f-utilities\") pod \"redhat-operators-gtqf4\" (UID: \"ade4aa13-eb8f-45b6-930c-278af990ff9f\") " pod="openshift-marketplace/redhat-operators-gtqf4" Feb 02 11:25:18 crc kubenswrapper[4782]: I0202 11:25:18.834453 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ade4aa13-eb8f-45b6-930c-278af990ff9f-catalog-content\") pod \"redhat-operators-gtqf4\" (UID: \"ade4aa13-eb8f-45b6-930c-278af990ff9f\") " 
pod="openshift-marketplace/redhat-operators-gtqf4" Feb 02 11:25:18 crc kubenswrapper[4782]: I0202 11:25:18.835132 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ade4aa13-eb8f-45b6-930c-278af990ff9f-utilities\") pod \"redhat-operators-gtqf4\" (UID: \"ade4aa13-eb8f-45b6-930c-278af990ff9f\") " pod="openshift-marketplace/redhat-operators-gtqf4" Feb 02 11:25:18 crc kubenswrapper[4782]: I0202 11:25:18.861239 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-85br2\" (UniqueName: \"kubernetes.io/projected/ade4aa13-eb8f-45b6-930c-278af990ff9f-kube-api-access-85br2\") pod \"redhat-operators-gtqf4\" (UID: \"ade4aa13-eb8f-45b6-930c-278af990ff9f\") " pod="openshift-marketplace/redhat-operators-gtqf4" Feb 02 11:25:18 crc kubenswrapper[4782]: I0202 11:25:18.977494 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gtqf4" Feb 02 11:25:19 crc kubenswrapper[4782]: I0202 11:25:19.663940 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gtqf4"] Feb 02 11:25:19 crc kubenswrapper[4782]: I0202 11:25:19.920206 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gtqf4" event={"ID":"ade4aa13-eb8f-45b6-930c-278af990ff9f","Type":"ContainerStarted","Data":"a9280494df3ae0214f5fa0eaabf1e19bb7063ddf8696aadacc84bd731eb37e75"} Feb 02 11:25:20 crc kubenswrapper[4782]: I0202 11:25:20.931075 4782 generic.go:334] "Generic (PLEG): container finished" podID="ade4aa13-eb8f-45b6-930c-278af990ff9f" containerID="e8419bd9d1c054f466de047d2998d37428fa9634412ae7f33a6a4b81906c092b" exitCode=0 Feb 02 11:25:20 crc kubenswrapper[4782]: I0202 11:25:20.931350 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gtqf4" event={"ID":"ade4aa13-eb8f-45b6-930c-278af990ff9f","Type":"ContainerDied","Data":"e8419bd9d1c054f466de047d2998d37428fa9634412ae7f33a6a4b81906c092b"} Feb 02 11:25:20 crc kubenswrapper[4782]: I0202 11:25:20.933833 4782 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 02 11:25:22 crc kubenswrapper[4782]: I0202 11:25:22.949874 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gtqf4" event={"ID":"ade4aa13-eb8f-45b6-930c-278af990ff9f","Type":"ContainerStarted","Data":"b58b6262363b4306a0f1923537252a553b36dc1af9639e39a5b9603cd1bb7bbe"} Feb 02 11:25:27 crc kubenswrapper[4782]: I0202 11:25:27.995143 4782 generic.go:334] "Generic (PLEG): container finished" podID="ade4aa13-eb8f-45b6-930c-278af990ff9f" containerID="b58b6262363b4306a0f1923537252a553b36dc1af9639e39a5b9603cd1bb7bbe" exitCode=0 Feb 02 11:25:27 crc kubenswrapper[4782]: I0202 11:25:27.995559 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gtqf4" event={"ID":"ade4aa13-eb8f-45b6-930c-278af990ff9f","Type":"ContainerDied","Data":"b58b6262363b4306a0f1923537252a553b36dc1af9639e39a5b9603cd1bb7bbe"} Feb 02 11:25:29 crc kubenswrapper[4782]: I0202 11:25:29.009787 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gtqf4" event={"ID":"ade4aa13-eb8f-45b6-930c-278af990ff9f","Type":"ContainerStarted","Data":"34b7fcba73eae3cb7411e94f6e79fe50a98149b76b61bae1c750b0fef6e8240e"} Feb 02 11:25:29 crc kubenswrapper[4782]: I0202 11:25:29.032225 4782 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-gtqf4" podStartSLOduration=3.255243515 podStartE2EDuration="11.032204447s" podCreationTimestamp="2026-02-02 11:25:18 +0000 UTC" firstStartedPulling="2026-02-02 11:25:20.933608457 +0000 UTC m=+2800.817801173" lastFinishedPulling="2026-02-02 11:25:28.710569389 +0000 UTC m=+2808.594762105" observedRunningTime="2026-02-02 11:25:29.0292086 +0000 UTC m=+2808.913401316" watchObservedRunningTime="2026-02-02 11:25:29.032204447 +0000 UTC m=+2808.916397163" Feb 02 11:25:38 crc kubenswrapper[4782]: I0202 11:25:38.978524 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-gtqf4" Feb 02 11:25:38 crc kubenswrapper[4782]: I0202 11:25:38.978967 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-gtqf4" Feb 02 11:25:39 crc kubenswrapper[4782]: I0202 11:25:39.024390 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-gtqf4" Feb 02 11:25:39 crc kubenswrapper[4782]: I0202 11:25:39.145260 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-gtqf4" Feb 02 11:25:39 crc kubenswrapper[4782]: I0202 11:25:39.261148 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gtqf4"] Feb 02 11:25:41 crc kubenswrapper[4782]: I0202 11:25:41.111154 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-gtqf4" podUID="ade4aa13-eb8f-45b6-930c-278af990ff9f" containerName="registry-server" containerID="cri-o://34b7fcba73eae3cb7411e94f6e79fe50a98149b76b61bae1c750b0fef6e8240e" gracePeriod=2 Feb 02 11:25:41 crc kubenswrapper[4782]: I0202 11:25:41.619768 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gtqf4" Feb 02 11:25:41 crc kubenswrapper[4782]: I0202 11:25:41.712533 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-85br2\" (UniqueName: \"kubernetes.io/projected/ade4aa13-eb8f-45b6-930c-278af990ff9f-kube-api-access-85br2\") pod \"ade4aa13-eb8f-45b6-930c-278af990ff9f\" (UID: \"ade4aa13-eb8f-45b6-930c-278af990ff9f\") " Feb 02 11:25:41 crc kubenswrapper[4782]: I0202 11:25:41.712633 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ade4aa13-eb8f-45b6-930c-278af990ff9f-catalog-content\") pod \"ade4aa13-eb8f-45b6-930c-278af990ff9f\" (UID: \"ade4aa13-eb8f-45b6-930c-278af990ff9f\") " Feb 02 11:25:41 crc kubenswrapper[4782]: I0202 11:25:41.712703 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ade4aa13-eb8f-45b6-930c-278af990ff9f-utilities\") pod \"ade4aa13-eb8f-45b6-930c-278af990ff9f\" (UID: \"ade4aa13-eb8f-45b6-930c-278af990ff9f\") " Feb 02 11:25:41 crc kubenswrapper[4782]: I0202 11:25:41.713498 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ade4aa13-eb8f-45b6-930c-278af990ff9f-utilities" (OuterVolumeSpecName: "utilities") pod "ade4aa13-eb8f-45b6-930c-278af990ff9f" (UID: "ade4aa13-eb8f-45b6-930c-278af990ff9f"). InnerVolumeSpecName "utilities". 
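Note: the "SyncLoop DELETE" followed by "Killing container with a grace period" with gracePeriod=2 above is an API-side deletion reaching the kubelet. One way such a delete can be issued with client-go (illustrative only: the namespace and pod name are from the log, the kubeconfig path is a placeholder, and whether the grace period came from the delete request or the pod spec is not visible in this log):

    package main

    import (
        "context"
        "log"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // assumed path
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        grace := int64(2) // matches the gracePeriod=2 the kubelet logged
        err = cs.CoreV1().Pods("openshift-marketplace").Delete(context.TODO(),
            "redhat-operators-gtqf4", metav1.DeleteOptions{GracePeriodSeconds: &grace})
        if err != nil {
            log.Fatal(err)
        }
    }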
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:25:41 crc kubenswrapper[4782]: I0202 11:25:41.720855 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ade4aa13-eb8f-45b6-930c-278af990ff9f-kube-api-access-85br2" (OuterVolumeSpecName: "kube-api-access-85br2") pod "ade4aa13-eb8f-45b6-930c-278af990ff9f" (UID: "ade4aa13-eb8f-45b6-930c-278af990ff9f"). InnerVolumeSpecName "kube-api-access-85br2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:25:41 crc kubenswrapper[4782]: I0202 11:25:41.814367 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-85br2\" (UniqueName: \"kubernetes.io/projected/ade4aa13-eb8f-45b6-930c-278af990ff9f-kube-api-access-85br2\") on node \"crc\" DevicePath \"\"" Feb 02 11:25:41 crc kubenswrapper[4782]: I0202 11:25:41.814399 4782 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ade4aa13-eb8f-45b6-930c-278af990ff9f-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 11:25:41 crc kubenswrapper[4782]: I0202 11:25:41.834131 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ade4aa13-eb8f-45b6-930c-278af990ff9f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ade4aa13-eb8f-45b6-930c-278af990ff9f" (UID: "ade4aa13-eb8f-45b6-930c-278af990ff9f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:25:41 crc kubenswrapper[4782]: I0202 11:25:41.916451 4782 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ade4aa13-eb8f-45b6-930c-278af990ff9f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 11:25:42 crc kubenswrapper[4782]: I0202 11:25:42.122876 4782 generic.go:334] "Generic (PLEG): container finished" podID="ade4aa13-eb8f-45b6-930c-278af990ff9f" containerID="34b7fcba73eae3cb7411e94f6e79fe50a98149b76b61bae1c750b0fef6e8240e" exitCode=0 Feb 02 11:25:42 crc kubenswrapper[4782]: I0202 11:25:42.122983 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gtqf4" event={"ID":"ade4aa13-eb8f-45b6-930c-278af990ff9f","Type":"ContainerDied","Data":"34b7fcba73eae3cb7411e94f6e79fe50a98149b76b61bae1c750b0fef6e8240e"} Feb 02 11:25:42 crc kubenswrapper[4782]: I0202 11:25:42.123079 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-gtqf4" Feb 02 11:25:42 crc kubenswrapper[4782]: I0202 11:25:42.123307 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gtqf4" event={"ID":"ade4aa13-eb8f-45b6-930c-278af990ff9f","Type":"ContainerDied","Data":"a9280494df3ae0214f5fa0eaabf1e19bb7063ddf8696aadacc84bd731eb37e75"} Feb 02 11:25:42 crc kubenswrapper[4782]: I0202 11:25:42.123336 4782 scope.go:117] "RemoveContainer" containerID="34b7fcba73eae3cb7411e94f6e79fe50a98149b76b61bae1c750b0fef6e8240e" Feb 02 11:25:42 crc kubenswrapper[4782]: I0202 11:25:42.149316 4782 scope.go:117] "RemoveContainer" containerID="b58b6262363b4306a0f1923537252a553b36dc1af9639e39a5b9603cd1bb7bbe" Feb 02 11:25:42 crc kubenswrapper[4782]: I0202 11:25:42.180337 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gtqf4"] Feb 02 11:25:42 crc kubenswrapper[4782]: I0202 11:25:42.203302 4782 scope.go:117] "RemoveContainer" containerID="e8419bd9d1c054f466de047d2998d37428fa9634412ae7f33a6a4b81906c092b" Feb 02 11:25:42 crc kubenswrapper[4782]: I0202 11:25:42.223453 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-gtqf4"] Feb 02 11:25:42 crc kubenswrapper[4782]: I0202 11:25:42.254296 4782 scope.go:117] "RemoveContainer" containerID="34b7fcba73eae3cb7411e94f6e79fe50a98149b76b61bae1c750b0fef6e8240e" Feb 02 11:25:42 crc kubenswrapper[4782]: E0202 11:25:42.254754 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"34b7fcba73eae3cb7411e94f6e79fe50a98149b76b61bae1c750b0fef6e8240e\": container with ID starting with 34b7fcba73eae3cb7411e94f6e79fe50a98149b76b61bae1c750b0fef6e8240e not found: ID does not exist" containerID="34b7fcba73eae3cb7411e94f6e79fe50a98149b76b61bae1c750b0fef6e8240e" Feb 02 11:25:42 crc kubenswrapper[4782]: I0202 11:25:42.254794 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34b7fcba73eae3cb7411e94f6e79fe50a98149b76b61bae1c750b0fef6e8240e"} err="failed to get container status \"34b7fcba73eae3cb7411e94f6e79fe50a98149b76b61bae1c750b0fef6e8240e\": rpc error: code = NotFound desc = could not find container \"34b7fcba73eae3cb7411e94f6e79fe50a98149b76b61bae1c750b0fef6e8240e\": container with ID starting with 34b7fcba73eae3cb7411e94f6e79fe50a98149b76b61bae1c750b0fef6e8240e not found: ID does not exist" Feb 02 11:25:42 crc kubenswrapper[4782]: I0202 11:25:42.254822 4782 scope.go:117] "RemoveContainer" containerID="b58b6262363b4306a0f1923537252a553b36dc1af9639e39a5b9603cd1bb7bbe" Feb 02 11:25:42 crc kubenswrapper[4782]: E0202 11:25:42.255089 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b58b6262363b4306a0f1923537252a553b36dc1af9639e39a5b9603cd1bb7bbe\": container with ID starting with b58b6262363b4306a0f1923537252a553b36dc1af9639e39a5b9603cd1bb7bbe not found: ID does not exist" containerID="b58b6262363b4306a0f1923537252a553b36dc1af9639e39a5b9603cd1bb7bbe" Feb 02 11:25:42 crc kubenswrapper[4782]: I0202 11:25:42.255115 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b58b6262363b4306a0f1923537252a553b36dc1af9639e39a5b9603cd1bb7bbe"} err="failed to get container status \"b58b6262363b4306a0f1923537252a553b36dc1af9639e39a5b9603cd1bb7bbe\": rpc error: code = NotFound desc = could not find container 
\"b58b6262363b4306a0f1923537252a553b36dc1af9639e39a5b9603cd1bb7bbe\": container with ID starting with b58b6262363b4306a0f1923537252a553b36dc1af9639e39a5b9603cd1bb7bbe not found: ID does not exist" Feb 02 11:25:42 crc kubenswrapper[4782]: I0202 11:25:42.255134 4782 scope.go:117] "RemoveContainer" containerID="e8419bd9d1c054f466de047d2998d37428fa9634412ae7f33a6a4b81906c092b" Feb 02 11:25:42 crc kubenswrapper[4782]: E0202 11:25:42.255411 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e8419bd9d1c054f466de047d2998d37428fa9634412ae7f33a6a4b81906c092b\": container with ID starting with e8419bd9d1c054f466de047d2998d37428fa9634412ae7f33a6a4b81906c092b not found: ID does not exist" containerID="e8419bd9d1c054f466de047d2998d37428fa9634412ae7f33a6a4b81906c092b" Feb 02 11:25:42 crc kubenswrapper[4782]: I0202 11:25:42.255439 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e8419bd9d1c054f466de047d2998d37428fa9634412ae7f33a6a4b81906c092b"} err="failed to get container status \"e8419bd9d1c054f466de047d2998d37428fa9634412ae7f33a6a4b81906c092b\": rpc error: code = NotFound desc = could not find container \"e8419bd9d1c054f466de047d2998d37428fa9634412ae7f33a6a4b81906c092b\": container with ID starting with e8419bd9d1c054f466de047d2998d37428fa9634412ae7f33a6a4b81906c092b not found: ID does not exist" Feb 02 11:25:42 crc kubenswrapper[4782]: I0202 11:25:42.831944 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ade4aa13-eb8f-45b6-930c-278af990ff9f" path="/var/lib/kubelet/pods/ade4aa13-eb8f-45b6-930c-278af990ff9f/volumes" Feb 02 11:26:22 crc kubenswrapper[4782]: I0202 11:26:22.950957 4782 patch_prober.go:28] interesting pod/machine-config-daemon-bhdgk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 11:26:22 crc kubenswrapper[4782]: I0202 11:26:22.951553 4782 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 11:26:33 crc kubenswrapper[4782]: I0202 11:26:33.180433 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-z42rt"] Feb 02 11:26:33 crc kubenswrapper[4782]: E0202 11:26:33.181226 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ade4aa13-eb8f-45b6-930c-278af990ff9f" containerName="extract-content" Feb 02 11:26:33 crc kubenswrapper[4782]: I0202 11:26:33.181240 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="ade4aa13-eb8f-45b6-930c-278af990ff9f" containerName="extract-content" Feb 02 11:26:33 crc kubenswrapper[4782]: E0202 11:26:33.181276 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ade4aa13-eb8f-45b6-930c-278af990ff9f" containerName="extract-utilities" Feb 02 11:26:33 crc kubenswrapper[4782]: I0202 11:26:33.181283 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="ade4aa13-eb8f-45b6-930c-278af990ff9f" containerName="extract-utilities" Feb 02 11:26:33 crc kubenswrapper[4782]: E0202 11:26:33.181306 4782 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="ade4aa13-eb8f-45b6-930c-278af990ff9f" containerName="registry-server" Feb 02 11:26:33 crc kubenswrapper[4782]: I0202 11:26:33.181313 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="ade4aa13-eb8f-45b6-930c-278af990ff9f" containerName="registry-server" Feb 02 11:26:33 crc kubenswrapper[4782]: I0202 11:26:33.181499 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="ade4aa13-eb8f-45b6-930c-278af990ff9f" containerName="registry-server" Feb 02 11:26:33 crc kubenswrapper[4782]: I0202 11:26:33.182688 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-z42rt" Feb 02 11:26:33 crc kubenswrapper[4782]: I0202 11:26:33.198037 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-z42rt"] Feb 02 11:26:33 crc kubenswrapper[4782]: I0202 11:26:33.217550 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42224916-385d-4dd6-96c5-3e4080fac20e-utilities\") pod \"certified-operators-z42rt\" (UID: \"42224916-385d-4dd6-96c5-3e4080fac20e\") " pod="openshift-marketplace/certified-operators-z42rt" Feb 02 11:26:33 crc kubenswrapper[4782]: I0202 11:26:33.217608 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mfc8\" (UniqueName: \"kubernetes.io/projected/42224916-385d-4dd6-96c5-3e4080fac20e-kube-api-access-5mfc8\") pod \"certified-operators-z42rt\" (UID: \"42224916-385d-4dd6-96c5-3e4080fac20e\") " pod="openshift-marketplace/certified-operators-z42rt" Feb 02 11:26:33 crc kubenswrapper[4782]: I0202 11:26:33.217692 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42224916-385d-4dd6-96c5-3e4080fac20e-catalog-content\") pod \"certified-operators-z42rt\" (UID: \"42224916-385d-4dd6-96c5-3e4080fac20e\") " pod="openshift-marketplace/certified-operators-z42rt" Feb 02 11:26:33 crc kubenswrapper[4782]: I0202 11:26:33.319264 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42224916-385d-4dd6-96c5-3e4080fac20e-utilities\") pod \"certified-operators-z42rt\" (UID: \"42224916-385d-4dd6-96c5-3e4080fac20e\") " pod="openshift-marketplace/certified-operators-z42rt" Feb 02 11:26:33 crc kubenswrapper[4782]: I0202 11:26:33.319325 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5mfc8\" (UniqueName: \"kubernetes.io/projected/42224916-385d-4dd6-96c5-3e4080fac20e-kube-api-access-5mfc8\") pod \"certified-operators-z42rt\" (UID: \"42224916-385d-4dd6-96c5-3e4080fac20e\") " pod="openshift-marketplace/certified-operators-z42rt" Feb 02 11:26:33 crc kubenswrapper[4782]: I0202 11:26:33.319361 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42224916-385d-4dd6-96c5-3e4080fac20e-catalog-content\") pod \"certified-operators-z42rt\" (UID: \"42224916-385d-4dd6-96c5-3e4080fac20e\") " pod="openshift-marketplace/certified-operators-z42rt" Feb 02 11:26:33 crc kubenswrapper[4782]: I0202 11:26:33.319876 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42224916-385d-4dd6-96c5-3e4080fac20e-utilities\") pod \"certified-operators-z42rt\" 
(UID: \"42224916-385d-4dd6-96c5-3e4080fac20e\") " pod="openshift-marketplace/certified-operators-z42rt" Feb 02 11:26:33 crc kubenswrapper[4782]: I0202 11:26:33.319940 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42224916-385d-4dd6-96c5-3e4080fac20e-catalog-content\") pod \"certified-operators-z42rt\" (UID: \"42224916-385d-4dd6-96c5-3e4080fac20e\") " pod="openshift-marketplace/certified-operators-z42rt" Feb 02 11:26:33 crc kubenswrapper[4782]: I0202 11:26:33.340553 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5mfc8\" (UniqueName: \"kubernetes.io/projected/42224916-385d-4dd6-96c5-3e4080fac20e-kube-api-access-5mfc8\") pod \"certified-operators-z42rt\" (UID: \"42224916-385d-4dd6-96c5-3e4080fac20e\") " pod="openshift-marketplace/certified-operators-z42rt" Feb 02 11:26:33 crc kubenswrapper[4782]: I0202 11:26:33.511748 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-z42rt" Feb 02 11:26:34 crc kubenswrapper[4782]: I0202 11:26:34.133597 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-z42rt"] Feb 02 11:26:34 crc kubenswrapper[4782]: I0202 11:26:34.554093 4782 generic.go:334] "Generic (PLEG): container finished" podID="42224916-385d-4dd6-96c5-3e4080fac20e" containerID="426eba81cfc0dff55d0347ca53143b06fcf982c4e9f1d0fa63c91b08967c7fe1" exitCode=0 Feb 02 11:26:34 crc kubenswrapper[4782]: I0202 11:26:34.554153 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z42rt" event={"ID":"42224916-385d-4dd6-96c5-3e4080fac20e","Type":"ContainerDied","Data":"426eba81cfc0dff55d0347ca53143b06fcf982c4e9f1d0fa63c91b08967c7fe1"} Feb 02 11:26:34 crc kubenswrapper[4782]: I0202 11:26:34.554187 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z42rt" event={"ID":"42224916-385d-4dd6-96c5-3e4080fac20e","Type":"ContainerStarted","Data":"cee203ab926e18b0e2174d06f18657c8b61e9bf7093328be556e238242433733"} Feb 02 11:26:35 crc kubenswrapper[4782]: I0202 11:26:35.564826 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z42rt" event={"ID":"42224916-385d-4dd6-96c5-3e4080fac20e","Type":"ContainerStarted","Data":"58aa8b1cdb7883c7e3b833e5ff99ea1a8ffadf4a95ded90b860b6e04f039585d"} Feb 02 11:26:36 crc kubenswrapper[4782]: I0202 11:26:36.577470 4782 generic.go:334] "Generic (PLEG): container finished" podID="9b66a766-dc87-45dd-a611-d9a30c3f327e" containerID="4941491cf1df1bc7280b824efa5aa4ba9575dbca5ae4407e9126b0211ca2c981" exitCode=0 Feb 02 11:26:36 crc kubenswrapper[4782]: I0202 11:26:36.577552 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fjczj" event={"ID":"9b66a766-dc87-45dd-a611-d9a30c3f327e","Type":"ContainerDied","Data":"4941491cf1df1bc7280b824efa5aa4ba9575dbca5ae4407e9126b0211ca2c981"} Feb 02 11:26:36 crc kubenswrapper[4782]: I0202 11:26:36.583981 4782 generic.go:334] "Generic (PLEG): container finished" podID="42224916-385d-4dd6-96c5-3e4080fac20e" containerID="58aa8b1cdb7883c7e3b833e5ff99ea1a8ffadf4a95ded90b860b6e04f039585d" exitCode=0 Feb 02 11:26:36 crc kubenswrapper[4782]: I0202 11:26:36.584031 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z42rt" 
event={"ID":"42224916-385d-4dd6-96c5-3e4080fac20e","Type":"ContainerDied","Data":"58aa8b1cdb7883c7e3b833e5ff99ea1a8ffadf4a95ded90b860b6e04f039585d"} Feb 02 11:26:37 crc kubenswrapper[4782]: I0202 11:26:37.594823 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z42rt" event={"ID":"42224916-385d-4dd6-96c5-3e4080fac20e","Type":"ContainerStarted","Data":"75431271236e6ffdb11c3584bc2cf6fd69f18b82c6905aee5306f7add31cc777"} Feb 02 11:26:37 crc kubenswrapper[4782]: I0202 11:26:37.616828 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-z42rt" podStartSLOduration=2.185732068 podStartE2EDuration="4.616809827s" podCreationTimestamp="2026-02-02 11:26:33 +0000 UTC" firstStartedPulling="2026-02-02 11:26:34.555609242 +0000 UTC m=+2874.439801958" lastFinishedPulling="2026-02-02 11:26:36.986687001 +0000 UTC m=+2876.870879717" observedRunningTime="2026-02-02 11:26:37.615012675 +0000 UTC m=+2877.499205391" watchObservedRunningTime="2026-02-02 11:26:37.616809827 +0000 UTC m=+2877.501002543" Feb 02 11:26:37 crc kubenswrapper[4782]: I0202 11:26:37.952682 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fjczj" Feb 02 11:26:38 crc kubenswrapper[4782]: I0202 11:26:38.110423 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9b66a766-dc87-45dd-a611-d9a30c3f327e-inventory\") pod \"9b66a766-dc87-45dd-a611-d9a30c3f327e\" (UID: \"9b66a766-dc87-45dd-a611-d9a30c3f327e\") " Feb 02 11:26:38 crc kubenswrapper[4782]: I0202 11:26:38.110493 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5b7cv\" (UniqueName: \"kubernetes.io/projected/9b66a766-dc87-45dd-a611-d9a30c3f327e-kube-api-access-5b7cv\") pod \"9b66a766-dc87-45dd-a611-d9a30c3f327e\" (UID: \"9b66a766-dc87-45dd-a611-d9a30c3f327e\") " Feb 02 11:26:38 crc kubenswrapper[4782]: I0202 11:26:38.110524 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b66a766-dc87-45dd-a611-d9a30c3f327e-libvirt-combined-ca-bundle\") pod \"9b66a766-dc87-45dd-a611-d9a30c3f327e\" (UID: \"9b66a766-dc87-45dd-a611-d9a30c3f327e\") " Feb 02 11:26:38 crc kubenswrapper[4782]: I0202 11:26:38.110672 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9b66a766-dc87-45dd-a611-d9a30c3f327e-ceph\") pod \"9b66a766-dc87-45dd-a611-d9a30c3f327e\" (UID: \"9b66a766-dc87-45dd-a611-d9a30c3f327e\") " Feb 02 11:26:38 crc kubenswrapper[4782]: I0202 11:26:38.110699 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/9b66a766-dc87-45dd-a611-d9a30c3f327e-libvirt-secret-0\") pod \"9b66a766-dc87-45dd-a611-d9a30c3f327e\" (UID: \"9b66a766-dc87-45dd-a611-d9a30c3f327e\") " Feb 02 11:26:38 crc kubenswrapper[4782]: I0202 11:26:38.110745 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9b66a766-dc87-45dd-a611-d9a30c3f327e-ssh-key-openstack-edpm-ipam\") pod \"9b66a766-dc87-45dd-a611-d9a30c3f327e\" (UID: \"9b66a766-dc87-45dd-a611-d9a30c3f327e\") " Feb 02 11:26:38 crc kubenswrapper[4782]: I0202 11:26:38.116352 4782 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b66a766-dc87-45dd-a611-d9a30c3f327e-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "9b66a766-dc87-45dd-a611-d9a30c3f327e" (UID: "9b66a766-dc87-45dd-a611-d9a30c3f327e"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:26:38 crc kubenswrapper[4782]: I0202 11:26:38.116558 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b66a766-dc87-45dd-a611-d9a30c3f327e-ceph" (OuterVolumeSpecName: "ceph") pod "9b66a766-dc87-45dd-a611-d9a30c3f327e" (UID: "9b66a766-dc87-45dd-a611-d9a30c3f327e"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:26:38 crc kubenswrapper[4782]: I0202 11:26:38.123974 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b66a766-dc87-45dd-a611-d9a30c3f327e-kube-api-access-5b7cv" (OuterVolumeSpecName: "kube-api-access-5b7cv") pod "9b66a766-dc87-45dd-a611-d9a30c3f327e" (UID: "9b66a766-dc87-45dd-a611-d9a30c3f327e"). InnerVolumeSpecName "kube-api-access-5b7cv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:26:38 crc kubenswrapper[4782]: I0202 11:26:38.137350 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b66a766-dc87-45dd-a611-d9a30c3f327e-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "9b66a766-dc87-45dd-a611-d9a30c3f327e" (UID: "9b66a766-dc87-45dd-a611-d9a30c3f327e"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:26:38 crc kubenswrapper[4782]: I0202 11:26:38.140434 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b66a766-dc87-45dd-a611-d9a30c3f327e-inventory" (OuterVolumeSpecName: "inventory") pod "9b66a766-dc87-45dd-a611-d9a30c3f327e" (UID: "9b66a766-dc87-45dd-a611-d9a30c3f327e"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:26:38 crc kubenswrapper[4782]: I0202 11:26:38.141329 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b66a766-dc87-45dd-a611-d9a30c3f327e-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "9b66a766-dc87-45dd-a611-d9a30c3f327e" (UID: "9b66a766-dc87-45dd-a611-d9a30c3f327e"). InnerVolumeSpecName "libvirt-secret-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:26:38 crc kubenswrapper[4782]: I0202 11:26:38.213211 4782 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9b66a766-dc87-45dd-a611-d9a30c3f327e-ceph\") on node \"crc\" DevicePath \"\"" Feb 02 11:26:38 crc kubenswrapper[4782]: I0202 11:26:38.213247 4782 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/9b66a766-dc87-45dd-a611-d9a30c3f327e-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Feb 02 11:26:38 crc kubenswrapper[4782]: I0202 11:26:38.213258 4782 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9b66a766-dc87-45dd-a611-d9a30c3f327e-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 02 11:26:38 crc kubenswrapper[4782]: I0202 11:26:38.213267 4782 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9b66a766-dc87-45dd-a611-d9a30c3f327e-inventory\") on node \"crc\" DevicePath \"\"" Feb 02 11:26:38 crc kubenswrapper[4782]: I0202 11:26:38.213278 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5b7cv\" (UniqueName: \"kubernetes.io/projected/9b66a766-dc87-45dd-a611-d9a30c3f327e-kube-api-access-5b7cv\") on node \"crc\" DevicePath \"\"" Feb 02 11:26:38 crc kubenswrapper[4782]: I0202 11:26:38.213286 4782 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b66a766-dc87-45dd-a611-d9a30c3f327e-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 11:26:38 crc kubenswrapper[4782]: I0202 11:26:38.603519 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fjczj" Feb 02 11:26:38 crc kubenswrapper[4782]: I0202 11:26:38.604060 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fjczj" event={"ID":"9b66a766-dc87-45dd-a611-d9a30c3f327e","Type":"ContainerDied","Data":"8ed3893ae70fd1cc2806d2c6231e36eed31c1d7012844cba421ea45199589821"} Feb 02 11:26:38 crc kubenswrapper[4782]: I0202 11:26:38.604090 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8ed3893ae70fd1cc2806d2c6231e36eed31c1d7012844cba421ea45199589821" Feb 02 11:26:38 crc kubenswrapper[4782]: I0202 11:26:38.748508 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-8gbpp"] Feb 02 11:26:38 crc kubenswrapper[4782]: E0202 11:26:38.749164 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b66a766-dc87-45dd-a611-d9a30c3f327e" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 02 11:26:38 crc kubenswrapper[4782]: I0202 11:26:38.749253 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b66a766-dc87-45dd-a611-d9a30c3f327e" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 02 11:26:38 crc kubenswrapper[4782]: I0202 11:26:38.749491 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b66a766-dc87-45dd-a611-d9a30c3f327e" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 02 11:26:38 crc kubenswrapper[4782]: I0202 11:26:38.750147 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-8gbpp" Feb 02 11:26:38 crc kubenswrapper[4782]: I0202 11:26:38.755963 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jhgxt" Feb 02 11:26:38 crc kubenswrapper[4782]: I0202 11:26:38.756434 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Feb 02 11:26:38 crc kubenswrapper[4782]: I0202 11:26:38.757627 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Feb 02 11:26:38 crc kubenswrapper[4782]: I0202 11:26:38.759257 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 02 11:26:38 crc kubenswrapper[4782]: I0202 11:26:38.759598 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 02 11:26:38 crc kubenswrapper[4782]: I0202 11:26:38.759852 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 02 11:26:38 crc kubenswrapper[4782]: I0202 11:26:38.760006 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ceph-nova" Feb 02 11:26:38 crc kubenswrapper[4782]: I0202 11:26:38.760142 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Feb 02 11:26:38 crc kubenswrapper[4782]: I0202 11:26:38.766797 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Feb 02 11:26:38 crc kubenswrapper[4782]: I0202 11:26:38.805561 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-8gbpp"] Feb 02 11:26:38 crc kubenswrapper[4782]: I0202 11:26:38.836286 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc15a3e1-ea96-499f-a268-b633c15ec75b-nova-custom-ceph-combined-ca-bundle\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-8gbpp\" (UID: \"dc15a3e1-ea96-499f-a268-b633c15ec75b\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-8gbpp" Feb 02 11:26:38 crc kubenswrapper[4782]: I0202 11:26:38.836352 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/dc15a3e1-ea96-499f-a268-b633c15ec75b-ceph\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-8gbpp\" (UID: \"dc15a3e1-ea96-499f-a268-b633c15ec75b\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-8gbpp" Feb 02 11:26:38 crc kubenswrapper[4782]: I0202 11:26:38.836416 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvnkc\" (UniqueName: \"kubernetes.io/projected/dc15a3e1-ea96-499f-a268-b633c15ec75b-kube-api-access-rvnkc\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-8gbpp\" (UID: \"dc15a3e1-ea96-499f-a268-b633c15ec75b\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-8gbpp" Feb 02 11:26:38 crc kubenswrapper[4782]: I0202 11:26:38.836482 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: 
\"kubernetes.io/secret/dc15a3e1-ea96-499f-a268-b633c15ec75b-nova-migration-ssh-key-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-8gbpp\" (UID: \"dc15a3e1-ea96-499f-a268-b633c15ec75b\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-8gbpp" Feb 02 11:26:38 crc kubenswrapper[4782]: I0202 11:26:38.836520 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dc15a3e1-ea96-499f-a268-b633c15ec75b-inventory\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-8gbpp\" (UID: \"dc15a3e1-ea96-499f-a268-b633c15ec75b\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-8gbpp" Feb 02 11:26:38 crc kubenswrapper[4782]: I0202 11:26:38.836548 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/dc15a3e1-ea96-499f-a268-b633c15ec75b-ceph-nova-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-8gbpp\" (UID: \"dc15a3e1-ea96-499f-a268-b633c15ec75b\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-8gbpp" Feb 02 11:26:38 crc kubenswrapper[4782]: I0202 11:26:38.836588 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/dc15a3e1-ea96-499f-a268-b633c15ec75b-nova-cell1-compute-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-8gbpp\" (UID: \"dc15a3e1-ea96-499f-a268-b633c15ec75b\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-8gbpp" Feb 02 11:26:38 crc kubenswrapper[4782]: I0202 11:26:38.836617 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/dc15a3e1-ea96-499f-a268-b633c15ec75b-nova-cell1-compute-config-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-8gbpp\" (UID: \"dc15a3e1-ea96-499f-a268-b633c15ec75b\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-8gbpp" Feb 02 11:26:38 crc kubenswrapper[4782]: I0202 11:26:38.836718 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/dc15a3e1-ea96-499f-a268-b633c15ec75b-nova-migration-ssh-key-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-8gbpp\" (UID: \"dc15a3e1-ea96-499f-a268-b633c15ec75b\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-8gbpp" Feb 02 11:26:38 crc kubenswrapper[4782]: I0202 11:26:38.836784 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/dc15a3e1-ea96-499f-a268-b633c15ec75b-nova-extra-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-8gbpp\" (UID: \"dc15a3e1-ea96-499f-a268-b633c15ec75b\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-8gbpp" Feb 02 11:26:38 crc kubenswrapper[4782]: I0202 11:26:38.836801 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dc15a3e1-ea96-499f-a268-b633c15ec75b-ssh-key-openstack-edpm-ipam\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-8gbpp\" (UID: 
\"dc15a3e1-ea96-499f-a268-b633c15ec75b\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-8gbpp" Feb 02 11:26:38 crc kubenswrapper[4782]: I0202 11:26:38.938364 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/dc15a3e1-ea96-499f-a268-b633c15ec75b-nova-extra-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-8gbpp\" (UID: \"dc15a3e1-ea96-499f-a268-b633c15ec75b\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-8gbpp" Feb 02 11:26:38 crc kubenswrapper[4782]: I0202 11:26:38.938410 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dc15a3e1-ea96-499f-a268-b633c15ec75b-ssh-key-openstack-edpm-ipam\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-8gbpp\" (UID: \"dc15a3e1-ea96-499f-a268-b633c15ec75b\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-8gbpp" Feb 02 11:26:38 crc kubenswrapper[4782]: I0202 11:26:38.938473 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc15a3e1-ea96-499f-a268-b633c15ec75b-nova-custom-ceph-combined-ca-bundle\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-8gbpp\" (UID: \"dc15a3e1-ea96-499f-a268-b633c15ec75b\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-8gbpp" Feb 02 11:26:38 crc kubenswrapper[4782]: I0202 11:26:38.938491 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/dc15a3e1-ea96-499f-a268-b633c15ec75b-ceph\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-8gbpp\" (UID: \"dc15a3e1-ea96-499f-a268-b633c15ec75b\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-8gbpp" Feb 02 11:26:38 crc kubenswrapper[4782]: I0202 11:26:38.938524 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rvnkc\" (UniqueName: \"kubernetes.io/projected/dc15a3e1-ea96-499f-a268-b633c15ec75b-kube-api-access-rvnkc\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-8gbpp\" (UID: \"dc15a3e1-ea96-499f-a268-b633c15ec75b\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-8gbpp" Feb 02 11:26:38 crc kubenswrapper[4782]: I0202 11:26:38.938566 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/dc15a3e1-ea96-499f-a268-b633c15ec75b-nova-migration-ssh-key-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-8gbpp\" (UID: \"dc15a3e1-ea96-499f-a268-b633c15ec75b\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-8gbpp" Feb 02 11:26:38 crc kubenswrapper[4782]: I0202 11:26:38.938591 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dc15a3e1-ea96-499f-a268-b633c15ec75b-inventory\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-8gbpp\" (UID: \"dc15a3e1-ea96-499f-a268-b633c15ec75b\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-8gbpp" Feb 02 11:26:38 crc kubenswrapper[4782]: I0202 11:26:38.938613 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph-nova-0\" (UniqueName: 
\"kubernetes.io/configmap/dc15a3e1-ea96-499f-a268-b633c15ec75b-ceph-nova-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-8gbpp\" (UID: \"dc15a3e1-ea96-499f-a268-b633c15ec75b\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-8gbpp" Feb 02 11:26:38 crc kubenswrapper[4782]: I0202 11:26:38.938668 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/dc15a3e1-ea96-499f-a268-b633c15ec75b-nova-cell1-compute-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-8gbpp\" (UID: \"dc15a3e1-ea96-499f-a268-b633c15ec75b\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-8gbpp" Feb 02 11:26:38 crc kubenswrapper[4782]: I0202 11:26:38.938726 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/dc15a3e1-ea96-499f-a268-b633c15ec75b-nova-cell1-compute-config-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-8gbpp\" (UID: \"dc15a3e1-ea96-499f-a268-b633c15ec75b\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-8gbpp" Feb 02 11:26:38 crc kubenswrapper[4782]: I0202 11:26:38.938748 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/dc15a3e1-ea96-499f-a268-b633c15ec75b-nova-migration-ssh-key-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-8gbpp\" (UID: \"dc15a3e1-ea96-499f-a268-b633c15ec75b\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-8gbpp" Feb 02 11:26:38 crc kubenswrapper[4782]: I0202 11:26:38.939409 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/dc15a3e1-ea96-499f-a268-b633c15ec75b-nova-extra-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-8gbpp\" (UID: \"dc15a3e1-ea96-499f-a268-b633c15ec75b\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-8gbpp" Feb 02 11:26:38 crc kubenswrapper[4782]: I0202 11:26:38.942049 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/dc15a3e1-ea96-499f-a268-b633c15ec75b-ceph-nova-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-8gbpp\" (UID: \"dc15a3e1-ea96-499f-a268-b633c15ec75b\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-8gbpp" Feb 02 11:26:38 crc kubenswrapper[4782]: I0202 11:26:38.942704 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/dc15a3e1-ea96-499f-a268-b633c15ec75b-nova-cell1-compute-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-8gbpp\" (UID: \"dc15a3e1-ea96-499f-a268-b633c15ec75b\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-8gbpp" Feb 02 11:26:38 crc kubenswrapper[4782]: I0202 11:26:38.943140 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dc15a3e1-ea96-499f-a268-b633c15ec75b-ssh-key-openstack-edpm-ipam\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-8gbpp\" (UID: \"dc15a3e1-ea96-499f-a268-b633c15ec75b\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-8gbpp" Feb 02 11:26:38 crc kubenswrapper[4782]: I0202 
11:26:38.943492 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/dc15a3e1-ea96-499f-a268-b633c15ec75b-nova-migration-ssh-key-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-8gbpp\" (UID: \"dc15a3e1-ea96-499f-a268-b633c15ec75b\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-8gbpp" Feb 02 11:26:38 crc kubenswrapper[4782]: I0202 11:26:38.944164 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/dc15a3e1-ea96-499f-a268-b633c15ec75b-ceph\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-8gbpp\" (UID: \"dc15a3e1-ea96-499f-a268-b633c15ec75b\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-8gbpp" Feb 02 11:26:38 crc kubenswrapper[4782]: I0202 11:26:38.948211 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dc15a3e1-ea96-499f-a268-b633c15ec75b-inventory\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-8gbpp\" (UID: \"dc15a3e1-ea96-499f-a268-b633c15ec75b\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-8gbpp" Feb 02 11:26:38 crc kubenswrapper[4782]: I0202 11:26:38.948330 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/dc15a3e1-ea96-499f-a268-b633c15ec75b-nova-migration-ssh-key-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-8gbpp\" (UID: \"dc15a3e1-ea96-499f-a268-b633c15ec75b\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-8gbpp" Feb 02 11:26:38 crc kubenswrapper[4782]: I0202 11:26:38.948210 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/dc15a3e1-ea96-499f-a268-b633c15ec75b-nova-cell1-compute-config-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-8gbpp\" (UID: \"dc15a3e1-ea96-499f-a268-b633c15ec75b\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-8gbpp" Feb 02 11:26:38 crc kubenswrapper[4782]: I0202 11:26:38.955122 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc15a3e1-ea96-499f-a268-b633c15ec75b-nova-custom-ceph-combined-ca-bundle\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-8gbpp\" (UID: \"dc15a3e1-ea96-499f-a268-b633c15ec75b\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-8gbpp" Feb 02 11:26:38 crc kubenswrapper[4782]: I0202 11:26:38.958190 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rvnkc\" (UniqueName: \"kubernetes.io/projected/dc15a3e1-ea96-499f-a268-b633c15ec75b-kube-api-access-rvnkc\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-8gbpp\" (UID: \"dc15a3e1-ea96-499f-a268-b633c15ec75b\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-8gbpp" Feb 02 11:26:39 crc kubenswrapper[4782]: I0202 11:26:39.067175 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-8gbpp" Feb 02 11:26:39 crc kubenswrapper[4782]: I0202 11:26:39.578927 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-8gbpp"] Feb 02 11:26:39 crc kubenswrapper[4782]: I0202 11:26:39.612510 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-8gbpp" event={"ID":"dc15a3e1-ea96-499f-a268-b633c15ec75b","Type":"ContainerStarted","Data":"90ee7ea00d96ed126531097af68cbbe31ad44d060c945efbf3734476778e8d22"} Feb 02 11:26:40 crc kubenswrapper[4782]: I0202 11:26:40.622016 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-8gbpp" event={"ID":"dc15a3e1-ea96-499f-a268-b633c15ec75b","Type":"ContainerStarted","Data":"219026f19d82228ad62538b8507aa6156d86888ca3dd2d1b1e3d2da088041c78"} Feb 02 11:26:40 crc kubenswrapper[4782]: I0202 11:26:40.644279 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-8gbpp" podStartSLOduration=2.252804964 podStartE2EDuration="2.644258551s" podCreationTimestamp="2026-02-02 11:26:38 +0000 UTC" firstStartedPulling="2026-02-02 11:26:39.587519266 +0000 UTC m=+2879.471711982" lastFinishedPulling="2026-02-02 11:26:39.978972853 +0000 UTC m=+2879.863165569" observedRunningTime="2026-02-02 11:26:40.636031834 +0000 UTC m=+2880.520224540" watchObservedRunningTime="2026-02-02 11:26:40.644258551 +0000 UTC m=+2880.528451267" Feb 02 11:26:43 crc kubenswrapper[4782]: I0202 11:26:43.512703 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-z42rt" Feb 02 11:26:43 crc kubenswrapper[4782]: I0202 11:26:43.513436 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-z42rt" Feb 02 11:26:43 crc kubenswrapper[4782]: I0202 11:26:43.562953 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-z42rt" Feb 02 11:26:43 crc kubenswrapper[4782]: I0202 11:26:43.689904 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-z42rt" Feb 02 11:26:43 crc kubenswrapper[4782]: I0202 11:26:43.798982 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-z42rt"] Feb 02 11:26:45 crc kubenswrapper[4782]: I0202 11:26:45.656324 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-z42rt" podUID="42224916-385d-4dd6-96c5-3e4080fac20e" containerName="registry-server" containerID="cri-o://75431271236e6ffdb11c3584bc2cf6fd69f18b82c6905aee5306f7add31cc777" gracePeriod=2 Feb 02 11:26:46 crc kubenswrapper[4782]: I0202 11:26:46.102750 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-z42rt" Feb 02 11:26:46 crc kubenswrapper[4782]: I0202 11:26:46.268810 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42224916-385d-4dd6-96c5-3e4080fac20e-utilities\") pod \"42224916-385d-4dd6-96c5-3e4080fac20e\" (UID: \"42224916-385d-4dd6-96c5-3e4080fac20e\") " Feb 02 11:26:46 crc kubenswrapper[4782]: I0202 11:26:46.269551 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42224916-385d-4dd6-96c5-3e4080fac20e-catalog-content\") pod \"42224916-385d-4dd6-96c5-3e4080fac20e\" (UID: \"42224916-385d-4dd6-96c5-3e4080fac20e\") " Feb 02 11:26:46 crc kubenswrapper[4782]: I0202 11:26:46.269819 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/42224916-385d-4dd6-96c5-3e4080fac20e-utilities" (OuterVolumeSpecName: "utilities") pod "42224916-385d-4dd6-96c5-3e4080fac20e" (UID: "42224916-385d-4dd6-96c5-3e4080fac20e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:26:46 crc kubenswrapper[4782]: I0202 11:26:46.270014 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5mfc8\" (UniqueName: \"kubernetes.io/projected/42224916-385d-4dd6-96c5-3e4080fac20e-kube-api-access-5mfc8\") pod \"42224916-385d-4dd6-96c5-3e4080fac20e\" (UID: \"42224916-385d-4dd6-96c5-3e4080fac20e\") " Feb 02 11:26:46 crc kubenswrapper[4782]: I0202 11:26:46.270521 4782 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42224916-385d-4dd6-96c5-3e4080fac20e-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 11:26:46 crc kubenswrapper[4782]: I0202 11:26:46.275957 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42224916-385d-4dd6-96c5-3e4080fac20e-kube-api-access-5mfc8" (OuterVolumeSpecName: "kube-api-access-5mfc8") pod "42224916-385d-4dd6-96c5-3e4080fac20e" (UID: "42224916-385d-4dd6-96c5-3e4080fac20e"). InnerVolumeSpecName "kube-api-access-5mfc8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:26:46 crc kubenswrapper[4782]: I0202 11:26:46.373475 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5mfc8\" (UniqueName: \"kubernetes.io/projected/42224916-385d-4dd6-96c5-3e4080fac20e-kube-api-access-5mfc8\") on node \"crc\" DevicePath \"\"" Feb 02 11:26:46 crc kubenswrapper[4782]: I0202 11:26:46.523934 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/42224916-385d-4dd6-96c5-3e4080fac20e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "42224916-385d-4dd6-96c5-3e4080fac20e" (UID: "42224916-385d-4dd6-96c5-3e4080fac20e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:26:46 crc kubenswrapper[4782]: I0202 11:26:46.576870 4782 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42224916-385d-4dd6-96c5-3e4080fac20e-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 11:26:46 crc kubenswrapper[4782]: I0202 11:26:46.665722 4782 generic.go:334] "Generic (PLEG): container finished" podID="42224916-385d-4dd6-96c5-3e4080fac20e" containerID="75431271236e6ffdb11c3584bc2cf6fd69f18b82c6905aee5306f7add31cc777" exitCode=0 Feb 02 11:26:46 crc kubenswrapper[4782]: I0202 11:26:46.665763 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z42rt" event={"ID":"42224916-385d-4dd6-96c5-3e4080fac20e","Type":"ContainerDied","Data":"75431271236e6ffdb11c3584bc2cf6fd69f18b82c6905aee5306f7add31cc777"} Feb 02 11:26:46 crc kubenswrapper[4782]: I0202 11:26:46.665783 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-z42rt" Feb 02 11:26:46 crc kubenswrapper[4782]: I0202 11:26:46.665808 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z42rt" event={"ID":"42224916-385d-4dd6-96c5-3e4080fac20e","Type":"ContainerDied","Data":"cee203ab926e18b0e2174d06f18657c8b61e9bf7093328be556e238242433733"} Feb 02 11:26:46 crc kubenswrapper[4782]: I0202 11:26:46.665834 4782 scope.go:117] "RemoveContainer" containerID="75431271236e6ffdb11c3584bc2cf6fd69f18b82c6905aee5306f7add31cc777" Feb 02 11:26:46 crc kubenswrapper[4782]: I0202 11:26:46.711516 4782 scope.go:117] "RemoveContainer" containerID="58aa8b1cdb7883c7e3b833e5ff99ea1a8ffadf4a95ded90b860b6e04f039585d" Feb 02 11:26:46 crc kubenswrapper[4782]: I0202 11:26:46.720039 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-z42rt"] Feb 02 11:26:46 crc kubenswrapper[4782]: I0202 11:26:46.740169 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-z42rt"] Feb 02 11:26:46 crc kubenswrapper[4782]: I0202 11:26:46.762939 4782 scope.go:117] "RemoveContainer" containerID="426eba81cfc0dff55d0347ca53143b06fcf982c4e9f1d0fa63c91b08967c7fe1" Feb 02 11:26:46 crc kubenswrapper[4782]: I0202 11:26:46.812599 4782 scope.go:117] "RemoveContainer" containerID="75431271236e6ffdb11c3584bc2cf6fd69f18b82c6905aee5306f7add31cc777" Feb 02 11:26:46 crc kubenswrapper[4782]: E0202 11:26:46.813000 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"75431271236e6ffdb11c3584bc2cf6fd69f18b82c6905aee5306f7add31cc777\": container with ID starting with 75431271236e6ffdb11c3584bc2cf6fd69f18b82c6905aee5306f7add31cc777 not found: ID does not exist" containerID="75431271236e6ffdb11c3584bc2cf6fd69f18b82c6905aee5306f7add31cc777" Feb 02 11:26:46 crc kubenswrapper[4782]: I0202 11:26:46.813041 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"75431271236e6ffdb11c3584bc2cf6fd69f18b82c6905aee5306f7add31cc777"} err="failed to get container status \"75431271236e6ffdb11c3584bc2cf6fd69f18b82c6905aee5306f7add31cc777\": rpc error: code = NotFound desc = could not find container \"75431271236e6ffdb11c3584bc2cf6fd69f18b82c6905aee5306f7add31cc777\": container with ID starting with 75431271236e6ffdb11c3584bc2cf6fd69f18b82c6905aee5306f7add31cc777 not found: ID does not exist" Feb 02 
11:26:46 crc kubenswrapper[4782]: I0202 11:26:46.813068 4782 scope.go:117] "RemoveContainer" containerID="58aa8b1cdb7883c7e3b833e5ff99ea1a8ffadf4a95ded90b860b6e04f039585d" Feb 02 11:26:46 crc kubenswrapper[4782]: E0202 11:26:46.813281 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"58aa8b1cdb7883c7e3b833e5ff99ea1a8ffadf4a95ded90b860b6e04f039585d\": container with ID starting with 58aa8b1cdb7883c7e3b833e5ff99ea1a8ffadf4a95ded90b860b6e04f039585d not found: ID does not exist" containerID="58aa8b1cdb7883c7e3b833e5ff99ea1a8ffadf4a95ded90b860b6e04f039585d" Feb 02 11:26:46 crc kubenswrapper[4782]: I0202 11:26:46.813309 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58aa8b1cdb7883c7e3b833e5ff99ea1a8ffadf4a95ded90b860b6e04f039585d"} err="failed to get container status \"58aa8b1cdb7883c7e3b833e5ff99ea1a8ffadf4a95ded90b860b6e04f039585d\": rpc error: code = NotFound desc = could not find container \"58aa8b1cdb7883c7e3b833e5ff99ea1a8ffadf4a95ded90b860b6e04f039585d\": container with ID starting with 58aa8b1cdb7883c7e3b833e5ff99ea1a8ffadf4a95ded90b860b6e04f039585d not found: ID does not exist" Feb 02 11:26:46 crc kubenswrapper[4782]: I0202 11:26:46.813328 4782 scope.go:117] "RemoveContainer" containerID="426eba81cfc0dff55d0347ca53143b06fcf982c4e9f1d0fa63c91b08967c7fe1" Feb 02 11:26:46 crc kubenswrapper[4782]: E0202 11:26:46.813839 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"426eba81cfc0dff55d0347ca53143b06fcf982c4e9f1d0fa63c91b08967c7fe1\": container with ID starting with 426eba81cfc0dff55d0347ca53143b06fcf982c4e9f1d0fa63c91b08967c7fe1 not found: ID does not exist" containerID="426eba81cfc0dff55d0347ca53143b06fcf982c4e9f1d0fa63c91b08967c7fe1" Feb 02 11:26:46 crc kubenswrapper[4782]: I0202 11:26:46.815044 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"426eba81cfc0dff55d0347ca53143b06fcf982c4e9f1d0fa63c91b08967c7fe1"} err="failed to get container status \"426eba81cfc0dff55d0347ca53143b06fcf982c4e9f1d0fa63c91b08967c7fe1\": rpc error: code = NotFound desc = could not find container \"426eba81cfc0dff55d0347ca53143b06fcf982c4e9f1d0fa63c91b08967c7fe1\": container with ID starting with 426eba81cfc0dff55d0347ca53143b06fcf982c4e9f1d0fa63c91b08967c7fe1 not found: ID does not exist" Feb 02 11:26:46 crc kubenswrapper[4782]: I0202 11:26:46.834406 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42224916-385d-4dd6-96c5-3e4080fac20e" path="/var/lib/kubelet/pods/42224916-385d-4dd6-96c5-3e4080fac20e/volumes" Feb 02 11:26:52 crc kubenswrapper[4782]: I0202 11:26:52.951393 4782 patch_prober.go:28] interesting pod/machine-config-daemon-bhdgk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 11:26:52 crc kubenswrapper[4782]: I0202 11:26:52.951999 4782 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 11:27:22 crc kubenswrapper[4782]: I0202 11:27:22.952190 4782 patch_prober.go:28] 
interesting pod/machine-config-daemon-bhdgk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 11:27:22 crc kubenswrapper[4782]: I0202 11:27:22.952994 4782 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 11:27:22 crc kubenswrapper[4782]: I0202 11:27:22.953047 4782 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" Feb 02 11:27:22 crc kubenswrapper[4782]: I0202 11:27:22.954221 4782 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6f3d837b63dfbe34932b87b521d0696398b6ad3538c5af0b35f7849a712f00d7"} pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 11:27:22 crc kubenswrapper[4782]: I0202 11:27:22.954277 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" containerName="machine-config-daemon" containerID="cri-o://6f3d837b63dfbe34932b87b521d0696398b6ad3538c5af0b35f7849a712f00d7" gracePeriod=600 Feb 02 11:27:23 crc kubenswrapper[4782]: I0202 11:27:23.975409 4782 generic.go:334] "Generic (PLEG): container finished" podID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" containerID="6f3d837b63dfbe34932b87b521d0696398b6ad3538c5af0b35f7849a712f00d7" exitCode=0 Feb 02 11:27:23 crc kubenswrapper[4782]: I0202 11:27:23.975494 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" event={"ID":"7919e98f-cc47-4f3c-9c53-6313850ea7b8","Type":"ContainerDied","Data":"6f3d837b63dfbe34932b87b521d0696398b6ad3538c5af0b35f7849a712f00d7"} Feb 02 11:27:23 crc kubenswrapper[4782]: I0202 11:27:23.976082 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" event={"ID":"7919e98f-cc47-4f3c-9c53-6313850ea7b8","Type":"ContainerStarted","Data":"0f610e1fc5d774ae98e6427843ebdfbe622219e84034ddfd24bafe67b92e53a2"} Feb 02 11:27:23 crc kubenswrapper[4782]: I0202 11:27:23.976113 4782 scope.go:117] "RemoveContainer" containerID="5d4753fce570617e864276d34772208f83d3fd6766212b5ad5f002f122bc2ca9" Feb 02 11:27:57 crc kubenswrapper[4782]: I0202 11:27:57.866669 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-8n7qq"] Feb 02 11:27:57 crc kubenswrapper[4782]: E0202 11:27:57.869599 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42224916-385d-4dd6-96c5-3e4080fac20e" containerName="registry-server" Feb 02 11:27:57 crc kubenswrapper[4782]: I0202 11:27:57.869633 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="42224916-385d-4dd6-96c5-3e4080fac20e" containerName="registry-server" Feb 02 11:27:57 crc kubenswrapper[4782]: E0202 11:27:57.869698 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42224916-385d-4dd6-96c5-3e4080fac20e" 
containerName="extract-utilities" Feb 02 11:27:57 crc kubenswrapper[4782]: I0202 11:27:57.869709 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="42224916-385d-4dd6-96c5-3e4080fac20e" containerName="extract-utilities" Feb 02 11:27:57 crc kubenswrapper[4782]: E0202 11:27:57.869744 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42224916-385d-4dd6-96c5-3e4080fac20e" containerName="extract-content" Feb 02 11:27:57 crc kubenswrapper[4782]: I0202 11:27:57.869752 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="42224916-385d-4dd6-96c5-3e4080fac20e" containerName="extract-content" Feb 02 11:27:57 crc kubenswrapper[4782]: I0202 11:27:57.869996 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="42224916-385d-4dd6-96c5-3e4080fac20e" containerName="registry-server" Feb 02 11:27:57 crc kubenswrapper[4782]: I0202 11:27:57.871464 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8n7qq" Feb 02 11:27:57 crc kubenswrapper[4782]: I0202 11:27:57.893462 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8n7qq"] Feb 02 11:27:57 crc kubenswrapper[4782]: I0202 11:27:57.962906 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zbp9\" (UniqueName: \"kubernetes.io/projected/9fdc37e6-68ac-49ab-9c4c-d72c777a3002-kube-api-access-5zbp9\") pod \"community-operators-8n7qq\" (UID: \"9fdc37e6-68ac-49ab-9c4c-d72c777a3002\") " pod="openshift-marketplace/community-operators-8n7qq" Feb 02 11:27:57 crc kubenswrapper[4782]: I0202 11:27:57.962980 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9fdc37e6-68ac-49ab-9c4c-d72c777a3002-catalog-content\") pod \"community-operators-8n7qq\" (UID: \"9fdc37e6-68ac-49ab-9c4c-d72c777a3002\") " pod="openshift-marketplace/community-operators-8n7qq" Feb 02 11:27:57 crc kubenswrapper[4782]: I0202 11:27:57.963279 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9fdc37e6-68ac-49ab-9c4c-d72c777a3002-utilities\") pod \"community-operators-8n7qq\" (UID: \"9fdc37e6-68ac-49ab-9c4c-d72c777a3002\") " pod="openshift-marketplace/community-operators-8n7qq" Feb 02 11:27:58 crc kubenswrapper[4782]: I0202 11:27:58.065286 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5zbp9\" (UniqueName: \"kubernetes.io/projected/9fdc37e6-68ac-49ab-9c4c-d72c777a3002-kube-api-access-5zbp9\") pod \"community-operators-8n7qq\" (UID: \"9fdc37e6-68ac-49ab-9c4c-d72c777a3002\") " pod="openshift-marketplace/community-operators-8n7qq" Feb 02 11:27:58 crc kubenswrapper[4782]: I0202 11:27:58.065345 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9fdc37e6-68ac-49ab-9c4c-d72c777a3002-catalog-content\") pod \"community-operators-8n7qq\" (UID: \"9fdc37e6-68ac-49ab-9c4c-d72c777a3002\") " pod="openshift-marketplace/community-operators-8n7qq" Feb 02 11:27:58 crc kubenswrapper[4782]: I0202 11:27:58.065418 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9fdc37e6-68ac-49ab-9c4c-d72c777a3002-utilities\") pod \"community-operators-8n7qq\" (UID: 
\"9fdc37e6-68ac-49ab-9c4c-d72c777a3002\") " pod="openshift-marketplace/community-operators-8n7qq" Feb 02 11:27:58 crc kubenswrapper[4782]: I0202 11:27:58.065998 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9fdc37e6-68ac-49ab-9c4c-d72c777a3002-utilities\") pod \"community-operators-8n7qq\" (UID: \"9fdc37e6-68ac-49ab-9c4c-d72c777a3002\") " pod="openshift-marketplace/community-operators-8n7qq" Feb 02 11:27:58 crc kubenswrapper[4782]: I0202 11:27:58.066279 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9fdc37e6-68ac-49ab-9c4c-d72c777a3002-catalog-content\") pod \"community-operators-8n7qq\" (UID: \"9fdc37e6-68ac-49ab-9c4c-d72c777a3002\") " pod="openshift-marketplace/community-operators-8n7qq" Feb 02 11:27:58 crc kubenswrapper[4782]: I0202 11:27:58.088843 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5zbp9\" (UniqueName: \"kubernetes.io/projected/9fdc37e6-68ac-49ab-9c4c-d72c777a3002-kube-api-access-5zbp9\") pod \"community-operators-8n7qq\" (UID: \"9fdc37e6-68ac-49ab-9c4c-d72c777a3002\") " pod="openshift-marketplace/community-operators-8n7qq" Feb 02 11:27:58 crc kubenswrapper[4782]: I0202 11:27:58.193213 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8n7qq" Feb 02 11:27:58 crc kubenswrapper[4782]: I0202 11:27:58.931221 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8n7qq"] Feb 02 11:27:59 crc kubenswrapper[4782]: I0202 11:27:59.313912 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8n7qq" event={"ID":"9fdc37e6-68ac-49ab-9c4c-d72c777a3002","Type":"ContainerStarted","Data":"3cf5ab6f19cc2e3fa21dd2c3eeeed9e8b46dc188b00ed0b78a3f1058f84aa0d1"} Feb 02 11:28:00 crc kubenswrapper[4782]: I0202 11:28:00.323346 4782 generic.go:334] "Generic (PLEG): container finished" podID="9fdc37e6-68ac-49ab-9c4c-d72c777a3002" containerID="d56c5119cc543a1ee49b6e1abc4b73bae50c68c4853bdb275aaa3b22819ce835" exitCode=0 Feb 02 11:28:00 crc kubenswrapper[4782]: I0202 11:28:00.323402 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8n7qq" event={"ID":"9fdc37e6-68ac-49ab-9c4c-d72c777a3002","Type":"ContainerDied","Data":"d56c5119cc543a1ee49b6e1abc4b73bae50c68c4853bdb275aaa3b22819ce835"} Feb 02 11:28:02 crc kubenswrapper[4782]: I0202 11:28:02.338090 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8n7qq" event={"ID":"9fdc37e6-68ac-49ab-9c4c-d72c777a3002","Type":"ContainerStarted","Data":"a9e083416887a88931b4e1566d7f494ae6a8fc1565f8d876a8599c579c97146c"} Feb 02 11:28:06 crc kubenswrapper[4782]: I0202 11:28:06.391733 4782 generic.go:334] "Generic (PLEG): container finished" podID="9fdc37e6-68ac-49ab-9c4c-d72c777a3002" containerID="a9e083416887a88931b4e1566d7f494ae6a8fc1565f8d876a8599c579c97146c" exitCode=0 Feb 02 11:28:06 crc kubenswrapper[4782]: I0202 11:28:06.391920 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8n7qq" event={"ID":"9fdc37e6-68ac-49ab-9c4c-d72c777a3002","Type":"ContainerDied","Data":"a9e083416887a88931b4e1566d7f494ae6a8fc1565f8d876a8599c579c97146c"} Feb 02 11:28:08 crc kubenswrapper[4782]: I0202 11:28:08.416039 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-8n7qq" event={"ID":"9fdc37e6-68ac-49ab-9c4c-d72c777a3002","Type":"ContainerStarted","Data":"1bb38712206c216ef3d1c267467eac525ef4277d74a7025f6c527a9253281dad"} Feb 02 11:28:08 crc kubenswrapper[4782]: I0202 11:28:08.448575 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-8n7qq" podStartSLOduration=4.637970459 podStartE2EDuration="11.448554852s" podCreationTimestamp="2026-02-02 11:27:57 +0000 UTC" firstStartedPulling="2026-02-02 11:28:00.325516069 +0000 UTC m=+2960.209708785" lastFinishedPulling="2026-02-02 11:28:07.136100462 +0000 UTC m=+2967.020293178" observedRunningTime="2026-02-02 11:28:08.442186469 +0000 UTC m=+2968.326379195" watchObservedRunningTime="2026-02-02 11:28:08.448554852 +0000 UTC m=+2968.332747568" Feb 02 11:28:18 crc kubenswrapper[4782]: I0202 11:28:18.194755 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-8n7qq" Feb 02 11:28:18 crc kubenswrapper[4782]: I0202 11:28:18.195199 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-8n7qq" Feb 02 11:28:18 crc kubenswrapper[4782]: I0202 11:28:18.241493 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-8n7qq" Feb 02 11:28:18 crc kubenswrapper[4782]: I0202 11:28:18.535088 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-8n7qq" Feb 02 11:28:18 crc kubenswrapper[4782]: I0202 11:28:18.588265 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8n7qq"] Feb 02 11:28:20 crc kubenswrapper[4782]: I0202 11:28:20.524955 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-8n7qq" podUID="9fdc37e6-68ac-49ab-9c4c-d72c777a3002" containerName="registry-server" containerID="cri-o://1bb38712206c216ef3d1c267467eac525ef4277d74a7025f6c527a9253281dad" gracePeriod=2 Feb 02 11:28:21 crc kubenswrapper[4782]: I0202 11:28:21.013804 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8n7qq" Feb 02 11:28:21 crc kubenswrapper[4782]: I0202 11:28:21.124708 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5zbp9\" (UniqueName: \"kubernetes.io/projected/9fdc37e6-68ac-49ab-9c4c-d72c777a3002-kube-api-access-5zbp9\") pod \"9fdc37e6-68ac-49ab-9c4c-d72c777a3002\" (UID: \"9fdc37e6-68ac-49ab-9c4c-d72c777a3002\") " Feb 02 11:28:21 crc kubenswrapper[4782]: I0202 11:28:21.125098 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9fdc37e6-68ac-49ab-9c4c-d72c777a3002-utilities\") pod \"9fdc37e6-68ac-49ab-9c4c-d72c777a3002\" (UID: \"9fdc37e6-68ac-49ab-9c4c-d72c777a3002\") " Feb 02 11:28:21 crc kubenswrapper[4782]: I0202 11:28:21.125447 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9fdc37e6-68ac-49ab-9c4c-d72c777a3002-catalog-content\") pod \"9fdc37e6-68ac-49ab-9c4c-d72c777a3002\" (UID: \"9fdc37e6-68ac-49ab-9c4c-d72c777a3002\") " Feb 02 11:28:21 crc kubenswrapper[4782]: I0202 11:28:21.130003 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9fdc37e6-68ac-49ab-9c4c-d72c777a3002-utilities" (OuterVolumeSpecName: "utilities") pod "9fdc37e6-68ac-49ab-9c4c-d72c777a3002" (UID: "9fdc37e6-68ac-49ab-9c4c-d72c777a3002"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:28:21 crc kubenswrapper[4782]: I0202 11:28:21.134513 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9fdc37e6-68ac-49ab-9c4c-d72c777a3002-kube-api-access-5zbp9" (OuterVolumeSpecName: "kube-api-access-5zbp9") pod "9fdc37e6-68ac-49ab-9c4c-d72c777a3002" (UID: "9fdc37e6-68ac-49ab-9c4c-d72c777a3002"). InnerVolumeSpecName "kube-api-access-5zbp9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:28:21 crc kubenswrapper[4782]: I0202 11:28:21.180617 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9fdc37e6-68ac-49ab-9c4c-d72c777a3002-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9fdc37e6-68ac-49ab-9c4c-d72c777a3002" (UID: "9fdc37e6-68ac-49ab-9c4c-d72c777a3002"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:28:21 crc kubenswrapper[4782]: I0202 11:28:21.228163 4782 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9fdc37e6-68ac-49ab-9c4c-d72c777a3002-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 11:28:21 crc kubenswrapper[4782]: I0202 11:28:21.228208 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5zbp9\" (UniqueName: \"kubernetes.io/projected/9fdc37e6-68ac-49ab-9c4c-d72c777a3002-kube-api-access-5zbp9\") on node \"crc\" DevicePath \"\"" Feb 02 11:28:21 crc kubenswrapper[4782]: I0202 11:28:21.228220 4782 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9fdc37e6-68ac-49ab-9c4c-d72c777a3002-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 11:28:21 crc kubenswrapper[4782]: I0202 11:28:21.551169 4782 generic.go:334] "Generic (PLEG): container finished" podID="9fdc37e6-68ac-49ab-9c4c-d72c777a3002" containerID="1bb38712206c216ef3d1c267467eac525ef4277d74a7025f6c527a9253281dad" exitCode=0 Feb 02 11:28:21 crc kubenswrapper[4782]: I0202 11:28:21.551216 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8n7qq" event={"ID":"9fdc37e6-68ac-49ab-9c4c-d72c777a3002","Type":"ContainerDied","Data":"1bb38712206c216ef3d1c267467eac525ef4277d74a7025f6c527a9253281dad"} Feb 02 11:28:21 crc kubenswrapper[4782]: I0202 11:28:21.551246 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8n7qq" event={"ID":"9fdc37e6-68ac-49ab-9c4c-d72c777a3002","Type":"ContainerDied","Data":"3cf5ab6f19cc2e3fa21dd2c3eeeed9e8b46dc188b00ed0b78a3f1058f84aa0d1"} Feb 02 11:28:21 crc kubenswrapper[4782]: I0202 11:28:21.551264 4782 scope.go:117] "RemoveContainer" containerID="1bb38712206c216ef3d1c267467eac525ef4277d74a7025f6c527a9253281dad" Feb 02 11:28:21 crc kubenswrapper[4782]: I0202 11:28:21.551307 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8n7qq" Feb 02 11:28:21 crc kubenswrapper[4782]: I0202 11:28:21.574585 4782 scope.go:117] "RemoveContainer" containerID="a9e083416887a88931b4e1566d7f494ae6a8fc1565f8d876a8599c579c97146c" Feb 02 11:28:21 crc kubenswrapper[4782]: I0202 11:28:21.590609 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8n7qq"] Feb 02 11:28:21 crc kubenswrapper[4782]: I0202 11:28:21.598569 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-8n7qq"] Feb 02 11:28:21 crc kubenswrapper[4782]: I0202 11:28:21.604095 4782 scope.go:117] "RemoveContainer" containerID="d56c5119cc543a1ee49b6e1abc4b73bae50c68c4853bdb275aaa3b22819ce835" Feb 02 11:28:21 crc kubenswrapper[4782]: I0202 11:28:21.650179 4782 scope.go:117] "RemoveContainer" containerID="1bb38712206c216ef3d1c267467eac525ef4277d74a7025f6c527a9253281dad" Feb 02 11:28:21 crc kubenswrapper[4782]: E0202 11:28:21.650851 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1bb38712206c216ef3d1c267467eac525ef4277d74a7025f6c527a9253281dad\": container with ID starting with 1bb38712206c216ef3d1c267467eac525ef4277d74a7025f6c527a9253281dad not found: ID does not exist" containerID="1bb38712206c216ef3d1c267467eac525ef4277d74a7025f6c527a9253281dad" Feb 02 11:28:21 crc kubenswrapper[4782]: I0202 11:28:21.650898 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1bb38712206c216ef3d1c267467eac525ef4277d74a7025f6c527a9253281dad"} err="failed to get container status \"1bb38712206c216ef3d1c267467eac525ef4277d74a7025f6c527a9253281dad\": rpc error: code = NotFound desc = could not find container \"1bb38712206c216ef3d1c267467eac525ef4277d74a7025f6c527a9253281dad\": container with ID starting with 1bb38712206c216ef3d1c267467eac525ef4277d74a7025f6c527a9253281dad not found: ID does not exist" Feb 02 11:28:21 crc kubenswrapper[4782]: I0202 11:28:21.650929 4782 scope.go:117] "RemoveContainer" containerID="a9e083416887a88931b4e1566d7f494ae6a8fc1565f8d876a8599c579c97146c" Feb 02 11:28:21 crc kubenswrapper[4782]: E0202 11:28:21.651409 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a9e083416887a88931b4e1566d7f494ae6a8fc1565f8d876a8599c579c97146c\": container with ID starting with a9e083416887a88931b4e1566d7f494ae6a8fc1565f8d876a8599c579c97146c not found: ID does not exist" containerID="a9e083416887a88931b4e1566d7f494ae6a8fc1565f8d876a8599c579c97146c" Feb 02 11:28:21 crc kubenswrapper[4782]: I0202 11:28:21.651517 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a9e083416887a88931b4e1566d7f494ae6a8fc1565f8d876a8599c579c97146c"} err="failed to get container status \"a9e083416887a88931b4e1566d7f494ae6a8fc1565f8d876a8599c579c97146c\": rpc error: code = NotFound desc = could not find container \"a9e083416887a88931b4e1566d7f494ae6a8fc1565f8d876a8599c579c97146c\": container with ID starting with a9e083416887a88931b4e1566d7f494ae6a8fc1565f8d876a8599c579c97146c not found: ID does not exist" Feb 02 11:28:21 crc kubenswrapper[4782]: I0202 11:28:21.651624 4782 scope.go:117] "RemoveContainer" containerID="d56c5119cc543a1ee49b6e1abc4b73bae50c68c4853bdb275aaa3b22819ce835" Feb 02 11:28:21 crc kubenswrapper[4782]: E0202 11:28:21.652246 4782 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"d56c5119cc543a1ee49b6e1abc4b73bae50c68c4853bdb275aaa3b22819ce835\": container with ID starting with d56c5119cc543a1ee49b6e1abc4b73bae50c68c4853bdb275aaa3b22819ce835 not found: ID does not exist" containerID="d56c5119cc543a1ee49b6e1abc4b73bae50c68c4853bdb275aaa3b22819ce835" Feb 02 11:28:21 crc kubenswrapper[4782]: I0202 11:28:21.652280 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d56c5119cc543a1ee49b6e1abc4b73bae50c68c4853bdb275aaa3b22819ce835"} err="failed to get container status \"d56c5119cc543a1ee49b6e1abc4b73bae50c68c4853bdb275aaa3b22819ce835\": rpc error: code = NotFound desc = could not find container \"d56c5119cc543a1ee49b6e1abc4b73bae50c68c4853bdb275aaa3b22819ce835\": container with ID starting with d56c5119cc543a1ee49b6e1abc4b73bae50c68c4853bdb275aaa3b22819ce835 not found: ID does not exist" Feb 02 11:28:22 crc kubenswrapper[4782]: I0202 11:28:22.832549 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9fdc37e6-68ac-49ab-9c4c-d72c777a3002" path="/var/lib/kubelet/pods/9fdc37e6-68ac-49ab-9c4c-d72c777a3002/volumes" Feb 02 11:29:26 crc kubenswrapper[4782]: I0202 11:29:26.100722 4782 generic.go:334] "Generic (PLEG): container finished" podID="dc15a3e1-ea96-499f-a268-b633c15ec75b" containerID="219026f19d82228ad62538b8507aa6156d86888ca3dd2d1b1e3d2da088041c78" exitCode=0 Feb 02 11:29:26 crc kubenswrapper[4782]: I0202 11:29:26.100852 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-8gbpp" event={"ID":"dc15a3e1-ea96-499f-a268-b633c15ec75b","Type":"ContainerDied","Data":"219026f19d82228ad62538b8507aa6156d86888ca3dd2d1b1e3d2da088041c78"} Feb 02 11:29:27 crc kubenswrapper[4782]: I0202 11:29:27.767247 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-8gbpp" Feb 02 11:29:27 crc kubenswrapper[4782]: I0202 11:29:27.883826 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/dc15a3e1-ea96-499f-a268-b633c15ec75b-ceph\") pod \"dc15a3e1-ea96-499f-a268-b633c15ec75b\" (UID: \"dc15a3e1-ea96-499f-a268-b633c15ec75b\") " Feb 02 11:29:27 crc kubenswrapper[4782]: I0202 11:29:27.883875 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc15a3e1-ea96-499f-a268-b633c15ec75b-nova-custom-ceph-combined-ca-bundle\") pod \"dc15a3e1-ea96-499f-a268-b633c15ec75b\" (UID: \"dc15a3e1-ea96-499f-a268-b633c15ec75b\") " Feb 02 11:29:27 crc kubenswrapper[4782]: I0202 11:29:27.883909 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/dc15a3e1-ea96-499f-a268-b633c15ec75b-nova-migration-ssh-key-1\") pod \"dc15a3e1-ea96-499f-a268-b633c15ec75b\" (UID: \"dc15a3e1-ea96-499f-a268-b633c15ec75b\") " Feb 02 11:29:27 crc kubenswrapper[4782]: I0202 11:29:27.884564 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rvnkc\" (UniqueName: \"kubernetes.io/projected/dc15a3e1-ea96-499f-a268-b633c15ec75b-kube-api-access-rvnkc\") pod \"dc15a3e1-ea96-499f-a268-b633c15ec75b\" (UID: \"dc15a3e1-ea96-499f-a268-b633c15ec75b\") " Feb 02 11:29:27 crc kubenswrapper[4782]: I0202 11:29:27.884631 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/dc15a3e1-ea96-499f-a268-b633c15ec75b-ceph-nova-0\") pod \"dc15a3e1-ea96-499f-a268-b633c15ec75b\" (UID: \"dc15a3e1-ea96-499f-a268-b633c15ec75b\") " Feb 02 11:29:27 crc kubenswrapper[4782]: I0202 11:29:27.884689 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dc15a3e1-ea96-499f-a268-b633c15ec75b-inventory\") pod \"dc15a3e1-ea96-499f-a268-b633c15ec75b\" (UID: \"dc15a3e1-ea96-499f-a268-b633c15ec75b\") " Feb 02 11:29:27 crc kubenswrapper[4782]: I0202 11:29:27.884716 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/dc15a3e1-ea96-499f-a268-b633c15ec75b-nova-extra-config-0\") pod \"dc15a3e1-ea96-499f-a268-b633c15ec75b\" (UID: \"dc15a3e1-ea96-499f-a268-b633c15ec75b\") " Feb 02 11:29:27 crc kubenswrapper[4782]: I0202 11:29:27.884769 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/dc15a3e1-ea96-499f-a268-b633c15ec75b-nova-cell1-compute-config-0\") pod \"dc15a3e1-ea96-499f-a268-b633c15ec75b\" (UID: \"dc15a3e1-ea96-499f-a268-b633c15ec75b\") " Feb 02 11:29:27 crc kubenswrapper[4782]: I0202 11:29:27.884798 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/dc15a3e1-ea96-499f-a268-b633c15ec75b-nova-cell1-compute-config-1\") pod \"dc15a3e1-ea96-499f-a268-b633c15ec75b\" (UID: \"dc15a3e1-ea96-499f-a268-b633c15ec75b\") " Feb 02 11:29:27 crc kubenswrapper[4782]: I0202 11:29:27.884824 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" 
(UniqueName: \"kubernetes.io/secret/dc15a3e1-ea96-499f-a268-b633c15ec75b-ssh-key-openstack-edpm-ipam\") pod \"dc15a3e1-ea96-499f-a268-b633c15ec75b\" (UID: \"dc15a3e1-ea96-499f-a268-b633c15ec75b\") " Feb 02 11:29:27 crc kubenswrapper[4782]: I0202 11:29:27.884922 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/dc15a3e1-ea96-499f-a268-b633c15ec75b-nova-migration-ssh-key-0\") pod \"dc15a3e1-ea96-499f-a268-b633c15ec75b\" (UID: \"dc15a3e1-ea96-499f-a268-b633c15ec75b\") " Feb 02 11:29:27 crc kubenswrapper[4782]: I0202 11:29:27.891990 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc15a3e1-ea96-499f-a268-b633c15ec75b-ceph" (OuterVolumeSpecName: "ceph") pod "dc15a3e1-ea96-499f-a268-b633c15ec75b" (UID: "dc15a3e1-ea96-499f-a268-b633c15ec75b"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:29:27 crc kubenswrapper[4782]: I0202 11:29:27.893145 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc15a3e1-ea96-499f-a268-b633c15ec75b-kube-api-access-rvnkc" (OuterVolumeSpecName: "kube-api-access-rvnkc") pod "dc15a3e1-ea96-499f-a268-b633c15ec75b" (UID: "dc15a3e1-ea96-499f-a268-b633c15ec75b"). InnerVolumeSpecName "kube-api-access-rvnkc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:29:27 crc kubenswrapper[4782]: I0202 11:29:27.896743 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc15a3e1-ea96-499f-a268-b633c15ec75b-nova-custom-ceph-combined-ca-bundle" (OuterVolumeSpecName: "nova-custom-ceph-combined-ca-bundle") pod "dc15a3e1-ea96-499f-a268-b633c15ec75b" (UID: "dc15a3e1-ea96-499f-a268-b633c15ec75b"). InnerVolumeSpecName "nova-custom-ceph-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:29:27 crc kubenswrapper[4782]: I0202 11:29:27.926129 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc15a3e1-ea96-499f-a268-b633c15ec75b-inventory" (OuterVolumeSpecName: "inventory") pod "dc15a3e1-ea96-499f-a268-b633c15ec75b" (UID: "dc15a3e1-ea96-499f-a268-b633c15ec75b"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:29:27 crc kubenswrapper[4782]: I0202 11:29:27.930987 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc15a3e1-ea96-499f-a268-b633c15ec75b-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "dc15a3e1-ea96-499f-a268-b633c15ec75b" (UID: "dc15a3e1-ea96-499f-a268-b633c15ec75b"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:29:27 crc kubenswrapper[4782]: I0202 11:29:27.935731 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc15a3e1-ea96-499f-a268-b633c15ec75b-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "dc15a3e1-ea96-499f-a268-b633c15ec75b" (UID: "dc15a3e1-ea96-499f-a268-b633c15ec75b"). InnerVolumeSpecName "nova-extra-config-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:29:27 crc kubenswrapper[4782]: I0202 11:29:27.942178 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc15a3e1-ea96-499f-a268-b633c15ec75b-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "dc15a3e1-ea96-499f-a268-b633c15ec75b" (UID: "dc15a3e1-ea96-499f-a268-b633c15ec75b"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:29:27 crc kubenswrapper[4782]: I0202 11:29:27.955849 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc15a3e1-ea96-499f-a268-b633c15ec75b-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "dc15a3e1-ea96-499f-a268-b633c15ec75b" (UID: "dc15a3e1-ea96-499f-a268-b633c15ec75b"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:29:27 crc kubenswrapper[4782]: I0202 11:29:27.959986 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc15a3e1-ea96-499f-a268-b633c15ec75b-ceph-nova-0" (OuterVolumeSpecName: "ceph-nova-0") pod "dc15a3e1-ea96-499f-a268-b633c15ec75b" (UID: "dc15a3e1-ea96-499f-a268-b633c15ec75b"). InnerVolumeSpecName "ceph-nova-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:29:27 crc kubenswrapper[4782]: I0202 11:29:27.980152 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc15a3e1-ea96-499f-a268-b633c15ec75b-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "dc15a3e1-ea96-499f-a268-b633c15ec75b" (UID: "dc15a3e1-ea96-499f-a268-b633c15ec75b"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:29:27 crc kubenswrapper[4782]: I0202 11:29:27.982873 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc15a3e1-ea96-499f-a268-b633c15ec75b-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "dc15a3e1-ea96-499f-a268-b633c15ec75b" (UID: "dc15a3e1-ea96-499f-a268-b633c15ec75b"). InnerVolumeSpecName "nova-migration-ssh-key-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:29:28 crc kubenswrapper[4782]: I0202 11:29:28.004165 4782 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/dc15a3e1-ea96-499f-a268-b633c15ec75b-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Feb 02 11:29:28 crc kubenswrapper[4782]: I0202 11:29:28.004210 4782 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/dc15a3e1-ea96-499f-a268-b633c15ec75b-ceph\") on node \"crc\" DevicePath \"\"" Feb 02 11:29:28 crc kubenswrapper[4782]: I0202 11:29:28.004223 4782 reconciler_common.go:293] "Volume detached for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc15a3e1-ea96-499f-a268-b633c15ec75b-nova-custom-ceph-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 11:29:28 crc kubenswrapper[4782]: I0202 11:29:28.004245 4782 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/dc15a3e1-ea96-499f-a268-b633c15ec75b-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Feb 02 11:29:28 crc kubenswrapper[4782]: I0202 11:29:28.004276 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rvnkc\" (UniqueName: \"kubernetes.io/projected/dc15a3e1-ea96-499f-a268-b633c15ec75b-kube-api-access-rvnkc\") on node \"crc\" DevicePath \"\"" Feb 02 11:29:28 crc kubenswrapper[4782]: I0202 11:29:28.004287 4782 reconciler_common.go:293] "Volume detached for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/dc15a3e1-ea96-499f-a268-b633c15ec75b-ceph-nova-0\") on node \"crc\" DevicePath \"\"" Feb 02 11:29:28 crc kubenswrapper[4782]: I0202 11:29:28.004300 4782 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dc15a3e1-ea96-499f-a268-b633c15ec75b-inventory\") on node \"crc\" DevicePath \"\"" Feb 02 11:29:28 crc kubenswrapper[4782]: I0202 11:29:28.004310 4782 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/dc15a3e1-ea96-499f-a268-b633c15ec75b-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Feb 02 11:29:28 crc kubenswrapper[4782]: I0202 11:29:28.004321 4782 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/dc15a3e1-ea96-499f-a268-b633c15ec75b-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Feb 02 11:29:28 crc kubenswrapper[4782]: I0202 11:29:28.004331 4782 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/dc15a3e1-ea96-499f-a268-b633c15ec75b-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Feb 02 11:29:28 crc kubenswrapper[4782]: I0202 11:29:28.004345 4782 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dc15a3e1-ea96-499f-a268-b633c15ec75b-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 02 11:29:28 crc kubenswrapper[4782]: I0202 11:29:28.124207 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-8gbpp" event={"ID":"dc15a3e1-ea96-499f-a268-b633c15ec75b","Type":"ContainerDied","Data":"90ee7ea00d96ed126531097af68cbbe31ad44d060c945efbf3734476778e8d22"} Feb 02 11:29:28 crc kubenswrapper[4782]: I0202 11:29:28.124269 4782 
Feb 02 11:29:28 crc kubenswrapper[4782]: I0202 11:29:28.124269 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="90ee7ea00d96ed126531097af68cbbe31ad44d060c945efbf3734476778e8d22"
Feb 02 11:29:28 crc kubenswrapper[4782]: I0202 11:29:28.124298 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-8gbpp"
Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.158456 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-backup-0"]
Feb 02 11:29:44 crc kubenswrapper[4782]: E0202 11:29:44.159530 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc15a3e1-ea96-499f-a268-b633c15ec75b" containerName="nova-custom-ceph-edpm-deployment-openstack-edpm-ipam"
Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.159550 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc15a3e1-ea96-499f-a268-b633c15ec75b" containerName="nova-custom-ceph-edpm-deployment-openstack-edpm-ipam"
Feb 02 11:29:44 crc kubenswrapper[4782]: E0202 11:29:44.159564 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9fdc37e6-68ac-49ab-9c4c-d72c777a3002" containerName="extract-content"
Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.159572 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="9fdc37e6-68ac-49ab-9c4c-d72c777a3002" containerName="extract-content"
Feb 02 11:29:44 crc kubenswrapper[4782]: E0202 11:29:44.159604 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9fdc37e6-68ac-49ab-9c4c-d72c777a3002" containerName="registry-server"
Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.159612 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="9fdc37e6-68ac-49ab-9c4c-d72c777a3002" containerName="registry-server"
Feb 02 11:29:44 crc kubenswrapper[4782]: E0202 11:29:44.159625 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9fdc37e6-68ac-49ab-9c4c-d72c777a3002" containerName="extract-utilities"
Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.159633 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="9fdc37e6-68ac-49ab-9c4c-d72c777a3002" containerName="extract-utilities"
Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.159898 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc15a3e1-ea96-499f-a268-b633c15ec75b" containerName="nova-custom-ceph-edpm-deployment-openstack-edpm-ipam"
Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.159915 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="9fdc37e6-68ac-49ab-9c4c-d72c777a3002" containerName="registry-server"
Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.190218 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"]
Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.190348 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-backup-0"
Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.193359 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files"
Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.194182 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-backup-config-data"
Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.199458 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-volume-volume1-0"]
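Note how admission of the new cinder pods is also the moment the CPU and memory managers purge assignments left behind by pods that no longer exist (the RemoveStaleState / "Deleted CPUSet assignment" entries above). A toy illustration of that cleanup pattern, deliberately simplified and not kubelet's actual code:

```go
package main

import "fmt"

// Resource managers keep per-(podUID, container) assignments; admitting a
// new pod is the point where entries for pods that are gone get purged.
type key struct{ podUID, container string }

type manager struct{ assignments map[key]string }

func (m *manager) removeStaleState(active map[string]bool) {
	for k := range m.assignments { // deleting during range is safe in Go
		if !active[k.podUID] {
			fmt.Printf("RemoveStaleState: removing container %q (pod %s)\n", k.container, k.podUID)
			delete(m.assignments, k)
		}
	}
}

func main() {
	m := &manager{assignments: map[key]string{
		{"dc15a3e1-ea96-499f-a268-b633c15ec75b", "nova-custom-ceph-edpm-deployment-openstack-edpm-ipam"}: "cpuset",
	}}
	// When cinder-backup-0 is admitted, the deleted nova pod's UID is no
	// longer in the active set, so its assignment is dropped.
	m.removeStaleState(map[string]bool{"some-new-pod-uid": true})
}
```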
Need to start a new one" pod="openstack/cinder-volume-volume1-0" Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.204996 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-volume-volume1-config-data" Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.236036 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-volume1-0"] Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.394980 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5d7df751-5d4d-4ce4-83c9-70abd18fc7c7-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"5d7df751-5d4d-4ce4-83c9-70abd18fc7c7\") " pod="openstack/cinder-volume-volume1-0" Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.395040 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a\") " pod="openstack/cinder-backup-0" Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.395112 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a-scripts\") pod \"cinder-backup-0\" (UID: \"d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a\") " pod="openstack/cinder-backup-0" Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.395143 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a\") " pod="openstack/cinder-backup-0" Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.395178 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/5d7df751-5d4d-4ce4-83c9-70abd18fc7c7-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"5d7df751-5d4d-4ce4-83c9-70abd18fc7c7\") " pod="openstack/cinder-volume-volume1-0" Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.395271 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lntp9\" (UniqueName: \"kubernetes.io/projected/5d7df751-5d4d-4ce4-83c9-70abd18fc7c7-kube-api-access-lntp9\") pod \"cinder-volume-volume1-0\" (UID: \"5d7df751-5d4d-4ce4-83c9-70abd18fc7c7\") " pod="openstack/cinder-volume-volume1-0" Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.395320 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a-config-data-custom\") pod \"cinder-backup-0\" (UID: \"d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a\") " pod="openstack/cinder-backup-0" Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.395350 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5d7df751-5d4d-4ce4-83c9-70abd18fc7c7-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"5d7df751-5d4d-4ce4-83c9-70abd18fc7c7\") " pod="openstack/cinder-volume-volume1-0" Feb 02 11:29:44 crc 
kubenswrapper[4782]: I0202 11:29:44.395366 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a-ceph\") pod \"cinder-backup-0\" (UID: \"d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a\") " pod="openstack/cinder-backup-0" Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.395410 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d7df751-5d4d-4ce4-83c9-70abd18fc7c7-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"5d7df751-5d4d-4ce4-83c9-70abd18fc7c7\") " pod="openstack/cinder-volume-volume1-0" Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.395443 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a\") " pod="openstack/cinder-backup-0" Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.395472 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/5d7df751-5d4d-4ce4-83c9-70abd18fc7c7-dev\") pod \"cinder-volume-volume1-0\" (UID: \"5d7df751-5d4d-4ce4-83c9-70abd18fc7c7\") " pod="openstack/cinder-volume-volume1-0" Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.395513 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/5d7df751-5d4d-4ce4-83c9-70abd18fc7c7-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"5d7df751-5d4d-4ce4-83c9-70abd18fc7c7\") " pod="openstack/cinder-volume-volume1-0" Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.395536 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a\") " pod="openstack/cinder-backup-0" Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.395601 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a-lib-modules\") pod \"cinder-backup-0\" (UID: \"d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a\") " pod="openstack/cinder-backup-0" Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.395668 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/5d7df751-5d4d-4ce4-83c9-70abd18fc7c7-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"5d7df751-5d4d-4ce4-83c9-70abd18fc7c7\") " pod="openstack/cinder-volume-volume1-0" Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.395709 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/5d7df751-5d4d-4ce4-83c9-70abd18fc7c7-run\") pod \"cinder-volume-volume1-0\" (UID: \"5d7df751-5d4d-4ce4-83c9-70abd18fc7c7\") " pod="openstack/cinder-volume-volume1-0" Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.395748 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a\") " pod="openstack/cinder-backup-0" Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.395774 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5d7df751-5d4d-4ce4-83c9-70abd18fc7c7-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"5d7df751-5d4d-4ce4-83c9-70abd18fc7c7\") " pod="openstack/cinder-volume-volume1-0" Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.395801 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5l5g\" (UniqueName: \"kubernetes.io/projected/d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a-kube-api-access-p5l5g\") pod \"cinder-backup-0\" (UID: \"d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a\") " pod="openstack/cinder-backup-0" Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.395822 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5d7df751-5d4d-4ce4-83c9-70abd18fc7c7-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"5d7df751-5d4d-4ce4-83c9-70abd18fc7c7\") " pod="openstack/cinder-volume-volume1-0" Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.395840 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/5d7df751-5d4d-4ce4-83c9-70abd18fc7c7-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"5d7df751-5d4d-4ce4-83c9-70abd18fc7c7\") " pod="openstack/cinder-volume-volume1-0" Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.395868 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a-dev\") pod \"cinder-backup-0\" (UID: \"d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a\") " pod="openstack/cinder-backup-0" Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.395886 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/5d7df751-5d4d-4ce4-83c9-70abd18fc7c7-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"5d7df751-5d4d-4ce4-83c9-70abd18fc7c7\") " pod="openstack/cinder-volume-volume1-0" Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.395909 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/5d7df751-5d4d-4ce4-83c9-70abd18fc7c7-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"5d7df751-5d4d-4ce4-83c9-70abd18fc7c7\") " pod="openstack/cinder-volume-volume1-0" Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.395950 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a\") " pod="openstack/cinder-backup-0" Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.395981 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a-config-data\") pod \"cinder-backup-0\" (UID: \"d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a\") " pod="openstack/cinder-backup-0" Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.396012 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/5d7df751-5d4d-4ce4-83c9-70abd18fc7c7-sys\") pod \"cinder-volume-volume1-0\" (UID: \"5d7df751-5d4d-4ce4-83c9-70abd18fc7c7\") " pod="openstack/cinder-volume-volume1-0" Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.396036 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a-run\") pod \"cinder-backup-0\" (UID: \"d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a\") " pod="openstack/cinder-backup-0" Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.396069 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a-etc-nvme\") pod \"cinder-backup-0\" (UID: \"d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a\") " pod="openstack/cinder-backup-0" Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.396091 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d7df751-5d4d-4ce4-83c9-70abd18fc7c7-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"5d7df751-5d4d-4ce4-83c9-70abd18fc7c7\") " pod="openstack/cinder-volume-volume1-0" Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.396119 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a-sys\") pod \"cinder-backup-0\" (UID: \"d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a\") " pod="openstack/cinder-backup-0" Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.497352 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d7df751-5d4d-4ce4-83c9-70abd18fc7c7-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"5d7df751-5d4d-4ce4-83c9-70abd18fc7c7\") " pod="openstack/cinder-volume-volume1-0" Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.497404 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a\") " pod="openstack/cinder-backup-0" Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.497687 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a\") " pod="openstack/cinder-backup-0" Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.497698 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/5d7df751-5d4d-4ce4-83c9-70abd18fc7c7-dev\") pod \"cinder-volume-volume1-0\" (UID: \"5d7df751-5d4d-4ce4-83c9-70abd18fc7c7\") " pod="openstack/cinder-volume-volume1-0" Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.497748 4782 
Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.497748 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/5d7df751-5d4d-4ce4-83c9-70abd18fc7c7-dev\") pod \"cinder-volume-volume1-0\" (UID: \"5d7df751-5d4d-4ce4-83c9-70abd18fc7c7\") " pod="openstack/cinder-volume-volume1-0"
Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.497784 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/5d7df751-5d4d-4ce4-83c9-70abd18fc7c7-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"5d7df751-5d4d-4ce4-83c9-70abd18fc7c7\") " pod="openstack/cinder-volume-volume1-0"
Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.497801 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a\") " pod="openstack/cinder-backup-0"
Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.497819 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a-lib-modules\") pod \"cinder-backup-0\" (UID: \"d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a\") " pod="openstack/cinder-backup-0"
Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.497844 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/5d7df751-5d4d-4ce4-83c9-70abd18fc7c7-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"5d7df751-5d4d-4ce4-83c9-70abd18fc7c7\") " pod="openstack/cinder-volume-volume1-0"
Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.497872 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/5d7df751-5d4d-4ce4-83c9-70abd18fc7c7-run\") pod \"cinder-volume-volume1-0\" (UID: \"5d7df751-5d4d-4ce4-83c9-70abd18fc7c7\") " pod="openstack/cinder-volume-volume1-0"
Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.497912 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a\") " pod="openstack/cinder-backup-0"
Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.497940 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5d7df751-5d4d-4ce4-83c9-70abd18fc7c7-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"5d7df751-5d4d-4ce4-83c9-70abd18fc7c7\") " pod="openstack/cinder-volume-volume1-0"
Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.497974 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p5l5g\" (UniqueName: \"kubernetes.io/projected/d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a-kube-api-access-p5l5g\") pod \"cinder-backup-0\" (UID: \"d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a\") " pod="openstack/cinder-backup-0"
Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.497999 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5d7df751-5d4d-4ce4-83c9-70abd18fc7c7-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"5d7df751-5d4d-4ce4-83c9-70abd18fc7c7\") " pod="openstack/cinder-volume-volume1-0"
Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.498022 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/5d7df751-5d4d-4ce4-83c9-70abd18fc7c7-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"5d7df751-5d4d-4ce4-83c9-70abd18fc7c7\") " pod="openstack/cinder-volume-volume1-0"
Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.498047 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a-dev\") pod \"cinder-backup-0\" (UID: \"d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a\") " pod="openstack/cinder-backup-0"
Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.498071 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/5d7df751-5d4d-4ce4-83c9-70abd18fc7c7-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"5d7df751-5d4d-4ce4-83c9-70abd18fc7c7\") " pod="openstack/cinder-volume-volume1-0"
Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.498097 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/5d7df751-5d4d-4ce4-83c9-70abd18fc7c7-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"5d7df751-5d4d-4ce4-83c9-70abd18fc7c7\") " pod="openstack/cinder-volume-volume1-0"
Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.498115 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a\") " pod="openstack/cinder-backup-0"
Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.498149 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a-config-data\") pod \"cinder-backup-0\" (UID: \"d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a\") " pod="openstack/cinder-backup-0"
Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.498194 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/5d7df751-5d4d-4ce4-83c9-70abd18fc7c7-sys\") pod \"cinder-volume-volume1-0\" (UID: \"5d7df751-5d4d-4ce4-83c9-70abd18fc7c7\") " pod="openstack/cinder-volume-volume1-0"
Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.498221 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a-run\") pod \"cinder-backup-0\" (UID: \"d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a\") " pod="openstack/cinder-backup-0"
Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.498253 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a-etc-nvme\") pod \"cinder-backup-0\" (UID: \"d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a\") " pod="openstack/cinder-backup-0"
Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.498280 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d7df751-5d4d-4ce4-83c9-70abd18fc7c7-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"5d7df751-5d4d-4ce4-83c9-70abd18fc7c7\") " pod="openstack/cinder-volume-volume1-0"
Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.498311 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a-sys\") pod \"cinder-backup-0\" (UID: \"d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a\") " pod="openstack/cinder-backup-0"
Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.498356 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5d7df751-5d4d-4ce4-83c9-70abd18fc7c7-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"5d7df751-5d4d-4ce4-83c9-70abd18fc7c7\") " pod="openstack/cinder-volume-volume1-0"
Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.498373 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a\") " pod="openstack/cinder-backup-0"
Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.498414 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a-dev\") pod \"cinder-backup-0\" (UID: \"d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a\") " pod="openstack/cinder-backup-0"
Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.498436 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/5d7df751-5d4d-4ce4-83c9-70abd18fc7c7-sys\") pod \"cinder-volume-volume1-0\" (UID: \"5d7df751-5d4d-4ce4-83c9-70abd18fc7c7\") " pod="openstack/cinder-volume-volume1-0"
Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.498601 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/5d7df751-5d4d-4ce4-83c9-70abd18fc7c7-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"5d7df751-5d4d-4ce4-83c9-70abd18fc7c7\") " pod="openstack/cinder-volume-volume1-0"
Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.498704 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/5d7df751-5d4d-4ce4-83c9-70abd18fc7c7-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"5d7df751-5d4d-4ce4-83c9-70abd18fc7c7\") " pod="openstack/cinder-volume-volume1-0"
Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.498732 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a\") " pod="openstack/cinder-backup-0"
Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.499272 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a-run\") pod \"cinder-backup-0\" (UID: \"d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a\") " pod="openstack/cinder-backup-0"
Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.499345 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5d7df751-5d4d-4ce4-83c9-70abd18fc7c7-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"5d7df751-5d4d-4ce4-83c9-70abd18fc7c7\") " pod="openstack/cinder-volume-volume1-0"
Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.499406 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a\") " pod="openstack/cinder-backup-0"
Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.499434 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a-scripts\") pod \"cinder-backup-0\" (UID: \"d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a\") " pod="openstack/cinder-backup-0"
Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.499471 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/5d7df751-5d4d-4ce4-83c9-70abd18fc7c7-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"5d7df751-5d4d-4ce4-83c9-70abd18fc7c7\") " pod="openstack/cinder-volume-volume1-0"
Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.499508 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lntp9\" (UniqueName: \"kubernetes.io/projected/5d7df751-5d4d-4ce4-83c9-70abd18fc7c7-kube-api-access-lntp9\") pod \"cinder-volume-volume1-0\" (UID: \"5d7df751-5d4d-4ce4-83c9-70abd18fc7c7\") " pod="openstack/cinder-volume-volume1-0"
Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.499534 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a-config-data-custom\") pod \"cinder-backup-0\" (UID: \"d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a\") " pod="openstack/cinder-backup-0"
Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.499558 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5d7df751-5d4d-4ce4-83c9-70abd18fc7c7-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"5d7df751-5d4d-4ce4-83c9-70abd18fc7c7\") " pod="openstack/cinder-volume-volume1-0"
Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.499580 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a-ceph\") pod \"cinder-backup-0\" (UID: \"d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a\") " pod="openstack/cinder-backup-0"
Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.499677 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a-lib-modules\") pod \"cinder-backup-0\" (UID: \"d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a\") " pod="openstack/cinder-backup-0"
Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.499713 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a\") " pod="openstack/cinder-backup-0"
Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.499747 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/5d7df751-5d4d-4ce4-83c9-70abd18fc7c7-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"5d7df751-5d4d-4ce4-83c9-70abd18fc7c7\") " pod="openstack/cinder-volume-volume1-0"
Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.502134 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/5d7df751-5d4d-4ce4-83c9-70abd18fc7c7-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"5d7df751-5d4d-4ce4-83c9-70abd18fc7c7\") " pod="openstack/cinder-volume-volume1-0"
Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.503715 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5d7df751-5d4d-4ce4-83c9-70abd18fc7c7-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"5d7df751-5d4d-4ce4-83c9-70abd18fc7c7\") " pod="openstack/cinder-volume-volume1-0"
Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.504566 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d7df751-5d4d-4ce4-83c9-70abd18fc7c7-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"5d7df751-5d4d-4ce4-83c9-70abd18fc7c7\") " pod="openstack/cinder-volume-volume1-0"
Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.504728 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a-etc-nvme\") pod \"cinder-backup-0\" (UID: \"d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a\") " pod="openstack/cinder-backup-0"
Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.504784 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5d7df751-5d4d-4ce4-83c9-70abd18fc7c7-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"5d7df751-5d4d-4ce4-83c9-70abd18fc7c7\") " pod="openstack/cinder-volume-volume1-0"
Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.504812 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a-sys\") pod \"cinder-backup-0\" (UID: \"d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a\") " pod="openstack/cinder-backup-0"
Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.504847 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a\") " pod="openstack/cinder-backup-0"
Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.505246 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a-ceph\") pod \"cinder-backup-0\" (UID: \"d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a\") " pod="openstack/cinder-backup-0"
Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.505317 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a\") " pod="openstack/cinder-backup-0"
Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.505597 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a-config-data-custom\") pod \"cinder-backup-0\" (UID: \"d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a\") " pod="openstack/cinder-backup-0"
Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.505900 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/5d7df751-5d4d-4ce4-83c9-70abd18fc7c7-run\") pod \"cinder-volume-volume1-0\" (UID: \"5d7df751-5d4d-4ce4-83c9-70abd18fc7c7\") " pod="openstack/cinder-volume-volume1-0"
Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.507033 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/5d7df751-5d4d-4ce4-83c9-70abd18fc7c7-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"5d7df751-5d4d-4ce4-83c9-70abd18fc7c7\") " pod="openstack/cinder-volume-volume1-0"
Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.507573 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a\") " pod="openstack/cinder-backup-0"
Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.508407 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a-config-data\") pod \"cinder-backup-0\" (UID: \"d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a\") " pod="openstack/cinder-backup-0"
Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.508614 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5d7df751-5d4d-4ce4-83c9-70abd18fc7c7-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"5d7df751-5d4d-4ce4-83c9-70abd18fc7c7\") " pod="openstack/cinder-volume-volume1-0"
Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.509965 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a-scripts\") pod \"cinder-backup-0\" (UID: \"d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a\") " pod="openstack/cinder-backup-0"
Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.510462 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/5d7df751-5d4d-4ce4-83c9-70abd18fc7c7-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"5d7df751-5d4d-4ce4-83c9-70abd18fc7c7\") " pod="openstack/cinder-volume-volume1-0"
Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.525848 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d7df751-5d4d-4ce4-83c9-70abd18fc7c7-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"5d7df751-5d4d-4ce4-83c9-70abd18fc7c7\") " pod="openstack/cinder-volume-volume1-0"
Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.544247 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p5l5g\" (UniqueName: \"kubernetes.io/projected/d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a-kube-api-access-p5l5g\") pod \"cinder-backup-0\" (UID: \"d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a\") " pod="openstack/cinder-backup-0"
Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.553558 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lntp9\" (UniqueName: \"kubernetes.io/projected/5d7df751-5d4d-4ce4-83c9-70abd18fc7c7-kube-api-access-lntp9\") pod \"cinder-volume-volume1-0\" (UID: \"5d7df751-5d4d-4ce4-83c9-70abd18fc7c7\") " pod="openstack/cinder-volume-volume1-0"
Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.814765 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-backup-0"
Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.824033 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-volume-volume1-0"
Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.916861 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-db-create-88lt6"]
Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.918617 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-create-88lt6"
Feb 02 11:29:44 crc kubenswrapper[4782]: I0202 11:29:44.933603 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-create-88lt6"]
Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.012707 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjldw\" (UniqueName: \"kubernetes.io/projected/d9a2fa32-7949-4dbe-8e51-49627e08f051-kube-api-access-hjldw\") pod \"manila-db-create-88lt6\" (UID: \"d9a2fa32-7949-4dbe-8e51-49627e08f051\") " pod="openstack/manila-db-create-88lt6"
Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.012758 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d9a2fa32-7949-4dbe-8e51-49627e08f051-operator-scripts\") pod \"manila-db-create-88lt6\" (UID: \"d9a2fa32-7949-4dbe-8e51-49627e08f051\") " pod="openstack/manila-db-create-88lt6"
Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.041409 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-61e9-account-create-update-vjlvv"]
Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.042590 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-61e9-account-create-update-vjlvv"
Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.050191 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-db-secret"
Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.072273 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"]
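The SyncLoop ADD/UPDATE pairs and the repeated "No sandbox for pod can be found" probes above give a rough per-pod startup timeline. A sketch that extracts just those events into a flat timeline (the patterns are taken from the entries above; this is a log-analysis aid, not kubelet code):

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

var (
	// timeRe grabs the wall-clock time from the klog header.
	timeRe = regexp.MustCompile(`[IWEF]\d{4} (\d{2}:\d{2}:\d{2}\.\d+)`)
	// addRe matches the API-sourced SyncLoop ADD/UPDATE events.
	addRe = regexp.MustCompile(`SyncLoop (ADD|UPDATE)" source="api" pods=\["([^"]+)"\]`)
	// sboxRe matches the "need a new sandbox" probe.
	sboxRe = regexp.MustCompile(`No sandbox for pod can be found.*pod="([^"]+)"`)
)

func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 1<<20), 1<<20)
	for sc.Scan() {
		line := sc.Text()
		t := timeRe.FindStringSubmatch(line)
		if t == nil {
			continue
		}
		if m := addRe.FindStringSubmatch(line); m != nil {
			fmt.Printf("%s %-7s %s\n", t[1], m[1], m[2])
		} else if m := sboxRe.FindStringSubmatch(line); m != nil {
			fmt.Printf("%s %-7s %s\n", t[1], "SANDBOX", m[1])
		}
	}
}
```

On this stream it shows, for example, cinder-backup-0 going ADD at 11:29:44.158456, probing for a sandbox twice, and the manila and glance pods arriving in quick succession afterwards.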
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.077383 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.084229 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.084550 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.085321 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-57vkh" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.100794 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-61e9-account-create-update-vjlvv"] Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.114912 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hjldw\" (UniqueName: \"kubernetes.io/projected/d9a2fa32-7949-4dbe-8e51-49627e08f051-kube-api-access-hjldw\") pod \"manila-db-create-88lt6\" (UID: \"d9a2fa32-7949-4dbe-8e51-49627e08f051\") " pod="openstack/manila-db-create-88lt6" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.114962 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d9a2fa32-7949-4dbe-8e51-49627e08f051-operator-scripts\") pod \"manila-db-create-88lt6\" (UID: \"d9a2fa32-7949-4dbe-8e51-49627e08f051\") " pod="openstack/manila-db-create-88lt6" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.115770 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d9a2fa32-7949-4dbe-8e51-49627e08f051-operator-scripts\") pod \"manila-db-create-88lt6\" (UID: \"d9a2fa32-7949-4dbe-8e51-49627e08f051\") " pod="openstack/manila-db-create-88lt6" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.119856 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.173491 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hjldw\" (UniqueName: \"kubernetes.io/projected/d9a2fa32-7949-4dbe-8e51-49627e08f051-kube-api-access-hjldw\") pod \"manila-db-create-88lt6\" (UID: \"d9a2fa32-7949-4dbe-8e51-49627e08f051\") " pod="openstack/manila-db-create-88lt6" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.178134 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.179657 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.186309 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.186497 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.213606 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.216850 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a41f7244-284a-4ffc-9243-1b6748d57f86-scripts\") pod \"glance-default-external-api-0\" (UID: \"a41f7244-284a-4ffc-9243-1b6748d57f86\") " pod="openstack/glance-default-external-api-0" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.216928 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rndvm\" (UniqueName: \"kubernetes.io/projected/7260512c-a397-4b18-ab4d-a97e7dbf50d9-kube-api-access-rndvm\") pod \"manila-61e9-account-create-update-vjlvv\" (UID: \"7260512c-a397-4b18-ab4d-a97e7dbf50d9\") " pod="openstack/manila-61e9-account-create-update-vjlvv" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.216998 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a41f7244-284a-4ffc-9243-1b6748d57f86-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"a41f7244-284a-4ffc-9243-1b6748d57f86\") " pod="openstack/glance-default-external-api-0" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.217039 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a41f7244-284a-4ffc-9243-1b6748d57f86-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"a41f7244-284a-4ffc-9243-1b6748d57f86\") " pod="openstack/glance-default-external-api-0" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.217098 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7260512c-a397-4b18-ab4d-a97e7dbf50d9-operator-scripts\") pod \"manila-61e9-account-create-update-vjlvv\" (UID: \"7260512c-a397-4b18-ab4d-a97e7dbf50d9\") " pod="openstack/manila-61e9-account-create-update-vjlvv" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.217118 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/a41f7244-284a-4ffc-9243-1b6748d57f86-ceph\") pod \"glance-default-external-api-0\" (UID: \"a41f7244-284a-4ffc-9243-1b6748d57f86\") " pod="openstack/glance-default-external-api-0" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.217171 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"a41f7244-284a-4ffc-9243-1b6748d57f86\") " pod="openstack/glance-default-external-api-0" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.217190 4782 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a41f7244-284a-4ffc-9243-1b6748d57f86-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"a41f7244-284a-4ffc-9243-1b6748d57f86\") " pod="openstack/glance-default-external-api-0" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.217243 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a41f7244-284a-4ffc-9243-1b6748d57f86-config-data\") pod \"glance-default-external-api-0\" (UID: \"a41f7244-284a-4ffc-9243-1b6748d57f86\") " pod="openstack/glance-default-external-api-0" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.217257 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a41f7244-284a-4ffc-9243-1b6748d57f86-logs\") pod \"glance-default-external-api-0\" (UID: \"a41f7244-284a-4ffc-9243-1b6748d57f86\") " pod="openstack/glance-default-external-api-0" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.217329 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5c7wq\" (UniqueName: \"kubernetes.io/projected/a41f7244-284a-4ffc-9243-1b6748d57f86-kube-api-access-5c7wq\") pod \"glance-default-external-api-0\" (UID: \"a41f7244-284a-4ffc-9243-1b6748d57f86\") " pod="openstack/glance-default-external-api-0" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.276035 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-create-88lt6" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.312821 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-5cc68dfb67-6l9rd"] Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.314348 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5cc68dfb67-6l9rd" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.323098 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"7a649cbf-74c3-4519-a14f-92815ec8a297\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.323165 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a41f7244-284a-4ffc-9243-1b6748d57f86-config-data\") pod \"glance-default-external-api-0\" (UID: \"a41f7244-284a-4ffc-9243-1b6748d57f86\") " pod="openstack/glance-default-external-api-0" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.323187 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a41f7244-284a-4ffc-9243-1b6748d57f86-logs\") pod \"glance-default-external-api-0\" (UID: \"a41f7244-284a-4ffc-9243-1b6748d57f86\") " pod="openstack/glance-default-external-api-0" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.323233 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a649cbf-74c3-4519-a14f-92815ec8a297-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"7a649cbf-74c3-4519-a14f-92815ec8a297\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.323280 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5c7wq\" (UniqueName: \"kubernetes.io/projected/a41f7244-284a-4ffc-9243-1b6748d57f86-kube-api-access-5c7wq\") pod \"glance-default-external-api-0\" (UID: \"a41f7244-284a-4ffc-9243-1b6748d57f86\") " pod="openstack/glance-default-external-api-0" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.323329 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/7a649cbf-74c3-4519-a14f-92815ec8a297-ceph\") pod \"glance-default-internal-api-0\" (UID: \"7a649cbf-74c3-4519-a14f-92815ec8a297\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.323355 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a649cbf-74c3-4519-a14f-92815ec8a297-config-data\") pod \"glance-default-internal-api-0\" (UID: \"7a649cbf-74c3-4519-a14f-92815ec8a297\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.323379 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a649cbf-74c3-4519-a14f-92815ec8a297-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"7a649cbf-74c3-4519-a14f-92815ec8a297\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.323414 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8rxx\" (UniqueName: \"kubernetes.io/projected/7a649cbf-74c3-4519-a14f-92815ec8a297-kube-api-access-l8rxx\") pod 
\"glance-default-internal-api-0\" (UID: \"7a649cbf-74c3-4519-a14f-92815ec8a297\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.323434 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a41f7244-284a-4ffc-9243-1b6748d57f86-scripts\") pod \"glance-default-external-api-0\" (UID: \"a41f7244-284a-4ffc-9243-1b6748d57f86\") " pod="openstack/glance-default-external-api-0" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.323524 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rndvm\" (UniqueName: \"kubernetes.io/projected/7260512c-a397-4b18-ab4d-a97e7dbf50d9-kube-api-access-rndvm\") pod \"manila-61e9-account-create-update-vjlvv\" (UID: \"7260512c-a397-4b18-ab4d-a97e7dbf50d9\") " pod="openstack/manila-61e9-account-create-update-vjlvv" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.323590 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a41f7244-284a-4ffc-9243-1b6748d57f86-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"a41f7244-284a-4ffc-9243-1b6748d57f86\") " pod="openstack/glance-default-external-api-0" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.324064 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a41f7244-284a-4ffc-9243-1b6748d57f86-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"a41f7244-284a-4ffc-9243-1b6748d57f86\") " pod="openstack/glance-default-external-api-0" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.324106 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7a649cbf-74c3-4519-a14f-92815ec8a297-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"7a649cbf-74c3-4519-a14f-92815ec8a297\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.324427 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7260512c-a397-4b18-ab4d-a97e7dbf50d9-operator-scripts\") pod \"manila-61e9-account-create-update-vjlvv\" (UID: \"7260512c-a397-4b18-ab4d-a97e7dbf50d9\") " pod="openstack/manila-61e9-account-create-update-vjlvv" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.324469 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/a41f7244-284a-4ffc-9243-1b6748d57f86-ceph\") pod \"glance-default-external-api-0\" (UID: \"a41f7244-284a-4ffc-9243-1b6748d57f86\") " pod="openstack/glance-default-external-api-0" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.324707 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7a649cbf-74c3-4519-a14f-92815ec8a297-logs\") pod \"glance-default-internal-api-0\" (UID: \"7a649cbf-74c3-4519-a14f-92815ec8a297\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.324747 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" 
(UID: \"a41f7244-284a-4ffc-9243-1b6748d57f86\") " pod="openstack/glance-default-external-api-0" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.324768 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a41f7244-284a-4ffc-9243-1b6748d57f86-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"a41f7244-284a-4ffc-9243-1b6748d57f86\") " pod="openstack/glance-default-external-api-0" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.324855 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7a649cbf-74c3-4519-a14f-92815ec8a297-scripts\") pod \"glance-default-internal-api-0\" (UID: \"7a649cbf-74c3-4519-a14f-92815ec8a297\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.350927 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a41f7244-284a-4ffc-9243-1b6748d57f86-logs\") pod \"glance-default-external-api-0\" (UID: \"a41f7244-284a-4ffc-9243-1b6748d57f86\") " pod="openstack/glance-default-external-api-0" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.355346 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.355770 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-v8dpd" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.357578 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.358061 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.363343 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7260512c-a397-4b18-ab4d-a97e7dbf50d9-operator-scripts\") pod \"manila-61e9-account-create-update-vjlvv\" (UID: \"7260512c-a397-4b18-ab4d-a97e7dbf50d9\") " pod="openstack/manila-61e9-account-create-update-vjlvv" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.363722 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rndvm\" (UniqueName: \"kubernetes.io/projected/7260512c-a397-4b18-ab4d-a97e7dbf50d9-kube-api-access-rndvm\") pod \"manila-61e9-account-create-update-vjlvv\" (UID: \"7260512c-a397-4b18-ab4d-a97e7dbf50d9\") " pod="openstack/manila-61e9-account-create-update-vjlvv" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.364403 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a41f7244-284a-4ffc-9243-1b6748d57f86-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"a41f7244-284a-4ffc-9243-1b6748d57f86\") " pod="openstack/glance-default-external-api-0" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.367911 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/a41f7244-284a-4ffc-9243-1b6748d57f86-ceph\") pod \"glance-default-external-api-0\" (UID: \"a41f7244-284a-4ffc-9243-1b6748d57f86\") " pod="openstack/glance-default-external-api-0" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.367989 4782 
operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"a41f7244-284a-4ffc-9243-1b6748d57f86\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/glance-default-external-api-0" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.379025 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a41f7244-284a-4ffc-9243-1b6748d57f86-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"a41f7244-284a-4ffc-9243-1b6748d57f86\") " pod="openstack/glance-default-external-api-0" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.387298 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a41f7244-284a-4ffc-9243-1b6748d57f86-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"a41f7244-284a-4ffc-9243-1b6748d57f86\") " pod="openstack/glance-default-external-api-0" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.419585 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5cc68dfb67-6l9rd"] Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.441246 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-61e9-account-create-update-vjlvv" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.489527 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7a649cbf-74c3-4519-a14f-92815ec8a297-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"7a649cbf-74c3-4519-a14f-92815ec8a297\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.489601 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7a649cbf-74c3-4519-a14f-92815ec8a297-logs\") pod \"glance-default-internal-api-0\" (UID: \"7a649cbf-74c3-4519-a14f-92815ec8a297\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.489655 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7a649cbf-74c3-4519-a14f-92815ec8a297-scripts\") pod \"glance-default-internal-api-0\" (UID: \"7a649cbf-74c3-4519-a14f-92815ec8a297\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.489661 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5c7wq\" (UniqueName: \"kubernetes.io/projected/a41f7244-284a-4ffc-9243-1b6748d57f86-kube-api-access-5c7wq\") pod \"glance-default-external-api-0\" (UID: \"a41f7244-284a-4ffc-9243-1b6748d57f86\") " pod="openstack/glance-default-external-api-0" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.489677 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"7a649cbf-74c3-4519-a14f-92815ec8a297\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.490195 4782 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"7a649cbf-74c3-4519-a14f-92815ec8a297\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/glance-default-internal-api-0" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.490594 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7a649cbf-74c3-4519-a14f-92815ec8a297-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"7a649cbf-74c3-4519-a14f-92815ec8a297\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.437620 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a41f7244-284a-4ffc-9243-1b6748d57f86-scripts\") pod \"glance-default-external-api-0\" (UID: \"a41f7244-284a-4ffc-9243-1b6748d57f86\") " pod="openstack/glance-default-external-api-0" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.491445 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a41f7244-284a-4ffc-9243-1b6748d57f86-config-data\") pod \"glance-default-external-api-0\" (UID: \"a41f7244-284a-4ffc-9243-1b6748d57f86\") " pod="openstack/glance-default-external-api-0" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.492367 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7a649cbf-74c3-4519-a14f-92815ec8a297-logs\") pod \"glance-default-internal-api-0\" (UID: \"7a649cbf-74c3-4519-a14f-92815ec8a297\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.492739 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a649cbf-74c3-4519-a14f-92815ec8a297-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"7a649cbf-74c3-4519-a14f-92815ec8a297\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.492954 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/7a649cbf-74c3-4519-a14f-92815ec8a297-ceph\") pod \"glance-default-internal-api-0\" (UID: \"7a649cbf-74c3-4519-a14f-92815ec8a297\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.505148 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a649cbf-74c3-4519-a14f-92815ec8a297-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"7a649cbf-74c3-4519-a14f-92815ec8a297\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.509912 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/7a649cbf-74c3-4519-a14f-92815ec8a297-ceph\") pod \"glance-default-internal-api-0\" (UID: \"7a649cbf-74c3-4519-a14f-92815ec8a297\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.510003 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a649cbf-74c3-4519-a14f-92815ec8a297-config-data\") pod \"glance-default-internal-api-0\" (UID: \"7a649cbf-74c3-4519-a14f-92815ec8a297\") 
" pod="openstack/glance-default-internal-api-0" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.510100 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a649cbf-74c3-4519-a14f-92815ec8a297-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"7a649cbf-74c3-4519-a14f-92815ec8a297\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.510211 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l8rxx\" (UniqueName: \"kubernetes.io/projected/7a649cbf-74c3-4519-a14f-92815ec8a297-kube-api-access-l8rxx\") pod \"glance-default-internal-api-0\" (UID: \"7a649cbf-74c3-4519-a14f-92815ec8a297\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.530064 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a649cbf-74c3-4519-a14f-92815ec8a297-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"7a649cbf-74c3-4519-a14f-92815ec8a297\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.545665 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a649cbf-74c3-4519-a14f-92815ec8a297-config-data\") pod \"glance-default-internal-api-0\" (UID: \"7a649cbf-74c3-4519-a14f-92815ec8a297\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.563286 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7a649cbf-74c3-4519-a14f-92815ec8a297-scripts\") pod \"glance-default-internal-api-0\" (UID: \"7a649cbf-74c3-4519-a14f-92815ec8a297\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.583813 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 02 11:29:45 crc kubenswrapper[4782]: E0202 11:29:45.585082 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[glance kube-api-access-l8rxx], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/glance-default-internal-api-0" podUID="7a649cbf-74c3-4519-a14f-92815ec8a297" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.592472 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8rxx\" (UniqueName: \"kubernetes.io/projected/7a649cbf-74c3-4519-a14f-92815ec8a297-kube-api-access-l8rxx\") pod \"glance-default-internal-api-0\" (UID: \"7a649cbf-74c3-4519-a14f-92815ec8a297\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.608660 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"7a649cbf-74c3-4519-a14f-92815ec8a297\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.612972 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxs5g\" (UniqueName: \"kubernetes.io/projected/db3caff6-55ef-4b9f-9d45-15fc834e5974-kube-api-access-mxs5g\") pod 
\"horizon-5cc68dfb67-6l9rd\" (UID: \"db3caff6-55ef-4b9f-9d45-15fc834e5974\") " pod="openstack/horizon-5cc68dfb67-6l9rd" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.613036 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/db3caff6-55ef-4b9f-9d45-15fc834e5974-horizon-secret-key\") pod \"horizon-5cc68dfb67-6l9rd\" (UID: \"db3caff6-55ef-4b9f-9d45-15fc834e5974\") " pod="openstack/horizon-5cc68dfb67-6l9rd" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.613088 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/db3caff6-55ef-4b9f-9d45-15fc834e5974-logs\") pod \"horizon-5cc68dfb67-6l9rd\" (UID: \"db3caff6-55ef-4b9f-9d45-15fc834e5974\") " pod="openstack/horizon-5cc68dfb67-6l9rd" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.613129 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/db3caff6-55ef-4b9f-9d45-15fc834e5974-scripts\") pod \"horizon-5cc68dfb67-6l9rd\" (UID: \"db3caff6-55ef-4b9f-9d45-15fc834e5974\") " pod="openstack/horizon-5cc68dfb67-6l9rd" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.613190 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/db3caff6-55ef-4b9f-9d45-15fc834e5974-config-data\") pod \"horizon-5cc68dfb67-6l9rd\" (UID: \"db3caff6-55ef-4b9f-9d45-15fc834e5974\") " pod="openstack/horizon-5cc68dfb67-6l9rd" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.626103 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"a41f7244-284a-4ffc-9243-1b6748d57f86\") " pod="openstack/glance-default-external-api-0" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.652777 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-5545895985-nbz88"] Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.654903 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5545895985-nbz88" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.688367 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5545895985-nbz88"] Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.696909 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.697818 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.715625 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/db3caff6-55ef-4b9f-9d45-15fc834e5974-config-data\") pod \"horizon-5cc68dfb67-6l9rd\" (UID: \"db3caff6-55ef-4b9f-9d45-15fc834e5974\") " pod="openstack/horizon-5cc68dfb67-6l9rd" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.715709 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/00eb57a6-b941-443f-9b8a-644c0389b562-scripts\") pod \"horizon-5545895985-nbz88\" (UID: \"00eb57a6-b941-443f-9b8a-644c0389b562\") " pod="openstack/horizon-5545895985-nbz88" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.715774 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/00eb57a6-b941-443f-9b8a-644c0389b562-horizon-secret-key\") pod \"horizon-5545895985-nbz88\" (UID: \"00eb57a6-b941-443f-9b8a-644c0389b562\") " pod="openstack/horizon-5545895985-nbz88" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.715842 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mxs5g\" (UniqueName: \"kubernetes.io/projected/db3caff6-55ef-4b9f-9d45-15fc834e5974-kube-api-access-mxs5g\") pod \"horizon-5cc68dfb67-6l9rd\" (UID: \"db3caff6-55ef-4b9f-9d45-15fc834e5974\") " pod="openstack/horizon-5cc68dfb67-6l9rd" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.715901 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/db3caff6-55ef-4b9f-9d45-15fc834e5974-horizon-secret-key\") pod \"horizon-5cc68dfb67-6l9rd\" (UID: \"db3caff6-55ef-4b9f-9d45-15fc834e5974\") " pod="openstack/horizon-5cc68dfb67-6l9rd" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.715972 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gsbxf\" (UniqueName: \"kubernetes.io/projected/00eb57a6-b941-443f-9b8a-644c0389b562-kube-api-access-gsbxf\") pod \"horizon-5545895985-nbz88\" (UID: \"00eb57a6-b941-443f-9b8a-644c0389b562\") " pod="openstack/horizon-5545895985-nbz88" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.716016 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/db3caff6-55ef-4b9f-9d45-15fc834e5974-logs\") pod \"horizon-5cc68dfb67-6l9rd\" (UID: \"db3caff6-55ef-4b9f-9d45-15fc834e5974\") " pod="openstack/horizon-5cc68dfb67-6l9rd" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.716059 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/db3caff6-55ef-4b9f-9d45-15fc834e5974-scripts\") pod \"horizon-5cc68dfb67-6l9rd\" (UID: \"db3caff6-55ef-4b9f-9d45-15fc834e5974\") " pod="openstack/horizon-5cc68dfb67-6l9rd" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.716087 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/00eb57a6-b941-443f-9b8a-644c0389b562-logs\") pod \"horizon-5545895985-nbz88\" (UID: \"00eb57a6-b941-443f-9b8a-644c0389b562\") " pod="openstack/horizon-5545895985-nbz88" Feb 02 
11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.716131 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/00eb57a6-b941-443f-9b8a-644c0389b562-config-data\") pod \"horizon-5545895985-nbz88\" (UID: \"00eb57a6-b941-443f-9b8a-644c0389b562\") " pod="openstack/horizon-5545895985-nbz88" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.717636 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/db3caff6-55ef-4b9f-9d45-15fc834e5974-config-data\") pod \"horizon-5cc68dfb67-6l9rd\" (UID: \"db3caff6-55ef-4b9f-9d45-15fc834e5974\") " pod="openstack/horizon-5cc68dfb67-6l9rd" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.717912 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/db3caff6-55ef-4b9f-9d45-15fc834e5974-logs\") pod \"horizon-5cc68dfb67-6l9rd\" (UID: \"db3caff6-55ef-4b9f-9d45-15fc834e5974\") " pod="openstack/horizon-5cc68dfb67-6l9rd" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.718359 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/db3caff6-55ef-4b9f-9d45-15fc834e5974-scripts\") pod \"horizon-5cc68dfb67-6l9rd\" (UID: \"db3caff6-55ef-4b9f-9d45-15fc834e5974\") " pod="openstack/horizon-5cc68dfb67-6l9rd" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.734141 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/db3caff6-55ef-4b9f-9d45-15fc834e5974-horizon-secret-key\") pod \"horizon-5cc68dfb67-6l9rd\" (UID: \"db3caff6-55ef-4b9f-9d45-15fc834e5974\") " pod="openstack/horizon-5cc68dfb67-6l9rd" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.743575 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mxs5g\" (UniqueName: \"kubernetes.io/projected/db3caff6-55ef-4b9f-9d45-15fc834e5974-kube-api-access-mxs5g\") pod \"horizon-5cc68dfb67-6l9rd\" (UID: \"db3caff6-55ef-4b9f-9d45-15fc834e5974\") " pod="openstack/horizon-5cc68dfb67-6l9rd" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.744265 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-volume1-0"] Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.820929 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gsbxf\" (UniqueName: \"kubernetes.io/projected/00eb57a6-b941-443f-9b8a-644c0389b562-kube-api-access-gsbxf\") pod \"horizon-5545895985-nbz88\" (UID: \"00eb57a6-b941-443f-9b8a-644c0389b562\") " pod="openstack/horizon-5545895985-nbz88" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.821018 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/00eb57a6-b941-443f-9b8a-644c0389b562-logs\") pod \"horizon-5545895985-nbz88\" (UID: \"00eb57a6-b941-443f-9b8a-644c0389b562\") " pod="openstack/horizon-5545895985-nbz88" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.821051 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/00eb57a6-b941-443f-9b8a-644c0389b562-config-data\") pod \"horizon-5545895985-nbz88\" (UID: \"00eb57a6-b941-443f-9b8a-644c0389b562\") " pod="openstack/horizon-5545895985-nbz88" Feb 02 11:29:45 crc 
kubenswrapper[4782]: I0202 11:29:45.821108 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/00eb57a6-b941-443f-9b8a-644c0389b562-scripts\") pod \"horizon-5545895985-nbz88\" (UID: \"00eb57a6-b941-443f-9b8a-644c0389b562\") " pod="openstack/horizon-5545895985-nbz88" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.821138 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/00eb57a6-b941-443f-9b8a-644c0389b562-horizon-secret-key\") pod \"horizon-5545895985-nbz88\" (UID: \"00eb57a6-b941-443f-9b8a-644c0389b562\") " pod="openstack/horizon-5545895985-nbz88" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.822288 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/00eb57a6-b941-443f-9b8a-644c0389b562-logs\") pod \"horizon-5545895985-nbz88\" (UID: \"00eb57a6-b941-443f-9b8a-644c0389b562\") " pod="openstack/horizon-5545895985-nbz88" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.823735 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/00eb57a6-b941-443f-9b8a-644c0389b562-config-data\") pod \"horizon-5545895985-nbz88\" (UID: \"00eb57a6-b941-443f-9b8a-644c0389b562\") " pod="openstack/horizon-5545895985-nbz88" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.824236 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/00eb57a6-b941-443f-9b8a-644c0389b562-scripts\") pod \"horizon-5545895985-nbz88\" (UID: \"00eb57a6-b941-443f-9b8a-644c0389b562\") " pod="openstack/horizon-5545895985-nbz88" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.828101 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/00eb57a6-b941-443f-9b8a-644c0389b562-horizon-secret-key\") pod \"horizon-5545895985-nbz88\" (UID: \"00eb57a6-b941-443f-9b8a-644c0389b562\") " pod="openstack/horizon-5545895985-nbz88" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.847362 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gsbxf\" (UniqueName: \"kubernetes.io/projected/00eb57a6-b941-443f-9b8a-644c0389b562-kube-api-access-gsbxf\") pod \"horizon-5545895985-nbz88\" (UID: \"00eb57a6-b941-443f-9b8a-644c0389b562\") " pod="openstack/horizon-5545895985-nbz88" Feb 02 11:29:45 crc kubenswrapper[4782]: I0202 11:29:45.994244 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5cc68dfb67-6l9rd" Feb 02 11:29:46 crc kubenswrapper[4782]: I0202 11:29:46.002429 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"] Feb 02 11:29:46 crc kubenswrapper[4782]: I0202 11:29:46.017167 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5545895985-nbz88" Feb 02 11:29:46 crc kubenswrapper[4782]: W0202 11:29:46.076968 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd2b6a5bd_a9ae_4bc9_91ed_ca1ac5d7489a.slice/crio-309de090efa190d34ce86518617a66f401eb98c29502be404b5cf822787a7a49 WatchSource:0}: Error finding container 309de090efa190d34ce86518617a66f401eb98c29502be404b5cf822787a7a49: Status 404 returned error can't find the container with id 309de090efa190d34ce86518617a66f401eb98c29502be404b5cf822787a7a49 Feb 02 11:29:46 crc kubenswrapper[4782]: I0202 11:29:46.109350 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-create-88lt6"] Feb 02 11:29:46 crc kubenswrapper[4782]: I0202 11:29:46.267531 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-61e9-account-create-update-vjlvv"] Feb 02 11:29:46 crc kubenswrapper[4782]: I0202 11:29:46.312321 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"5d7df751-5d4d-4ce4-83c9-70abd18fc7c7","Type":"ContainerStarted","Data":"415b4b7bd0fc7798222630c7a319d125d0215742edd5f60525165470c16cbae1"} Feb 02 11:29:46 crc kubenswrapper[4782]: I0202 11:29:46.313906 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a","Type":"ContainerStarted","Data":"309de090efa190d34ce86518617a66f401eb98c29502be404b5cf822787a7a49"} Feb 02 11:29:46 crc kubenswrapper[4782]: I0202 11:29:46.313929 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 02 11:29:46 crc kubenswrapper[4782]: I0202 11:29:46.359175 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 02 11:29:46 crc kubenswrapper[4782]: I0202 11:29:46.437975 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7a649cbf-74c3-4519-a14f-92815ec8a297-scripts\") pod \"7a649cbf-74c3-4519-a14f-92815ec8a297\" (UID: \"7a649cbf-74c3-4519-a14f-92815ec8a297\") " Feb 02 11:29:46 crc kubenswrapper[4782]: I0202 11:29:46.438087 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a649cbf-74c3-4519-a14f-92815ec8a297-config-data\") pod \"7a649cbf-74c3-4519-a14f-92815ec8a297\" (UID: \"7a649cbf-74c3-4519-a14f-92815ec8a297\") " Feb 02 11:29:46 crc kubenswrapper[4782]: I0202 11:29:46.438123 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a649cbf-74c3-4519-a14f-92815ec8a297-combined-ca-bundle\") pod \"7a649cbf-74c3-4519-a14f-92815ec8a297\" (UID: \"7a649cbf-74c3-4519-a14f-92815ec8a297\") " Feb 02 11:29:46 crc kubenswrapper[4782]: I0202 11:29:46.438155 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"7a649cbf-74c3-4519-a14f-92815ec8a297\" (UID: \"7a649cbf-74c3-4519-a14f-92815ec8a297\") " Feb 02 11:29:46 crc kubenswrapper[4782]: I0202 11:29:46.438345 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7a649cbf-74c3-4519-a14f-92815ec8a297-httpd-run\") pod \"7a649cbf-74c3-4519-a14f-92815ec8a297\" (UID: \"7a649cbf-74c3-4519-a14f-92815ec8a297\") " Feb 02 11:29:46 crc kubenswrapper[4782]: I0202 11:29:46.438373 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l8rxx\" (UniqueName: \"kubernetes.io/projected/7a649cbf-74c3-4519-a14f-92815ec8a297-kube-api-access-l8rxx\") pod \"7a649cbf-74c3-4519-a14f-92815ec8a297\" (UID: \"7a649cbf-74c3-4519-a14f-92815ec8a297\") " Feb 02 11:29:46 crc kubenswrapper[4782]: I0202 11:29:46.438413 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7a649cbf-74c3-4519-a14f-92815ec8a297-logs\") pod \"7a649cbf-74c3-4519-a14f-92815ec8a297\" (UID: \"7a649cbf-74c3-4519-a14f-92815ec8a297\") " Feb 02 11:29:46 crc kubenswrapper[4782]: I0202 11:29:46.438446 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/7a649cbf-74c3-4519-a14f-92815ec8a297-ceph\") pod \"7a649cbf-74c3-4519-a14f-92815ec8a297\" (UID: \"7a649cbf-74c3-4519-a14f-92815ec8a297\") " Feb 02 11:29:46 crc kubenswrapper[4782]: I0202 11:29:46.438488 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a649cbf-74c3-4519-a14f-92815ec8a297-internal-tls-certs\") pod \"7a649cbf-74c3-4519-a14f-92815ec8a297\" (UID: \"7a649cbf-74c3-4519-a14f-92815ec8a297\") " Feb 02 11:29:46 crc kubenswrapper[4782]: I0202 11:29:46.440671 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7a649cbf-74c3-4519-a14f-92815ec8a297-logs" (OuterVolumeSpecName: "logs") pod "7a649cbf-74c3-4519-a14f-92815ec8a297" (UID: "7a649cbf-74c3-4519-a14f-92815ec8a297"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:29:46 crc kubenswrapper[4782]: I0202 11:29:46.440688 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7a649cbf-74c3-4519-a14f-92815ec8a297-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "7a649cbf-74c3-4519-a14f-92815ec8a297" (UID: "7a649cbf-74c3-4519-a14f-92815ec8a297"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:29:46 crc kubenswrapper[4782]: I0202 11:29:46.446356 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a649cbf-74c3-4519-a14f-92815ec8a297-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "7a649cbf-74c3-4519-a14f-92815ec8a297" (UID: "7a649cbf-74c3-4519-a14f-92815ec8a297"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:29:46 crc kubenswrapper[4782]: I0202 11:29:46.448526 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a649cbf-74c3-4519-a14f-92815ec8a297-scripts" (OuterVolumeSpecName: "scripts") pod "7a649cbf-74c3-4519-a14f-92815ec8a297" (UID: "7a649cbf-74c3-4519-a14f-92815ec8a297"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:29:46 crc kubenswrapper[4782]: I0202 11:29:46.449320 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a649cbf-74c3-4519-a14f-92815ec8a297-kube-api-access-l8rxx" (OuterVolumeSpecName: "kube-api-access-l8rxx") pod "7a649cbf-74c3-4519-a14f-92815ec8a297" (UID: "7a649cbf-74c3-4519-a14f-92815ec8a297"). InnerVolumeSpecName "kube-api-access-l8rxx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:29:46 crc kubenswrapper[4782]: I0202 11:29:46.452831 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "glance") pod "7a649cbf-74c3-4519-a14f-92815ec8a297" (UID: "7a649cbf-74c3-4519-a14f-92815ec8a297"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 02 11:29:46 crc kubenswrapper[4782]: I0202 11:29:46.454269 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a649cbf-74c3-4519-a14f-92815ec8a297-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7a649cbf-74c3-4519-a14f-92815ec8a297" (UID: "7a649cbf-74c3-4519-a14f-92815ec8a297"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:29:46 crc kubenswrapper[4782]: I0202 11:29:46.455499 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a649cbf-74c3-4519-a14f-92815ec8a297-ceph" (OuterVolumeSpecName: "ceph") pod "7a649cbf-74c3-4519-a14f-92815ec8a297" (UID: "7a649cbf-74c3-4519-a14f-92815ec8a297"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:29:46 crc kubenswrapper[4782]: I0202 11:29:46.455824 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a649cbf-74c3-4519-a14f-92815ec8a297-config-data" (OuterVolumeSpecName: "config-data") pod "7a649cbf-74c3-4519-a14f-92815ec8a297" (UID: "7a649cbf-74c3-4519-a14f-92815ec8a297"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:29:46 crc kubenswrapper[4782]: I0202 11:29:46.540654 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 02 11:29:46 crc kubenswrapper[4782]: I0202 11:29:46.541628 4782 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a649cbf-74c3-4519-a14f-92815ec8a297-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 11:29:46 crc kubenswrapper[4782]: I0202 11:29:46.541691 4782 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a649cbf-74c3-4519-a14f-92815ec8a297-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 11:29:46 crc kubenswrapper[4782]: I0202 11:29:46.541793 4782 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Feb 02 11:29:46 crc kubenswrapper[4782]: I0202 11:29:46.541812 4782 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7a649cbf-74c3-4519-a14f-92815ec8a297-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 02 11:29:46 crc kubenswrapper[4782]: I0202 11:29:46.541824 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l8rxx\" (UniqueName: \"kubernetes.io/projected/7a649cbf-74c3-4519-a14f-92815ec8a297-kube-api-access-l8rxx\") on node \"crc\" DevicePath \"\"" Feb 02 11:29:46 crc kubenswrapper[4782]: I0202 11:29:46.541837 4782 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7a649cbf-74c3-4519-a14f-92815ec8a297-logs\") on node \"crc\" DevicePath \"\"" Feb 02 11:29:46 crc kubenswrapper[4782]: I0202 11:29:46.541848 4782 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/7a649cbf-74c3-4519-a14f-92815ec8a297-ceph\") on node \"crc\" DevicePath \"\"" Feb 02 11:29:46 crc kubenswrapper[4782]: I0202 11:29:46.541858 4782 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a649cbf-74c3-4519-a14f-92815ec8a297-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 11:29:46 crc kubenswrapper[4782]: I0202 11:29:46.541869 4782 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7a649cbf-74c3-4519-a14f-92815ec8a297-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 11:29:46 crc kubenswrapper[4782]: I0202 11:29:46.618432 4782 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Feb 02 11:29:46 crc kubenswrapper[4782]: I0202 11:29:46.643836 4782 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Feb 02 11:29:46 crc kubenswrapper[4782]: W0202 11:29:46.987801 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda41f7244_284a_4ffc_9243_1b6748d57f86.slice/crio-2467e6d6f5cb460604f0569a014d33d8ae6d53fd765a65386ea59ec098228383 WatchSource:0}: Error finding container 2467e6d6f5cb460604f0569a014d33d8ae6d53fd765a65386ea59ec098228383: Status 404 returned error can't find the container with id 
2467e6d6f5cb460604f0569a014d33d8ae6d53fd765a65386ea59ec098228383 Feb 02 11:29:47 crc kubenswrapper[4782]: I0202 11:29:47.018300 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5cc68dfb67-6l9rd"] Feb 02 11:29:47 crc kubenswrapper[4782]: I0202 11:29:47.165791 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5545895985-nbz88"] Feb 02 11:29:47 crc kubenswrapper[4782]: W0202 11:29:47.301104 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod00eb57a6_b941_443f_9b8a_644c0389b562.slice/crio-e9100eed16cfde1a50208bd22824d38ac7e93d47d356dc2ebbe784e45bca71cd WatchSource:0}: Error finding container e9100eed16cfde1a50208bd22824d38ac7e93d47d356dc2ebbe784e45bca71cd: Status 404 returned error can't find the container with id e9100eed16cfde1a50208bd22824d38ac7e93d47d356dc2ebbe784e45bca71cd Feb 02 11:29:47 crc kubenswrapper[4782]: I0202 11:29:47.330843 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5545895985-nbz88" event={"ID":"00eb57a6-b941-443f-9b8a-644c0389b562","Type":"ContainerStarted","Data":"e9100eed16cfde1a50208bd22824d38ac7e93d47d356dc2ebbe784e45bca71cd"} Feb 02 11:29:47 crc kubenswrapper[4782]: I0202 11:29:47.331996 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a41f7244-284a-4ffc-9243-1b6748d57f86","Type":"ContainerStarted","Data":"2467e6d6f5cb460604f0569a014d33d8ae6d53fd765a65386ea59ec098228383"} Feb 02 11:29:47 crc kubenswrapper[4782]: I0202 11:29:47.343106 4782 generic.go:334] "Generic (PLEG): container finished" podID="7260512c-a397-4b18-ab4d-a97e7dbf50d9" containerID="18235f2d52d1acb53abcc5d69239ea08135f49af22250cec7c915e6b6af27b05" exitCode=0 Feb 02 11:29:47 crc kubenswrapper[4782]: I0202 11:29:47.343818 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-61e9-account-create-update-vjlvv" event={"ID":"7260512c-a397-4b18-ab4d-a97e7dbf50d9","Type":"ContainerDied","Data":"18235f2d52d1acb53abcc5d69239ea08135f49af22250cec7c915e6b6af27b05"} Feb 02 11:29:47 crc kubenswrapper[4782]: I0202 11:29:47.343855 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-61e9-account-create-update-vjlvv" event={"ID":"7260512c-a397-4b18-ab4d-a97e7dbf50d9","Type":"ContainerStarted","Data":"c7e76ee0f8108c2e88436d6fb1e53203db1db23de055ae76bb7a9b8f1dc59596"} Feb 02 11:29:47 crc kubenswrapper[4782]: I0202 11:29:47.352936 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5cc68dfb67-6l9rd" event={"ID":"db3caff6-55ef-4b9f-9d45-15fc834e5974","Type":"ContainerStarted","Data":"78a7fb858e48a2d7c1668bcef174fb8172e784df949e22963608e184d25f8fba"} Feb 02 11:29:47 crc kubenswrapper[4782]: I0202 11:29:47.363998 4782 generic.go:334] "Generic (PLEG): container finished" podID="d9a2fa32-7949-4dbe-8e51-49627e08f051" containerID="22ee95619a6ae6669166c5388f7644833a8f50918632409a8660abf992c5c5da" exitCode=0 Feb 02 11:29:47 crc kubenswrapper[4782]: I0202 11:29:47.364092 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 02 11:29:47 crc kubenswrapper[4782]: I0202 11:29:47.364624 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-88lt6" event={"ID":"d9a2fa32-7949-4dbe-8e51-49627e08f051","Type":"ContainerDied","Data":"22ee95619a6ae6669166c5388f7644833a8f50918632409a8660abf992c5c5da"} Feb 02 11:29:47 crc kubenswrapper[4782]: I0202 11:29:47.364680 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-88lt6" event={"ID":"d9a2fa32-7949-4dbe-8e51-49627e08f051","Type":"ContainerStarted","Data":"c7b6f3804e585f1815aa2f7a3dba4f157933b0f2077449b05093c8d992d9ba41"} Feb 02 11:29:47 crc kubenswrapper[4782]: I0202 11:29:47.639505 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 02 11:29:47 crc kubenswrapper[4782]: I0202 11:29:47.675741 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 02 11:29:47 crc kubenswrapper[4782]: I0202 11:29:47.688569 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 02 11:29:47 crc kubenswrapper[4782]: I0202 11:29:47.690204 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 02 11:29:47 crc kubenswrapper[4782]: I0202 11:29:47.698056 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 02 11:29:47 crc kubenswrapper[4782]: I0202 11:29:47.700816 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 02 11:29:47 crc kubenswrapper[4782]: I0202 11:29:47.700996 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 02 11:29:47 crc kubenswrapper[4782]: I0202 11:29:47.798786 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8dc11dda-830a-4b93-b670-e3fabc7b9c28-logs\") pod \"glance-default-internal-api-0\" (UID: \"8dc11dda-830a-4b93-b670-e3fabc7b9c28\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:29:47 crc kubenswrapper[4782]: I0202 11:29:47.798844 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8dc11dda-830a-4b93-b670-e3fabc7b9c28-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"8dc11dda-830a-4b93-b670-e3fabc7b9c28\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:29:47 crc kubenswrapper[4782]: I0202 11:29:47.798899 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8dc11dda-830a-4b93-b670-e3fabc7b9c28-scripts\") pod \"glance-default-internal-api-0\" (UID: \"8dc11dda-830a-4b93-b670-e3fabc7b9c28\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:29:47 crc kubenswrapper[4782]: I0202 11:29:47.798920 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8dc11dda-830a-4b93-b670-e3fabc7b9c28-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"8dc11dda-830a-4b93-b670-e3fabc7b9c28\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:29:47 crc kubenswrapper[4782]: I0202 
11:29:47.798958 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"8dc11dda-830a-4b93-b670-e3fabc7b9c28\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:29:47 crc kubenswrapper[4782]: I0202 11:29:47.798996 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/8dc11dda-830a-4b93-b670-e3fabc7b9c28-ceph\") pod \"glance-default-internal-api-0\" (UID: \"8dc11dda-830a-4b93-b670-e3fabc7b9c28\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:29:47 crc kubenswrapper[4782]: I0202 11:29:47.799017 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8dc11dda-830a-4b93-b670-e3fabc7b9c28-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"8dc11dda-830a-4b93-b670-e3fabc7b9c28\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:29:47 crc kubenswrapper[4782]: I0202 11:29:47.799047 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8dc11dda-830a-4b93-b670-e3fabc7b9c28-config-data\") pod \"glance-default-internal-api-0\" (UID: \"8dc11dda-830a-4b93-b670-e3fabc7b9c28\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:29:47 crc kubenswrapper[4782]: I0202 11:29:47.799072 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thdm5\" (UniqueName: \"kubernetes.io/projected/8dc11dda-830a-4b93-b670-e3fabc7b9c28-kube-api-access-thdm5\") pod \"glance-default-internal-api-0\" (UID: \"8dc11dda-830a-4b93-b670-e3fabc7b9c28\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:29:47 crc kubenswrapper[4782]: I0202 11:29:47.900921 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/8dc11dda-830a-4b93-b670-e3fabc7b9c28-ceph\") pod \"glance-default-internal-api-0\" (UID: \"8dc11dda-830a-4b93-b670-e3fabc7b9c28\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:29:47 crc kubenswrapper[4782]: I0202 11:29:47.901287 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8dc11dda-830a-4b93-b670-e3fabc7b9c28-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"8dc11dda-830a-4b93-b670-e3fabc7b9c28\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:29:47 crc kubenswrapper[4782]: I0202 11:29:47.901338 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8dc11dda-830a-4b93-b670-e3fabc7b9c28-config-data\") pod \"glance-default-internal-api-0\" (UID: \"8dc11dda-830a-4b93-b670-e3fabc7b9c28\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:29:47 crc kubenswrapper[4782]: I0202 11:29:47.901370 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-thdm5\" (UniqueName: \"kubernetes.io/projected/8dc11dda-830a-4b93-b670-e3fabc7b9c28-kube-api-access-thdm5\") pod \"glance-default-internal-api-0\" (UID: \"8dc11dda-830a-4b93-b670-e3fabc7b9c28\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:29:47 crc kubenswrapper[4782]: I0202 
11:29:47.901434 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8dc11dda-830a-4b93-b670-e3fabc7b9c28-logs\") pod \"glance-default-internal-api-0\" (UID: \"8dc11dda-830a-4b93-b670-e3fabc7b9c28\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:29:47 crc kubenswrapper[4782]: I0202 11:29:47.901465 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8dc11dda-830a-4b93-b670-e3fabc7b9c28-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"8dc11dda-830a-4b93-b670-e3fabc7b9c28\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:29:47 crc kubenswrapper[4782]: I0202 11:29:47.901538 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8dc11dda-830a-4b93-b670-e3fabc7b9c28-scripts\") pod \"glance-default-internal-api-0\" (UID: \"8dc11dda-830a-4b93-b670-e3fabc7b9c28\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:29:47 crc kubenswrapper[4782]: I0202 11:29:47.901563 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8dc11dda-830a-4b93-b670-e3fabc7b9c28-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"8dc11dda-830a-4b93-b670-e3fabc7b9c28\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:29:47 crc kubenswrapper[4782]: I0202 11:29:47.901785 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"8dc11dda-830a-4b93-b670-e3fabc7b9c28\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:29:47 crc kubenswrapper[4782]: I0202 11:29:47.903940 4782 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"8dc11dda-830a-4b93-b670-e3fabc7b9c28\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/glance-default-internal-api-0" Feb 02 11:29:47 crc kubenswrapper[4782]: I0202 11:29:47.904279 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8dc11dda-830a-4b93-b670-e3fabc7b9c28-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"8dc11dda-830a-4b93-b670-e3fabc7b9c28\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:29:47 crc kubenswrapper[4782]: I0202 11:29:47.904791 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8dc11dda-830a-4b93-b670-e3fabc7b9c28-logs\") pod \"glance-default-internal-api-0\" (UID: \"8dc11dda-830a-4b93-b670-e3fabc7b9c28\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:29:47 crc kubenswrapper[4782]: I0202 11:29:47.915247 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8dc11dda-830a-4b93-b670-e3fabc7b9c28-scripts\") pod \"glance-default-internal-api-0\" (UID: \"8dc11dda-830a-4b93-b670-e3fabc7b9c28\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:29:47 crc kubenswrapper[4782]: I0202 11:29:47.915907 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/8dc11dda-830a-4b93-b670-e3fabc7b9c28-config-data\") pod \"glance-default-internal-api-0\" (UID: \"8dc11dda-830a-4b93-b670-e3fabc7b9c28\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:29:47 crc kubenswrapper[4782]: I0202 11:29:47.917742 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8dc11dda-830a-4b93-b670-e3fabc7b9c28-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"8dc11dda-830a-4b93-b670-e3fabc7b9c28\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:29:47 crc kubenswrapper[4782]: I0202 11:29:47.918104 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/8dc11dda-830a-4b93-b670-e3fabc7b9c28-ceph\") pod \"glance-default-internal-api-0\" (UID: \"8dc11dda-830a-4b93-b670-e3fabc7b9c28\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:29:47 crc kubenswrapper[4782]: I0202 11:29:47.936437 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8dc11dda-830a-4b93-b670-e3fabc7b9c28-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"8dc11dda-830a-4b93-b670-e3fabc7b9c28\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:29:47 crc kubenswrapper[4782]: I0202 11:29:47.947250 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-thdm5\" (UniqueName: \"kubernetes.io/projected/8dc11dda-830a-4b93-b670-e3fabc7b9c28-kube-api-access-thdm5\") pod \"glance-default-internal-api-0\" (UID: \"8dc11dda-830a-4b93-b670-e3fabc7b9c28\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:29:47 crc kubenswrapper[4782]: I0202 11:29:47.980604 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"8dc11dda-830a-4b93-b670-e3fabc7b9c28\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:29:48 crc kubenswrapper[4782]: I0202 11:29:48.106120 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 02 11:29:48 crc kubenswrapper[4782]: I0202 11:29:48.260457 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5545895985-nbz88"] Feb 02 11:29:48 crc kubenswrapper[4782]: I0202 11:29:48.285212 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-78d997b864-7sqws"] Feb 02 11:29:48 crc kubenswrapper[4782]: I0202 11:29:48.287106 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-78d997b864-7sqws" Feb 02 11:29:48 crc kubenswrapper[4782]: I0202 11:29:48.291938 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Feb 02 11:29:48 crc kubenswrapper[4782]: I0202 11:29:48.323236 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/62cd5c24-315a-45c1-bca8-08696f1080cd-horizon-tls-certs\") pod \"horizon-78d997b864-7sqws\" (UID: \"62cd5c24-315a-45c1-bca8-08696f1080cd\") " pod="openstack/horizon-78d997b864-7sqws" Feb 02 11:29:48 crc kubenswrapper[4782]: I0202 11:29:48.323289 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xwsf\" (UniqueName: \"kubernetes.io/projected/62cd5c24-315a-45c1-bca8-08696f1080cd-kube-api-access-6xwsf\") pod \"horizon-78d997b864-7sqws\" (UID: \"62cd5c24-315a-45c1-bca8-08696f1080cd\") " pod="openstack/horizon-78d997b864-7sqws" Feb 02 11:29:48 crc kubenswrapper[4782]: I0202 11:29:48.323399 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/62cd5c24-315a-45c1-bca8-08696f1080cd-horizon-secret-key\") pod \"horizon-78d997b864-7sqws\" (UID: \"62cd5c24-315a-45c1-bca8-08696f1080cd\") " pod="openstack/horizon-78d997b864-7sqws" Feb 02 11:29:48 crc kubenswrapper[4782]: I0202 11:29:48.323455 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/62cd5c24-315a-45c1-bca8-08696f1080cd-config-data\") pod \"horizon-78d997b864-7sqws\" (UID: \"62cd5c24-315a-45c1-bca8-08696f1080cd\") " pod="openstack/horizon-78d997b864-7sqws" Feb 02 11:29:48 crc kubenswrapper[4782]: I0202 11:29:48.323495 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/62cd5c24-315a-45c1-bca8-08696f1080cd-logs\") pod \"horizon-78d997b864-7sqws\" (UID: \"62cd5c24-315a-45c1-bca8-08696f1080cd\") " pod="openstack/horizon-78d997b864-7sqws" Feb 02 11:29:48 crc kubenswrapper[4782]: I0202 11:29:48.323533 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62cd5c24-315a-45c1-bca8-08696f1080cd-combined-ca-bundle\") pod \"horizon-78d997b864-7sqws\" (UID: \"62cd5c24-315a-45c1-bca8-08696f1080cd\") " pod="openstack/horizon-78d997b864-7sqws" Feb 02 11:29:48 crc kubenswrapper[4782]: I0202 11:29:48.323565 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/62cd5c24-315a-45c1-bca8-08696f1080cd-scripts\") pod \"horizon-78d997b864-7sqws\" (UID: \"62cd5c24-315a-45c1-bca8-08696f1080cd\") " pod="openstack/horizon-78d997b864-7sqws" Feb 02 11:29:48 crc kubenswrapper[4782]: I0202 11:29:48.375572 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-78d997b864-7sqws"] Feb 02 11:29:48 crc kubenswrapper[4782]: I0202 11:29:48.397278 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a41f7244-284a-4ffc-9243-1b6748d57f86","Type":"ContainerStarted","Data":"c1ca84f18a2c7c73979765c611fce926c78a02310d29f337074d14d783355188"} Feb 02 11:29:48 crc kubenswrapper[4782]: I0202 11:29:48.420627 
4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a","Type":"ContainerStarted","Data":"291036b9d7a925a04fafa224e3c653b902ccf0c28788dd6267314c7c09b26095"} Feb 02 11:29:48 crc kubenswrapper[4782]: I0202 11:29:48.420697 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a","Type":"ContainerStarted","Data":"2a7a53b367855ba1b0419635d2604ea82790291a09d021a941102e78627a9e21"} Feb 02 11:29:48 crc kubenswrapper[4782]: I0202 11:29:48.425066 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/62cd5c24-315a-45c1-bca8-08696f1080cd-config-data\") pod \"horizon-78d997b864-7sqws\" (UID: \"62cd5c24-315a-45c1-bca8-08696f1080cd\") " pod="openstack/horizon-78d997b864-7sqws" Feb 02 11:29:48 crc kubenswrapper[4782]: I0202 11:29:48.425136 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/62cd5c24-315a-45c1-bca8-08696f1080cd-logs\") pod \"horizon-78d997b864-7sqws\" (UID: \"62cd5c24-315a-45c1-bca8-08696f1080cd\") " pod="openstack/horizon-78d997b864-7sqws" Feb 02 11:29:48 crc kubenswrapper[4782]: I0202 11:29:48.425181 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62cd5c24-315a-45c1-bca8-08696f1080cd-combined-ca-bundle\") pod \"horizon-78d997b864-7sqws\" (UID: \"62cd5c24-315a-45c1-bca8-08696f1080cd\") " pod="openstack/horizon-78d997b864-7sqws" Feb 02 11:29:48 crc kubenswrapper[4782]: I0202 11:29:48.425210 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/62cd5c24-315a-45c1-bca8-08696f1080cd-scripts\") pod \"horizon-78d997b864-7sqws\" (UID: \"62cd5c24-315a-45c1-bca8-08696f1080cd\") " pod="openstack/horizon-78d997b864-7sqws" Feb 02 11:29:48 crc kubenswrapper[4782]: I0202 11:29:48.425252 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/62cd5c24-315a-45c1-bca8-08696f1080cd-horizon-tls-certs\") pod \"horizon-78d997b864-7sqws\" (UID: \"62cd5c24-315a-45c1-bca8-08696f1080cd\") " pod="openstack/horizon-78d997b864-7sqws" Feb 02 11:29:48 crc kubenswrapper[4782]: I0202 11:29:48.425289 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6xwsf\" (UniqueName: \"kubernetes.io/projected/62cd5c24-315a-45c1-bca8-08696f1080cd-kube-api-access-6xwsf\") pod \"horizon-78d997b864-7sqws\" (UID: \"62cd5c24-315a-45c1-bca8-08696f1080cd\") " pod="openstack/horizon-78d997b864-7sqws" Feb 02 11:29:48 crc kubenswrapper[4782]: I0202 11:29:48.425405 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/62cd5c24-315a-45c1-bca8-08696f1080cd-horizon-secret-key\") pod \"horizon-78d997b864-7sqws\" (UID: \"62cd5c24-315a-45c1-bca8-08696f1080cd\") " pod="openstack/horizon-78d997b864-7sqws" Feb 02 11:29:48 crc kubenswrapper[4782]: I0202 11:29:48.429703 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/62cd5c24-315a-45c1-bca8-08696f1080cd-scripts\") pod \"horizon-78d997b864-7sqws\" (UID: \"62cd5c24-315a-45c1-bca8-08696f1080cd\") " pod="openstack/horizon-78d997b864-7sqws" 
Feb 02 11:29:48 crc kubenswrapper[4782]: I0202 11:29:48.431905 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/62cd5c24-315a-45c1-bca8-08696f1080cd-config-data\") pod \"horizon-78d997b864-7sqws\" (UID: \"62cd5c24-315a-45c1-bca8-08696f1080cd\") " pod="openstack/horizon-78d997b864-7sqws"
Feb 02 11:29:48 crc kubenswrapper[4782]: I0202 11:29:48.432222 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/62cd5c24-315a-45c1-bca8-08696f1080cd-logs\") pod \"horizon-78d997b864-7sqws\" (UID: \"62cd5c24-315a-45c1-bca8-08696f1080cd\") " pod="openstack/horizon-78d997b864-7sqws"
Feb 02 11:29:48 crc kubenswrapper[4782]: I0202 11:29:48.452781 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/62cd5c24-315a-45c1-bca8-08696f1080cd-horizon-tls-certs\") pod \"horizon-78d997b864-7sqws\" (UID: \"62cd5c24-315a-45c1-bca8-08696f1080cd\") " pod="openstack/horizon-78d997b864-7sqws"
Feb 02 11:29:48 crc kubenswrapper[4782]: I0202 11:29:48.453470 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62cd5c24-315a-45c1-bca8-08696f1080cd-combined-ca-bundle\") pod \"horizon-78d997b864-7sqws\" (UID: \"62cd5c24-315a-45c1-bca8-08696f1080cd\") " pod="openstack/horizon-78d997b864-7sqws"
Feb 02 11:29:48 crc kubenswrapper[4782]: I0202 11:29:48.479383 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6xwsf\" (UniqueName: \"kubernetes.io/projected/62cd5c24-315a-45c1-bca8-08696f1080cd-kube-api-access-6xwsf\") pod \"horizon-78d997b864-7sqws\" (UID: \"62cd5c24-315a-45c1-bca8-08696f1080cd\") " pod="openstack/horizon-78d997b864-7sqws"
Feb 02 11:29:48 crc kubenswrapper[4782]: I0202 11:29:48.491713 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5cc68dfb67-6l9rd"]
Feb 02 11:29:48 crc kubenswrapper[4782]: I0202 11:29:48.494926 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-backup-0" podStartSLOduration=3.224472362 podStartE2EDuration="4.494904245s" podCreationTimestamp="2026-02-02 11:29:44 +0000 UTC" firstStartedPulling="2026-02-02 11:29:46.109827015 +0000 UTC m=+3065.994019731" lastFinishedPulling="2026-02-02 11:29:47.380258898 +0000 UTC m=+3067.264451614" observedRunningTime="2026-02-02 11:29:48.460590679 +0000 UTC m=+3068.344783415" watchObservedRunningTime="2026-02-02 11:29:48.494904245 +0000 UTC m=+3068.379096961"
Feb 02 11:29:48 crc kubenswrapper[4782]: I0202 11:29:48.507506 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/62cd5c24-315a-45c1-bca8-08696f1080cd-horizon-secret-key\") pod \"horizon-78d997b864-7sqws\" (UID: \"62cd5c24-315a-45c1-bca8-08696f1080cd\") " pod="openstack/horizon-78d997b864-7sqws"
Feb 02 11:29:48 crc kubenswrapper[4782]: I0202 11:29:48.520766 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"5d7df751-5d4d-4ce4-83c9-70abd18fc7c7","Type":"ContainerStarted","Data":"887806dc03ca175efb88ba8d7004dc8bd9d13d65dcdac85ca4654bde6853e624"}
Feb 02 11:29:48 crc kubenswrapper[4782]: I0202 11:29:48.521287 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"5d7df751-5d4d-4ce4-83c9-70abd18fc7c7","Type":"ContainerStarted","Data":"5322598c674dda25fa507428fc9cc7c4897c935564e4f76434efb13a262333db"}
Feb 02 11:29:48 crc kubenswrapper[4782]: I0202 11:29:48.579735 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-5665456548-9x6qh"]
Feb 02 11:29:48 crc kubenswrapper[4782]: I0202 11:29:48.581258 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5665456548-9x6qh"
Feb 02 11:29:48 crc kubenswrapper[4782]: I0202 11:29:48.614700 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5665456548-9x6qh"]
Feb 02 11:29:48 crc kubenswrapper[4782]: I0202 11:29:48.626175 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-78d997b864-7sqws"
Feb 02 11:29:48 crc kubenswrapper[4782]: I0202 11:29:48.629251 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-volume-volume1-0" podStartSLOduration=3.397869664 podStartE2EDuration="4.629231524s" podCreationTimestamp="2026-02-02 11:29:44 +0000 UTC" firstStartedPulling="2026-02-02 11:29:45.790955273 +0000 UTC m=+3065.675147989" lastFinishedPulling="2026-02-02 11:29:47.022317133 +0000 UTC m=+3066.906509849" observedRunningTime="2026-02-02 11:29:48.575404417 +0000 UTC m=+3068.459597163" watchObservedRunningTime="2026-02-02 11:29:48.629231524 +0000 UTC m=+3068.513424240"
Feb 02 11:29:48 crc kubenswrapper[4782]: I0202 11:29:48.644970 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7dhck\" (UniqueName: \"kubernetes.io/projected/306e30f3-8fe7-427e-b8ff-309a561dda88-kube-api-access-7dhck\") pod \"horizon-5665456548-9x6qh\" (UID: \"306e30f3-8fe7-427e-b8ff-309a561dda88\") " pod="openstack/horizon-5665456548-9x6qh"
Feb 02 11:29:48 crc kubenswrapper[4782]: I0202 11:29:48.645032 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/306e30f3-8fe7-427e-b8ff-309a561dda88-horizon-secret-key\") pod \"horizon-5665456548-9x6qh\" (UID: \"306e30f3-8fe7-427e-b8ff-309a561dda88\") " pod="openstack/horizon-5665456548-9x6qh"
Feb 02 11:29:48 crc kubenswrapper[4782]: I0202 11:29:48.645095 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/306e30f3-8fe7-427e-b8ff-309a561dda88-scripts\") pod \"horizon-5665456548-9x6qh\" (UID: \"306e30f3-8fe7-427e-b8ff-309a561dda88\") " pod="openstack/horizon-5665456548-9x6qh"
Feb 02 11:29:48 crc kubenswrapper[4782]: I0202 11:29:48.645155 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/306e30f3-8fe7-427e-b8ff-309a561dda88-horizon-tls-certs\") pod \"horizon-5665456548-9x6qh\" (UID: \"306e30f3-8fe7-427e-b8ff-309a561dda88\") " pod="openstack/horizon-5665456548-9x6qh"
Feb 02 11:29:48 crc kubenswrapper[4782]: I0202 11:29:48.645263 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/306e30f3-8fe7-427e-b8ff-309a561dda88-combined-ca-bundle\") pod \"horizon-5665456548-9x6qh\" (UID: \"306e30f3-8fe7-427e-b8ff-309a561dda88\") " pod="openstack/horizon-5665456548-9x6qh"
Feb 02 11:29:48 crc kubenswrapper[4782]: I0202 11:29:48.645444 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/306e30f3-8fe7-427e-b8ff-309a561dda88-config-data\") pod \"horizon-5665456548-9x6qh\" (UID: \"306e30f3-8fe7-427e-b8ff-309a561dda88\") " pod="openstack/horizon-5665456548-9x6qh"
Feb 02 11:29:48 crc kubenswrapper[4782]: I0202 11:29:48.645611 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/306e30f3-8fe7-427e-b8ff-309a561dda88-logs\") pod \"horizon-5665456548-9x6qh\" (UID: \"306e30f3-8fe7-427e-b8ff-309a561dda88\") " pod="openstack/horizon-5665456548-9x6qh"
Feb 02 11:29:48 crc kubenswrapper[4782]: I0202 11:29:48.747685 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/306e30f3-8fe7-427e-b8ff-309a561dda88-logs\") pod \"horizon-5665456548-9x6qh\" (UID: \"306e30f3-8fe7-427e-b8ff-309a561dda88\") " pod="openstack/horizon-5665456548-9x6qh"
Feb 02 11:29:48 crc kubenswrapper[4782]: I0202 11:29:48.747794 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7dhck\" (UniqueName: \"kubernetes.io/projected/306e30f3-8fe7-427e-b8ff-309a561dda88-kube-api-access-7dhck\") pod \"horizon-5665456548-9x6qh\" (UID: \"306e30f3-8fe7-427e-b8ff-309a561dda88\") " pod="openstack/horizon-5665456548-9x6qh"
Feb 02 11:29:48 crc kubenswrapper[4782]: I0202 11:29:48.747825 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/306e30f3-8fe7-427e-b8ff-309a561dda88-horizon-secret-key\") pod \"horizon-5665456548-9x6qh\" (UID: \"306e30f3-8fe7-427e-b8ff-309a561dda88\") " pod="openstack/horizon-5665456548-9x6qh"
Feb 02 11:29:48 crc kubenswrapper[4782]: I0202 11:29:48.747890 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/306e30f3-8fe7-427e-b8ff-309a561dda88-scripts\") pod \"horizon-5665456548-9x6qh\" (UID: \"306e30f3-8fe7-427e-b8ff-309a561dda88\") " pod="openstack/horizon-5665456548-9x6qh"
Feb 02 11:29:48 crc kubenswrapper[4782]: I0202 11:29:48.747918 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/306e30f3-8fe7-427e-b8ff-309a561dda88-horizon-tls-certs\") pod \"horizon-5665456548-9x6qh\" (UID: \"306e30f3-8fe7-427e-b8ff-309a561dda88\") " pod="openstack/horizon-5665456548-9x6qh"
Feb 02 11:29:48 crc kubenswrapper[4782]: I0202 11:29:48.747963 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/306e30f3-8fe7-427e-b8ff-309a561dda88-combined-ca-bundle\") pod \"horizon-5665456548-9x6qh\" (UID: \"306e30f3-8fe7-427e-b8ff-309a561dda88\") " pod="openstack/horizon-5665456548-9x6qh"
Feb 02 11:29:48 crc kubenswrapper[4782]: I0202 11:29:48.747999 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/306e30f3-8fe7-427e-b8ff-309a561dda88-config-data\") pod \"horizon-5665456548-9x6qh\" (UID: \"306e30f3-8fe7-427e-b8ff-309a561dda88\") " pod="openstack/horizon-5665456548-9x6qh"
Feb 02 11:29:48 crc kubenswrapper[4782]: I0202 11:29:48.749149 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/306e30f3-8fe7-427e-b8ff-309a561dda88-config-data\") pod \"horizon-5665456548-9x6qh\" (UID: \"306e30f3-8fe7-427e-b8ff-309a561dda88\") " pod="openstack/horizon-5665456548-9x6qh"
Feb 02 11:29:48 crc kubenswrapper[4782]: I0202 11:29:48.749398 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/306e30f3-8fe7-427e-b8ff-309a561dda88-logs\") pod \"horizon-5665456548-9x6qh\" (UID: \"306e30f3-8fe7-427e-b8ff-309a561dda88\") " pod="openstack/horizon-5665456548-9x6qh"
Feb 02 11:29:48 crc kubenswrapper[4782]: I0202 11:29:48.751559 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 02 11:29:48 crc kubenswrapper[4782]: I0202 11:29:48.762416 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/306e30f3-8fe7-427e-b8ff-309a561dda88-scripts\") pod \"horizon-5665456548-9x6qh\" (UID: \"306e30f3-8fe7-427e-b8ff-309a561dda88\") " pod="openstack/horizon-5665456548-9x6qh"
Feb 02 11:29:48 crc kubenswrapper[4782]: I0202 11:29:48.771630 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/306e30f3-8fe7-427e-b8ff-309a561dda88-horizon-tls-certs\") pod \"horizon-5665456548-9x6qh\" (UID: \"306e30f3-8fe7-427e-b8ff-309a561dda88\") " pod="openstack/horizon-5665456548-9x6qh"
Feb 02 11:29:48 crc kubenswrapper[4782]: I0202 11:29:48.773351 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7dhck\" (UniqueName: \"kubernetes.io/projected/306e30f3-8fe7-427e-b8ff-309a561dda88-kube-api-access-7dhck\") pod \"horizon-5665456548-9x6qh\" (UID: \"306e30f3-8fe7-427e-b8ff-309a561dda88\") " pod="openstack/horizon-5665456548-9x6qh"
Feb 02 11:29:48 crc kubenswrapper[4782]: I0202 11:29:48.784247 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/306e30f3-8fe7-427e-b8ff-309a561dda88-horizon-secret-key\") pod \"horizon-5665456548-9x6qh\" (UID: \"306e30f3-8fe7-427e-b8ff-309a561dda88\") " pod="openstack/horizon-5665456548-9x6qh"
Feb 02 11:29:48 crc kubenswrapper[4782]: I0202 11:29:48.785769 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/306e30f3-8fe7-427e-b8ff-309a561dda88-combined-ca-bundle\") pod \"horizon-5665456548-9x6qh\" (UID: \"306e30f3-8fe7-427e-b8ff-309a561dda88\") " pod="openstack/horizon-5665456548-9x6qh"
Feb 02 11:29:48 crc kubenswrapper[4782]: I0202 11:29:48.858063 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a649cbf-74c3-4519-a14f-92815ec8a297" path="/var/lib/kubelet/pods/7a649cbf-74c3-4519-a14f-92815ec8a297/volumes"
Feb 02 11:29:48 crc kubenswrapper[4782]: I0202 11:29:48.944900 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5665456548-9x6qh"
Feb 02 11:29:49 crc kubenswrapper[4782]: I0202 11:29:49.342003 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 02 11:29:49 crc kubenswrapper[4782]: I0202 11:29:49.610781 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"8dc11dda-830a-4b93-b670-e3fabc7b9c28","Type":"ContainerStarted","Data":"75f9c9716f7604da4d575e0f1b2688df7bd2eada642f2f22b1f7f24cb9d5e5c4"}
Feb 02 11:29:49 crc kubenswrapper[4782]: I0202 11:29:49.803347 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-create-88lt6"
Feb 02 11:29:49 crc kubenswrapper[4782]: I0202 11:29:49.810274 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-61e9-account-create-update-vjlvv"
Feb 02 11:29:49 crc kubenswrapper[4782]: I0202 11:29:49.816865 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-backup-0"
Feb 02 11:29:49 crc kubenswrapper[4782]: I0202 11:29:49.825565 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-volume-volume1-0"
Feb 02 11:29:49 crc kubenswrapper[4782]: I0202 11:29:49.935293 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7260512c-a397-4b18-ab4d-a97e7dbf50d9-operator-scripts\") pod \"7260512c-a397-4b18-ab4d-a97e7dbf50d9\" (UID: \"7260512c-a397-4b18-ab4d-a97e7dbf50d9\") "
Feb 02 11:29:49 crc kubenswrapper[4782]: I0202 11:29:49.935883 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d9a2fa32-7949-4dbe-8e51-49627e08f051-operator-scripts\") pod \"d9a2fa32-7949-4dbe-8e51-49627e08f051\" (UID: \"d9a2fa32-7949-4dbe-8e51-49627e08f051\") "
Feb 02 11:29:49 crc kubenswrapper[4782]: I0202 11:29:49.936341 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hjldw\" (UniqueName: \"kubernetes.io/projected/d9a2fa32-7949-4dbe-8e51-49627e08f051-kube-api-access-hjldw\") pod \"d9a2fa32-7949-4dbe-8e51-49627e08f051\" (UID: \"d9a2fa32-7949-4dbe-8e51-49627e08f051\") "
Feb 02 11:29:49 crc kubenswrapper[4782]: I0202 11:29:49.936726 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rndvm\" (UniqueName: \"kubernetes.io/projected/7260512c-a397-4b18-ab4d-a97e7dbf50d9-kube-api-access-rndvm\") pod \"7260512c-a397-4b18-ab4d-a97e7dbf50d9\" (UID: \"7260512c-a397-4b18-ab4d-a97e7dbf50d9\") "
Feb 02 11:29:49 crc kubenswrapper[4782]: I0202 11:29:49.944338 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7260512c-a397-4b18-ab4d-a97e7dbf50d9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7260512c-a397-4b18-ab4d-a97e7dbf50d9" (UID: "7260512c-a397-4b18-ab4d-a97e7dbf50d9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 11:29:49 crc kubenswrapper[4782]: I0202 11:29:49.945698 4782 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7260512c-a397-4b18-ab4d-a97e7dbf50d9-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 02 11:29:49 crc kubenswrapper[4782]: I0202 11:29:49.953259 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d9a2fa32-7949-4dbe-8e51-49627e08f051-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d9a2fa32-7949-4dbe-8e51-49627e08f051" (UID: "d9a2fa32-7949-4dbe-8e51-49627e08f051"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 11:29:49 crc kubenswrapper[4782]: I0202 11:29:49.957066 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9a2fa32-7949-4dbe-8e51-49627e08f051-kube-api-access-hjldw" (OuterVolumeSpecName: "kube-api-access-hjldw") pod "d9a2fa32-7949-4dbe-8e51-49627e08f051" (UID: "d9a2fa32-7949-4dbe-8e51-49627e08f051"). InnerVolumeSpecName "kube-api-access-hjldw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 11:29:49 crc kubenswrapper[4782]: I0202 11:29:49.964826 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7260512c-a397-4b18-ab4d-a97e7dbf50d9-kube-api-access-rndvm" (OuterVolumeSpecName: "kube-api-access-rndvm") pod "7260512c-a397-4b18-ab4d-a97e7dbf50d9" (UID: "7260512c-a397-4b18-ab4d-a97e7dbf50d9"). InnerVolumeSpecName "kube-api-access-rndvm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 11:29:50 crc kubenswrapper[4782]: I0202 11:29:50.048845 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hjldw\" (UniqueName: \"kubernetes.io/projected/d9a2fa32-7949-4dbe-8e51-49627e08f051-kube-api-access-hjldw\") on node \"crc\" DevicePath \"\""
Feb 02 11:29:50 crc kubenswrapper[4782]: I0202 11:29:50.048870 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rndvm\" (UniqueName: \"kubernetes.io/projected/7260512c-a397-4b18-ab4d-a97e7dbf50d9-kube-api-access-rndvm\") on node \"crc\" DevicePath \"\""
Feb 02 11:29:50 crc kubenswrapper[4782]: I0202 11:29:50.048881 4782 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d9a2fa32-7949-4dbe-8e51-49627e08f051-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 02 11:29:50 crc kubenswrapper[4782]: I0202 11:29:50.059977 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-78d997b864-7sqws"]
Feb 02 11:29:50 crc kubenswrapper[4782]: I0202 11:29:50.085771 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5665456548-9x6qh"]
Feb 02 11:29:50 crc kubenswrapper[4782]: I0202 11:29:50.635833 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-78d997b864-7sqws" event={"ID":"62cd5c24-315a-45c1-bca8-08696f1080cd","Type":"ContainerStarted","Data":"d24d9db9a798247d6fbcd136dc3f9d15a710d6aee5946c313b4ac9b4fb5bc96d"}
Feb 02 11:29:50 crc kubenswrapper[4782]: I0202 11:29:50.644834 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-88lt6" event={"ID":"d9a2fa32-7949-4dbe-8e51-49627e08f051","Type":"ContainerDied","Data":"c7b6f3804e585f1815aa2f7a3dba4f157933b0f2077449b05093c8d992d9ba41"}
Feb 02 11:29:50 crc kubenswrapper[4782]: I0202 11:29:50.644933 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c7b6f3804e585f1815aa2f7a3dba4f157933b0f2077449b05093c8d992d9ba41"
Feb 02 11:29:50 crc kubenswrapper[4782]: I0202 11:29:50.645009 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-create-88lt6"
Feb 02 11:29:50 crc kubenswrapper[4782]: I0202 11:29:50.651227 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5665456548-9x6qh" event={"ID":"306e30f3-8fe7-427e-b8ff-309a561dda88","Type":"ContainerStarted","Data":"f4ea7ab54d316a4c5aa410fefb5964e41975add1b7b4b663fd3e5e10d2e6f010"}
Feb 02 11:29:50 crc kubenswrapper[4782]: I0202 11:29:50.665864 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a41f7244-284a-4ffc-9243-1b6748d57f86","Type":"ContainerStarted","Data":"edbeb8dc2f5e5a9d0a9a56b8eea964a589c0c0597fa125924f087089167e4460"}
Feb 02 11:29:50 crc kubenswrapper[4782]: I0202 11:29:50.666801 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="a41f7244-284a-4ffc-9243-1b6748d57f86" containerName="glance-log" containerID="cri-o://c1ca84f18a2c7c73979765c611fce926c78a02310d29f337074d14d783355188" gracePeriod=30
Feb 02 11:29:50 crc kubenswrapper[4782]: I0202 11:29:50.666821 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="a41f7244-284a-4ffc-9243-1b6748d57f86" containerName="glance-httpd" containerID="cri-o://edbeb8dc2f5e5a9d0a9a56b8eea964a589c0c0597fa125924f087089167e4460" gracePeriod=30
Feb 02 11:29:50 crc kubenswrapper[4782]: I0202 11:29:50.676907 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-61e9-account-create-update-vjlvv" event={"ID":"7260512c-a397-4b18-ab4d-a97e7dbf50d9","Type":"ContainerDied","Data":"c7e76ee0f8108c2e88436d6fb1e53203db1db23de055ae76bb7a9b8f1dc59596"}
Feb 02 11:29:50 crc kubenswrapper[4782]: I0202 11:29:50.677023 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c7e76ee0f8108c2e88436d6fb1e53203db1db23de055ae76bb7a9b8f1dc59596"
Feb 02 11:29:50 crc kubenswrapper[4782]: I0202 11:29:50.677251 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-61e9-account-create-update-vjlvv"
Feb 02 11:29:50 crc kubenswrapper[4782]: I0202 11:29:50.697802 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=7.697780638 podStartE2EDuration="7.697780638s" podCreationTimestamp="2026-02-02 11:29:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 11:29:50.694589656 +0000 UTC m=+3070.578782372" watchObservedRunningTime="2026-02-02 11:29:50.697780638 +0000 UTC m=+3070.581973354"
Feb 02 11:29:51 crc kubenswrapper[4782]: I0202 11:29:51.690107 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Feb 02 11:29:51 crc kubenswrapper[4782]: I0202 11:29:51.699570 4782 generic.go:334] "Generic (PLEG): container finished" podID="a41f7244-284a-4ffc-9243-1b6748d57f86" containerID="edbeb8dc2f5e5a9d0a9a56b8eea964a589c0c0597fa125924f087089167e4460" exitCode=143
Feb 02 11:29:51 crc kubenswrapper[4782]: I0202 11:29:51.699600 4782 generic.go:334] "Generic (PLEG): container finished" podID="a41f7244-284a-4ffc-9243-1b6748d57f86" containerID="c1ca84f18a2c7c73979765c611fce926c78a02310d29f337074d14d783355188" exitCode=143
Feb 02 11:29:51 crc kubenswrapper[4782]: I0202 11:29:51.699630 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a41f7244-284a-4ffc-9243-1b6748d57f86","Type":"ContainerDied","Data":"edbeb8dc2f5e5a9d0a9a56b8eea964a589c0c0597fa125924f087089167e4460"}
Feb 02 11:29:51 crc kubenswrapper[4782]: I0202 11:29:51.700005 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a41f7244-284a-4ffc-9243-1b6748d57f86","Type":"ContainerDied","Data":"c1ca84f18a2c7c73979765c611fce926c78a02310d29f337074d14d783355188"}
Feb 02 11:29:51 crc kubenswrapper[4782]: I0202 11:29:51.700023 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a41f7244-284a-4ffc-9243-1b6748d57f86","Type":"ContainerDied","Data":"2467e6d6f5cb460604f0569a014d33d8ae6d53fd765a65386ea59ec098228383"}
Feb 02 11:29:51 crc kubenswrapper[4782]: I0202 11:29:51.700040 4782 scope.go:117] "RemoveContainer" containerID="edbeb8dc2f5e5a9d0a9a56b8eea964a589c0c0597fa125924f087089167e4460"
Feb 02 11:29:51 crc kubenswrapper[4782]: I0202 11:29:51.700162 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Feb 02 11:29:51 crc kubenswrapper[4782]: I0202 11:29:51.708247 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"8dc11dda-830a-4b93-b670-e3fabc7b9c28","Type":"ContainerStarted","Data":"4e63aad24affc88dba64bdd04e7b1a6cf83bc13f9a1d9ce23eabbb86967b08dc"}
Feb 02 11:29:51 crc kubenswrapper[4782]: I0202 11:29:51.795065 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a41f7244-284a-4ffc-9243-1b6748d57f86-httpd-run\") pod \"a41f7244-284a-4ffc-9243-1b6748d57f86\" (UID: \"a41f7244-284a-4ffc-9243-1b6748d57f86\") "
Feb 02 11:29:51 crc kubenswrapper[4782]: I0202 11:29:51.795124 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a41f7244-284a-4ffc-9243-1b6748d57f86-public-tls-certs\") pod \"a41f7244-284a-4ffc-9243-1b6748d57f86\" (UID: \"a41f7244-284a-4ffc-9243-1b6748d57f86\") "
Feb 02 11:29:51 crc kubenswrapper[4782]: I0202 11:29:51.795154 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a41f7244-284a-4ffc-9243-1b6748d57f86-scripts\") pod \"a41f7244-284a-4ffc-9243-1b6748d57f86\" (UID: \"a41f7244-284a-4ffc-9243-1b6748d57f86\") "
Feb 02 11:29:51 crc kubenswrapper[4782]: I0202 11:29:51.795186 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5c7wq\" (UniqueName: \"kubernetes.io/projected/a41f7244-284a-4ffc-9243-1b6748d57f86-kube-api-access-5c7wq\") pod \"a41f7244-284a-4ffc-9243-1b6748d57f86\" (UID: \"a41f7244-284a-4ffc-9243-1b6748d57f86\") "
Feb 02 11:29:51 crc kubenswrapper[4782]: I0202 11:29:51.795244 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a41f7244-284a-4ffc-9243-1b6748d57f86-logs\") pod \"a41f7244-284a-4ffc-9243-1b6748d57f86\" (UID: \"a41f7244-284a-4ffc-9243-1b6748d57f86\") "
Feb 02 11:29:51 crc kubenswrapper[4782]: I0202 11:29:51.795271 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"a41f7244-284a-4ffc-9243-1b6748d57f86\" (UID: \"a41f7244-284a-4ffc-9243-1b6748d57f86\") "
Feb 02 11:29:51 crc kubenswrapper[4782]: I0202 11:29:51.795387 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/a41f7244-284a-4ffc-9243-1b6748d57f86-ceph\") pod \"a41f7244-284a-4ffc-9243-1b6748d57f86\" (UID: \"a41f7244-284a-4ffc-9243-1b6748d57f86\") "
Feb 02 11:29:51 crc kubenswrapper[4782]: I0202 11:29:51.795441 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a41f7244-284a-4ffc-9243-1b6748d57f86-config-data\") pod \"a41f7244-284a-4ffc-9243-1b6748d57f86\" (UID: \"a41f7244-284a-4ffc-9243-1b6748d57f86\") "
Feb 02 11:29:51 crc kubenswrapper[4782]: I0202 11:29:51.795462 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a41f7244-284a-4ffc-9243-1b6748d57f86-combined-ca-bundle\") pod \"a41f7244-284a-4ffc-9243-1b6748d57f86\" (UID: \"a41f7244-284a-4ffc-9243-1b6748d57f86\") "
Feb 02 11:29:51 crc kubenswrapper[4782]: I0202 11:29:51.795700 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a41f7244-284a-4ffc-9243-1b6748d57f86-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "a41f7244-284a-4ffc-9243-1b6748d57f86" (UID: "a41f7244-284a-4ffc-9243-1b6748d57f86"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 11:29:51 crc kubenswrapper[4782]: I0202 11:29:51.796092 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a41f7244-284a-4ffc-9243-1b6748d57f86-logs" (OuterVolumeSpecName: "logs") pod "a41f7244-284a-4ffc-9243-1b6748d57f86" (UID: "a41f7244-284a-4ffc-9243-1b6748d57f86"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 11:29:51 crc kubenswrapper[4782]: I0202 11:29:51.796514 4782 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a41f7244-284a-4ffc-9243-1b6748d57f86-httpd-run\") on node \"crc\" DevicePath \"\""
Feb 02 11:29:51 crc kubenswrapper[4782]: I0202 11:29:51.796537 4782 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a41f7244-284a-4ffc-9243-1b6748d57f86-logs\") on node \"crc\" DevicePath \"\""
Feb 02 11:29:51 crc kubenswrapper[4782]: I0202 11:29:51.800767 4782 scope.go:117] "RemoveContainer" containerID="c1ca84f18a2c7c73979765c611fce926c78a02310d29f337074d14d783355188"
Feb 02 11:29:51 crc kubenswrapper[4782]: I0202 11:29:51.812415 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a41f7244-284a-4ffc-9243-1b6748d57f86-ceph" (OuterVolumeSpecName: "ceph") pod "a41f7244-284a-4ffc-9243-1b6748d57f86" (UID: "a41f7244-284a-4ffc-9243-1b6748d57f86"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 11:29:51 crc kubenswrapper[4782]: I0202 11:29:51.814804 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a41f7244-284a-4ffc-9243-1b6748d57f86-scripts" (OuterVolumeSpecName: "scripts") pod "a41f7244-284a-4ffc-9243-1b6748d57f86" (UID: "a41f7244-284a-4ffc-9243-1b6748d57f86"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 11:29:51 crc kubenswrapper[4782]: I0202 11:29:51.817935 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "glance") pod "a41f7244-284a-4ffc-9243-1b6748d57f86" (UID: "a41f7244-284a-4ffc-9243-1b6748d57f86"). InnerVolumeSpecName "local-storage02-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Feb 02 11:29:51 crc kubenswrapper[4782]: I0202 11:29:51.829438 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a41f7244-284a-4ffc-9243-1b6748d57f86-kube-api-access-5c7wq" (OuterVolumeSpecName: "kube-api-access-5c7wq") pod "a41f7244-284a-4ffc-9243-1b6748d57f86" (UID: "a41f7244-284a-4ffc-9243-1b6748d57f86"). InnerVolumeSpecName "kube-api-access-5c7wq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 11:29:51 crc kubenswrapper[4782]: I0202 11:29:51.852630 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a41f7244-284a-4ffc-9243-1b6748d57f86-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a41f7244-284a-4ffc-9243-1b6748d57f86" (UID: "a41f7244-284a-4ffc-9243-1b6748d57f86"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 11:29:51 crc kubenswrapper[4782]: I0202 11:29:51.898880 4782 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a41f7244-284a-4ffc-9243-1b6748d57f86-scripts\") on node \"crc\" DevicePath \"\""
Feb 02 11:29:51 crc kubenswrapper[4782]: I0202 11:29:51.898912 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5c7wq\" (UniqueName: \"kubernetes.io/projected/a41f7244-284a-4ffc-9243-1b6748d57f86-kube-api-access-5c7wq\") on node \"crc\" DevicePath \"\""
Feb 02 11:29:51 crc kubenswrapper[4782]: I0202 11:29:51.898935 4782 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" "
Feb 02 11:29:51 crc kubenswrapper[4782]: I0202 11:29:51.898946 4782 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/a41f7244-284a-4ffc-9243-1b6748d57f86-ceph\") on node \"crc\" DevicePath \"\""
Feb 02 11:29:51 crc kubenswrapper[4782]: I0202 11:29:51.898954 4782 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a41f7244-284a-4ffc-9243-1b6748d57f86-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 02 11:29:51 crc kubenswrapper[4782]: I0202 11:29:51.933784 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a41f7244-284a-4ffc-9243-1b6748d57f86-config-data" (OuterVolumeSpecName: "config-data") pod "a41f7244-284a-4ffc-9243-1b6748d57f86" (UID: "a41f7244-284a-4ffc-9243-1b6748d57f86"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 11:29:51 crc kubenswrapper[4782]: I0202 11:29:51.945291 4782 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc"
Feb 02 11:29:51 crc kubenswrapper[4782]: I0202 11:29:51.956104 4782 scope.go:117] "RemoveContainer" containerID="edbeb8dc2f5e5a9d0a9a56b8eea964a589c0c0597fa125924f087089167e4460"
Feb 02 11:29:51 crc kubenswrapper[4782]: E0202 11:29:51.956967 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"edbeb8dc2f5e5a9d0a9a56b8eea964a589c0c0597fa125924f087089167e4460\": container with ID starting with edbeb8dc2f5e5a9d0a9a56b8eea964a589c0c0597fa125924f087089167e4460 not found: ID does not exist" containerID="edbeb8dc2f5e5a9d0a9a56b8eea964a589c0c0597fa125924f087089167e4460"
Feb 02 11:29:51 crc kubenswrapper[4782]: I0202 11:29:51.957001 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"edbeb8dc2f5e5a9d0a9a56b8eea964a589c0c0597fa125924f087089167e4460"} err="failed to get container status \"edbeb8dc2f5e5a9d0a9a56b8eea964a589c0c0597fa125924f087089167e4460\": rpc error: code = NotFound desc = could not find container \"edbeb8dc2f5e5a9d0a9a56b8eea964a589c0c0597fa125924f087089167e4460\": container with ID starting with edbeb8dc2f5e5a9d0a9a56b8eea964a589c0c0597fa125924f087089167e4460 not found: ID does not exist"
Feb 02 11:29:51 crc kubenswrapper[4782]: I0202 11:29:51.957031 4782 scope.go:117] "RemoveContainer" containerID="c1ca84f18a2c7c73979765c611fce926c78a02310d29f337074d14d783355188"
Feb 02 11:29:51 crc kubenswrapper[4782]: E0202 11:29:51.958083 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c1ca84f18a2c7c73979765c611fce926c78a02310d29f337074d14d783355188\": container with ID starting with c1ca84f18a2c7c73979765c611fce926c78a02310d29f337074d14d783355188 not found: ID does not exist" containerID="c1ca84f18a2c7c73979765c611fce926c78a02310d29f337074d14d783355188"
Feb 02 11:29:51 crc kubenswrapper[4782]: I0202 11:29:51.958108 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c1ca84f18a2c7c73979765c611fce926c78a02310d29f337074d14d783355188"} err="failed to get container status \"c1ca84f18a2c7c73979765c611fce926c78a02310d29f337074d14d783355188\": rpc error: code = NotFound desc = could not find container \"c1ca84f18a2c7c73979765c611fce926c78a02310d29f337074d14d783355188\": container with ID starting with c1ca84f18a2c7c73979765c611fce926c78a02310d29f337074d14d783355188 not found: ID does not exist"
Feb 02 11:29:51 crc kubenswrapper[4782]: I0202 11:29:51.958120 4782 scope.go:117] "RemoveContainer" containerID="edbeb8dc2f5e5a9d0a9a56b8eea964a589c0c0597fa125924f087089167e4460"
Feb 02 11:29:51 crc kubenswrapper[4782]: I0202 11:29:51.958587 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"edbeb8dc2f5e5a9d0a9a56b8eea964a589c0c0597fa125924f087089167e4460"} err="failed to get container status \"edbeb8dc2f5e5a9d0a9a56b8eea964a589c0c0597fa125924f087089167e4460\": rpc error: code = NotFound desc = could not find container \"edbeb8dc2f5e5a9d0a9a56b8eea964a589c0c0597fa125924f087089167e4460\": container with ID starting with edbeb8dc2f5e5a9d0a9a56b8eea964a589c0c0597fa125924f087089167e4460 not found: ID does not exist"
Feb 02 11:29:51 crc kubenswrapper[4782]: I0202 11:29:51.958631 4782 scope.go:117] "RemoveContainer" containerID="c1ca84f18a2c7c73979765c611fce926c78a02310d29f337074d14d783355188"
Feb 02 11:29:51 crc kubenswrapper[4782]: I0202 11:29:51.960351 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c1ca84f18a2c7c73979765c611fce926c78a02310d29f337074d14d783355188"} err="failed to get container status \"c1ca84f18a2c7c73979765c611fce926c78a02310d29f337074d14d783355188\": rpc error: code = NotFound desc = could not find container \"c1ca84f18a2c7c73979765c611fce926c78a02310d29f337074d14d783355188\": container with ID starting with c1ca84f18a2c7c73979765c611fce926c78a02310d29f337074d14d783355188 not found: ID does not exist"
Feb 02 11:29:52 crc kubenswrapper[4782]: I0202 11:29:52.004367 4782 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a41f7244-284a-4ffc-9243-1b6748d57f86-config-data\") on node \"crc\" DevicePath \"\""
Feb 02 11:29:52 crc kubenswrapper[4782]: I0202 11:29:52.004432 4782 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\""
Feb 02 11:29:52 crc kubenswrapper[4782]: I0202 11:29:52.006341 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a41f7244-284a-4ffc-9243-1b6748d57f86-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "a41f7244-284a-4ffc-9243-1b6748d57f86" (UID: "a41f7244-284a-4ffc-9243-1b6748d57f86"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 11:29:52 crc kubenswrapper[4782]: I0202 11:29:52.106886 4782 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a41f7244-284a-4ffc-9243-1b6748d57f86-public-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 02 11:29:52 crc kubenswrapper[4782]: I0202 11:29:52.382226 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 02 11:29:52 crc kubenswrapper[4782]: I0202 11:29:52.409662 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 02 11:29:52 crc kubenswrapper[4782]: I0202 11:29:52.428615 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 02 11:29:52 crc kubenswrapper[4782]: E0202 11:29:52.429074 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9a2fa32-7949-4dbe-8e51-49627e08f051" containerName="mariadb-database-create"
Feb 02 11:29:52 crc kubenswrapper[4782]: I0202 11:29:52.429086 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9a2fa32-7949-4dbe-8e51-49627e08f051" containerName="mariadb-database-create"
Feb 02 11:29:52 crc kubenswrapper[4782]: E0202 11:29:52.429103 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7260512c-a397-4b18-ab4d-a97e7dbf50d9" containerName="mariadb-account-create-update"
Feb 02 11:29:52 crc kubenswrapper[4782]: I0202 11:29:52.429110 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="7260512c-a397-4b18-ab4d-a97e7dbf50d9" containerName="mariadb-account-create-update"
Feb 02 11:29:52 crc kubenswrapper[4782]: E0202 11:29:52.429131 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a41f7244-284a-4ffc-9243-1b6748d57f86" containerName="glance-httpd"
Feb 02 11:29:52 crc kubenswrapper[4782]: I0202 11:29:52.429145 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="a41f7244-284a-4ffc-9243-1b6748d57f86" containerName="glance-httpd"
Feb 02 11:29:52 crc kubenswrapper[4782]: E0202 11:29:52.429167 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a41f7244-284a-4ffc-9243-1b6748d57f86" containerName="glance-log"
Feb 02 11:29:52 crc kubenswrapper[4782]: I0202 11:29:52.429174 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="a41f7244-284a-4ffc-9243-1b6748d57f86" containerName="glance-log"
Feb 02 11:29:52 crc kubenswrapper[4782]: I0202 11:29:52.429341 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="a41f7244-284a-4ffc-9243-1b6748d57f86" containerName="glance-log"
Feb 02 11:29:52 crc kubenswrapper[4782]: I0202 11:29:52.429360 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="7260512c-a397-4b18-ab4d-a97e7dbf50d9" containerName="mariadb-account-create-update"
Feb 02 11:29:52 crc kubenswrapper[4782]: I0202 11:29:52.429397 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="d9a2fa32-7949-4dbe-8e51-49627e08f051" containerName="mariadb-database-create"
Feb 02 11:29:52 crc kubenswrapper[4782]: I0202 11:29:52.429412 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="a41f7244-284a-4ffc-9243-1b6748d57f86" containerName="glance-httpd"
Feb 02 11:29:52 crc kubenswrapper[4782]: I0202 11:29:52.430539 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Feb 02 11:29:52 crc kubenswrapper[4782]: I0202 11:29:52.436440 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data"
Feb 02 11:29:52 crc kubenswrapper[4782]: I0202 11:29:52.436680 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc"
Feb 02 11:29:52 crc kubenswrapper[4782]: I0202 11:29:52.470119 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 02 11:29:52 crc kubenswrapper[4782]: I0202 11:29:52.523759 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fdc86717-3e71-440c-a8f4-9cd4480e46d2-scripts\") pod \"glance-default-external-api-0\" (UID: \"fdc86717-3e71-440c-a8f4-9cd4480e46d2\") " pod="openstack/glance-default-external-api-0"
Feb 02 11:29:52 crc kubenswrapper[4782]: I0202 11:29:52.523846 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fdc86717-3e71-440c-a8f4-9cd4480e46d2-logs\") pod \"glance-default-external-api-0\" (UID: \"fdc86717-3e71-440c-a8f4-9cd4480e46d2\") " pod="openstack/glance-default-external-api-0"
Feb 02 11:29:52 crc kubenswrapper[4782]: I0202 11:29:52.523950 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"fdc86717-3e71-440c-a8f4-9cd4480e46d2\") " pod="openstack/glance-default-external-api-0"
Feb 02 11:29:52 crc kubenswrapper[4782]: I0202 11:29:52.523986 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/fdc86717-3e71-440c-a8f4-9cd4480e46d2-ceph\") pod \"glance-default-external-api-0\" (UID: \"fdc86717-3e71-440c-a8f4-9cd4480e46d2\") " pod="openstack/glance-default-external-api-0"
Feb 02 11:29:52 crc kubenswrapper[4782]: I0202 11:29:52.524008 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fdc86717-3e71-440c-a8f4-9cd4480e46d2-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"fdc86717-3e71-440c-a8f4-9cd4480e46d2\") " pod="openstack/glance-default-external-api-0"
Feb 02 11:29:52 crc kubenswrapper[4782]: I0202 11:29:52.524036 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdc86717-3e71-440c-a8f4-9cd4480e46d2-config-data\") pod \"glance-default-external-api-0\" (UID: \"fdc86717-3e71-440c-a8f4-9cd4480e46d2\") " pod="openstack/glance-default-external-api-0"
Feb 02 11:29:52 crc kubenswrapper[4782]: I0202 11:29:52.524057 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fdc86717-3e71-440c-a8f4-9cd4480e46d2-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"fdc86717-3e71-440c-a8f4-9cd4480e46d2\") " pod="openstack/glance-default-external-api-0"
Feb 02 11:29:52 crc kubenswrapper[4782]: I0202 11:29:52.524085 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-plr9t\" (UniqueName: \"kubernetes.io/projected/fdc86717-3e71-440c-a8f4-9cd4480e46d2-kube-api-access-plr9t\") pod \"glance-default-external-api-0\" (UID: \"fdc86717-3e71-440c-a8f4-9cd4480e46d2\") " pod="openstack/glance-default-external-api-0"
Feb 02 11:29:52 crc kubenswrapper[4782]: I0202 11:29:52.524162 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdc86717-3e71-440c-a8f4-9cd4480e46d2-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"fdc86717-3e71-440c-a8f4-9cd4480e46d2\") " pod="openstack/glance-default-external-api-0"
Feb 02 11:29:52 crc kubenswrapper[4782]: I0202 11:29:52.627225 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"fdc86717-3e71-440c-a8f4-9cd4480e46d2\") " pod="openstack/glance-default-external-api-0"
Feb 02 11:29:52 crc kubenswrapper[4782]: I0202 11:29:52.627392 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/fdc86717-3e71-440c-a8f4-9cd4480e46d2-ceph\") pod \"glance-default-external-api-0\" (UID: \"fdc86717-3e71-440c-a8f4-9cd4480e46d2\") " pod="openstack/glance-default-external-api-0"
Feb 02 11:29:52 crc kubenswrapper[4782]: I0202 11:29:52.627421 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fdc86717-3e71-440c-a8f4-9cd4480e46d2-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"fdc86717-3e71-440c-a8f4-9cd4480e46d2\") " pod="openstack/glance-default-external-api-0"
Feb 02 11:29:52 crc kubenswrapper[4782]: I0202 11:29:52.627545 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdc86717-3e71-440c-a8f4-9cd4480e46d2-config-data\") pod \"glance-default-external-api-0\" (UID: \"fdc86717-3e71-440c-a8f4-9cd4480e46d2\") " pod="openstack/glance-default-external-api-0"
Feb 02 11:29:52 crc kubenswrapper[4782]: I0202 11:29:52.627562 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fdc86717-3e71-440c-a8f4-9cd4480e46d2-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"fdc86717-3e71-440c-a8f4-9cd4480e46d2\") " pod="openstack/glance-default-external-api-0"
Feb 02 11:29:52 crc kubenswrapper[4782]: I0202 11:29:52.627588 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-plr9t\" (UniqueName: \"kubernetes.io/projected/fdc86717-3e71-440c-a8f4-9cd4480e46d2-kube-api-access-plr9t\") pod \"glance-default-external-api-0\" (UID: \"fdc86717-3e71-440c-a8f4-9cd4480e46d2\") " pod="openstack/glance-default-external-api-0"
Feb 02 11:29:52 crc kubenswrapper[4782]: I0202 11:29:52.627920 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdc86717-3e71-440c-a8f4-9cd4480e46d2-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"fdc86717-3e71-440c-a8f4-9cd4480e46d2\") " pod="openstack/glance-default-external-api-0"
Feb 02 11:29:52 crc kubenswrapper[4782]: I0202 11:29:52.628064 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fdc86717-3e71-440c-a8f4-9cd4480e46d2-scripts\") pod \"glance-default-external-api-0\" (UID: \"fdc86717-3e71-440c-a8f4-9cd4480e46d2\") " pod="openstack/glance-default-external-api-0"
Feb 02 11:29:52 crc kubenswrapper[4782]: I0202 11:29:52.628419 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fdc86717-3e71-440c-a8f4-9cd4480e46d2-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"fdc86717-3e71-440c-a8f4-9cd4480e46d2\") " pod="openstack/glance-default-external-api-0"
Feb 02 11:29:52 crc kubenswrapper[4782]: I0202 11:29:52.628556 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fdc86717-3e71-440c-a8f4-9cd4480e46d2-logs\") pod \"glance-default-external-api-0\" (UID: \"fdc86717-3e71-440c-a8f4-9cd4480e46d2\") " pod="openstack/glance-default-external-api-0"
Feb 02 11:29:52 crc kubenswrapper[4782]: I0202 11:29:52.628936 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fdc86717-3e71-440c-a8f4-9cd4480e46d2-logs\") pod \"glance-default-external-api-0\" (UID: \"fdc86717-3e71-440c-a8f4-9cd4480e46d2\") " pod="openstack/glance-default-external-api-0"
Feb 02 11:29:52 crc kubenswrapper[4782]: I0202 11:29:52.628606 4782 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"fdc86717-3e71-440c-a8f4-9cd4480e46d2\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/glance-default-external-api-0"
Feb 02 11:29:52 crc kubenswrapper[4782]: I0202 11:29:52.637037 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fdc86717-3e71-440c-a8f4-9cd4480e46d2-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"fdc86717-3e71-440c-a8f4-9cd4480e46d2\") " pod="openstack/glance-default-external-api-0"
Feb 02 11:29:52 crc kubenswrapper[4782]: I0202 11:29:52.642480 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fdc86717-3e71-440c-a8f4-9cd4480e46d2-scripts\") pod \"glance-default-external-api-0\" (UID: \"fdc86717-3e71-440c-a8f4-9cd4480e46d2\") " pod="openstack/glance-default-external-api-0"
Feb 02 11:29:52 crc kubenswrapper[4782]: I0202 11:29:52.646666 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/fdc86717-3e71-440c-a8f4-9cd4480e46d2-ceph\") pod \"glance-default-external-api-0\" (UID: \"fdc86717-3e71-440c-a8f4-9cd4480e46d2\") " pod="openstack/glance-default-external-api-0"
Feb 02 11:29:52 crc kubenswrapper[4782]: I0202 11:29:52.657376 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdc86717-3e71-440c-a8f4-9cd4480e46d2-config-data\") pod \"glance-default-external-api-0\" (UID: \"fdc86717-3e71-440c-a8f4-9cd4480e46d2\") " pod="openstack/glance-default-external-api-0"
Feb 02 11:29:52 crc kubenswrapper[4782]: I0202 11:29:52.667139 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdc86717-3e71-440c-a8f4-9cd4480e46d2-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"fdc86717-3e71-440c-a8f4-9cd4480e46d2\") " pod="openstack/glance-default-external-api-0"
Feb 02 11:29:52 crc kubenswrapper[4782]: I0202 11:29:52.677301 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-plr9t\" (UniqueName: \"kubernetes.io/projected/fdc86717-3e71-440c-a8f4-9cd4480e46d2-kube-api-access-plr9t\") pod \"glance-default-external-api-0\" (UID: \"fdc86717-3e71-440c-a8f4-9cd4480e46d2\") " pod="openstack/glance-default-external-api-0"
Feb 02 11:29:52 crc kubenswrapper[4782]: I0202 11:29:52.769901 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"fdc86717-3e71-440c-a8f4-9cd4480e46d2\") " pod="openstack/glance-default-external-api-0"
Feb 02 11:29:52 crc kubenswrapper[4782]: I0202 11:29:52.816078 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"8dc11dda-830a-4b93-b670-e3fabc7b9c28","Type":"ContainerStarted","Data":"d511a7f70db8f35133929fded8c48a51eab8de96387affcdaf08d35b0a3391a6"}
Feb 02 11:29:52 crc kubenswrapper[4782]: I0202 11:29:52.816250 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="8dc11dda-830a-4b93-b670-e3fabc7b9c28" containerName="glance-log" containerID="cri-o://4e63aad24affc88dba64bdd04e7b1a6cf83bc13f9a1d9ce23eabbb86967b08dc" gracePeriod=30
Feb 02 11:29:52 crc kubenswrapper[4782]: I0202 11:29:52.816881 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="8dc11dda-830a-4b93-b670-e3fabc7b9c28" containerName="glance-httpd" containerID="cri-o://d511a7f70db8f35133929fded8c48a51eab8de96387affcdaf08d35b0a3391a6" gracePeriod=30
Feb 02 11:29:52 crc kubenswrapper[4782]: I0202 11:29:52.862977 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a41f7244-284a-4ffc-9243-1b6748d57f86" path="/var/lib/kubelet/pods/a41f7244-284a-4ffc-9243-1b6748d57f86/volumes"
Feb 02 11:29:52 crc kubenswrapper[4782]: I0202 11:29:52.864797 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=5.86477074 podStartE2EDuration="5.86477074s" podCreationTimestamp="2026-02-02 11:29:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 11:29:52.853191587 +0000 UTC m=+3072.737384303" watchObservedRunningTime="2026-02-02 11:29:52.86477074 +0000 UTC m=+3072.748963466"
Feb 02 11:29:52 crc kubenswrapper[4782]: I0202 11:29:52.951839 4782 patch_prober.go:28] interesting pod/machine-config-daemon-bhdgk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 02 11:29:52 crc kubenswrapper[4782]: I0202 11:29:52.951896 4782 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 02 11:29:53 crc kubenswrapper[4782]: I0202 11:29:53.075305 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Feb 02 11:29:53 crc kubenswrapper[4782]: I0202 11:29:53.515755 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Feb 02 11:29:53 crc kubenswrapper[4782]: I0202 11:29:53.569376 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8dc11dda-830a-4b93-b670-e3fabc7b9c28-internal-tls-certs\") pod \"8dc11dda-830a-4b93-b670-e3fabc7b9c28\" (UID: \"8dc11dda-830a-4b93-b670-e3fabc7b9c28\") "
Feb 02 11:29:53 crc kubenswrapper[4782]: I0202 11:29:53.569591 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8dc11dda-830a-4b93-b670-e3fabc7b9c28-combined-ca-bundle\") pod \"8dc11dda-830a-4b93-b670-e3fabc7b9c28\" (UID: \"8dc11dda-830a-4b93-b670-e3fabc7b9c28\") "
Feb 02 11:29:53 crc kubenswrapper[4782]: I0202 11:29:53.569732 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"8dc11dda-830a-4b93-b670-e3fabc7b9c28\" (UID: \"8dc11dda-830a-4b93-b670-e3fabc7b9c28\") "
Feb 02 11:29:53 crc kubenswrapper[4782]: I0202 11:29:53.569760 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8dc11dda-830a-4b93-b670-e3fabc7b9c28-logs\") pod \"8dc11dda-830a-4b93-b670-e3fabc7b9c28\" (UID: \"8dc11dda-830a-4b93-b670-e3fabc7b9c28\") "
Feb 02 11:29:53 crc kubenswrapper[4782]: I0202 11:29:53.569851 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8dc11dda-830a-4b93-b670-e3fabc7b9c28-httpd-run\") pod \"8dc11dda-830a-4b93-b670-e3fabc7b9c28\" (UID: \"8dc11dda-830a-4b93-b670-e3fabc7b9c28\") "
Feb 02 11:29:53 crc kubenswrapper[4782]: I0202 11:29:53.569874 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-thdm5\" (UniqueName: \"kubernetes.io/projected/8dc11dda-830a-4b93-b670-e3fabc7b9c28-kube-api-access-thdm5\") pod \"8dc11dda-830a-4b93-b670-e3fabc7b9c28\" (UID: \"8dc11dda-830a-4b93-b670-e3fabc7b9c28\") "
Feb 02 11:29:53 crc kubenswrapper[4782]: I0202 11:29:53.569922 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8dc11dda-830a-4b93-b670-e3fabc7b9c28-scripts\") pod \"8dc11dda-830a-4b93-b670-e3fabc7b9c28\" (UID: \"8dc11dda-830a-4b93-b670-e3fabc7b9c28\") "
Feb 02 11:29:53 crc kubenswrapper[4782]: I0202 11:29:53.569940 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8dc11dda-830a-4b93-b670-e3fabc7b9c28-config-data\") pod \"8dc11dda-830a-4b93-b670-e3fabc7b9c28\" (UID: \"8dc11dda-830a-4b93-b670-e3fabc7b9c28\") "
Feb 02 11:29:53 crc kubenswrapper[4782]: I0202 11:29:53.570030 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/8dc11dda-830a-4b93-b670-e3fabc7b9c28-ceph\") pod \"8dc11dda-830a-4b93-b670-e3fabc7b9c28\" (UID: \"8dc11dda-830a-4b93-b670-e3fabc7b9c28\") "
Feb 02 11:29:53 crc kubenswrapper[4782]: I0202 11:29:53.572072 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8dc11dda-830a-4b93-b670-e3fabc7b9c28-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "8dc11dda-830a-4b93-b670-e3fabc7b9c28" (UID: "8dc11dda-830a-4b93-b670-e3fabc7b9c28"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 11:29:53 crc kubenswrapper[4782]: I0202 11:29:53.590104 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8dc11dda-830a-4b93-b670-e3fabc7b9c28-logs" (OuterVolumeSpecName: "logs") pod "8dc11dda-830a-4b93-b670-e3fabc7b9c28" (UID: "8dc11dda-830a-4b93-b670-e3fabc7b9c28"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 11:29:53 crc kubenswrapper[4782]: I0202 11:29:53.611774 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8dc11dda-830a-4b93-b670-e3fabc7b9c28-scripts" (OuterVolumeSpecName: "scripts") pod "8dc11dda-830a-4b93-b670-e3fabc7b9c28" (UID: "8dc11dda-830a-4b93-b670-e3fabc7b9c28"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 11:29:53 crc kubenswrapper[4782]: I0202 11:29:53.616396 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "glance") pod "8dc11dda-830a-4b93-b670-e3fabc7b9c28" (UID: "8dc11dda-830a-4b93-b670-e3fabc7b9c28"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Feb 02 11:29:53 crc kubenswrapper[4782]: I0202 11:29:53.635851 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8dc11dda-830a-4b93-b670-e3fabc7b9c28-kube-api-access-thdm5" (OuterVolumeSpecName: "kube-api-access-thdm5") pod "8dc11dda-830a-4b93-b670-e3fabc7b9c28" (UID: "8dc11dda-830a-4b93-b670-e3fabc7b9c28"). InnerVolumeSpecName "kube-api-access-thdm5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 11:29:53 crc kubenswrapper[4782]: I0202 11:29:53.638829 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8dc11dda-830a-4b93-b670-e3fabc7b9c28-ceph" (OuterVolumeSpecName: "ceph") pod "8dc11dda-830a-4b93-b670-e3fabc7b9c28" (UID: "8dc11dda-830a-4b93-b670-e3fabc7b9c28"). InnerVolumeSpecName "ceph".
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:29:53 crc kubenswrapper[4782]: I0202 11:29:53.678775 4782 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Feb 02 11:29:53 crc kubenswrapper[4782]: I0202 11:29:53.678848 4782 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8dc11dda-830a-4b93-b670-e3fabc7b9c28-logs\") on node \"crc\" DevicePath \"\"" Feb 02 11:29:53 crc kubenswrapper[4782]: I0202 11:29:53.678864 4782 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8dc11dda-830a-4b93-b670-e3fabc7b9c28-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 02 11:29:53 crc kubenswrapper[4782]: I0202 11:29:53.678874 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-thdm5\" (UniqueName: \"kubernetes.io/projected/8dc11dda-830a-4b93-b670-e3fabc7b9c28-kube-api-access-thdm5\") on node \"crc\" DevicePath \"\"" Feb 02 11:29:53 crc kubenswrapper[4782]: I0202 11:29:53.678885 4782 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8dc11dda-830a-4b93-b670-e3fabc7b9c28-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 11:29:53 crc kubenswrapper[4782]: I0202 11:29:53.678892 4782 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/8dc11dda-830a-4b93-b670-e3fabc7b9c28-ceph\") on node \"crc\" DevicePath \"\"" Feb 02 11:29:53 crc kubenswrapper[4782]: I0202 11:29:53.711272 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8dc11dda-830a-4b93-b670-e3fabc7b9c28-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8dc11dda-830a-4b93-b670-e3fabc7b9c28" (UID: "8dc11dda-830a-4b93-b670-e3fabc7b9c28"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:29:53 crc kubenswrapper[4782]: I0202 11:29:53.734223 4782 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Feb 02 11:29:53 crc kubenswrapper[4782]: I0202 11:29:53.785515 4782 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8dc11dda-830a-4b93-b670-e3fabc7b9c28-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 11:29:53 crc kubenswrapper[4782]: I0202 11:29:53.785554 4782 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Feb 02 11:29:53 crc kubenswrapper[4782]: I0202 11:29:53.795998 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8dc11dda-830a-4b93-b670-e3fabc7b9c28-config-data" (OuterVolumeSpecName: "config-data") pod "8dc11dda-830a-4b93-b670-e3fabc7b9c28" (UID: "8dc11dda-830a-4b93-b670-e3fabc7b9c28"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:29:53 crc kubenswrapper[4782]: I0202 11:29:53.796025 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8dc11dda-830a-4b93-b670-e3fabc7b9c28-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "8dc11dda-830a-4b93-b670-e3fabc7b9c28" (UID: "8dc11dda-830a-4b93-b670-e3fabc7b9c28"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:29:53 crc kubenswrapper[4782]: I0202 11:29:53.887349 4782 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8dc11dda-830a-4b93-b670-e3fabc7b9c28-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 11:29:53 crc kubenswrapper[4782]: I0202 11:29:53.887387 4782 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8dc11dda-830a-4b93-b670-e3fabc7b9c28-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 11:29:53 crc kubenswrapper[4782]: I0202 11:29:53.904979 4782 generic.go:334] "Generic (PLEG): container finished" podID="8dc11dda-830a-4b93-b670-e3fabc7b9c28" containerID="d511a7f70db8f35133929fded8c48a51eab8de96387affcdaf08d35b0a3391a6" exitCode=143 Feb 02 11:29:53 crc kubenswrapper[4782]: I0202 11:29:53.905024 4782 generic.go:334] "Generic (PLEG): container finished" podID="8dc11dda-830a-4b93-b670-e3fabc7b9c28" containerID="4e63aad24affc88dba64bdd04e7b1a6cf83bc13f9a1d9ce23eabbb86967b08dc" exitCode=143 Feb 02 11:29:53 crc kubenswrapper[4782]: I0202 11:29:53.905049 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"8dc11dda-830a-4b93-b670-e3fabc7b9c28","Type":"ContainerDied","Data":"d511a7f70db8f35133929fded8c48a51eab8de96387affcdaf08d35b0a3391a6"} Feb 02 11:29:53 crc kubenswrapper[4782]: I0202 11:29:53.905081 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"8dc11dda-830a-4b93-b670-e3fabc7b9c28","Type":"ContainerDied","Data":"4e63aad24affc88dba64bdd04e7b1a6cf83bc13f9a1d9ce23eabbb86967b08dc"} Feb 02 11:29:53 crc kubenswrapper[4782]: I0202 11:29:53.905092 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"8dc11dda-830a-4b93-b670-e3fabc7b9c28","Type":"ContainerDied","Data":"75f9c9716f7604da4d575e0f1b2688df7bd2eada642f2f22b1f7f24cb9d5e5c4"} Feb 02 11:29:53 crc kubenswrapper[4782]: I0202 11:29:53.905118 4782 scope.go:117] "RemoveContainer" containerID="d511a7f70db8f35133929fded8c48a51eab8de96387affcdaf08d35b0a3391a6" Feb 02 11:29:53 crc kubenswrapper[4782]: I0202 11:29:53.905137 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 02 11:29:53 crc kubenswrapper[4782]: I0202 11:29:53.994097 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 02 11:29:54 crc kubenswrapper[4782]: I0202 11:29:54.009471 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 02 11:29:54 crc kubenswrapper[4782]: I0202 11:29:54.032755 4782 scope.go:117] "RemoveContainer" containerID="4e63aad24affc88dba64bdd04e7b1a6cf83bc13f9a1d9ce23eabbb86967b08dc" Feb 02 11:29:54 crc kubenswrapper[4782]: I0202 11:29:54.052680 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 02 11:29:54 crc kubenswrapper[4782]: E0202 11:29:54.053313 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8dc11dda-830a-4b93-b670-e3fabc7b9c28" containerName="glance-httpd" Feb 02 11:29:54 crc kubenswrapper[4782]: I0202 11:29:54.059790 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="8dc11dda-830a-4b93-b670-e3fabc7b9c28" containerName="glance-httpd" Feb 02 11:29:54 crc kubenswrapper[4782]: E0202 11:29:54.059960 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8dc11dda-830a-4b93-b670-e3fabc7b9c28" containerName="glance-log" Feb 02 11:29:54 crc kubenswrapper[4782]: I0202 11:29:54.060058 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="8dc11dda-830a-4b93-b670-e3fabc7b9c28" containerName="glance-log" Feb 02 11:29:54 crc kubenswrapper[4782]: I0202 11:29:54.060418 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="8dc11dda-830a-4b93-b670-e3fabc7b9c28" containerName="glance-httpd" Feb 02 11:29:54 crc kubenswrapper[4782]: I0202 11:29:54.060514 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="8dc11dda-830a-4b93-b670-e3fabc7b9c28" containerName="glance-log" Feb 02 11:29:54 crc kubenswrapper[4782]: I0202 11:29:54.061710 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 02 11:29:54 crc kubenswrapper[4782]: I0202 11:29:54.073246 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 02 11:29:54 crc kubenswrapper[4782]: I0202 11:29:54.073583 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 02 11:29:54 crc kubenswrapper[4782]: I0202 11:29:54.085146 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 02 11:29:54 crc kubenswrapper[4782]: I0202 11:29:54.093258 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/6c11a274-b189-4a4e-9a21-1c1d8fcc7f13-ceph\") pod \"glance-default-internal-api-0\" (UID: \"6c11a274-b189-4a4e-9a21-1c1d8fcc7f13\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:29:54 crc kubenswrapper[4782]: I0202 11:29:54.093304 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6c11a274-b189-4a4e-9a21-1c1d8fcc7f13-logs\") pod \"glance-default-internal-api-0\" (UID: \"6c11a274-b189-4a4e-9a21-1c1d8fcc7f13\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:29:54 crc kubenswrapper[4782]: I0202 11:29:54.093614 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c11a274-b189-4a4e-9a21-1c1d8fcc7f13-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"6c11a274-b189-4a4e-9a21-1c1d8fcc7f13\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:29:54 crc kubenswrapper[4782]: I0202 11:29:54.093778 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6c11a274-b189-4a4e-9a21-1c1d8fcc7f13-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"6c11a274-b189-4a4e-9a21-1c1d8fcc7f13\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:29:54 crc kubenswrapper[4782]: I0202 11:29:54.093915 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6c11a274-b189-4a4e-9a21-1c1d8fcc7f13-scripts\") pod \"glance-default-internal-api-0\" (UID: \"6c11a274-b189-4a4e-9a21-1c1d8fcc7f13\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:29:54 crc kubenswrapper[4782]: I0202 11:29:54.093946 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c11a274-b189-4a4e-9a21-1c1d8fcc7f13-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"6c11a274-b189-4a4e-9a21-1c1d8fcc7f13\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:29:54 crc kubenswrapper[4782]: I0202 11:29:54.094223 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7q5bj\" (UniqueName: \"kubernetes.io/projected/6c11a274-b189-4a4e-9a21-1c1d8fcc7f13-kube-api-access-7q5bj\") pod \"glance-default-internal-api-0\" (UID: \"6c11a274-b189-4a4e-9a21-1c1d8fcc7f13\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:29:54 crc kubenswrapper[4782]: I0202 11:29:54.094578 4782 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c11a274-b189-4a4e-9a21-1c1d8fcc7f13-config-data\") pod \"glance-default-internal-api-0\" (UID: \"6c11a274-b189-4a4e-9a21-1c1d8fcc7f13\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:29:54 crc kubenswrapper[4782]: I0202 11:29:54.094632 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"6c11a274-b189-4a4e-9a21-1c1d8fcc7f13\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:29:54 crc kubenswrapper[4782]: I0202 11:29:54.133009 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 02 11:29:54 crc kubenswrapper[4782]: I0202 11:29:54.162600 4782 scope.go:117] "RemoveContainer" containerID="d511a7f70db8f35133929fded8c48a51eab8de96387affcdaf08d35b0a3391a6" Feb 02 11:29:54 crc kubenswrapper[4782]: E0202 11:29:54.167989 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d511a7f70db8f35133929fded8c48a51eab8de96387affcdaf08d35b0a3391a6\": container with ID starting with d511a7f70db8f35133929fded8c48a51eab8de96387affcdaf08d35b0a3391a6 not found: ID does not exist" containerID="d511a7f70db8f35133929fded8c48a51eab8de96387affcdaf08d35b0a3391a6" Feb 02 11:29:54 crc kubenswrapper[4782]: I0202 11:29:54.168256 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d511a7f70db8f35133929fded8c48a51eab8de96387affcdaf08d35b0a3391a6"} err="failed to get container status \"d511a7f70db8f35133929fded8c48a51eab8de96387affcdaf08d35b0a3391a6\": rpc error: code = NotFound desc = could not find container \"d511a7f70db8f35133929fded8c48a51eab8de96387affcdaf08d35b0a3391a6\": container with ID starting with d511a7f70db8f35133929fded8c48a51eab8de96387affcdaf08d35b0a3391a6 not found: ID does not exist" Feb 02 11:29:54 crc kubenswrapper[4782]: I0202 11:29:54.168368 4782 scope.go:117] "RemoveContainer" containerID="4e63aad24affc88dba64bdd04e7b1a6cf83bc13f9a1d9ce23eabbb86967b08dc" Feb 02 11:29:54 crc kubenswrapper[4782]: E0202 11:29:54.176165 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4e63aad24affc88dba64bdd04e7b1a6cf83bc13f9a1d9ce23eabbb86967b08dc\": container with ID starting with 4e63aad24affc88dba64bdd04e7b1a6cf83bc13f9a1d9ce23eabbb86967b08dc not found: ID does not exist" containerID="4e63aad24affc88dba64bdd04e7b1a6cf83bc13f9a1d9ce23eabbb86967b08dc" Feb 02 11:29:54 crc kubenswrapper[4782]: I0202 11:29:54.176215 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e63aad24affc88dba64bdd04e7b1a6cf83bc13f9a1d9ce23eabbb86967b08dc"} err="failed to get container status \"4e63aad24affc88dba64bdd04e7b1a6cf83bc13f9a1d9ce23eabbb86967b08dc\": rpc error: code = NotFound desc = could not find container \"4e63aad24affc88dba64bdd04e7b1a6cf83bc13f9a1d9ce23eabbb86967b08dc\": container with ID starting with 4e63aad24affc88dba64bdd04e7b1a6cf83bc13f9a1d9ce23eabbb86967b08dc not found: ID does not exist" Feb 02 11:29:54 crc kubenswrapper[4782]: I0202 11:29:54.176246 4782 scope.go:117] "RemoveContainer" containerID="d511a7f70db8f35133929fded8c48a51eab8de96387affcdaf08d35b0a3391a6" Feb 02 11:29:54 crc kubenswrapper[4782]: 
I0202 11:29:54.179813 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d511a7f70db8f35133929fded8c48a51eab8de96387affcdaf08d35b0a3391a6"} err="failed to get container status \"d511a7f70db8f35133929fded8c48a51eab8de96387affcdaf08d35b0a3391a6\": rpc error: code = NotFound desc = could not find container \"d511a7f70db8f35133929fded8c48a51eab8de96387affcdaf08d35b0a3391a6\": container with ID starting with d511a7f70db8f35133929fded8c48a51eab8de96387affcdaf08d35b0a3391a6 not found: ID does not exist" Feb 02 11:29:54 crc kubenswrapper[4782]: I0202 11:29:54.179863 4782 scope.go:117] "RemoveContainer" containerID="4e63aad24affc88dba64bdd04e7b1a6cf83bc13f9a1d9ce23eabbb86967b08dc" Feb 02 11:29:54 crc kubenswrapper[4782]: I0202 11:29:54.183892 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e63aad24affc88dba64bdd04e7b1a6cf83bc13f9a1d9ce23eabbb86967b08dc"} err="failed to get container status \"4e63aad24affc88dba64bdd04e7b1a6cf83bc13f9a1d9ce23eabbb86967b08dc\": rpc error: code = NotFound desc = could not find container \"4e63aad24affc88dba64bdd04e7b1a6cf83bc13f9a1d9ce23eabbb86967b08dc\": container with ID starting with 4e63aad24affc88dba64bdd04e7b1a6cf83bc13f9a1d9ce23eabbb86967b08dc not found: ID does not exist" Feb 02 11:29:54 crc kubenswrapper[4782]: W0202 11:29:54.197200 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfdc86717_3e71_440c_a8f4_9cd4480e46d2.slice/crio-e6b5e69590bbaedcae00f638b682f4823b35b2df65d41d2b5f828ed04b36f52d WatchSource:0}: Error finding container e6b5e69590bbaedcae00f638b682f4823b35b2df65d41d2b5f828ed04b36f52d: Status 404 returned error can't find the container with id e6b5e69590bbaedcae00f638b682f4823b35b2df65d41d2b5f828ed04b36f52d Feb 02 11:29:54 crc kubenswrapper[4782]: I0202 11:29:54.198122 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c11a274-b189-4a4e-9a21-1c1d8fcc7f13-config-data\") pod \"glance-default-internal-api-0\" (UID: \"6c11a274-b189-4a4e-9a21-1c1d8fcc7f13\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:29:54 crc kubenswrapper[4782]: I0202 11:29:54.198175 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"6c11a274-b189-4a4e-9a21-1c1d8fcc7f13\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:29:54 crc kubenswrapper[4782]: I0202 11:29:54.198290 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/6c11a274-b189-4a4e-9a21-1c1d8fcc7f13-ceph\") pod \"glance-default-internal-api-0\" (UID: \"6c11a274-b189-4a4e-9a21-1c1d8fcc7f13\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:29:54 crc kubenswrapper[4782]: I0202 11:29:54.198324 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6c11a274-b189-4a4e-9a21-1c1d8fcc7f13-logs\") pod \"glance-default-internal-api-0\" (UID: \"6c11a274-b189-4a4e-9a21-1c1d8fcc7f13\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:29:54 crc kubenswrapper[4782]: I0202 11:29:54.198383 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/6c11a274-b189-4a4e-9a21-1c1d8fcc7f13-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"6c11a274-b189-4a4e-9a21-1c1d8fcc7f13\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:29:54 crc kubenswrapper[4782]: I0202 11:29:54.198411 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6c11a274-b189-4a4e-9a21-1c1d8fcc7f13-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"6c11a274-b189-4a4e-9a21-1c1d8fcc7f13\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:29:54 crc kubenswrapper[4782]: I0202 11:29:54.198436 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6c11a274-b189-4a4e-9a21-1c1d8fcc7f13-scripts\") pod \"glance-default-internal-api-0\" (UID: \"6c11a274-b189-4a4e-9a21-1c1d8fcc7f13\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:29:54 crc kubenswrapper[4782]: I0202 11:29:54.198466 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c11a274-b189-4a4e-9a21-1c1d8fcc7f13-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"6c11a274-b189-4a4e-9a21-1c1d8fcc7f13\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:29:54 crc kubenswrapper[4782]: I0202 11:29:54.198526 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7q5bj\" (UniqueName: \"kubernetes.io/projected/6c11a274-b189-4a4e-9a21-1c1d8fcc7f13-kube-api-access-7q5bj\") pod \"glance-default-internal-api-0\" (UID: \"6c11a274-b189-4a4e-9a21-1c1d8fcc7f13\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:29:54 crc kubenswrapper[4782]: I0202 11:29:54.199208 4782 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"6c11a274-b189-4a4e-9a21-1c1d8fcc7f13\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/glance-default-internal-api-0" Feb 02 11:29:54 crc kubenswrapper[4782]: I0202 11:29:54.206256 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6c11a274-b189-4a4e-9a21-1c1d8fcc7f13-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"6c11a274-b189-4a4e-9a21-1c1d8fcc7f13\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:29:54 crc kubenswrapper[4782]: I0202 11:29:54.207471 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6c11a274-b189-4a4e-9a21-1c1d8fcc7f13-logs\") pod \"glance-default-internal-api-0\" (UID: \"6c11a274-b189-4a4e-9a21-1c1d8fcc7f13\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:29:54 crc kubenswrapper[4782]: I0202 11:29:54.213232 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/6c11a274-b189-4a4e-9a21-1c1d8fcc7f13-ceph\") pod \"glance-default-internal-api-0\" (UID: \"6c11a274-b189-4a4e-9a21-1c1d8fcc7f13\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:29:54 crc kubenswrapper[4782]: I0202 11:29:54.213985 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6c11a274-b189-4a4e-9a21-1c1d8fcc7f13-scripts\") pod \"glance-default-internal-api-0\" (UID: 
\"6c11a274-b189-4a4e-9a21-1c1d8fcc7f13\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:29:54 crc kubenswrapper[4782]: I0202 11:29:54.214032 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c11a274-b189-4a4e-9a21-1c1d8fcc7f13-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"6c11a274-b189-4a4e-9a21-1c1d8fcc7f13\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:29:54 crc kubenswrapper[4782]: I0202 11:29:54.214728 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c11a274-b189-4a4e-9a21-1c1d8fcc7f13-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"6c11a274-b189-4a4e-9a21-1c1d8fcc7f13\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:29:54 crc kubenswrapper[4782]: I0202 11:29:54.228000 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c11a274-b189-4a4e-9a21-1c1d8fcc7f13-config-data\") pod \"glance-default-internal-api-0\" (UID: \"6c11a274-b189-4a4e-9a21-1c1d8fcc7f13\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:29:54 crc kubenswrapper[4782]: I0202 11:29:54.304224 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7q5bj\" (UniqueName: \"kubernetes.io/projected/6c11a274-b189-4a4e-9a21-1c1d8fcc7f13-kube-api-access-7q5bj\") pod \"glance-default-internal-api-0\" (UID: \"6c11a274-b189-4a4e-9a21-1c1d8fcc7f13\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:29:54 crc kubenswrapper[4782]: I0202 11:29:54.350444 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"6c11a274-b189-4a4e-9a21-1c1d8fcc7f13\") " pod="openstack/glance-default-internal-api-0" Feb 02 11:29:54 crc kubenswrapper[4782]: I0202 11:29:54.398206 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 02 11:29:54 crc kubenswrapper[4782]: I0202 11:29:54.854462 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8dc11dda-830a-4b93-b670-e3fabc7b9c28" path="/var/lib/kubelet/pods/8dc11dda-830a-4b93-b670-e3fabc7b9c28/volumes" Feb 02 11:29:54 crc kubenswrapper[4782]: I0202 11:29:54.941274 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"fdc86717-3e71-440c-a8f4-9cd4480e46d2","Type":"ContainerStarted","Data":"e6b5e69590bbaedcae00f638b682f4823b35b2df65d41d2b5f828ed04b36f52d"} Feb 02 11:29:55 crc kubenswrapper[4782]: I0202 11:29:55.318579 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-backup-0" Feb 02 11:29:55 crc kubenswrapper[4782]: I0202 11:29:55.357365 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 02 11:29:55 crc kubenswrapper[4782]: I0202 11:29:55.398597 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-volume-volume1-0" podUID="5d7df751-5d4d-4ce4-83c9-70abd18fc7c7" containerName="cinder-volume" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 11:29:55 crc kubenswrapper[4782]: W0202 11:29:55.420303 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6c11a274_b189_4a4e_9a21_1c1d8fcc7f13.slice/crio-21e6d84cf8f55546a3150c88e51c96ae7761ed82de31a3e91ce5a6cefc478432 WatchSource:0}: Error finding container 21e6d84cf8f55546a3150c88e51c96ae7761ed82de31a3e91ce5a6cefc478432: Status 404 returned error can't find the container with id 21e6d84cf8f55546a3150c88e51c96ae7761ed82de31a3e91ce5a6cefc478432 Feb 02 11:29:55 crc kubenswrapper[4782]: I0202 11:29:55.663902 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-db-sync-p6nkb"] Feb 02 11:29:55 crc kubenswrapper[4782]: I0202 11:29:55.665328 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-sync-p6nkb" Feb 02 11:29:55 crc kubenswrapper[4782]: I0202 11:29:55.668019 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-manila-dockercfg-tzzmn" Feb 02 11:29:55 crc kubenswrapper[4782]: I0202 11:29:55.668057 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-config-data" Feb 02 11:29:55 crc kubenswrapper[4782]: I0202 11:29:55.706997 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-sync-p6nkb"] Feb 02 11:29:55 crc kubenswrapper[4782]: I0202 11:29:55.761220 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f45fc51f-4efe-4cbf-9539-d858ac3c2e73-combined-ca-bundle\") pod \"manila-db-sync-p6nkb\" (UID: \"f45fc51f-4efe-4cbf-9539-d858ac3c2e73\") " pod="openstack/manila-db-sync-p6nkb" Feb 02 11:29:55 crc kubenswrapper[4782]: I0202 11:29:55.761321 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4s4n\" (UniqueName: \"kubernetes.io/projected/f45fc51f-4efe-4cbf-9539-d858ac3c2e73-kube-api-access-w4s4n\") pod \"manila-db-sync-p6nkb\" (UID: \"f45fc51f-4efe-4cbf-9539-d858ac3c2e73\") " pod="openstack/manila-db-sync-p6nkb" Feb 02 11:29:55 crc kubenswrapper[4782]: I0202 11:29:55.761358 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f45fc51f-4efe-4cbf-9539-d858ac3c2e73-config-data\") pod \"manila-db-sync-p6nkb\" (UID: \"f45fc51f-4efe-4cbf-9539-d858ac3c2e73\") " pod="openstack/manila-db-sync-p6nkb" Feb 02 11:29:55 crc kubenswrapper[4782]: I0202 11:29:55.761430 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/f45fc51f-4efe-4cbf-9539-d858ac3c2e73-job-config-data\") pod \"manila-db-sync-p6nkb\" (UID: \"f45fc51f-4efe-4cbf-9539-d858ac3c2e73\") " pod="openstack/manila-db-sync-p6nkb" Feb 02 11:29:55 crc kubenswrapper[4782]: I0202 11:29:55.866215 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/f45fc51f-4efe-4cbf-9539-d858ac3c2e73-job-config-data\") pod \"manila-db-sync-p6nkb\" (UID: \"f45fc51f-4efe-4cbf-9539-d858ac3c2e73\") " pod="openstack/manila-db-sync-p6nkb" Feb 02 11:29:55 crc kubenswrapper[4782]: I0202 11:29:55.866753 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f45fc51f-4efe-4cbf-9539-d858ac3c2e73-combined-ca-bundle\") pod \"manila-db-sync-p6nkb\" (UID: \"f45fc51f-4efe-4cbf-9539-d858ac3c2e73\") " pod="openstack/manila-db-sync-p6nkb" Feb 02 11:29:55 crc kubenswrapper[4782]: I0202 11:29:55.866789 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w4s4n\" (UniqueName: \"kubernetes.io/projected/f45fc51f-4efe-4cbf-9539-d858ac3c2e73-kube-api-access-w4s4n\") pod \"manila-db-sync-p6nkb\" (UID: \"f45fc51f-4efe-4cbf-9539-d858ac3c2e73\") " pod="openstack/manila-db-sync-p6nkb" Feb 02 11:29:55 crc kubenswrapper[4782]: I0202 11:29:55.866822 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f45fc51f-4efe-4cbf-9539-d858ac3c2e73-config-data\") pod \"manila-db-sync-p6nkb\" (UID: 
\"f45fc51f-4efe-4cbf-9539-d858ac3c2e73\") " pod="openstack/manila-db-sync-p6nkb" Feb 02 11:29:55 crc kubenswrapper[4782]: I0202 11:29:55.873267 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f45fc51f-4efe-4cbf-9539-d858ac3c2e73-config-data\") pod \"manila-db-sync-p6nkb\" (UID: \"f45fc51f-4efe-4cbf-9539-d858ac3c2e73\") " pod="openstack/manila-db-sync-p6nkb" Feb 02 11:29:55 crc kubenswrapper[4782]: I0202 11:29:55.880082 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f45fc51f-4efe-4cbf-9539-d858ac3c2e73-combined-ca-bundle\") pod \"manila-db-sync-p6nkb\" (UID: \"f45fc51f-4efe-4cbf-9539-d858ac3c2e73\") " pod="openstack/manila-db-sync-p6nkb" Feb 02 11:29:55 crc kubenswrapper[4782]: I0202 11:29:55.887726 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/f45fc51f-4efe-4cbf-9539-d858ac3c2e73-job-config-data\") pod \"manila-db-sync-p6nkb\" (UID: \"f45fc51f-4efe-4cbf-9539-d858ac3c2e73\") " pod="openstack/manila-db-sync-p6nkb" Feb 02 11:29:55 crc kubenswrapper[4782]: I0202 11:29:55.916708 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4s4n\" (UniqueName: \"kubernetes.io/projected/f45fc51f-4efe-4cbf-9539-d858ac3c2e73-kube-api-access-w4s4n\") pod \"manila-db-sync-p6nkb\" (UID: \"f45fc51f-4efe-4cbf-9539-d858ac3c2e73\") " pod="openstack/manila-db-sync-p6nkb" Feb 02 11:29:55 crc kubenswrapper[4782]: I0202 11:29:55.956530 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"fdc86717-3e71-440c-a8f4-9cd4480e46d2","Type":"ContainerStarted","Data":"faa9ae2ad06f729c0fe48d5b0bbae14723b4ed65fa8e90818ea17da240b26437"} Feb 02 11:29:55 crc kubenswrapper[4782]: I0202 11:29:55.961161 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6c11a274-b189-4a4e-9a21-1c1d8fcc7f13","Type":"ContainerStarted","Data":"21e6d84cf8f55546a3150c88e51c96ae7761ed82de31a3e91ce5a6cefc478432"} Feb 02 11:29:56 crc kubenswrapper[4782]: I0202 11:29:56.014030 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-sync-p6nkb" Feb 02 11:29:56 crc kubenswrapper[4782]: I0202 11:29:56.863656 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-sync-p6nkb"] Feb 02 11:29:56 crc kubenswrapper[4782]: I0202 11:29:56.976189 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6c11a274-b189-4a4e-9a21-1c1d8fcc7f13","Type":"ContainerStarted","Data":"be7673cfa97f9a174c60978e9bca29709d4620bcb9d36df6d57fadfa99e03fad"} Feb 02 11:29:56 crc kubenswrapper[4782]: I0202 11:29:56.985326 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-p6nkb" event={"ID":"f45fc51f-4efe-4cbf-9539-d858ac3c2e73","Type":"ContainerStarted","Data":"3bc352515a4e7faf8ad0720cf509467a309a1738f43d6a8ab5058f36de1f92ac"} Feb 02 11:29:58 crc kubenswrapper[4782]: I0202 11:29:57.998273 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"fdc86717-3e71-440c-a8f4-9cd4480e46d2","Type":"ContainerStarted","Data":"7dcfb599143095b271280ff259a627804e6af6bfdc17d343edf8986854ff4661"} Feb 02 11:29:58 crc kubenswrapper[4782]: I0202 11:29:58.001257 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6c11a274-b189-4a4e-9a21-1c1d8fcc7f13","Type":"ContainerStarted","Data":"a38b6d76c352d48fa771945ab2479e55d78e15bb0853d036d00c722e84363632"} Feb 02 11:29:58 crc kubenswrapper[4782]: I0202 11:29:58.021518 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=6.021498914 podStartE2EDuration="6.021498914s" podCreationTimestamp="2026-02-02 11:29:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 11:29:58.020053723 +0000 UTC m=+3077.904246469" watchObservedRunningTime="2026-02-02 11:29:58.021498914 +0000 UTC m=+3077.905691650" Feb 02 11:29:58 crc kubenswrapper[4782]: I0202 11:29:58.116119 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.115746622 podStartE2EDuration="4.115746622s" podCreationTimestamp="2026-02-02 11:29:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 11:29:58.058266931 +0000 UTC m=+3077.942459667" watchObservedRunningTime="2026-02-02 11:29:58.115746622 +0000 UTC m=+3077.999939338" Feb 02 11:29:59 crc kubenswrapper[4782]: I0202 11:29:59.839165 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-volume-volume1-0" Feb 02 11:30:00 crc kubenswrapper[4782]: I0202 11:30:00.151286 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500530-l7kd7"] Feb 02 11:30:00 crc kubenswrapper[4782]: I0202 11:30:00.152589 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500530-l7kd7" Feb 02 11:30:00 crc kubenswrapper[4782]: I0202 11:30:00.168706 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 02 11:30:00 crc kubenswrapper[4782]: I0202 11:30:00.168967 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 02 11:30:00 crc kubenswrapper[4782]: I0202 11:30:00.189411 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500530-l7kd7"] Feb 02 11:30:00 crc kubenswrapper[4782]: I0202 11:30:00.313567 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qgqc9\" (UniqueName: \"kubernetes.io/projected/44a78b5b-c712-4b4a-a035-652aea7086d0-kube-api-access-qgqc9\") pod \"collect-profiles-29500530-l7kd7\" (UID: \"44a78b5b-c712-4b4a-a035-652aea7086d0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500530-l7kd7" Feb 02 11:30:00 crc kubenswrapper[4782]: I0202 11:30:00.313742 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/44a78b5b-c712-4b4a-a035-652aea7086d0-config-volume\") pod \"collect-profiles-29500530-l7kd7\" (UID: \"44a78b5b-c712-4b4a-a035-652aea7086d0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500530-l7kd7" Feb 02 11:30:00 crc kubenswrapper[4782]: I0202 11:30:00.313927 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/44a78b5b-c712-4b4a-a035-652aea7086d0-secret-volume\") pod \"collect-profiles-29500530-l7kd7\" (UID: \"44a78b5b-c712-4b4a-a035-652aea7086d0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500530-l7kd7" Feb 02 11:30:00 crc kubenswrapper[4782]: I0202 11:30:00.416154 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/44a78b5b-c712-4b4a-a035-652aea7086d0-secret-volume\") pod \"collect-profiles-29500530-l7kd7\" (UID: \"44a78b5b-c712-4b4a-a035-652aea7086d0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500530-l7kd7" Feb 02 11:30:00 crc kubenswrapper[4782]: I0202 11:30:00.416340 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qgqc9\" (UniqueName: \"kubernetes.io/projected/44a78b5b-c712-4b4a-a035-652aea7086d0-kube-api-access-qgqc9\") pod \"collect-profiles-29500530-l7kd7\" (UID: \"44a78b5b-c712-4b4a-a035-652aea7086d0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500530-l7kd7" Feb 02 11:30:00 crc kubenswrapper[4782]: I0202 11:30:00.416425 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/44a78b5b-c712-4b4a-a035-652aea7086d0-config-volume\") pod \"collect-profiles-29500530-l7kd7\" (UID: \"44a78b5b-c712-4b4a-a035-652aea7086d0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500530-l7kd7" Feb 02 11:30:00 crc kubenswrapper[4782]: I0202 11:30:00.417698 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/44a78b5b-c712-4b4a-a035-652aea7086d0-config-volume\") pod 
\"collect-profiles-29500530-l7kd7\" (UID: \"44a78b5b-c712-4b4a-a035-652aea7086d0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500530-l7kd7" Feb 02 11:30:00 crc kubenswrapper[4782]: I0202 11:30:00.426391 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/44a78b5b-c712-4b4a-a035-652aea7086d0-secret-volume\") pod \"collect-profiles-29500530-l7kd7\" (UID: \"44a78b5b-c712-4b4a-a035-652aea7086d0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500530-l7kd7" Feb 02 11:30:00 crc kubenswrapper[4782]: I0202 11:30:00.442838 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qgqc9\" (UniqueName: \"kubernetes.io/projected/44a78b5b-c712-4b4a-a035-652aea7086d0-kube-api-access-qgqc9\") pod \"collect-profiles-29500530-l7kd7\" (UID: \"44a78b5b-c712-4b4a-a035-652aea7086d0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500530-l7kd7" Feb 02 11:30:00 crc kubenswrapper[4782]: I0202 11:30:00.510804 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500530-l7kd7" Feb 02 11:30:03 crc kubenswrapper[4782]: I0202 11:30:03.077895 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 02 11:30:03 crc kubenswrapper[4782]: I0202 11:30:03.078513 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 02 11:30:03 crc kubenswrapper[4782]: I0202 11:30:03.115061 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 02 11:30:03 crc kubenswrapper[4782]: I0202 11:30:03.123526 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 02 11:30:04 crc kubenswrapper[4782]: I0202 11:30:04.070570 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 02 11:30:04 crc kubenswrapper[4782]: I0202 11:30:04.070701 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 02 11:30:04 crc kubenswrapper[4782]: I0202 11:30:04.398822 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 02 11:30:04 crc kubenswrapper[4782]: I0202 11:30:04.399869 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 02 11:30:04 crc kubenswrapper[4782]: I0202 11:30:04.435023 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 02 11:30:04 crc kubenswrapper[4782]: I0202 11:30:04.469550 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 02 11:30:05 crc kubenswrapper[4782]: I0202 11:30:05.081872 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 02 11:30:05 crc kubenswrapper[4782]: I0202 11:30:05.081912 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 02 11:30:07 crc kubenswrapper[4782]: I0202 11:30:07.109452 4782 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 02 11:30:07 
crc kubenswrapper[4782]: I0202 11:30:07.109949 4782 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 02 11:30:08 crc kubenswrapper[4782]: I0202 11:30:08.983686 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 02 11:30:08 crc kubenswrapper[4782]: I0202 11:30:08.984227 4782 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 02 11:30:09 crc kubenswrapper[4782]: I0202 11:30:09.004010 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 02 11:30:09 crc kubenswrapper[4782]: I0202 11:30:09.004158 4782 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 02 11:30:09 crc kubenswrapper[4782]: I0202 11:30:09.061532 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 02 11:30:09 crc kubenswrapper[4782]: I0202 11:30:09.102394 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 02 11:30:10 crc kubenswrapper[4782]: E0202 11:30:10.953543 4782 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-manila-api:current-podified" Feb 02 11:30:10 crc kubenswrapper[4782]: E0202 11:30:10.954160 4782 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manila-db-sync,Image:quay.io/podified-antelope-centos9/openstack-manila-api:current-podified,Command:[/bin/bash],Args:[-c sleep 0 && /usr/bin/manila-manage --config-dir /etc/manila/manila.conf.d db sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:job-config-data,ReadOnly:true,MountPath:/etc/manila/manila.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w4s4n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42429,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42429,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-db-sync-p6nkb_openstack(f45fc51f-4efe-4cbf-9539-d858ac3c2e73): ErrImagePull: 
rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 02 11:30:10 crc kubenswrapper[4782]: E0202 11:30:10.955322 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/manila-db-sync-p6nkb" podUID="f45fc51f-4efe-4cbf-9539-d858ac3c2e73" Feb 02 11:30:11 crc kubenswrapper[4782]: E0202 11:30:11.179325 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-manila-api:current-podified\\\"\"" pod="openstack/manila-db-sync-p6nkb" podUID="f45fc51f-4efe-4cbf-9539-d858ac3c2e73" Feb 02 11:30:11 crc kubenswrapper[4782]: I0202 11:30:11.934322 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500530-l7kd7"] Feb 02 11:30:11 crc kubenswrapper[4782]: W0202 11:30:11.938794 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod44a78b5b_c712_4b4a_a035_652aea7086d0.slice/crio-a912ad1ef8dfddd09a0ade3d9ef9c3c34a666e6fdb39525d87f08b4c267b434b WatchSource:0}: Error finding container a912ad1ef8dfddd09a0ade3d9ef9c3c34a666e6fdb39525d87f08b4c267b434b: Status 404 returned error can't find the container with id a912ad1ef8dfddd09a0ade3d9ef9c3c34a666e6fdb39525d87f08b4c267b434b Feb 02 11:30:12 crc kubenswrapper[4782]: I0202 11:30:12.197868 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500530-l7kd7" event={"ID":"44a78b5b-c712-4b4a-a035-652aea7086d0","Type":"ContainerStarted","Data":"a912ad1ef8dfddd09a0ade3d9ef9c3c34a666e6fdb39525d87f08b4c267b434b"} Feb 02 11:30:12 crc kubenswrapper[4782]: I0202 11:30:12.201092 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5545895985-nbz88" event={"ID":"00eb57a6-b941-443f-9b8a-644c0389b562","Type":"ContainerStarted","Data":"c6c330e3edacbb0c6580054ae3c4de6722a7bdd108981d1f47eb37aef9ebad0d"} Feb 02 11:30:12 crc kubenswrapper[4782]: I0202 11:30:12.201231 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5545895985-nbz88" event={"ID":"00eb57a6-b941-443f-9b8a-644c0389b562","Type":"ContainerStarted","Data":"618faf4ceb8379e0f7bff9b59632f78463103656504f1a973f8dfd513683614b"} Feb 02 11:30:12 crc kubenswrapper[4782]: I0202 11:30:12.201564 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5545895985-nbz88" podUID="00eb57a6-b941-443f-9b8a-644c0389b562" containerName="horizon-log" containerID="cri-o://618faf4ceb8379e0f7bff9b59632f78463103656504f1a973f8dfd513683614b" gracePeriod=30 Feb 02 11:30:12 crc kubenswrapper[4782]: I0202 11:30:12.201700 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5545895985-nbz88" podUID="00eb57a6-b941-443f-9b8a-644c0389b562" containerName="horizon" containerID="cri-o://c6c330e3edacbb0c6580054ae3c4de6722a7bdd108981d1f47eb37aef9ebad0d" gracePeriod=30 Feb 02 11:30:12 crc kubenswrapper[4782]: I0202 11:30:12.222278 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5665456548-9x6qh" event={"ID":"306e30f3-8fe7-427e-b8ff-309a561dda88","Type":"ContainerStarted","Data":"d67252276d8993c4ef2e41eb9882c821d54823662c482a91c5ee6a5d0ca0b08f"} Feb 
Feb 02 11:30:12 crc kubenswrapper[4782]: I0202 11:30:12.222331 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5665456548-9x6qh" event={"ID":"306e30f3-8fe7-427e-b8ff-309a561dda88","Type":"ContainerStarted","Data":"d7e359bc78356df469d48e5750e96b222300fd8fead2a75722bdc9db69969013"}
Feb 02 11:30:12 crc kubenswrapper[4782]: I0202 11:30:12.240247 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5cc68dfb67-6l9rd" event={"ID":"db3caff6-55ef-4b9f-9d45-15fc834e5974","Type":"ContainerStarted","Data":"9f2bf757bc39fc655216c4242c0a40a0327977b393c1fa025b532ded72bb1141"}
Feb 02 11:30:12 crc kubenswrapper[4782]: I0202 11:30:12.240289 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5cc68dfb67-6l9rd" event={"ID":"db3caff6-55ef-4b9f-9d45-15fc834e5974","Type":"ContainerStarted","Data":"c05a499f13ecee94245644953871b44805a3e34c1de64b64494b85574ed777d5"}
Feb 02 11:30:12 crc kubenswrapper[4782]: I0202 11:30:12.242117 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5cc68dfb67-6l9rd" podUID="db3caff6-55ef-4b9f-9d45-15fc834e5974" containerName="horizon-log" containerID="cri-o://c05a499f13ecee94245644953871b44805a3e34c1de64b64494b85574ed777d5" gracePeriod=30
Feb 02 11:30:12 crc kubenswrapper[4782]: I0202 11:30:12.242705 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5cc68dfb67-6l9rd" podUID="db3caff6-55ef-4b9f-9d45-15fc834e5974" containerName="horizon" containerID="cri-o://9f2bf757bc39fc655216c4242c0a40a0327977b393c1fa025b532ded72bb1141" gracePeriod=30
Feb 02 11:30:12 crc kubenswrapper[4782]: I0202 11:30:12.245947 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-5545895985-nbz88" podStartSLOduration=3.543111719 podStartE2EDuration="27.245923283s" podCreationTimestamp="2026-02-02 11:29:45 +0000 UTC" firstStartedPulling="2026-02-02 11:29:47.306374155 +0000 UTC m=+3067.190566871" lastFinishedPulling="2026-02-02 11:30:11.009185729 +0000 UTC m=+3090.893378435" observedRunningTime="2026-02-02 11:30:12.231505279 +0000 UTC m=+3092.115698005" watchObservedRunningTime="2026-02-02 11:30:12.245923283 +0000 UTC m=+3092.130116009"
Feb 02 11:30:12 crc kubenswrapper[4782]: I0202 11:30:12.267855 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-78d997b864-7sqws" event={"ID":"62cd5c24-315a-45c1-bca8-08696f1080cd","Type":"ContainerStarted","Data":"09295effad802ea8438e358847ecb01f49091fb80c4d58e17763b7d006278a11"}
Feb 02 11:30:12 crc kubenswrapper[4782]: I0202 11:30:12.267907 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-78d997b864-7sqws" event={"ID":"62cd5c24-315a-45c1-bca8-08696f1080cd","Type":"ContainerStarted","Data":"d69e181d159dbc6c08cd056aa1bf1d0f3003f078165449a5f856df54eed28770"}
Feb 02 11:30:12 crc kubenswrapper[4782]: I0202 11:30:12.289014 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-5665456548-9x6qh" podStartSLOduration=3.362094385 podStartE2EDuration="24.288998271s" podCreationTimestamp="2026-02-02 11:29:48 +0000 UTC" firstStartedPulling="2026-02-02 11:29:50.082720325 +0000 UTC m=+3069.966913031" lastFinishedPulling="2026-02-02 11:30:11.009624201 +0000 UTC m=+3090.893816917" observedRunningTime="2026-02-02 11:30:12.27054226 +0000 UTC m=+3092.154734976" watchObservedRunningTime="2026-02-02 11:30:12.288998271 +0000 UTC m=+3092.173190987"
Feb 02 11:30:12 crc kubenswrapper[4782]: I0202 11:30:12.309271 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-78d997b864-7sqws" podStartSLOduration=3.407990964 podStartE2EDuration="24.309252893s" podCreationTimestamp="2026-02-02 11:29:48 +0000 UTC" firstStartedPulling="2026-02-02 11:29:50.10792497 +0000 UTC m=+3069.992117696" lastFinishedPulling="2026-02-02 11:30:11.009186909 +0000 UTC m=+3090.893379625" observedRunningTime="2026-02-02 11:30:12.296700012 +0000 UTC m=+3092.180892728" watchObservedRunningTime="2026-02-02 11:30:12.309252893 +0000 UTC m=+3092.193445599"
Feb 02 11:30:12 crc kubenswrapper[4782]: I0202 11:30:12.328990 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-5cc68dfb67-6l9rd" podStartSLOduration=3.380136326 podStartE2EDuration="27.328972979s" podCreationTimestamp="2026-02-02 11:29:45 +0000 UTC" firstStartedPulling="2026-02-02 11:29:47.060454349 +0000 UTC m=+3066.944647065" lastFinishedPulling="2026-02-02 11:30:11.009291002 +0000 UTC m=+3090.893483718" observedRunningTime="2026-02-02 11:30:12.3195916 +0000 UTC m=+3092.203784316" watchObservedRunningTime="2026-02-02 11:30:12.328972979 +0000 UTC m=+3092.213165695"
Feb 02 11:30:13 crc kubenswrapper[4782]: I0202 11:30:13.278698 4782 generic.go:334] "Generic (PLEG): container finished" podID="44a78b5b-c712-4b4a-a035-652aea7086d0" containerID="ef490dddc809af1eb03364d55c17259d24e92c43f23b3151e110c1d9aa74e0a4" exitCode=0
Feb 02 11:30:13 crc kubenswrapper[4782]: I0202 11:30:13.279187 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500530-l7kd7" event={"ID":"44a78b5b-c712-4b4a-a035-652aea7086d0","Type":"ContainerDied","Data":"ef490dddc809af1eb03364d55c17259d24e92c43f23b3151e110c1d9aa74e0a4"}
Feb 02 11:30:14 crc kubenswrapper[4782]: I0202 11:30:14.763884 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500530-l7kd7"
Feb 02 11:30:14 crc kubenswrapper[4782]: I0202 11:30:14.861895 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/44a78b5b-c712-4b4a-a035-652aea7086d0-config-volume\") pod \"44a78b5b-c712-4b4a-a035-652aea7086d0\" (UID: \"44a78b5b-c712-4b4a-a035-652aea7086d0\") "
Feb 02 11:30:14 crc kubenswrapper[4782]: I0202 11:30:14.861983 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/44a78b5b-c712-4b4a-a035-652aea7086d0-secret-volume\") pod \"44a78b5b-c712-4b4a-a035-652aea7086d0\" (UID: \"44a78b5b-c712-4b4a-a035-652aea7086d0\") "
Feb 02 11:30:14 crc kubenswrapper[4782]: I0202 11:30:14.862133 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qgqc9\" (UniqueName: \"kubernetes.io/projected/44a78b5b-c712-4b4a-a035-652aea7086d0-kube-api-access-qgqc9\") pod \"44a78b5b-c712-4b4a-a035-652aea7086d0\" (UID: \"44a78b5b-c712-4b4a-a035-652aea7086d0\") "
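Editor's note: the pod_startup_latency_tracker entries encode a consistent arithmetic: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is the E2E duration minus the time spent pulling images (lastFinishedPulling minus firstStartedPulling). A small check (not from the log) against the horizon-5545895985-nbz88 values above:

# seconds-of-day arithmetic on the timestamps logged for horizon-5545895985-nbz88
e2e = 27.245923283  # watchObservedRunningTime 11:30:12.245923283 - creation 11:29:45
pull_end = 11 * 3600 + 30 * 60 + 11.009185729    # lastFinishedPulling 11:30:11.009185729
pull_start = 11 * 3600 + 29 * 60 + 47.306374155  # firstStartedPulling 11:29:47.306374155
pull = pull_end - pull_start
print(round(pull, 9))        # ~23.702811574 s spent pulling the image
print(round(e2e - pull, 9))  # ~3.543111709 s, matching podStartSLOduration=3.543111719 up to ~10 ns of clock-reading skew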
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:30:14 crc kubenswrapper[4782]: I0202 11:30:14.887884 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44a78b5b-c712-4b4a-a035-652aea7086d0-kube-api-access-qgqc9" (OuterVolumeSpecName: "kube-api-access-qgqc9") pod "44a78b5b-c712-4b4a-a035-652aea7086d0" (UID: "44a78b5b-c712-4b4a-a035-652aea7086d0"). InnerVolumeSpecName "kube-api-access-qgqc9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:30:14 crc kubenswrapper[4782]: I0202 11:30:14.888722 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/44a78b5b-c712-4b4a-a035-652aea7086d0-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "44a78b5b-c712-4b4a-a035-652aea7086d0" (UID: "44a78b5b-c712-4b4a-a035-652aea7086d0"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:30:14 crc kubenswrapper[4782]: I0202 11:30:14.964694 4782 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/44a78b5b-c712-4b4a-a035-652aea7086d0-config-volume\") on node \"crc\" DevicePath \"\"" Feb 02 11:30:14 crc kubenswrapper[4782]: I0202 11:30:14.964747 4782 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/44a78b5b-c712-4b4a-a035-652aea7086d0-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 02 11:30:14 crc kubenswrapper[4782]: I0202 11:30:14.964762 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qgqc9\" (UniqueName: \"kubernetes.io/projected/44a78b5b-c712-4b4a-a035-652aea7086d0-kube-api-access-qgqc9\") on node \"crc\" DevicePath \"\"" Feb 02 11:30:15 crc kubenswrapper[4782]: I0202 11:30:15.299982 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500530-l7kd7" event={"ID":"44a78b5b-c712-4b4a-a035-652aea7086d0","Type":"ContainerDied","Data":"a912ad1ef8dfddd09a0ade3d9ef9c3c34a666e6fdb39525d87f08b4c267b434b"} Feb 02 11:30:15 crc kubenswrapper[4782]: I0202 11:30:15.300030 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a912ad1ef8dfddd09a0ade3d9ef9c3c34a666e6fdb39525d87f08b4c267b434b" Feb 02 11:30:15 crc kubenswrapper[4782]: I0202 11:30:15.300036 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500530-l7kd7" Feb 02 11:30:15 crc kubenswrapper[4782]: I0202 11:30:15.881070 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500485-l8mbc"] Feb 02 11:30:15 crc kubenswrapper[4782]: I0202 11:30:15.894000 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500485-l8mbc"] Feb 02 11:30:16 crc kubenswrapper[4782]: I0202 11:30:16.001802 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-5cc68dfb67-6l9rd" Feb 02 11:30:16 crc kubenswrapper[4782]: I0202 11:30:16.017722 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-5545895985-nbz88" Feb 02 11:30:16 crc kubenswrapper[4782]: I0202 11:30:16.838147 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6fd9b99a-c3f7-4153-b2ac-769ca0ba88aa" path="/var/lib/kubelet/pods/6fd9b99a-c3f7-4153-b2ac-769ca0ba88aa/volumes" Feb 02 11:30:18 crc kubenswrapper[4782]: I0202 11:30:18.626924 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-78d997b864-7sqws" Feb 02 11:30:18 crc kubenswrapper[4782]: I0202 11:30:18.628325 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-78d997b864-7sqws" Feb 02 11:30:18 crc kubenswrapper[4782]: I0202 11:30:18.945466 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-5665456548-9x6qh" Feb 02 11:30:18 crc kubenswrapper[4782]: I0202 11:30:18.945524 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-5665456548-9x6qh" Feb 02 11:30:22 crc kubenswrapper[4782]: I0202 11:30:22.951124 4782 patch_prober.go:28] interesting pod/machine-config-daemon-bhdgk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 11:30:22 crc kubenswrapper[4782]: I0202 11:30:22.952612 4782 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 11:30:24 crc kubenswrapper[4782]: I0202 11:30:24.825426 4782 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 02 11:30:26 crc kubenswrapper[4782]: I0202 11:30:26.500818 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-p6nkb" event={"ID":"f45fc51f-4efe-4cbf-9539-d858ac3c2e73","Type":"ContainerStarted","Data":"c033e06590ed48930855476f355d38330fd5900d1d6d3cdf6a14188571b721f2"} Feb 02 11:30:26 crc kubenswrapper[4782]: I0202 11:30:26.527824 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-db-sync-p6nkb" podStartSLOduration=3.020594818 podStartE2EDuration="31.527805033s" podCreationTimestamp="2026-02-02 11:29:55 +0000 UTC" firstStartedPulling="2026-02-02 11:29:56.885933557 +0000 UTC m=+3076.770126273" lastFinishedPulling="2026-02-02 11:30:25.393143772 +0000 UTC m=+3105.277336488" observedRunningTime="2026-02-02 11:30:26.519431953 +0000 UTC m=+3106.403624699" 
Feb 02 11:30:26 crc kubenswrapper[4782]: I0202 11:30:26.527824 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-db-sync-p6nkb" podStartSLOduration=3.020594818 podStartE2EDuration="31.527805033s" podCreationTimestamp="2026-02-02 11:29:55 +0000 UTC" firstStartedPulling="2026-02-02 11:29:56.885933557 +0000 UTC m=+3076.770126273" lastFinishedPulling="2026-02-02 11:30:25.393143772 +0000 UTC m=+3105.277336488" observedRunningTime="2026-02-02 11:30:26.519431953 +0000 UTC m=+3106.403624699" watchObservedRunningTime="2026-02-02 11:30:26.527805033 +0000 UTC m=+3106.411997749"
Feb 02 11:30:28 crc kubenswrapper[4782]: I0202 11:30:28.629841 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-78d997b864-7sqws" podUID="62cd5c24-315a-45c1-bca8-08696f1080cd" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.242:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.242:8443: connect: connection refused"
Feb 02 11:30:28 crc kubenswrapper[4782]: I0202 11:30:28.947801 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5665456548-9x6qh" podUID="306e30f3-8fe7-427e-b8ff-309a561dda88" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.243:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.243:8443: connect: connection refused"
Feb 02 11:30:38 crc kubenswrapper[4782]: I0202 11:30:38.629016 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-78d997b864-7sqws" podUID="62cd5c24-315a-45c1-bca8-08696f1080cd" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.242:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.242:8443: connect: connection refused"
Feb 02 11:30:38 crc kubenswrapper[4782]: I0202 11:30:38.946462 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5665456548-9x6qh" podUID="306e30f3-8fe7-427e-b8ff-309a561dda88" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.243:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.243:8443: connect: connection refused"
Feb 02 11:30:42 crc kubenswrapper[4782]: I0202 11:30:42.680186 4782 generic.go:334] "Generic (PLEG): container finished" podID="f45fc51f-4efe-4cbf-9539-d858ac3c2e73" containerID="c033e06590ed48930855476f355d38330fd5900d1d6d3cdf6a14188571b721f2" exitCode=0
Feb 02 11:30:42 crc kubenswrapper[4782]: I0202 11:30:42.680812 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-p6nkb" event={"ID":"f45fc51f-4efe-4cbf-9539-d858ac3c2e73","Type":"ContainerDied","Data":"c033e06590ed48930855476f355d38330fd5900d1d6d3cdf6a14188571b721f2"}
Feb 02 11:30:42 crc kubenswrapper[4782]: I0202 11:30:42.699298 4782 generic.go:334] "Generic (PLEG): container finished" podID="00eb57a6-b941-443f-9b8a-644c0389b562" containerID="c6c330e3edacbb0c6580054ae3c4de6722a7bdd108981d1f47eb37aef9ebad0d" exitCode=137
Feb 02 11:30:42 crc kubenswrapper[4782]: I0202 11:30:42.699331 4782 generic.go:334] "Generic (PLEG): container finished" podID="00eb57a6-b941-443f-9b8a-644c0389b562" containerID="618faf4ceb8379e0f7bff9b59632f78463103656504f1a973f8dfd513683614b" exitCode=137
Feb 02 11:30:42 crc kubenswrapper[4782]: I0202 11:30:42.699343 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5545895985-nbz88" event={"ID":"00eb57a6-b941-443f-9b8a-644c0389b562","Type":"ContainerDied","Data":"c6c330e3edacbb0c6580054ae3c4de6722a7bdd108981d1f47eb37aef9ebad0d"}
Feb 02 11:30:42 crc kubenswrapper[4782]: I0202 11:30:42.699395 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5545895985-nbz88" event={"ID":"00eb57a6-b941-443f-9b8a-644c0389b562","Type":"ContainerDied","Data":"618faf4ceb8379e0f7bff9b59632f78463103656504f1a973f8dfd513683614b"}
Feb 02 11:30:42 crc kubenswrapper[4782]: I0202 11:30:42.705996 4782 generic.go:334] "Generic (PLEG): container finished" podID="db3caff6-55ef-4b9f-9d45-15fc834e5974" containerID="9f2bf757bc39fc655216c4242c0a40a0327977b393c1fa025b532ded72bb1141" exitCode=137
Feb 02 11:30:42 crc kubenswrapper[4782]: I0202 11:30:42.706031 4782 generic.go:334] "Generic (PLEG): container finished" podID="db3caff6-55ef-4b9f-9d45-15fc834e5974" containerID="c05a499f13ecee94245644953871b44805a3e34c1de64b64494b85574ed777d5" exitCode=137
Feb 02 11:30:42 crc kubenswrapper[4782]: I0202 11:30:42.706065 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5cc68dfb67-6l9rd" event={"ID":"db3caff6-55ef-4b9f-9d45-15fc834e5974","Type":"ContainerDied","Data":"9f2bf757bc39fc655216c4242c0a40a0327977b393c1fa025b532ded72bb1141"}
Feb 02 11:30:42 crc kubenswrapper[4782]: I0202 11:30:42.706091 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5cc68dfb67-6l9rd" event={"ID":"db3caff6-55ef-4b9f-9d45-15fc834e5974","Type":"ContainerDied","Data":"c05a499f13ecee94245644953871b44805a3e34c1de64b64494b85574ed777d5"}
Feb 02 11:30:42 crc kubenswrapper[4782]: I0202 11:30:42.845708 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5cc68dfb67-6l9rd"
Feb 02 11:30:42 crc kubenswrapper[4782]: I0202 11:30:42.944942 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/db3caff6-55ef-4b9f-9d45-15fc834e5974-config-data\") pod \"db3caff6-55ef-4b9f-9d45-15fc834e5974\" (UID: \"db3caff6-55ef-4b9f-9d45-15fc834e5974\") "
Feb 02 11:30:42 crc kubenswrapper[4782]: I0202 11:30:42.945034 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/db3caff6-55ef-4b9f-9d45-15fc834e5974-scripts\") pod \"db3caff6-55ef-4b9f-9d45-15fc834e5974\" (UID: \"db3caff6-55ef-4b9f-9d45-15fc834e5974\") "
Feb 02 11:30:42 crc kubenswrapper[4782]: I0202 11:30:42.945082 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/db3caff6-55ef-4b9f-9d45-15fc834e5974-horizon-secret-key\") pod \"db3caff6-55ef-4b9f-9d45-15fc834e5974\" (UID: \"db3caff6-55ef-4b9f-9d45-15fc834e5974\") "
Feb 02 11:30:42 crc kubenswrapper[4782]: I0202 11:30:42.945942 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mxs5g\" (UniqueName: \"kubernetes.io/projected/db3caff6-55ef-4b9f-9d45-15fc834e5974-kube-api-access-mxs5g\") pod \"db3caff6-55ef-4b9f-9d45-15fc834e5974\" (UID: \"db3caff6-55ef-4b9f-9d45-15fc834e5974\") "
Feb 02 11:30:42 crc kubenswrapper[4782]: I0202 11:30:42.945992 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/db3caff6-55ef-4b9f-9d45-15fc834e5974-logs\") pod \"db3caff6-55ef-4b9f-9d45-15fc834e5974\" (UID: \"db3caff6-55ef-4b9f-9d45-15fc834e5974\") "
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:30:42 crc kubenswrapper[4782]: I0202 11:30:42.951149 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db3caff6-55ef-4b9f-9d45-15fc834e5974-kube-api-access-mxs5g" (OuterVolumeSpecName: "kube-api-access-mxs5g") pod "db3caff6-55ef-4b9f-9d45-15fc834e5974" (UID: "db3caff6-55ef-4b9f-9d45-15fc834e5974"). InnerVolumeSpecName "kube-api-access-mxs5g". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:30:42 crc kubenswrapper[4782]: I0202 11:30:42.952794 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db3caff6-55ef-4b9f-9d45-15fc834e5974-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "db3caff6-55ef-4b9f-9d45-15fc834e5974" (UID: "db3caff6-55ef-4b9f-9d45-15fc834e5974"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:30:42 crc kubenswrapper[4782]: I0202 11:30:42.974263 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db3caff6-55ef-4b9f-9d45-15fc834e5974-scripts" (OuterVolumeSpecName: "scripts") pod "db3caff6-55ef-4b9f-9d45-15fc834e5974" (UID: "db3caff6-55ef-4b9f-9d45-15fc834e5974"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:30:42 crc kubenswrapper[4782]: I0202 11:30:42.978787 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db3caff6-55ef-4b9f-9d45-15fc834e5974-config-data" (OuterVolumeSpecName: "config-data") pod "db3caff6-55ef-4b9f-9d45-15fc834e5974" (UID: "db3caff6-55ef-4b9f-9d45-15fc834e5974"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:30:43 crc kubenswrapper[4782]: I0202 11:30:43.047335 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mxs5g\" (UniqueName: \"kubernetes.io/projected/db3caff6-55ef-4b9f-9d45-15fc834e5974-kube-api-access-mxs5g\") on node \"crc\" DevicePath \"\"" Feb 02 11:30:43 crc kubenswrapper[4782]: I0202 11:30:43.047373 4782 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/db3caff6-55ef-4b9f-9d45-15fc834e5974-logs\") on node \"crc\" DevicePath \"\"" Feb 02 11:30:43 crc kubenswrapper[4782]: I0202 11:30:43.047385 4782 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/db3caff6-55ef-4b9f-9d45-15fc834e5974-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 11:30:43 crc kubenswrapper[4782]: I0202 11:30:43.047396 4782 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/db3caff6-55ef-4b9f-9d45-15fc834e5974-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 11:30:43 crc kubenswrapper[4782]: I0202 11:30:43.047405 4782 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/db3caff6-55ef-4b9f-9d45-15fc834e5974-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 02 11:30:43 crc kubenswrapper[4782]: I0202 11:30:43.126962 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5545895985-nbz88" Feb 02 11:30:43 crc kubenswrapper[4782]: I0202 11:30:43.249383 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/00eb57a6-b941-443f-9b8a-644c0389b562-config-data\") pod \"00eb57a6-b941-443f-9b8a-644c0389b562\" (UID: \"00eb57a6-b941-443f-9b8a-644c0389b562\") " Feb 02 11:30:43 crc kubenswrapper[4782]: I0202 11:30:43.249482 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gsbxf\" (UniqueName: \"kubernetes.io/projected/00eb57a6-b941-443f-9b8a-644c0389b562-kube-api-access-gsbxf\") pod \"00eb57a6-b941-443f-9b8a-644c0389b562\" (UID: \"00eb57a6-b941-443f-9b8a-644c0389b562\") " Feb 02 11:30:43 crc kubenswrapper[4782]: I0202 11:30:43.249541 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/00eb57a6-b941-443f-9b8a-644c0389b562-horizon-secret-key\") pod \"00eb57a6-b941-443f-9b8a-644c0389b562\" (UID: \"00eb57a6-b941-443f-9b8a-644c0389b562\") " Feb 02 11:30:43 crc kubenswrapper[4782]: I0202 11:30:43.249585 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/00eb57a6-b941-443f-9b8a-644c0389b562-scripts\") pod \"00eb57a6-b941-443f-9b8a-644c0389b562\" (UID: \"00eb57a6-b941-443f-9b8a-644c0389b562\") " Feb 02 11:30:43 crc kubenswrapper[4782]: I0202 11:30:43.249703 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/00eb57a6-b941-443f-9b8a-644c0389b562-logs\") pod \"00eb57a6-b941-443f-9b8a-644c0389b562\" (UID: \"00eb57a6-b941-443f-9b8a-644c0389b562\") " Feb 02 11:30:43 crc kubenswrapper[4782]: I0202 11:30:43.259957 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/00eb57a6-b941-443f-9b8a-644c0389b562-logs" (OuterVolumeSpecName: "logs") pod "00eb57a6-b941-443f-9b8a-644c0389b562" (UID: "00eb57a6-b941-443f-9b8a-644c0389b562"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:30:43 crc kubenswrapper[4782]: I0202 11:30:43.260364 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00eb57a6-b941-443f-9b8a-644c0389b562-kube-api-access-gsbxf" (OuterVolumeSpecName: "kube-api-access-gsbxf") pod "00eb57a6-b941-443f-9b8a-644c0389b562" (UID: "00eb57a6-b941-443f-9b8a-644c0389b562"). InnerVolumeSpecName "kube-api-access-gsbxf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:30:43 crc kubenswrapper[4782]: I0202 11:30:43.260705 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00eb57a6-b941-443f-9b8a-644c0389b562-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "00eb57a6-b941-443f-9b8a-644c0389b562" (UID: "00eb57a6-b941-443f-9b8a-644c0389b562"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:30:43 crc kubenswrapper[4782]: I0202 11:30:43.278604 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/00eb57a6-b941-443f-9b8a-644c0389b562-scripts" (OuterVolumeSpecName: "scripts") pod "00eb57a6-b941-443f-9b8a-644c0389b562" (UID: "00eb57a6-b941-443f-9b8a-644c0389b562"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:30:43 crc kubenswrapper[4782]: I0202 11:30:43.294569 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/00eb57a6-b941-443f-9b8a-644c0389b562-config-data" (OuterVolumeSpecName: "config-data") pod "00eb57a6-b941-443f-9b8a-644c0389b562" (UID: "00eb57a6-b941-443f-9b8a-644c0389b562"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:30:43 crc kubenswrapper[4782]: I0202 11:30:43.351610 4782 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/00eb57a6-b941-443f-9b8a-644c0389b562-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 11:30:43 crc kubenswrapper[4782]: I0202 11:30:43.351697 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gsbxf\" (UniqueName: \"kubernetes.io/projected/00eb57a6-b941-443f-9b8a-644c0389b562-kube-api-access-gsbxf\") on node \"crc\" DevicePath \"\"" Feb 02 11:30:43 crc kubenswrapper[4782]: I0202 11:30:43.351713 4782 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/00eb57a6-b941-443f-9b8a-644c0389b562-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 02 11:30:43 crc kubenswrapper[4782]: I0202 11:30:43.351725 4782 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/00eb57a6-b941-443f-9b8a-644c0389b562-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 11:30:43 crc kubenswrapper[4782]: I0202 11:30:43.351735 4782 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/00eb57a6-b941-443f-9b8a-644c0389b562-logs\") on node \"crc\" DevicePath \"\"" Feb 02 11:30:43 crc kubenswrapper[4782]: I0202 11:30:43.718446 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5545895985-nbz88" event={"ID":"00eb57a6-b941-443f-9b8a-644c0389b562","Type":"ContainerDied","Data":"e9100eed16cfde1a50208bd22824d38ac7e93d47d356dc2ebbe784e45bca71cd"} Feb 02 11:30:43 crc kubenswrapper[4782]: I0202 11:30:43.718502 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5545895985-nbz88" Feb 02 11:30:43 crc kubenswrapper[4782]: I0202 11:30:43.718512 4782 scope.go:117] "RemoveContainer" containerID="c6c330e3edacbb0c6580054ae3c4de6722a7bdd108981d1f47eb37aef9ebad0d" Feb 02 11:30:43 crc kubenswrapper[4782]: I0202 11:30:43.726284 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5cc68dfb67-6l9rd" event={"ID":"db3caff6-55ef-4b9f-9d45-15fc834e5974","Type":"ContainerDied","Data":"78a7fb858e48a2d7c1668bcef174fb8172e784df949e22963608e184d25f8fba"} Feb 02 11:30:43 crc kubenswrapper[4782]: I0202 11:30:43.726684 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5cc68dfb67-6l9rd" Feb 02 11:30:43 crc kubenswrapper[4782]: I0202 11:30:43.769601 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5545895985-nbz88"] Feb 02 11:30:43 crc kubenswrapper[4782]: I0202 11:30:43.793118 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-5545895985-nbz88"] Feb 02 11:30:43 crc kubenswrapper[4782]: I0202 11:30:43.802538 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5cc68dfb67-6l9rd"] Feb 02 11:30:43 crc kubenswrapper[4782]: I0202 11:30:43.811915 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-5cc68dfb67-6l9rd"] Feb 02 11:30:43 crc kubenswrapper[4782]: I0202 11:30:43.933095 4782 scope.go:117] "RemoveContainer" containerID="618faf4ceb8379e0f7bff9b59632f78463103656504f1a973f8dfd513683614b" Feb 02 11:30:43 crc kubenswrapper[4782]: I0202 11:30:43.960523 4782 scope.go:117] "RemoveContainer" containerID="9f2bf757bc39fc655216c4242c0a40a0327977b393c1fa025b532ded72bb1141" Feb 02 11:30:44 crc kubenswrapper[4782]: I0202 11:30:44.159750 4782 scope.go:117] "RemoveContainer" containerID="c05a499f13ecee94245644953871b44805a3e34c1de64b64494b85574ed777d5" Feb 02 11:30:44 crc kubenswrapper[4782]: I0202 11:30:44.302958 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-sync-p6nkb" Feb 02 11:30:44 crc kubenswrapper[4782]: I0202 11:30:44.381194 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f45fc51f-4efe-4cbf-9539-d858ac3c2e73-combined-ca-bundle\") pod \"f45fc51f-4efe-4cbf-9539-d858ac3c2e73\" (UID: \"f45fc51f-4efe-4cbf-9539-d858ac3c2e73\") " Feb 02 11:30:44 crc kubenswrapper[4782]: I0202 11:30:44.381268 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f45fc51f-4efe-4cbf-9539-d858ac3c2e73-config-data\") pod \"f45fc51f-4efe-4cbf-9539-d858ac3c2e73\" (UID: \"f45fc51f-4efe-4cbf-9539-d858ac3c2e73\") " Feb 02 11:30:44 crc kubenswrapper[4782]: I0202 11:30:44.381304 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4s4n\" (UniqueName: \"kubernetes.io/projected/f45fc51f-4efe-4cbf-9539-d858ac3c2e73-kube-api-access-w4s4n\") pod \"f45fc51f-4efe-4cbf-9539-d858ac3c2e73\" (UID: \"f45fc51f-4efe-4cbf-9539-d858ac3c2e73\") " Feb 02 11:30:44 crc kubenswrapper[4782]: I0202 11:30:44.381359 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/f45fc51f-4efe-4cbf-9539-d858ac3c2e73-job-config-data\") pod \"f45fc51f-4efe-4cbf-9539-d858ac3c2e73\" (UID: \"f45fc51f-4efe-4cbf-9539-d858ac3c2e73\") " Feb 02 11:30:44 crc kubenswrapper[4782]: I0202 11:30:44.387256 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f45fc51f-4efe-4cbf-9539-d858ac3c2e73-kube-api-access-w4s4n" (OuterVolumeSpecName: "kube-api-access-w4s4n") pod "f45fc51f-4efe-4cbf-9539-d858ac3c2e73" (UID: "f45fc51f-4efe-4cbf-9539-d858ac3c2e73"). InnerVolumeSpecName "kube-api-access-w4s4n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:30:44 crc kubenswrapper[4782]: I0202 11:30:44.408831 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f45fc51f-4efe-4cbf-9539-d858ac3c2e73-job-config-data" (OuterVolumeSpecName: "job-config-data") pod "f45fc51f-4efe-4cbf-9539-d858ac3c2e73" (UID: "f45fc51f-4efe-4cbf-9539-d858ac3c2e73"). InnerVolumeSpecName "job-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:30:44 crc kubenswrapper[4782]: I0202 11:30:44.409137 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f45fc51f-4efe-4cbf-9539-d858ac3c2e73-config-data" (OuterVolumeSpecName: "config-data") pod "f45fc51f-4efe-4cbf-9539-d858ac3c2e73" (UID: "f45fc51f-4efe-4cbf-9539-d858ac3c2e73"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:30:44 crc kubenswrapper[4782]: I0202 11:30:44.416700 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f45fc51f-4efe-4cbf-9539-d858ac3c2e73-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f45fc51f-4efe-4cbf-9539-d858ac3c2e73" (UID: "f45fc51f-4efe-4cbf-9539-d858ac3c2e73"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:30:44 crc kubenswrapper[4782]: I0202 11:30:44.483800 4782 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f45fc51f-4efe-4cbf-9539-d858ac3c2e73-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 11:30:44 crc kubenswrapper[4782]: I0202 11:30:44.484079 4782 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f45fc51f-4efe-4cbf-9539-d858ac3c2e73-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 11:30:44 crc kubenswrapper[4782]: I0202 11:30:44.484158 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4s4n\" (UniqueName: \"kubernetes.io/projected/f45fc51f-4efe-4cbf-9539-d858ac3c2e73-kube-api-access-w4s4n\") on node \"crc\" DevicePath \"\"" Feb 02 11:30:44 crc kubenswrapper[4782]: I0202 11:30:44.484225 4782 reconciler_common.go:293] "Volume detached for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/f45fc51f-4efe-4cbf-9539-d858ac3c2e73-job-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 11:30:44 crc kubenswrapper[4782]: I0202 11:30:44.759696 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-p6nkb" event={"ID":"f45fc51f-4efe-4cbf-9539-d858ac3c2e73","Type":"ContainerDied","Data":"3bc352515a4e7faf8ad0720cf509467a309a1738f43d6a8ab5058f36de1f92ac"} Feb 02 11:30:44 crc kubenswrapper[4782]: I0202 11:30:44.759718 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-sync-p6nkb" Feb 02 11:30:44 crc kubenswrapper[4782]: I0202 11:30:44.759734 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3bc352515a4e7faf8ad0720cf509467a309a1738f43d6a8ab5058f36de1f92ac" Feb 02 11:30:44 crc kubenswrapper[4782]: I0202 11:30:44.832175 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="00eb57a6-b941-443f-9b8a-644c0389b562" path="/var/lib/kubelet/pods/00eb57a6-b941-443f-9b8a-644c0389b562/volumes" Feb 02 11:30:44 crc kubenswrapper[4782]: I0202 11:30:44.833495 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db3caff6-55ef-4b9f-9d45-15fc834e5974" path="/var/lib/kubelet/pods/db3caff6-55ef-4b9f-9d45-15fc834e5974/volumes" Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.077429 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-share-share1-0"] Feb 02 11:30:45 crc kubenswrapper[4782]: E0202 11:30:45.077942 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00eb57a6-b941-443f-9b8a-644c0389b562" containerName="horizon" Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.077965 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="00eb57a6-b941-443f-9b8a-644c0389b562" containerName="horizon" Feb 02 11:30:45 crc kubenswrapper[4782]: E0202 11:30:45.077980 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44a78b5b-c712-4b4a-a035-652aea7086d0" containerName="collect-profiles" Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.077986 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="44a78b5b-c712-4b4a-a035-652aea7086d0" containerName="collect-profiles" Feb 02 11:30:45 crc kubenswrapper[4782]: E0202 11:30:45.078000 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f45fc51f-4efe-4cbf-9539-d858ac3c2e73" containerName="manila-db-sync" Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.078006 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="f45fc51f-4efe-4cbf-9539-d858ac3c2e73" containerName="manila-db-sync" Feb 02 11:30:45 crc kubenswrapper[4782]: E0202 11:30:45.078026 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db3caff6-55ef-4b9f-9d45-15fc834e5974" containerName="horizon-log" Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.078033 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="db3caff6-55ef-4b9f-9d45-15fc834e5974" containerName="horizon-log" Feb 02 11:30:45 crc kubenswrapper[4782]: E0202 11:30:45.078053 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db3caff6-55ef-4b9f-9d45-15fc834e5974" containerName="horizon" Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.078059 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="db3caff6-55ef-4b9f-9d45-15fc834e5974" containerName="horizon" Feb 02 11:30:45 crc kubenswrapper[4782]: E0202 11:30:45.078082 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00eb57a6-b941-443f-9b8a-644c0389b562" containerName="horizon-log" Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.078089 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="00eb57a6-b941-443f-9b8a-644c0389b562" containerName="horizon-log" Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.078281 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="f45fc51f-4efe-4cbf-9539-d858ac3c2e73" containerName="manila-db-sync" Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.078299 4782 
Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.078299 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="db3caff6-55ef-4b9f-9d45-15fc834e5974" containerName="horizon-log"
Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.078316 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="00eb57a6-b941-443f-9b8a-644c0389b562" containerName="horizon"
Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.078330 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="00eb57a6-b941-443f-9b8a-644c0389b562" containerName="horizon-log"
Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.078342 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="44a78b5b-c712-4b4a-a035-652aea7086d0" containerName="collect-profiles"
Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.078351 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="db3caff6-55ef-4b9f-9d45-15fc834e5974" containerName="horizon"
Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.079603 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-share-share1-0"
Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.086406 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-manila-dockercfg-tzzmn"
Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.088121 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-scripts"
Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.088340 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-share-share1-config-data"
Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.096508 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-scheduler-0"]
Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.097186 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-config-data"
Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.109139 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-scheduler-0"
Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.113056 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-scheduler-config-data"
Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.201869 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a-config-data\") pod \"manila-scheduler-0\" (UID: \"9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a\") " pod="openstack/manila-scheduler-0"
Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.201921 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/be03de2e-2ddc-4cb1-b5be-7adb4add6582-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"be03de2e-2ddc-4cb1-b5be-7adb4add6582\") " pod="openstack/manila-share-share1-0"
Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.201945 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a\") " pod="openstack/manila-scheduler-0"
Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.201982 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a-scripts\") pod \"manila-scheduler-0\" (UID: \"9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a\") " pod="openstack/manila-scheduler-0"
Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.202014 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be03de2e-2ddc-4cb1-b5be-7adb4add6582-config-data\") pod \"manila-share-share1-0\" (UID: \"be03de2e-2ddc-4cb1-b5be-7adb4add6582\") " pod="openstack/manila-share-share1-0"
Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.202040 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mhpfm\" (UniqueName: \"kubernetes.io/projected/be03de2e-2ddc-4cb1-b5be-7adb4add6582-kube-api-access-mhpfm\") pod \"manila-share-share1-0\" (UID: \"be03de2e-2ddc-4cb1-b5be-7adb4add6582\") " pod="openstack/manila-share-share1-0"
Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.202061 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a\") " pod="openstack/manila-scheduler-0"
Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.202096 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/be03de2e-2ddc-4cb1-b5be-7adb4add6582-ceph\") pod \"manila-share-share1-0\" (UID: \"be03de2e-2ddc-4cb1-b5be-7adb4add6582\") " pod="openstack/manila-share-share1-0"
Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.202121 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be03de2e-2ddc-4cb1-b5be-7adb4add6582-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"be03de2e-2ddc-4cb1-b5be-7adb4add6582\") " pod="openstack/manila-share-share1-0"
Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.202181 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/be03de2e-2ddc-4cb1-b5be-7adb4add6582-scripts\") pod \"manila-share-share1-0\" (UID: \"be03de2e-2ddc-4cb1-b5be-7adb4add6582\") " pod="openstack/manila-share-share1-0"
Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.202208 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/be03de2e-2ddc-4cb1-b5be-7adb4add6582-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"be03de2e-2ddc-4cb1-b5be-7adb4add6582\") " pod="openstack/manila-share-share1-0"
Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.202233 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6fjnn\" (UniqueName: \"kubernetes.io/projected/9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a-kube-api-access-6fjnn\") pod \"manila-scheduler-0\" (UID: \"9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a\") " pod="openstack/manila-scheduler-0"
Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.202306 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a\") " pod="openstack/manila-scheduler-0"
Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.202340 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/be03de2e-2ddc-4cb1-b5be-7adb4add6582-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"be03de2e-2ddc-4cb1-b5be-7adb4add6582\") " pod="openstack/manila-share-share1-0"
Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.207729 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"]
Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.268425 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"]
Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.303931 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/be03de2e-2ddc-4cb1-b5be-7adb4add6582-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"be03de2e-2ddc-4cb1-b5be-7adb4add6582\") " pod="openstack/manila-share-share1-0"
Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.304087 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a-config-data\") pod \"manila-scheduler-0\" (UID: \"9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a\") " pod="openstack/manila-scheduler-0"
pod="openstack/manila-share-share1-0" Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.304137 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a\") " pod="openstack/manila-scheduler-0" Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.304182 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a-scripts\") pod \"manila-scheduler-0\" (UID: \"9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a\") " pod="openstack/manila-scheduler-0" Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.304210 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be03de2e-2ddc-4cb1-b5be-7adb4add6582-config-data\") pod \"manila-share-share1-0\" (UID: \"be03de2e-2ddc-4cb1-b5be-7adb4add6582\") " pod="openstack/manila-share-share1-0" Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.304235 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mhpfm\" (UniqueName: \"kubernetes.io/projected/be03de2e-2ddc-4cb1-b5be-7adb4add6582-kube-api-access-mhpfm\") pod \"manila-share-share1-0\" (UID: \"be03de2e-2ddc-4cb1-b5be-7adb4add6582\") " pod="openstack/manila-share-share1-0" Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.304254 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a\") " pod="openstack/manila-scheduler-0" Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.304289 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/be03de2e-2ddc-4cb1-b5be-7adb4add6582-ceph\") pod \"manila-share-share1-0\" (UID: \"be03de2e-2ddc-4cb1-b5be-7adb4add6582\") " pod="openstack/manila-share-share1-0" Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.304311 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be03de2e-2ddc-4cb1-b5be-7adb4add6582-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"be03de2e-2ddc-4cb1-b5be-7adb4add6582\") " pod="openstack/manila-share-share1-0" Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.304390 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/be03de2e-2ddc-4cb1-b5be-7adb4add6582-scripts\") pod \"manila-share-share1-0\" (UID: \"be03de2e-2ddc-4cb1-b5be-7adb4add6582\") " pod="openstack/manila-share-share1-0" Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.304416 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/be03de2e-2ddc-4cb1-b5be-7adb4add6582-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"be03de2e-2ddc-4cb1-b5be-7adb4add6582\") " pod="openstack/manila-share-share1-0" Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.304442 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6fjnn\" (UniqueName: 
\"kubernetes.io/projected/9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a-kube-api-access-6fjnn\") pod \"manila-scheduler-0\" (UID: \"9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a\") " pod="openstack/manila-scheduler-0" Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.304523 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a\") " pod="openstack/manila-scheduler-0" Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.305695 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/be03de2e-2ddc-4cb1-b5be-7adb4add6582-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"be03de2e-2ddc-4cb1-b5be-7adb4add6582\") " pod="openstack/manila-share-share1-0" Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.316759 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a-config-data\") pod \"manila-scheduler-0\" (UID: \"9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a\") " pod="openstack/manila-scheduler-0" Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.316927 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/be03de2e-2ddc-4cb1-b5be-7adb4add6582-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"be03de2e-2ddc-4cb1-b5be-7adb4add6582\") " pod="openstack/manila-share-share1-0" Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.316963 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a\") " pod="openstack/manila-scheduler-0" Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.328271 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a\") " pod="openstack/manila-scheduler-0" Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.349132 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a-scripts\") pod \"manila-scheduler-0\" (UID: \"9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a\") " pod="openstack/manila-scheduler-0" Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.349252 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mhpfm\" (UniqueName: \"kubernetes.io/projected/be03de2e-2ddc-4cb1-b5be-7adb4add6582-kube-api-access-mhpfm\") pod \"manila-share-share1-0\" (UID: \"be03de2e-2ddc-4cb1-b5be-7adb4add6582\") " pod="openstack/manila-share-share1-0" Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.363765 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/be03de2e-2ddc-4cb1-b5be-7adb4add6582-ceph\") pod \"manila-share-share1-0\" (UID: \"be03de2e-2ddc-4cb1-b5be-7adb4add6582\") " pod="openstack/manila-share-share1-0" Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.374698 4782 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/be03de2e-2ddc-4cb1-b5be-7adb4add6582-scripts\") pod \"manila-share-share1-0\" (UID: \"be03de2e-2ddc-4cb1-b5be-7adb4add6582\") " pod="openstack/manila-share-share1-0" Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.374906 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be03de2e-2ddc-4cb1-b5be-7adb4add6582-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"be03de2e-2ddc-4cb1-b5be-7adb4add6582\") " pod="openstack/manila-share-share1-0" Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.375278 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a\") " pod="openstack/manila-scheduler-0" Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.375873 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6fjnn\" (UniqueName: \"kubernetes.io/projected/9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a-kube-api-access-6fjnn\") pod \"manila-scheduler-0\" (UID: \"9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a\") " pod="openstack/manila-scheduler-0" Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.378958 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/be03de2e-2ddc-4cb1-b5be-7adb4add6582-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"be03de2e-2ddc-4cb1-b5be-7adb4add6582\") " pod="openstack/manila-share-share1-0" Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.379454 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7d98f8586f-f76zz"] Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.390882 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be03de2e-2ddc-4cb1-b5be-7adb4add6582-config-data\") pod \"manila-share-share1-0\" (UID: \"be03de2e-2ddc-4cb1-b5be-7adb4add6582\") " pod="openstack/manila-share-share1-0" Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.393222 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d98f8586f-f76zz" Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.408511 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d98f8586f-f76zz"] Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.416733 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-share-share1-0" Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.442389 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-scheduler-0" Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.520668 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/cfe77ae5-55f0-440b-b0af-ef3eb1637800-openstack-edpm-ipam\") pod \"dnsmasq-dns-7d98f8586f-f76zz\" (UID: \"cfe77ae5-55f0-440b-b0af-ef3eb1637800\") " pod="openstack/dnsmasq-dns-7d98f8586f-f76zz" Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.520849 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cfe77ae5-55f0-440b-b0af-ef3eb1637800-dns-svc\") pod \"dnsmasq-dns-7d98f8586f-f76zz\" (UID: \"cfe77ae5-55f0-440b-b0af-ef3eb1637800\") " pod="openstack/dnsmasq-dns-7d98f8586f-f76zz" Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.521007 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9679h\" (UniqueName: \"kubernetes.io/projected/cfe77ae5-55f0-440b-b0af-ef3eb1637800-kube-api-access-9679h\") pod \"dnsmasq-dns-7d98f8586f-f76zz\" (UID: \"cfe77ae5-55f0-440b-b0af-ef3eb1637800\") " pod="openstack/dnsmasq-dns-7d98f8586f-f76zz" Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.521087 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cfe77ae5-55f0-440b-b0af-ef3eb1637800-config\") pod \"dnsmasq-dns-7d98f8586f-f76zz\" (UID: \"cfe77ae5-55f0-440b-b0af-ef3eb1637800\") " pod="openstack/dnsmasq-dns-7d98f8586f-f76zz" Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.521945 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cfe77ae5-55f0-440b-b0af-ef3eb1637800-ovsdbserver-sb\") pod \"dnsmasq-dns-7d98f8586f-f76zz\" (UID: \"cfe77ae5-55f0-440b-b0af-ef3eb1637800\") " pod="openstack/dnsmasq-dns-7d98f8586f-f76zz" Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.522045 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cfe77ae5-55f0-440b-b0af-ef3eb1637800-ovsdbserver-nb\") pod \"dnsmasq-dns-7d98f8586f-f76zz\" (UID: \"cfe77ae5-55f0-440b-b0af-ef3eb1637800\") " pod="openstack/dnsmasq-dns-7d98f8586f-f76zz" Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.593222 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-api-0"] Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.597598 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-api-0" Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.604383 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-api-config-data" Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.629178 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/cfe77ae5-55f0-440b-b0af-ef3eb1637800-openstack-edpm-ipam\") pod \"dnsmasq-dns-7d98f8586f-f76zz\" (UID: \"cfe77ae5-55f0-440b-b0af-ef3eb1637800\") " pod="openstack/dnsmasq-dns-7d98f8586f-f76zz" Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.629308 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cfe77ae5-55f0-440b-b0af-ef3eb1637800-dns-svc\") pod \"dnsmasq-dns-7d98f8586f-f76zz\" (UID: \"cfe77ae5-55f0-440b-b0af-ef3eb1637800\") " pod="openstack/dnsmasq-dns-7d98f8586f-f76zz" Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.630264 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9679h\" (UniqueName: \"kubernetes.io/projected/cfe77ae5-55f0-440b-b0af-ef3eb1637800-kube-api-access-9679h\") pod \"dnsmasq-dns-7d98f8586f-f76zz\" (UID: \"cfe77ae5-55f0-440b-b0af-ef3eb1637800\") " pod="openstack/dnsmasq-dns-7d98f8586f-f76zz" Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.630383 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cfe77ae5-55f0-440b-b0af-ef3eb1637800-config\") pod \"dnsmasq-dns-7d98f8586f-f76zz\" (UID: \"cfe77ae5-55f0-440b-b0af-ef3eb1637800\") " pod="openstack/dnsmasq-dns-7d98f8586f-f76zz" Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.630415 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cfe77ae5-55f0-440b-b0af-ef3eb1637800-ovsdbserver-sb\") pod \"dnsmasq-dns-7d98f8586f-f76zz\" (UID: \"cfe77ae5-55f0-440b-b0af-ef3eb1637800\") " pod="openstack/dnsmasq-dns-7d98f8586f-f76zz" Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.630461 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cfe77ae5-55f0-440b-b0af-ef3eb1637800-ovsdbserver-nb\") pod \"dnsmasq-dns-7d98f8586f-f76zz\" (UID: \"cfe77ae5-55f0-440b-b0af-ef3eb1637800\") " pod="openstack/dnsmasq-dns-7d98f8586f-f76zz" Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.633048 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/cfe77ae5-55f0-440b-b0af-ef3eb1637800-openstack-edpm-ipam\") pod \"dnsmasq-dns-7d98f8586f-f76zz\" (UID: \"cfe77ae5-55f0-440b-b0af-ef3eb1637800\") " pod="openstack/dnsmasq-dns-7d98f8586f-f76zz" Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.633692 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cfe77ae5-55f0-440b-b0af-ef3eb1637800-dns-svc\") pod \"dnsmasq-dns-7d98f8586f-f76zz\" (UID: \"cfe77ae5-55f0-440b-b0af-ef3eb1637800\") " pod="openstack/dnsmasq-dns-7d98f8586f-f76zz" Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.634307 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cfe77ae5-55f0-440b-b0af-ef3eb1637800-config\") pod 
\"dnsmasq-dns-7d98f8586f-f76zz\" (UID: \"cfe77ae5-55f0-440b-b0af-ef3eb1637800\") " pod="openstack/dnsmasq-dns-7d98f8586f-f76zz" Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.634856 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cfe77ae5-55f0-440b-b0af-ef3eb1637800-ovsdbserver-nb\") pod \"dnsmasq-dns-7d98f8586f-f76zz\" (UID: \"cfe77ae5-55f0-440b-b0af-ef3eb1637800\") " pod="openstack/dnsmasq-dns-7d98f8586f-f76zz" Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.635069 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cfe77ae5-55f0-440b-b0af-ef3eb1637800-ovsdbserver-sb\") pod \"dnsmasq-dns-7d98f8586f-f76zz\" (UID: \"cfe77ae5-55f0-440b-b0af-ef3eb1637800\") " pod="openstack/dnsmasq-dns-7d98f8586f-f76zz" Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.642671 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"] Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.671020 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9679h\" (UniqueName: \"kubernetes.io/projected/cfe77ae5-55f0-440b-b0af-ef3eb1637800-kube-api-access-9679h\") pod \"dnsmasq-dns-7d98f8586f-f76zz\" (UID: \"cfe77ae5-55f0-440b-b0af-ef3eb1637800\") " pod="openstack/dnsmasq-dns-7d98f8586f-f76zz" Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.735141 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c207707b-d720-4bfd-b93a-23ff4bc42674-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"c207707b-d720-4bfd-b93a-23ff4bc42674\") " pod="openstack/manila-api-0" Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.735540 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c207707b-d720-4bfd-b93a-23ff4bc42674-logs\") pod \"manila-api-0\" (UID: \"c207707b-d720-4bfd-b93a-23ff4bc42674\") " pod="openstack/manila-api-0" Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.735665 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c207707b-d720-4bfd-b93a-23ff4bc42674-config-data\") pod \"manila-api-0\" (UID: \"c207707b-d720-4bfd-b93a-23ff4bc42674\") " pod="openstack/manila-api-0" Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.735703 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c207707b-d720-4bfd-b93a-23ff4bc42674-config-data-custom\") pod \"manila-api-0\" (UID: \"c207707b-d720-4bfd-b93a-23ff4bc42674\") " pod="openstack/manila-api-0" Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.735744 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c207707b-d720-4bfd-b93a-23ff4bc42674-etc-machine-id\") pod \"manila-api-0\" (UID: \"c207707b-d720-4bfd-b93a-23ff4bc42674\") " pod="openstack/manila-api-0" Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.735825 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rdmj\" (UniqueName: 
\"kubernetes.io/projected/c207707b-d720-4bfd-b93a-23ff4bc42674-kube-api-access-2rdmj\") pod \"manila-api-0\" (UID: \"c207707b-d720-4bfd-b93a-23ff4bc42674\") " pod="openstack/manila-api-0" Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.735867 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c207707b-d720-4bfd-b93a-23ff4bc42674-scripts\") pod \"manila-api-0\" (UID: \"c207707b-d720-4bfd-b93a-23ff4bc42674\") " pod="openstack/manila-api-0" Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.834729 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d98f8586f-f76zz" Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.837794 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c207707b-d720-4bfd-b93a-23ff4bc42674-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"c207707b-d720-4bfd-b93a-23ff4bc42674\") " pod="openstack/manila-api-0" Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.837842 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c207707b-d720-4bfd-b93a-23ff4bc42674-logs\") pod \"manila-api-0\" (UID: \"c207707b-d720-4bfd-b93a-23ff4bc42674\") " pod="openstack/manila-api-0" Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.837911 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c207707b-d720-4bfd-b93a-23ff4bc42674-config-data\") pod \"manila-api-0\" (UID: \"c207707b-d720-4bfd-b93a-23ff4bc42674\") " pod="openstack/manila-api-0" Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.837931 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c207707b-d720-4bfd-b93a-23ff4bc42674-config-data-custom\") pod \"manila-api-0\" (UID: \"c207707b-d720-4bfd-b93a-23ff4bc42674\") " pod="openstack/manila-api-0" Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.837959 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c207707b-d720-4bfd-b93a-23ff4bc42674-etc-machine-id\") pod \"manila-api-0\" (UID: \"c207707b-d720-4bfd-b93a-23ff4bc42674\") " pod="openstack/manila-api-0" Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.838007 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2rdmj\" (UniqueName: \"kubernetes.io/projected/c207707b-d720-4bfd-b93a-23ff4bc42674-kube-api-access-2rdmj\") pod \"manila-api-0\" (UID: \"c207707b-d720-4bfd-b93a-23ff4bc42674\") " pod="openstack/manila-api-0" Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.838034 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c207707b-d720-4bfd-b93a-23ff4bc42674-scripts\") pod \"manila-api-0\" (UID: \"c207707b-d720-4bfd-b93a-23ff4bc42674\") " pod="openstack/manila-api-0" Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.838968 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c207707b-d720-4bfd-b93a-23ff4bc42674-etc-machine-id\") pod \"manila-api-0\" (UID: \"c207707b-d720-4bfd-b93a-23ff4bc42674\") " 
pod="openstack/manila-api-0" Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.841302 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c207707b-d720-4bfd-b93a-23ff4bc42674-logs\") pod \"manila-api-0\" (UID: \"c207707b-d720-4bfd-b93a-23ff4bc42674\") " pod="openstack/manila-api-0" Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.850177 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c207707b-d720-4bfd-b93a-23ff4bc42674-config-data\") pod \"manila-api-0\" (UID: \"c207707b-d720-4bfd-b93a-23ff4bc42674\") " pod="openstack/manila-api-0" Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.850328 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c207707b-d720-4bfd-b93a-23ff4bc42674-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"c207707b-d720-4bfd-b93a-23ff4bc42674\") " pod="openstack/manila-api-0" Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.850592 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c207707b-d720-4bfd-b93a-23ff4bc42674-config-data-custom\") pod \"manila-api-0\" (UID: \"c207707b-d720-4bfd-b93a-23ff4bc42674\") " pod="openstack/manila-api-0" Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.883688 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c207707b-d720-4bfd-b93a-23ff4bc42674-scripts\") pod \"manila-api-0\" (UID: \"c207707b-d720-4bfd-b93a-23ff4bc42674\") " pod="openstack/manila-api-0" Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.899805 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2rdmj\" (UniqueName: \"kubernetes.io/projected/c207707b-d720-4bfd-b93a-23ff4bc42674-kube-api-access-2rdmj\") pod \"manila-api-0\" (UID: \"c207707b-d720-4bfd-b93a-23ff4bc42674\") " pod="openstack/manila-api-0" Feb 02 11:30:45 crc kubenswrapper[4782]: I0202 11:30:45.936859 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-api-0" Feb 02 11:30:46 crc kubenswrapper[4782]: I0202 11:30:46.543333 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"] Feb 02 11:30:46 crc kubenswrapper[4782]: I0202 11:30:46.602150 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"] Feb 02 11:30:46 crc kubenswrapper[4782]: W0202 11:30:46.634750 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbe03de2e_2ddc_4cb1_b5be_7adb4add6582.slice/crio-08c5105f53bbbb34b5fc28061ef06193852bf85c93b32e422897b4cbfd23205d WatchSource:0}: Error finding container 08c5105f53bbbb34b5fc28061ef06193852bf85c93b32e422897b4cbfd23205d: Status 404 returned error can't find the container with id 08c5105f53bbbb34b5fc28061ef06193852bf85c93b32e422897b4cbfd23205d Feb 02 11:30:46 crc kubenswrapper[4782]: I0202 11:30:46.733391 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d98f8586f-f76zz"] Feb 02 11:30:46 crc kubenswrapper[4782]: I0202 11:30:46.881497 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a","Type":"ContainerStarted","Data":"8540ed31a6d2b8e3e589043ccf8a1a2071b1ba7d96df1fa53995124ff3fbc8af"} Feb 02 11:30:46 crc kubenswrapper[4782]: I0202 11:30:46.892128 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"be03de2e-2ddc-4cb1-b5be-7adb4add6582","Type":"ContainerStarted","Data":"08c5105f53bbbb34b5fc28061ef06193852bf85c93b32e422897b4cbfd23205d"} Feb 02 11:30:46 crc kubenswrapper[4782]: I0202 11:30:46.901364 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d98f8586f-f76zz" event={"ID":"cfe77ae5-55f0-440b-b0af-ef3eb1637800","Type":"ContainerStarted","Data":"cda8335e7393aac6701704b8aab1a930c572e0290b52a6bcda437cd1fbdaae4a"} Feb 02 11:30:46 crc kubenswrapper[4782]: I0202 11:30:46.922073 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"] Feb 02 11:30:46 crc kubenswrapper[4782]: W0202 11:30:46.929562 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc207707b_d720_4bfd_b93a_23ff4bc42674.slice/crio-326799e35291bc063780e962603f98d3b371489f883728485c740920bed59c83 WatchSource:0}: Error finding container 326799e35291bc063780e962603f98d3b371489f883728485c740920bed59c83: Status 404 returned error can't find the container with id 326799e35291bc063780e962603f98d3b371489f883728485c740920bed59c83 Feb 02 11:30:47 crc kubenswrapper[4782]: I0202 11:30:47.976834 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"c207707b-d720-4bfd-b93a-23ff4bc42674","Type":"ContainerStarted","Data":"ed9c26e1aec87aa8cf115ac613654c415ebfba121858fd47e2f814af26446e23"} Feb 02 11:30:47 crc kubenswrapper[4782]: I0202 11:30:47.977510 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"c207707b-d720-4bfd-b93a-23ff4bc42674","Type":"ContainerStarted","Data":"326799e35291bc063780e962603f98d3b371489f883728485c740920bed59c83"} Feb 02 11:30:47 crc kubenswrapper[4782]: I0202 11:30:47.980623 4782 generic.go:334] "Generic (PLEG): container finished" podID="cfe77ae5-55f0-440b-b0af-ef3eb1637800" containerID="8db8ae3957cadf7a36e98fb5d63df37effcb1da01ece45c4f98976f5289eccef" exitCode=0 Feb 02 
11:30:47 crc kubenswrapper[4782]: I0202 11:30:47.980665 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d98f8586f-f76zz" event={"ID":"cfe77ae5-55f0-440b-b0af-ef3eb1637800","Type":"ContainerDied","Data":"8db8ae3957cadf7a36e98fb5d63df37effcb1da01ece45c4f98976f5289eccef"} Feb 02 11:30:49 crc kubenswrapper[4782]: I0202 11:30:49.029719 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-api-0"] Feb 02 11:30:49 crc kubenswrapper[4782]: I0202 11:30:49.036893 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a","Type":"ContainerStarted","Data":"b497c2954dd3a78c6953a3ffd64222499e19e8de5359aac6f81a3ec8c829fd03"} Feb 02 11:30:49 crc kubenswrapper[4782]: I0202 11:30:49.051225 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d98f8586f-f76zz" event={"ID":"cfe77ae5-55f0-440b-b0af-ef3eb1637800","Type":"ContainerStarted","Data":"443068a12d7a85105538997e50775a3f8cfa1163a74495460639983cb262a4d9"} Feb 02 11:30:49 crc kubenswrapper[4782]: I0202 11:30:49.051515 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7d98f8586f-f76zz" Feb 02 11:30:49 crc kubenswrapper[4782]: I0202 11:30:49.064171 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"c207707b-d720-4bfd-b93a-23ff4bc42674","Type":"ContainerStarted","Data":"89964867ac143818d82426b304310784c3cf634913d79f6a3e159ee52ebae07a"} Feb 02 11:30:49 crc kubenswrapper[4782]: I0202 11:30:49.064694 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/manila-api-0" Feb 02 11:30:49 crc kubenswrapper[4782]: I0202 11:30:49.088442 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7d98f8586f-f76zz" podStartSLOduration=4.08841601 podStartE2EDuration="4.08841601s" podCreationTimestamp="2026-02-02 11:30:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 11:30:49.083183159 +0000 UTC m=+3128.967375875" watchObservedRunningTime="2026-02-02 11:30:49.08841601 +0000 UTC m=+3128.972608726" Feb 02 11:30:49 crc kubenswrapper[4782]: I0202 11:30:49.127885 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-api-0" podStartSLOduration=4.127863423 podStartE2EDuration="4.127863423s" podCreationTimestamp="2026-02-02 11:30:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 11:30:49.1141965 +0000 UTC m=+3128.998389226" watchObservedRunningTime="2026-02-02 11:30:49.127863423 +0000 UTC m=+3129.012056139" Feb 02 11:30:50 crc kubenswrapper[4782]: I0202 11:30:50.086527 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a","Type":"ContainerStarted","Data":"cc68c0f777fc5436c540b425a3326b6391ec6d6b6b3b5fe43f8e31bcd626fc43"} Feb 02 11:30:50 crc kubenswrapper[4782]: I0202 11:30:50.086723 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-api-0" podUID="c207707b-d720-4bfd-b93a-23ff4bc42674" containerName="manila-api-log" containerID="cri-o://ed9c26e1aec87aa8cf115ac613654c415ebfba121858fd47e2f814af26446e23" gracePeriod=30 Feb 02 11:30:50 crc kubenswrapper[4782]: I0202 11:30:50.086795 4782 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-api-0" podUID="c207707b-d720-4bfd-b93a-23ff4bc42674" containerName="manila-api" containerID="cri-o://89964867ac143818d82426b304310784c3cf634913d79f6a3e159ee52ebae07a" gracePeriod=30 Feb 02 11:30:50 crc kubenswrapper[4782]: I0202 11:30:50.118093 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-scheduler-0" podStartSLOduration=4.073222382 podStartE2EDuration="5.118076093s" podCreationTimestamp="2026-02-02 11:30:45 +0000 UTC" firstStartedPulling="2026-02-02 11:30:46.557591253 +0000 UTC m=+3126.441783969" lastFinishedPulling="2026-02-02 11:30:47.602444964 +0000 UTC m=+3127.486637680" observedRunningTime="2026-02-02 11:30:50.117135176 +0000 UTC m=+3130.001327902" watchObservedRunningTime="2026-02-02 11:30:50.118076093 +0000 UTC m=+3130.002268809" Feb 02 11:30:51 crc kubenswrapper[4782]: I0202 11:30:51.024902 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-api-0" Feb 02 11:30:51 crc kubenswrapper[4782]: I0202 11:30:51.111657 4782 generic.go:334] "Generic (PLEG): container finished" podID="c207707b-d720-4bfd-b93a-23ff4bc42674" containerID="89964867ac143818d82426b304310784c3cf634913d79f6a3e159ee52ebae07a" exitCode=0 Feb 02 11:30:51 crc kubenswrapper[4782]: I0202 11:30:51.111693 4782 generic.go:334] "Generic (PLEG): container finished" podID="c207707b-d720-4bfd-b93a-23ff4bc42674" containerID="ed9c26e1aec87aa8cf115ac613654c415ebfba121858fd47e2f814af26446e23" exitCode=143 Feb 02 11:30:51 crc kubenswrapper[4782]: I0202 11:30:51.112558 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-api-0" Feb 02 11:30:51 crc kubenswrapper[4782]: I0202 11:30:51.112577 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"c207707b-d720-4bfd-b93a-23ff4bc42674","Type":"ContainerDied","Data":"89964867ac143818d82426b304310784c3cf634913d79f6a3e159ee52ebae07a"} Feb 02 11:30:51 crc kubenswrapper[4782]: I0202 11:30:51.112665 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"c207707b-d720-4bfd-b93a-23ff4bc42674","Type":"ContainerDied","Data":"ed9c26e1aec87aa8cf115ac613654c415ebfba121858fd47e2f814af26446e23"} Feb 02 11:30:51 crc kubenswrapper[4782]: I0202 11:30:51.112680 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"c207707b-d720-4bfd-b93a-23ff4bc42674","Type":"ContainerDied","Data":"326799e35291bc063780e962603f98d3b371489f883728485c740920bed59c83"} Feb 02 11:30:51 crc kubenswrapper[4782]: I0202 11:30:51.112699 4782 scope.go:117] "RemoveContainer" containerID="89964867ac143818d82426b304310784c3cf634913d79f6a3e159ee52ebae07a" Feb 02 11:30:51 crc kubenswrapper[4782]: I0202 11:30:51.123403 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c207707b-d720-4bfd-b93a-23ff4bc42674-logs\") pod \"c207707b-d720-4bfd-b93a-23ff4bc42674\" (UID: \"c207707b-d720-4bfd-b93a-23ff4bc42674\") " Feb 02 11:30:51 crc kubenswrapper[4782]: I0202 11:30:51.123451 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c207707b-d720-4bfd-b93a-23ff4bc42674-scripts\") pod \"c207707b-d720-4bfd-b93a-23ff4bc42674\" (UID: \"c207707b-d720-4bfd-b93a-23ff4bc42674\") " Feb 02 11:30:51 crc kubenswrapper[4782]: I0202 11:30:51.123546 
4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c207707b-d720-4bfd-b93a-23ff4bc42674-etc-machine-id\") pod \"c207707b-d720-4bfd-b93a-23ff4bc42674\" (UID: \"c207707b-d720-4bfd-b93a-23ff4bc42674\") " Feb 02 11:30:51 crc kubenswrapper[4782]: I0202 11:30:51.123609 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c207707b-d720-4bfd-b93a-23ff4bc42674-combined-ca-bundle\") pod \"c207707b-d720-4bfd-b93a-23ff4bc42674\" (UID: \"c207707b-d720-4bfd-b93a-23ff4bc42674\") " Feb 02 11:30:51 crc kubenswrapper[4782]: I0202 11:30:51.123727 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c207707b-d720-4bfd-b93a-23ff4bc42674-config-data-custom\") pod \"c207707b-d720-4bfd-b93a-23ff4bc42674\" (UID: \"c207707b-d720-4bfd-b93a-23ff4bc42674\") " Feb 02 11:30:51 crc kubenswrapper[4782]: I0202 11:30:51.123825 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c207707b-d720-4bfd-b93a-23ff4bc42674-config-data\") pod \"c207707b-d720-4bfd-b93a-23ff4bc42674\" (UID: \"c207707b-d720-4bfd-b93a-23ff4bc42674\") " Feb 02 11:30:51 crc kubenswrapper[4782]: I0202 11:30:51.123870 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2rdmj\" (UniqueName: \"kubernetes.io/projected/c207707b-d720-4bfd-b93a-23ff4bc42674-kube-api-access-2rdmj\") pod \"c207707b-d720-4bfd-b93a-23ff4bc42674\" (UID: \"c207707b-d720-4bfd-b93a-23ff4bc42674\") " Feb 02 11:30:51 crc kubenswrapper[4782]: I0202 11:30:51.123969 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c207707b-d720-4bfd-b93a-23ff4bc42674-logs" (OuterVolumeSpecName: "logs") pod "c207707b-d720-4bfd-b93a-23ff4bc42674" (UID: "c207707b-d720-4bfd-b93a-23ff4bc42674"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:30:51 crc kubenswrapper[4782]: I0202 11:30:51.124018 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c207707b-d720-4bfd-b93a-23ff4bc42674-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "c207707b-d720-4bfd-b93a-23ff4bc42674" (UID: "c207707b-d720-4bfd-b93a-23ff4bc42674"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 11:30:51 crc kubenswrapper[4782]: I0202 11:30:51.124335 4782 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c207707b-d720-4bfd-b93a-23ff4bc42674-logs\") on node \"crc\" DevicePath \"\"" Feb 02 11:30:51 crc kubenswrapper[4782]: I0202 11:30:51.124352 4782 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c207707b-d720-4bfd-b93a-23ff4bc42674-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 02 11:30:51 crc kubenswrapper[4782]: I0202 11:30:51.133014 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c207707b-d720-4bfd-b93a-23ff4bc42674-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "c207707b-d720-4bfd-b93a-23ff4bc42674" (UID: "c207707b-d720-4bfd-b93a-23ff4bc42674"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:30:51 crc kubenswrapper[4782]: I0202 11:30:51.133597 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c207707b-d720-4bfd-b93a-23ff4bc42674-kube-api-access-2rdmj" (OuterVolumeSpecName: "kube-api-access-2rdmj") pod "c207707b-d720-4bfd-b93a-23ff4bc42674" (UID: "c207707b-d720-4bfd-b93a-23ff4bc42674"). InnerVolumeSpecName "kube-api-access-2rdmj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:30:51 crc kubenswrapper[4782]: I0202 11:30:51.134343 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c207707b-d720-4bfd-b93a-23ff4bc42674-scripts" (OuterVolumeSpecName: "scripts") pod "c207707b-d720-4bfd-b93a-23ff4bc42674" (UID: "c207707b-d720-4bfd-b93a-23ff4bc42674"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:30:51 crc kubenswrapper[4782]: I0202 11:30:51.185166 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c207707b-d720-4bfd-b93a-23ff4bc42674-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c207707b-d720-4bfd-b93a-23ff4bc42674" (UID: "c207707b-d720-4bfd-b93a-23ff4bc42674"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:30:51 crc kubenswrapper[4782]: I0202 11:30:51.207536 4782 scope.go:117] "RemoveContainer" containerID="ed9c26e1aec87aa8cf115ac613654c415ebfba121858fd47e2f814af26446e23" Feb 02 11:30:51 crc kubenswrapper[4782]: I0202 11:30:51.225736 4782 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c207707b-d720-4bfd-b93a-23ff4bc42674-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 11:30:51 crc kubenswrapper[4782]: I0202 11:30:51.225772 4782 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c207707b-d720-4bfd-b93a-23ff4bc42674-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 02 11:30:51 crc kubenswrapper[4782]: I0202 11:30:51.225782 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2rdmj\" (UniqueName: \"kubernetes.io/projected/c207707b-d720-4bfd-b93a-23ff4bc42674-kube-api-access-2rdmj\") on node \"crc\" DevicePath \"\"" Feb 02 11:30:51 crc kubenswrapper[4782]: I0202 11:30:51.225789 4782 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c207707b-d720-4bfd-b93a-23ff4bc42674-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 11:30:51 crc kubenswrapper[4782]: I0202 11:30:51.263867 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c207707b-d720-4bfd-b93a-23ff4bc42674-config-data" (OuterVolumeSpecName: "config-data") pod "c207707b-d720-4bfd-b93a-23ff4bc42674" (UID: "c207707b-d720-4bfd-b93a-23ff4bc42674"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:30:51 crc kubenswrapper[4782]: I0202 11:30:51.277494 4782 scope.go:117] "RemoveContainer" containerID="dca388f48a923df889d89ab8317f39bb415b2f6f2849925bcccbaf4b1c7171f9" Feb 02 11:30:51 crc kubenswrapper[4782]: I0202 11:30:51.328186 4782 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c207707b-d720-4bfd-b93a-23ff4bc42674-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 11:30:51 crc kubenswrapper[4782]: I0202 11:30:51.330120 4782 scope.go:117] "RemoveContainer" containerID="89964867ac143818d82426b304310784c3cf634913d79f6a3e159ee52ebae07a" Feb 02 11:30:51 crc kubenswrapper[4782]: E0202 11:30:51.330778 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"89964867ac143818d82426b304310784c3cf634913d79f6a3e159ee52ebae07a\": container with ID starting with 89964867ac143818d82426b304310784c3cf634913d79f6a3e159ee52ebae07a not found: ID does not exist" containerID="89964867ac143818d82426b304310784c3cf634913d79f6a3e159ee52ebae07a" Feb 02 11:30:51 crc kubenswrapper[4782]: I0202 11:30:51.330828 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89964867ac143818d82426b304310784c3cf634913d79f6a3e159ee52ebae07a"} err="failed to get container status \"89964867ac143818d82426b304310784c3cf634913d79f6a3e159ee52ebae07a\": rpc error: code = NotFound desc = could not find container \"89964867ac143818d82426b304310784c3cf634913d79f6a3e159ee52ebae07a\": container with ID starting with 89964867ac143818d82426b304310784c3cf634913d79f6a3e159ee52ebae07a not found: ID does not exist" Feb 02 11:30:51 crc kubenswrapper[4782]: I0202 11:30:51.330855 4782 scope.go:117] "RemoveContainer" containerID="ed9c26e1aec87aa8cf115ac613654c415ebfba121858fd47e2f814af26446e23" Feb 02 11:30:51 crc kubenswrapper[4782]: E0202 11:30:51.332762 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ed9c26e1aec87aa8cf115ac613654c415ebfba121858fd47e2f814af26446e23\": container with ID starting with ed9c26e1aec87aa8cf115ac613654c415ebfba121858fd47e2f814af26446e23 not found: ID does not exist" containerID="ed9c26e1aec87aa8cf115ac613654c415ebfba121858fd47e2f814af26446e23" Feb 02 11:30:51 crc kubenswrapper[4782]: I0202 11:30:51.332790 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed9c26e1aec87aa8cf115ac613654c415ebfba121858fd47e2f814af26446e23"} err="failed to get container status \"ed9c26e1aec87aa8cf115ac613654c415ebfba121858fd47e2f814af26446e23\": rpc error: code = NotFound desc = could not find container \"ed9c26e1aec87aa8cf115ac613654c415ebfba121858fd47e2f814af26446e23\": container with ID starting with ed9c26e1aec87aa8cf115ac613654c415ebfba121858fd47e2f814af26446e23 not found: ID does not exist" Feb 02 11:30:51 crc kubenswrapper[4782]: I0202 11:30:51.333026 4782 scope.go:117] "RemoveContainer" containerID="89964867ac143818d82426b304310784c3cf634913d79f6a3e159ee52ebae07a" Feb 02 11:30:51 crc kubenswrapper[4782]: I0202 11:30:51.333664 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89964867ac143818d82426b304310784c3cf634913d79f6a3e159ee52ebae07a"} err="failed to get container status \"89964867ac143818d82426b304310784c3cf634913d79f6a3e159ee52ebae07a\": rpc error: code = NotFound desc = could not find container 
\"89964867ac143818d82426b304310784c3cf634913d79f6a3e159ee52ebae07a\": container with ID starting with 89964867ac143818d82426b304310784c3cf634913d79f6a3e159ee52ebae07a not found: ID does not exist" Feb 02 11:30:51 crc kubenswrapper[4782]: I0202 11:30:51.333682 4782 scope.go:117] "RemoveContainer" containerID="ed9c26e1aec87aa8cf115ac613654c415ebfba121858fd47e2f814af26446e23" Feb 02 11:30:51 crc kubenswrapper[4782]: I0202 11:30:51.334361 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed9c26e1aec87aa8cf115ac613654c415ebfba121858fd47e2f814af26446e23"} err="failed to get container status \"ed9c26e1aec87aa8cf115ac613654c415ebfba121858fd47e2f814af26446e23\": rpc error: code = NotFound desc = could not find container \"ed9c26e1aec87aa8cf115ac613654c415ebfba121858fd47e2f814af26446e23\": container with ID starting with ed9c26e1aec87aa8cf115ac613654c415ebfba121858fd47e2f814af26446e23 not found: ID does not exist" Feb 02 11:30:51 crc kubenswrapper[4782]: I0202 11:30:51.469554 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-api-0"] Feb 02 11:30:51 crc kubenswrapper[4782]: I0202 11:30:51.491411 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-api-0"] Feb 02 11:30:51 crc kubenswrapper[4782]: I0202 11:30:51.510800 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-api-0"] Feb 02 11:30:51 crc kubenswrapper[4782]: E0202 11:30:51.511307 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c207707b-d720-4bfd-b93a-23ff4bc42674" containerName="manila-api" Feb 02 11:30:51 crc kubenswrapper[4782]: I0202 11:30:51.511323 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="c207707b-d720-4bfd-b93a-23ff4bc42674" containerName="manila-api" Feb 02 11:30:51 crc kubenswrapper[4782]: E0202 11:30:51.511354 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c207707b-d720-4bfd-b93a-23ff4bc42674" containerName="manila-api-log" Feb 02 11:30:51 crc kubenswrapper[4782]: I0202 11:30:51.511361 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="c207707b-d720-4bfd-b93a-23ff4bc42674" containerName="manila-api-log" Feb 02 11:30:51 crc kubenswrapper[4782]: I0202 11:30:51.511569 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="c207707b-d720-4bfd-b93a-23ff4bc42674" containerName="manila-api" Feb 02 11:30:51 crc kubenswrapper[4782]: I0202 11:30:51.511592 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="c207707b-d720-4bfd-b93a-23ff4bc42674" containerName="manila-api-log" Feb 02 11:30:51 crc kubenswrapper[4782]: I0202 11:30:51.512586 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-api-0" Feb 02 11:30:51 crc kubenswrapper[4782]: I0202 11:30:51.517945 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-manila-internal-svc" Feb 02 11:30:51 crc kubenswrapper[4782]: I0202 11:30:51.518147 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-manila-public-svc" Feb 02 11:30:51 crc kubenswrapper[4782]: I0202 11:30:51.541064 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-api-config-data" Feb 02 11:30:51 crc kubenswrapper[4782]: I0202 11:30:51.556243 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"] Feb 02 11:30:51 crc kubenswrapper[4782]: I0202 11:30:51.638163 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbslk\" (UniqueName: \"kubernetes.io/projected/2af78116-7ef2-4447-b552-7b0d2eaedf90-kube-api-access-lbslk\") pod \"manila-api-0\" (UID: \"2af78116-7ef2-4447-b552-7b0d2eaedf90\") " pod="openstack/manila-api-0" Feb 02 11:30:51 crc kubenswrapper[4782]: I0202 11:30:51.638248 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2af78116-7ef2-4447-b552-7b0d2eaedf90-config-data-custom\") pod \"manila-api-0\" (UID: \"2af78116-7ef2-4447-b552-7b0d2eaedf90\") " pod="openstack/manila-api-0" Feb 02 11:30:51 crc kubenswrapper[4782]: I0202 11:30:51.638374 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2af78116-7ef2-4447-b552-7b0d2eaedf90-internal-tls-certs\") pod \"manila-api-0\" (UID: \"2af78116-7ef2-4447-b552-7b0d2eaedf90\") " pod="openstack/manila-api-0" Feb 02 11:30:51 crc kubenswrapper[4782]: I0202 11:30:51.638414 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2af78116-7ef2-4447-b552-7b0d2eaedf90-scripts\") pod \"manila-api-0\" (UID: \"2af78116-7ef2-4447-b552-7b0d2eaedf90\") " pod="openstack/manila-api-0" Feb 02 11:30:51 crc kubenswrapper[4782]: I0202 11:30:51.638476 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2af78116-7ef2-4447-b552-7b0d2eaedf90-etc-machine-id\") pod \"manila-api-0\" (UID: \"2af78116-7ef2-4447-b552-7b0d2eaedf90\") " pod="openstack/manila-api-0" Feb 02 11:30:51 crc kubenswrapper[4782]: I0202 11:30:51.638501 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2af78116-7ef2-4447-b552-7b0d2eaedf90-logs\") pod \"manila-api-0\" (UID: \"2af78116-7ef2-4447-b552-7b0d2eaedf90\") " pod="openstack/manila-api-0" Feb 02 11:30:51 crc kubenswrapper[4782]: I0202 11:30:51.638549 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2af78116-7ef2-4447-b552-7b0d2eaedf90-public-tls-certs\") pod \"manila-api-0\" (UID: \"2af78116-7ef2-4447-b552-7b0d2eaedf90\") " pod="openstack/manila-api-0" Feb 02 11:30:51 crc kubenswrapper[4782]: I0202 11:30:51.638575 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/2af78116-7ef2-4447-b552-7b0d2eaedf90-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"2af78116-7ef2-4447-b552-7b0d2eaedf90\") " pod="openstack/manila-api-0" Feb 02 11:30:51 crc kubenswrapper[4782]: I0202 11:30:51.638626 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2af78116-7ef2-4447-b552-7b0d2eaedf90-config-data\") pod \"manila-api-0\" (UID: \"2af78116-7ef2-4447-b552-7b0d2eaedf90\") " pod="openstack/manila-api-0" Feb 02 11:30:51 crc kubenswrapper[4782]: I0202 11:30:51.740261 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2af78116-7ef2-4447-b552-7b0d2eaedf90-internal-tls-certs\") pod \"manila-api-0\" (UID: \"2af78116-7ef2-4447-b552-7b0d2eaedf90\") " pod="openstack/manila-api-0" Feb 02 11:30:51 crc kubenswrapper[4782]: I0202 11:30:51.740616 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2af78116-7ef2-4447-b552-7b0d2eaedf90-scripts\") pod \"manila-api-0\" (UID: \"2af78116-7ef2-4447-b552-7b0d2eaedf90\") " pod="openstack/manila-api-0" Feb 02 11:30:51 crc kubenswrapper[4782]: I0202 11:30:51.740690 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2af78116-7ef2-4447-b552-7b0d2eaedf90-etc-machine-id\") pod \"manila-api-0\" (UID: \"2af78116-7ef2-4447-b552-7b0d2eaedf90\") " pod="openstack/manila-api-0" Feb 02 11:30:51 crc kubenswrapper[4782]: I0202 11:30:51.740714 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2af78116-7ef2-4447-b552-7b0d2eaedf90-logs\") pod \"manila-api-0\" (UID: \"2af78116-7ef2-4447-b552-7b0d2eaedf90\") " pod="openstack/manila-api-0" Feb 02 11:30:51 crc kubenswrapper[4782]: I0202 11:30:51.740747 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2af78116-7ef2-4447-b552-7b0d2eaedf90-public-tls-certs\") pod \"manila-api-0\" (UID: \"2af78116-7ef2-4447-b552-7b0d2eaedf90\") " pod="openstack/manila-api-0" Feb 02 11:30:51 crc kubenswrapper[4782]: I0202 11:30:51.740770 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2af78116-7ef2-4447-b552-7b0d2eaedf90-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"2af78116-7ef2-4447-b552-7b0d2eaedf90\") " pod="openstack/manila-api-0" Feb 02 11:30:51 crc kubenswrapper[4782]: I0202 11:30:51.740815 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2af78116-7ef2-4447-b552-7b0d2eaedf90-config-data\") pod \"manila-api-0\" (UID: \"2af78116-7ef2-4447-b552-7b0d2eaedf90\") " pod="openstack/manila-api-0" Feb 02 11:30:51 crc kubenswrapper[4782]: I0202 11:30:51.740842 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lbslk\" (UniqueName: \"kubernetes.io/projected/2af78116-7ef2-4447-b552-7b0d2eaedf90-kube-api-access-lbslk\") pod \"manila-api-0\" (UID: \"2af78116-7ef2-4447-b552-7b0d2eaedf90\") " pod="openstack/manila-api-0" Feb 02 11:30:51 crc kubenswrapper[4782]: I0202 11:30:51.740884 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" 
(UniqueName: \"kubernetes.io/secret/2af78116-7ef2-4447-b552-7b0d2eaedf90-config-data-custom\") pod \"manila-api-0\" (UID: \"2af78116-7ef2-4447-b552-7b0d2eaedf90\") " pod="openstack/manila-api-0" Feb 02 11:30:51 crc kubenswrapper[4782]: I0202 11:30:51.741295 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2af78116-7ef2-4447-b552-7b0d2eaedf90-etc-machine-id\") pod \"manila-api-0\" (UID: \"2af78116-7ef2-4447-b552-7b0d2eaedf90\") " pod="openstack/manila-api-0" Feb 02 11:30:51 crc kubenswrapper[4782]: I0202 11:30:51.741998 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2af78116-7ef2-4447-b552-7b0d2eaedf90-logs\") pod \"manila-api-0\" (UID: \"2af78116-7ef2-4447-b552-7b0d2eaedf90\") " pod="openstack/manila-api-0" Feb 02 11:30:51 crc kubenswrapper[4782]: I0202 11:30:51.748852 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2af78116-7ef2-4447-b552-7b0d2eaedf90-config-data\") pod \"manila-api-0\" (UID: \"2af78116-7ef2-4447-b552-7b0d2eaedf90\") " pod="openstack/manila-api-0" Feb 02 11:30:51 crc kubenswrapper[4782]: I0202 11:30:51.749351 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2af78116-7ef2-4447-b552-7b0d2eaedf90-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"2af78116-7ef2-4447-b552-7b0d2eaedf90\") " pod="openstack/manila-api-0" Feb 02 11:30:51 crc kubenswrapper[4782]: I0202 11:30:51.750121 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2af78116-7ef2-4447-b552-7b0d2eaedf90-scripts\") pod \"manila-api-0\" (UID: \"2af78116-7ef2-4447-b552-7b0d2eaedf90\") " pod="openstack/manila-api-0" Feb 02 11:30:51 crc kubenswrapper[4782]: I0202 11:30:51.750264 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2af78116-7ef2-4447-b552-7b0d2eaedf90-internal-tls-certs\") pod \"manila-api-0\" (UID: \"2af78116-7ef2-4447-b552-7b0d2eaedf90\") " pod="openstack/manila-api-0" Feb 02 11:30:51 crc kubenswrapper[4782]: I0202 11:30:51.757417 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2af78116-7ef2-4447-b552-7b0d2eaedf90-public-tls-certs\") pod \"manila-api-0\" (UID: \"2af78116-7ef2-4447-b552-7b0d2eaedf90\") " pod="openstack/manila-api-0" Feb 02 11:30:51 crc kubenswrapper[4782]: I0202 11:30:51.758062 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2af78116-7ef2-4447-b552-7b0d2eaedf90-config-data-custom\") pod \"manila-api-0\" (UID: \"2af78116-7ef2-4447-b552-7b0d2eaedf90\") " pod="openstack/manila-api-0" Feb 02 11:30:51 crc kubenswrapper[4782]: I0202 11:30:51.764328 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lbslk\" (UniqueName: \"kubernetes.io/projected/2af78116-7ef2-4447-b552-7b0d2eaedf90-kube-api-access-lbslk\") pod \"manila-api-0\" (UID: \"2af78116-7ef2-4447-b552-7b0d2eaedf90\") " pod="openstack/manila-api-0" Feb 02 11:30:51 crc kubenswrapper[4782]: I0202 11:30:51.857545 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-api-0" Feb 02 11:30:52 crc kubenswrapper[4782]: I0202 11:30:52.763885 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"] Feb 02 11:30:52 crc kubenswrapper[4782]: W0202 11:30:52.783753 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2af78116_7ef2_4447_b552_7b0d2eaedf90.slice/crio-aa7cd4004f278864ceac2f47eb1a7e98655a2ef01dea180beba2fd51f325c5e9 WatchSource:0}: Error finding container aa7cd4004f278864ceac2f47eb1a7e98655a2ef01dea180beba2fd51f325c5e9: Status 404 returned error can't find the container with id aa7cd4004f278864ceac2f47eb1a7e98655a2ef01dea180beba2fd51f325c5e9 Feb 02 11:30:52 crc kubenswrapper[4782]: I0202 11:30:52.940359 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c207707b-d720-4bfd-b93a-23ff4bc42674" path="/var/lib/kubelet/pods/c207707b-d720-4bfd-b93a-23ff4bc42674/volumes" Feb 02 11:30:52 crc kubenswrapper[4782]: I0202 11:30:52.953231 4782 patch_prober.go:28] interesting pod/machine-config-daemon-bhdgk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 11:30:52 crc kubenswrapper[4782]: I0202 11:30:52.953305 4782 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 11:30:52 crc kubenswrapper[4782]: I0202 11:30:52.957079 4782 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" Feb 02 11:30:52 crc kubenswrapper[4782]: I0202 11:30:52.958902 4782 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0f610e1fc5d774ae98e6427843ebdfbe622219e84034ddfd24bafe67b92e53a2"} pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 11:30:52 crc kubenswrapper[4782]: I0202 11:30:52.958970 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" containerName="machine-config-daemon" containerID="cri-o://0f610e1fc5d774ae98e6427843ebdfbe622219e84034ddfd24bafe67b92e53a2" gracePeriod=600 Feb 02 11:30:53 crc kubenswrapper[4782]: I0202 11:30:53.186001 4782 generic.go:334] "Generic (PLEG): container finished" podID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" containerID="0f610e1fc5d774ae98e6427843ebdfbe622219e84034ddfd24bafe67b92e53a2" exitCode=0 Feb 02 11:30:53 crc kubenswrapper[4782]: I0202 11:30:53.186222 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" event={"ID":"7919e98f-cc47-4f3c-9c53-6313850ea7b8","Type":"ContainerDied","Data":"0f610e1fc5d774ae98e6427843ebdfbe622219e84034ddfd24bafe67b92e53a2"} Feb 02 11:30:53 crc kubenswrapper[4782]: I0202 11:30:53.186373 4782 scope.go:117] "RemoveContainer" containerID="6f3d837b63dfbe34932b87b521d0696398b6ad3538c5af0b35f7849a712f00d7" Feb 02 
11:30:53 crc kubenswrapper[4782]: I0202 11:30:53.188540 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"2af78116-7ef2-4447-b552-7b0d2eaedf90","Type":"ContainerStarted","Data":"aa7cd4004f278864ceac2f47eb1a7e98655a2ef01dea180beba2fd51f325c5e9"} Feb 02 11:30:53 crc kubenswrapper[4782]: E0202 11:30:53.292867 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:30:53 crc kubenswrapper[4782]: I0202 11:30:53.637854 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-78d997b864-7sqws" podUID="62cd5c24-315a-45c1-bca8-08696f1080cd" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.242:8443/dashboard/auth/login/?next=/dashboard/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 02 11:30:53 crc kubenswrapper[4782]: I0202 11:30:53.638268 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-78d997b864-7sqws" Feb 02 11:30:53 crc kubenswrapper[4782]: I0202 11:30:53.639145 4782 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="horizon" containerStatusID={"Type":"cri-o","ID":"09295effad802ea8438e358847ecb01f49091fb80c4d58e17763b7d006278a11"} pod="openstack/horizon-78d997b864-7sqws" containerMessage="Container horizon failed startup probe, will be restarted" Feb 02 11:30:53 crc kubenswrapper[4782]: I0202 11:30:53.639184 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-78d997b864-7sqws" podUID="62cd5c24-315a-45c1-bca8-08696f1080cd" containerName="horizon" containerID="cri-o://09295effad802ea8438e358847ecb01f49091fb80c4d58e17763b7d006278a11" gracePeriod=30 Feb 02 11:30:53 crc kubenswrapper[4782]: I0202 11:30:53.957943 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5665456548-9x6qh" podUID="306e30f3-8fe7-427e-b8ff-309a561dda88" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.243:8443/dashboard/auth/login/?next=/dashboard/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 02 11:30:53 crc kubenswrapper[4782]: I0202 11:30:53.958104 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-5665456548-9x6qh" Feb 02 11:30:53 crc kubenswrapper[4782]: I0202 11:30:53.959720 4782 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="horizon" containerStatusID={"Type":"cri-o","ID":"d67252276d8993c4ef2e41eb9882c821d54823662c482a91c5ee6a5d0ca0b08f"} pod="openstack/horizon-5665456548-9x6qh" containerMessage="Container horizon failed startup probe, will be restarted" Feb 02 11:30:53 crc kubenswrapper[4782]: I0202 11:30:53.959780 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5665456548-9x6qh" podUID="306e30f3-8fe7-427e-b8ff-309a561dda88" containerName="horizon" containerID="cri-o://d67252276d8993c4ef2e41eb9882c821d54823662c482a91c5ee6a5d0ca0b08f" gracePeriod=30 Feb 02 11:30:54 crc kubenswrapper[4782]: I0202 11:30:54.212452 4782 scope.go:117] "RemoveContainer" 
containerID="0f610e1fc5d774ae98e6427843ebdfbe622219e84034ddfd24bafe67b92e53a2" Feb 02 11:30:54 crc kubenswrapper[4782]: E0202 11:30:54.213276 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:30:54 crc kubenswrapper[4782]: I0202 11:30:54.222392 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"2af78116-7ef2-4447-b552-7b0d2eaedf90","Type":"ContainerStarted","Data":"6b4f2c34a0ea2ad76544a901a935d8f300f1c7b3face6a0d1253d41b02debbd3"} Feb 02 11:30:55 crc kubenswrapper[4782]: I0202 11:30:55.015885 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 02 11:30:55 crc kubenswrapper[4782]: I0202 11:30:55.016537 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="497f3642-7f3b-417c-aa52-2ed3ddbcac75" containerName="ceilometer-central-agent" containerID="cri-o://46a450a4fb1e112f420ad3a53c0cc5db48370b4a72c1654cd86cff0553015607" gracePeriod=30 Feb 02 11:30:55 crc kubenswrapper[4782]: I0202 11:30:55.016601 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="497f3642-7f3b-417c-aa52-2ed3ddbcac75" containerName="sg-core" containerID="cri-o://bbba23b489e8538f8cd4964c5dafc1fdbf720f48e53f9541bcc5bed2b196da47" gracePeriod=30 Feb 02 11:30:55 crc kubenswrapper[4782]: I0202 11:30:55.016713 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="497f3642-7f3b-417c-aa52-2ed3ddbcac75" containerName="ceilometer-notification-agent" containerID="cri-o://ff86f60fa3072a6f2315da2b189baa4e07115cbc11bced2bb2789b7a5ef65ffc" gracePeriod=30 Feb 02 11:30:55 crc kubenswrapper[4782]: I0202 11:30:55.016815 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="497f3642-7f3b-417c-aa52-2ed3ddbcac75" containerName="proxy-httpd" containerID="cri-o://f3d87444550f41bd00e5c006bf7055a00889453d07496499637cd29b4b017976" gracePeriod=30 Feb 02 11:30:55 crc kubenswrapper[4782]: I0202 11:30:55.236569 4782 generic.go:334] "Generic (PLEG): container finished" podID="497f3642-7f3b-417c-aa52-2ed3ddbcac75" containerID="bbba23b489e8538f8cd4964c5dafc1fdbf720f48e53f9541bcc5bed2b196da47" exitCode=2 Feb 02 11:30:55 crc kubenswrapper[4782]: I0202 11:30:55.236628 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"497f3642-7f3b-417c-aa52-2ed3ddbcac75","Type":"ContainerDied","Data":"bbba23b489e8538f8cd4964c5dafc1fdbf720f48e53f9541bcc5bed2b196da47"} Feb 02 11:30:55 crc kubenswrapper[4782]: I0202 11:30:55.247016 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"2af78116-7ef2-4447-b552-7b0d2eaedf90","Type":"ContainerStarted","Data":"1811fdaf97c38e80672531dd87e0f9e75eb189569eee430c0fc51673b5a6fd78"} Feb 02 11:30:55 crc kubenswrapper[4782]: I0202 11:30:55.247288 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/manila-api-0" Feb 02 11:30:55 crc kubenswrapper[4782]: I0202 11:30:55.295549 4782 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openstack/manila-api-0" podStartSLOduration=4.295529953 podStartE2EDuration="4.295529953s" podCreationTimestamp="2026-02-02 11:30:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 11:30:55.283289302 +0000 UTC m=+3135.167482018" watchObservedRunningTime="2026-02-02 11:30:55.295529953 +0000 UTC m=+3135.179722669" Feb 02 11:30:55 crc kubenswrapper[4782]: I0202 11:30:55.443339 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-scheduler-0" Feb 02 11:30:55 crc kubenswrapper[4782]: I0202 11:30:55.837690 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7d98f8586f-f76zz" Feb 02 11:30:56 crc kubenswrapper[4782]: I0202 11:30:55.960745 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79794c8ddf-x6sht"] Feb 02 11:30:56 crc kubenswrapper[4782]: I0202 11:30:55.960963 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-79794c8ddf-x6sht" podUID="2b42d8a9-18c7-4a14-86b0-ab5fd02a39d1" containerName="dnsmasq-dns" containerID="cri-o://699e07b8f64810448ea2047e5e2c614ebf51ac1e603c9d37e164e29edf07c224" gracePeriod=10 Feb 02 11:30:56 crc kubenswrapper[4782]: I0202 11:30:56.290888 4782 generic.go:334] "Generic (PLEG): container finished" podID="2b42d8a9-18c7-4a14-86b0-ab5fd02a39d1" containerID="699e07b8f64810448ea2047e5e2c614ebf51ac1e603c9d37e164e29edf07c224" exitCode=0 Feb 02 11:30:56 crc kubenswrapper[4782]: I0202 11:30:56.290967 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79794c8ddf-x6sht" event={"ID":"2b42d8a9-18c7-4a14-86b0-ab5fd02a39d1","Type":"ContainerDied","Data":"699e07b8f64810448ea2047e5e2c614ebf51ac1e603c9d37e164e29edf07c224"} Feb 02 11:30:56 crc kubenswrapper[4782]: I0202 11:30:56.302690 4782 generic.go:334] "Generic (PLEG): container finished" podID="497f3642-7f3b-417c-aa52-2ed3ddbcac75" containerID="f3d87444550f41bd00e5c006bf7055a00889453d07496499637cd29b4b017976" exitCode=0 Feb 02 11:30:56 crc kubenswrapper[4782]: I0202 11:30:56.302716 4782 generic.go:334] "Generic (PLEG): container finished" podID="497f3642-7f3b-417c-aa52-2ed3ddbcac75" containerID="ff86f60fa3072a6f2315da2b189baa4e07115cbc11bced2bb2789b7a5ef65ffc" exitCode=0 Feb 02 11:30:56 crc kubenswrapper[4782]: I0202 11:30:56.302740 4782 generic.go:334] "Generic (PLEG): container finished" podID="497f3642-7f3b-417c-aa52-2ed3ddbcac75" containerID="46a450a4fb1e112f420ad3a53c0cc5db48370b4a72c1654cd86cff0553015607" exitCode=0 Feb 02 11:30:56 crc kubenswrapper[4782]: I0202 11:30:56.303859 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"497f3642-7f3b-417c-aa52-2ed3ddbcac75","Type":"ContainerDied","Data":"f3d87444550f41bd00e5c006bf7055a00889453d07496499637cd29b4b017976"} Feb 02 11:30:56 crc kubenswrapper[4782]: I0202 11:30:56.303964 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"497f3642-7f3b-417c-aa52-2ed3ddbcac75","Type":"ContainerDied","Data":"ff86f60fa3072a6f2315da2b189baa4e07115cbc11bced2bb2789b7a5ef65ffc"} Feb 02 11:30:56 crc kubenswrapper[4782]: I0202 11:30:56.303981 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"497f3642-7f3b-417c-aa52-2ed3ddbcac75","Type":"ContainerDied","Data":"46a450a4fb1e112f420ad3a53c0cc5db48370b4a72c1654cd86cff0553015607"} Feb 02 11:30:57 
Feb 02 11:30:57 crc kubenswrapper[4782]: I0202 11:30:57.204747 4782 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-79794c8ddf-x6sht" podUID="2b42d8a9-18c7-4a14-86b0-ab5fd02a39d1" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.195:5353: connect: connection refused"
Feb 02 11:30:58 crc kubenswrapper[4782]: I0202 11:30:58.323264 4782 generic.go:334] "Generic (PLEG): container finished" podID="62cd5c24-315a-45c1-bca8-08696f1080cd" containerID="09295effad802ea8438e358847ecb01f49091fb80c4d58e17763b7d006278a11" exitCode=0
Feb 02 11:30:58 crc kubenswrapper[4782]: I0202 11:30:58.323364 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-78d997b864-7sqws" event={"ID":"62cd5c24-315a-45c1-bca8-08696f1080cd","Type":"ContainerDied","Data":"09295effad802ea8438e358847ecb01f49091fb80c4d58e17763b7d006278a11"}
Feb 02 11:30:59 crc kubenswrapper[4782]: I0202 11:30:59.338400 4782 generic.go:334] "Generic (PLEG): container finished" podID="306e30f3-8fe7-427e-b8ff-309a561dda88" containerID="d67252276d8993c4ef2e41eb9882c821d54823662c482a91c5ee6a5d0ca0b08f" exitCode=0
Feb 02 11:30:59 crc kubenswrapper[4782]: I0202 11:30:59.338902 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5665456548-9x6qh" event={"ID":"306e30f3-8fe7-427e-b8ff-309a561dda88","Type":"ContainerDied","Data":"d67252276d8993c4ef2e41eb9882c821d54823662c482a91c5ee6a5d0ca0b08f"}
Feb 02 11:30:59 crc kubenswrapper[4782]: I0202 11:30:59.750008 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 02 11:30:59 crc kubenswrapper[4782]: I0202 11:30:59.871563 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/497f3642-7f3b-417c-aa52-2ed3ddbcac75-run-httpd\") pod \"497f3642-7f3b-417c-aa52-2ed3ddbcac75\" (UID: \"497f3642-7f3b-417c-aa52-2ed3ddbcac75\") "
Feb 02 11:30:59 crc kubenswrapper[4782]: I0202 11:30:59.872016 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/497f3642-7f3b-417c-aa52-2ed3ddbcac75-scripts\") pod \"497f3642-7f3b-417c-aa52-2ed3ddbcac75\" (UID: \"497f3642-7f3b-417c-aa52-2ed3ddbcac75\") "
Feb 02 11:30:59 crc kubenswrapper[4782]: I0202 11:30:59.872781 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d8ztm\" (UniqueName: \"kubernetes.io/projected/497f3642-7f3b-417c-aa52-2ed3ddbcac75-kube-api-access-d8ztm\") pod \"497f3642-7f3b-417c-aa52-2ed3ddbcac75\" (UID: \"497f3642-7f3b-417c-aa52-2ed3ddbcac75\") "
Feb 02 11:30:59 crc kubenswrapper[4782]: I0202 11:30:59.872934 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/497f3642-7f3b-417c-aa52-2ed3ddbcac75-combined-ca-bundle\") pod \"497f3642-7f3b-417c-aa52-2ed3ddbcac75\" (UID: \"497f3642-7f3b-417c-aa52-2ed3ddbcac75\") "
Feb 02 11:30:59 crc kubenswrapper[4782]: I0202 11:30:59.872988 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/497f3642-7f3b-417c-aa52-2ed3ddbcac75-sg-core-conf-yaml\") pod \"497f3642-7f3b-417c-aa52-2ed3ddbcac75\" (UID: \"497f3642-7f3b-417c-aa52-2ed3ddbcac75\") "
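
The dnsmasq readiness failure above is a plain TCP probe: the kubelet dials the pod IP and port and treats a refused connection as failure. A minimal sketch; the 1s timeout is an assumption.

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Pod IP and port taken from the probe line above.
	conn, err := net.DialTimeout("tcp", "10.217.0.195:5353", time.Second)
	if err != nil {
		// e.g. "dial tcp 10.217.0.195:5353: connect: connection refused"
		fmt.Println("Probe failed probeType=Readiness:", err)
		return
	}
	conn.Close() // a completed handshake is all a TCP probe checks
	fmt.Println("readiness probe succeeded")
}
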
\"kubernetes.io/empty-dir/497f3642-7f3b-417c-aa52-2ed3ddbcac75-log-httpd\") pod \"497f3642-7f3b-417c-aa52-2ed3ddbcac75\" (UID: \"497f3642-7f3b-417c-aa52-2ed3ddbcac75\") " Feb 02 11:30:59 crc kubenswrapper[4782]: I0202 11:30:59.873137 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/497f3642-7f3b-417c-aa52-2ed3ddbcac75-config-data\") pod \"497f3642-7f3b-417c-aa52-2ed3ddbcac75\" (UID: \"497f3642-7f3b-417c-aa52-2ed3ddbcac75\") " Feb 02 11:30:59 crc kubenswrapper[4782]: I0202 11:30:59.873162 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/497f3642-7f3b-417c-aa52-2ed3ddbcac75-ceilometer-tls-certs\") pod \"497f3642-7f3b-417c-aa52-2ed3ddbcac75\" (UID: \"497f3642-7f3b-417c-aa52-2ed3ddbcac75\") " Feb 02 11:30:59 crc kubenswrapper[4782]: I0202 11:30:59.872520 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/497f3642-7f3b-417c-aa52-2ed3ddbcac75-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "497f3642-7f3b-417c-aa52-2ed3ddbcac75" (UID: "497f3642-7f3b-417c-aa52-2ed3ddbcac75"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:30:59 crc kubenswrapper[4782]: I0202 11:30:59.876182 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/497f3642-7f3b-417c-aa52-2ed3ddbcac75-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "497f3642-7f3b-417c-aa52-2ed3ddbcac75" (UID: "497f3642-7f3b-417c-aa52-2ed3ddbcac75"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:30:59 crc kubenswrapper[4782]: I0202 11:30:59.878483 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/497f3642-7f3b-417c-aa52-2ed3ddbcac75-scripts" (OuterVolumeSpecName: "scripts") pod "497f3642-7f3b-417c-aa52-2ed3ddbcac75" (UID: "497f3642-7f3b-417c-aa52-2ed3ddbcac75"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:30:59 crc kubenswrapper[4782]: I0202 11:30:59.890710 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/497f3642-7f3b-417c-aa52-2ed3ddbcac75-kube-api-access-d8ztm" (OuterVolumeSpecName: "kube-api-access-d8ztm") pod "497f3642-7f3b-417c-aa52-2ed3ddbcac75" (UID: "497f3642-7f3b-417c-aa52-2ed3ddbcac75"). InnerVolumeSpecName "kube-api-access-d8ztm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:30:59 crc kubenswrapper[4782]: I0202 11:30:59.977711 4782 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/497f3642-7f3b-417c-aa52-2ed3ddbcac75-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 02 11:30:59 crc kubenswrapper[4782]: I0202 11:30:59.978056 4782 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/497f3642-7f3b-417c-aa52-2ed3ddbcac75-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 02 11:30:59 crc kubenswrapper[4782]: I0202 11:30:59.978070 4782 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/497f3642-7f3b-417c-aa52-2ed3ddbcac75-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 11:30:59 crc kubenswrapper[4782]: I0202 11:30:59.978081 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d8ztm\" (UniqueName: \"kubernetes.io/projected/497f3642-7f3b-417c-aa52-2ed3ddbcac75-kube-api-access-d8ztm\") on node \"crc\" DevicePath \"\"" Feb 02 11:31:00 crc kubenswrapper[4782]: I0202 11:31:00.055961 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/497f3642-7f3b-417c-aa52-2ed3ddbcac75-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "497f3642-7f3b-417c-aa52-2ed3ddbcac75" (UID: "497f3642-7f3b-417c-aa52-2ed3ddbcac75"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:31:00 crc kubenswrapper[4782]: I0202 11:31:00.080550 4782 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/497f3642-7f3b-417c-aa52-2ed3ddbcac75-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 02 11:31:00 crc kubenswrapper[4782]: I0202 11:31:00.160024 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79794c8ddf-x6sht" Feb 02 11:31:00 crc kubenswrapper[4782]: I0202 11:31:00.160057 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/497f3642-7f3b-417c-aa52-2ed3ddbcac75-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "497f3642-7f3b-417c-aa52-2ed3ddbcac75" (UID: "497f3642-7f3b-417c-aa52-2ed3ddbcac75"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:31:00 crc kubenswrapper[4782]: I0202 11:31:00.177775 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/497f3642-7f3b-417c-aa52-2ed3ddbcac75-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "497f3642-7f3b-417c-aa52-2ed3ddbcac75" (UID: "497f3642-7f3b-417c-aa52-2ed3ddbcac75"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:31:00 crc kubenswrapper[4782]: I0202 11:31:00.196967 4782 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/497f3642-7f3b-417c-aa52-2ed3ddbcac75-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 11:31:00 crc kubenswrapper[4782]: I0202 11:31:00.196995 4782 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/497f3642-7f3b-417c-aa52-2ed3ddbcac75-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 11:31:00 crc kubenswrapper[4782]: I0202 11:31:00.241060 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/497f3642-7f3b-417c-aa52-2ed3ddbcac75-config-data" (OuterVolumeSpecName: "config-data") pod "497f3642-7f3b-417c-aa52-2ed3ddbcac75" (UID: "497f3642-7f3b-417c-aa52-2ed3ddbcac75"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:31:00 crc kubenswrapper[4782]: I0202 11:31:00.298048 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/2b42d8a9-18c7-4a14-86b0-ab5fd02a39d1-openstack-edpm-ipam\") pod \"2b42d8a9-18c7-4a14-86b0-ab5fd02a39d1\" (UID: \"2b42d8a9-18c7-4a14-86b0-ab5fd02a39d1\") " Feb 02 11:31:00 crc kubenswrapper[4782]: I0202 11:31:00.298106 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fks5j\" (UniqueName: \"kubernetes.io/projected/2b42d8a9-18c7-4a14-86b0-ab5fd02a39d1-kube-api-access-fks5j\") pod \"2b42d8a9-18c7-4a14-86b0-ab5fd02a39d1\" (UID: \"2b42d8a9-18c7-4a14-86b0-ab5fd02a39d1\") " Feb 02 11:31:00 crc kubenswrapper[4782]: I0202 11:31:00.298134 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2b42d8a9-18c7-4a14-86b0-ab5fd02a39d1-dns-svc\") pod \"2b42d8a9-18c7-4a14-86b0-ab5fd02a39d1\" (UID: \"2b42d8a9-18c7-4a14-86b0-ab5fd02a39d1\") " Feb 02 11:31:00 crc kubenswrapper[4782]: I0202 11:31:00.298170 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b42d8a9-18c7-4a14-86b0-ab5fd02a39d1-config\") pod \"2b42d8a9-18c7-4a14-86b0-ab5fd02a39d1\" (UID: \"2b42d8a9-18c7-4a14-86b0-ab5fd02a39d1\") " Feb 02 11:31:00 crc kubenswrapper[4782]: I0202 11:31:00.298204 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2b42d8a9-18c7-4a14-86b0-ab5fd02a39d1-ovsdbserver-nb\") pod \"2b42d8a9-18c7-4a14-86b0-ab5fd02a39d1\" (UID: \"2b42d8a9-18c7-4a14-86b0-ab5fd02a39d1\") " Feb 02 11:31:00 crc kubenswrapper[4782]: I0202 11:31:00.298331 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2b42d8a9-18c7-4a14-86b0-ab5fd02a39d1-ovsdbserver-sb\") pod \"2b42d8a9-18c7-4a14-86b0-ab5fd02a39d1\" (UID: \"2b42d8a9-18c7-4a14-86b0-ab5fd02a39d1\") " Feb 02 11:31:00 crc kubenswrapper[4782]: I0202 11:31:00.298798 4782 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/497f3642-7f3b-417c-aa52-2ed3ddbcac75-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 11:31:00 crc kubenswrapper[4782]: I0202 11:31:00.313950 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/2b42d8a9-18c7-4a14-86b0-ab5fd02a39d1-kube-api-access-fks5j" (OuterVolumeSpecName: "kube-api-access-fks5j") pod "2b42d8a9-18c7-4a14-86b0-ab5fd02a39d1" (UID: "2b42d8a9-18c7-4a14-86b0-ab5fd02a39d1"). InnerVolumeSpecName "kube-api-access-fks5j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:31:00 crc kubenswrapper[4782]: I0202 11:31:00.363086 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79794c8ddf-x6sht" event={"ID":"2b42d8a9-18c7-4a14-86b0-ab5fd02a39d1","Type":"ContainerDied","Data":"65ae78131f6705fae79d726446209449b899ee5d5e41b756ff8cdcf0ea494dca"} Feb 02 11:31:00 crc kubenswrapper[4782]: I0202 11:31:00.363117 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79794c8ddf-x6sht" Feb 02 11:31:00 crc kubenswrapper[4782]: I0202 11:31:00.363760 4782 scope.go:117] "RemoveContainer" containerID="699e07b8f64810448ea2047e5e2c614ebf51ac1e603c9d37e164e29edf07c224" Feb 02 11:31:00 crc kubenswrapper[4782]: I0202 11:31:00.376522 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"497f3642-7f3b-417c-aa52-2ed3ddbcac75","Type":"ContainerDied","Data":"64999423f31f3794e6c490461b475ade46dbc3e6082014de1c1140111ca7c591"} Feb 02 11:31:00 crc kubenswrapper[4782]: I0202 11:31:00.376574 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 11:31:00 crc kubenswrapper[4782]: I0202 11:31:00.377389 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b42d8a9-18c7-4a14-86b0-ab5fd02a39d1-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "2b42d8a9-18c7-4a14-86b0-ab5fd02a39d1" (UID: "2b42d8a9-18c7-4a14-86b0-ab5fd02a39d1"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:31:00 crc kubenswrapper[4782]: I0202 11:31:00.382892 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b42d8a9-18c7-4a14-86b0-ab5fd02a39d1-config" (OuterVolumeSpecName: "config") pod "2b42d8a9-18c7-4a14-86b0-ab5fd02a39d1" (UID: "2b42d8a9-18c7-4a14-86b0-ab5fd02a39d1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:31:00 crc kubenswrapper[4782]: I0202 11:31:00.382981 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5665456548-9x6qh" event={"ID":"306e30f3-8fe7-427e-b8ff-309a561dda88","Type":"ContainerStarted","Data":"6a4b12ba3f23d6e7e4363c3be7c096d829988d83db73bb8c3d10e0efdb2f7cc6"} Feb 02 11:31:00 crc kubenswrapper[4782]: I0202 11:31:00.382528 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b42d8a9-18c7-4a14-86b0-ab5fd02a39d1-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2b42d8a9-18c7-4a14-86b0-ab5fd02a39d1" (UID: "2b42d8a9-18c7-4a14-86b0-ab5fd02a39d1"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:31:00 crc kubenswrapper[4782]: I0202 11:31:00.398478 4782 scope.go:117] "RemoveContainer" containerID="6f20530bb72c77a28a0b759dfecb8abeba2d4c4c9ec2b1e203807cb88c440c27" Feb 02 11:31:00 crc kubenswrapper[4782]: I0202 11:31:00.400433 4782 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2b42d8a9-18c7-4a14-86b0-ab5fd02a39d1-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 02 11:31:00 crc kubenswrapper[4782]: I0202 11:31:00.400452 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fks5j\" (UniqueName: \"kubernetes.io/projected/2b42d8a9-18c7-4a14-86b0-ab5fd02a39d1-kube-api-access-fks5j\") on node \"crc\" DevicePath \"\"" Feb 02 11:31:00 crc kubenswrapper[4782]: I0202 11:31:00.400462 4782 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2b42d8a9-18c7-4a14-86b0-ab5fd02a39d1-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 02 11:31:00 crc kubenswrapper[4782]: I0202 11:31:00.400472 4782 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b42d8a9-18c7-4a14-86b0-ab5fd02a39d1-config\") on node \"crc\" DevicePath \"\"" Feb 02 11:31:00 crc kubenswrapper[4782]: I0202 11:31:00.404432 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-78d997b864-7sqws" event={"ID":"62cd5c24-315a-45c1-bca8-08696f1080cd","Type":"ContainerStarted","Data":"68394bc34f7dce9e271ce3f95971bd209e0e3a798e5433df9c66b03578b88eae"} Feb 02 11:31:00 crc kubenswrapper[4782]: I0202 11:31:00.406124 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b42d8a9-18c7-4a14-86b0-ab5fd02a39d1-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "2b42d8a9-18c7-4a14-86b0-ab5fd02a39d1" (UID: "2b42d8a9-18c7-4a14-86b0-ab5fd02a39d1"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:31:00 crc kubenswrapper[4782]: I0202 11:31:00.444485 4782 scope.go:117] "RemoveContainer" containerID="f3d87444550f41bd00e5c006bf7055a00889453d07496499637cd29b4b017976" Feb 02 11:31:00 crc kubenswrapper[4782]: I0202 11:31:00.445585 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b42d8a9-18c7-4a14-86b0-ab5fd02a39d1-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "2b42d8a9-18c7-4a14-86b0-ab5fd02a39d1" (UID: "2b42d8a9-18c7-4a14-86b0-ab5fd02a39d1"). InnerVolumeSpecName "openstack-edpm-ipam". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:31:00 crc kubenswrapper[4782]: I0202 11:31:00.497609 4782 scope.go:117] "RemoveContainer" containerID="bbba23b489e8538f8cd4964c5dafc1fdbf720f48e53f9541bcc5bed2b196da47" Feb 02 11:31:00 crc kubenswrapper[4782]: I0202 11:31:00.504749 4782 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/2b42d8a9-18c7-4a14-86b0-ab5fd02a39d1-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 02 11:31:00 crc kubenswrapper[4782]: I0202 11:31:00.504778 4782 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2b42d8a9-18c7-4a14-86b0-ab5fd02a39d1-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 02 11:31:00 crc kubenswrapper[4782]: I0202 11:31:00.514565 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 02 11:31:00 crc kubenswrapper[4782]: I0202 11:31:00.536873 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 02 11:31:00 crc kubenswrapper[4782]: I0202 11:31:00.545071 4782 scope.go:117] "RemoveContainer" containerID="ff86f60fa3072a6f2315da2b189baa4e07115cbc11bced2bb2789b7a5ef65ffc" Feb 02 11:31:00 crc kubenswrapper[4782]: I0202 11:31:00.546705 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 02 11:31:00 crc kubenswrapper[4782]: E0202 11:31:00.547206 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="497f3642-7f3b-417c-aa52-2ed3ddbcac75" containerName="sg-core" Feb 02 11:31:00 crc kubenswrapper[4782]: I0202 11:31:00.547310 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="497f3642-7f3b-417c-aa52-2ed3ddbcac75" containerName="sg-core" Feb 02 11:31:00 crc kubenswrapper[4782]: E0202 11:31:00.547375 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b42d8a9-18c7-4a14-86b0-ab5fd02a39d1" containerName="init" Feb 02 11:31:00 crc kubenswrapper[4782]: I0202 11:31:00.547461 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b42d8a9-18c7-4a14-86b0-ab5fd02a39d1" containerName="init" Feb 02 11:31:00 crc kubenswrapper[4782]: E0202 11:31:00.547516 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="497f3642-7f3b-417c-aa52-2ed3ddbcac75" containerName="ceilometer-central-agent" Feb 02 11:31:00 crc kubenswrapper[4782]: I0202 11:31:00.547567 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="497f3642-7f3b-417c-aa52-2ed3ddbcac75" containerName="ceilometer-central-agent" Feb 02 11:31:00 crc kubenswrapper[4782]: E0202 11:31:00.547622 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="497f3642-7f3b-417c-aa52-2ed3ddbcac75" containerName="ceilometer-notification-agent" Feb 02 11:31:00 crc kubenswrapper[4782]: I0202 11:31:00.547704 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="497f3642-7f3b-417c-aa52-2ed3ddbcac75" containerName="ceilometer-notification-agent" Feb 02 11:31:00 crc kubenswrapper[4782]: E0202 11:31:00.547763 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="497f3642-7f3b-417c-aa52-2ed3ddbcac75" containerName="proxy-httpd" Feb 02 11:31:00 crc kubenswrapper[4782]: I0202 11:31:00.547814 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="497f3642-7f3b-417c-aa52-2ed3ddbcac75" containerName="proxy-httpd" Feb 02 11:31:00 crc kubenswrapper[4782]: E0202 11:31:00.547869 4782 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="2b42d8a9-18c7-4a14-86b0-ab5fd02a39d1" containerName="dnsmasq-dns" Feb 02 11:31:00 crc kubenswrapper[4782]: I0202 11:31:00.547917 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b42d8a9-18c7-4a14-86b0-ab5fd02a39d1" containerName="dnsmasq-dns" Feb 02 11:31:00 crc kubenswrapper[4782]: I0202 11:31:00.548186 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="497f3642-7f3b-417c-aa52-2ed3ddbcac75" containerName="ceilometer-notification-agent" Feb 02 11:31:00 crc kubenswrapper[4782]: I0202 11:31:00.548256 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="497f3642-7f3b-417c-aa52-2ed3ddbcac75" containerName="proxy-httpd" Feb 02 11:31:00 crc kubenswrapper[4782]: I0202 11:31:00.548320 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="497f3642-7f3b-417c-aa52-2ed3ddbcac75" containerName="sg-core" Feb 02 11:31:00 crc kubenswrapper[4782]: I0202 11:31:00.548408 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="497f3642-7f3b-417c-aa52-2ed3ddbcac75" containerName="ceilometer-central-agent" Feb 02 11:31:00 crc kubenswrapper[4782]: I0202 11:31:00.548488 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b42d8a9-18c7-4a14-86b0-ab5fd02a39d1" containerName="dnsmasq-dns" Feb 02 11:31:00 crc kubenswrapper[4782]: I0202 11:31:00.550865 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 11:31:00 crc kubenswrapper[4782]: I0202 11:31:00.556182 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 02 11:31:00 crc kubenswrapper[4782]: I0202 11:31:00.556238 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 02 11:31:00 crc kubenswrapper[4782]: I0202 11:31:00.556465 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 02 11:31:00 crc kubenswrapper[4782]: I0202 11:31:00.562612 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 02 11:31:00 crc kubenswrapper[4782]: I0202 11:31:00.609583 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e675b2b1-c562-4e86-a104-9d16b83b8dc3-log-httpd\") pod \"ceilometer-0\" (UID: \"e675b2b1-c562-4e86-a104-9d16b83b8dc3\") " pod="openstack/ceilometer-0" Feb 02 11:31:00 crc kubenswrapper[4782]: I0202 11:31:00.609756 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7z4pj\" (UniqueName: \"kubernetes.io/projected/e675b2b1-c562-4e86-a104-9d16b83b8dc3-kube-api-access-7z4pj\") pod \"ceilometer-0\" (UID: \"e675b2b1-c562-4e86-a104-9d16b83b8dc3\") " pod="openstack/ceilometer-0" Feb 02 11:31:00 crc kubenswrapper[4782]: I0202 11:31:00.609874 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e675b2b1-c562-4e86-a104-9d16b83b8dc3-run-httpd\") pod \"ceilometer-0\" (UID: \"e675b2b1-c562-4e86-a104-9d16b83b8dc3\") " pod="openstack/ceilometer-0" Feb 02 11:31:00 crc kubenswrapper[4782]: I0202 11:31:00.609979 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e675b2b1-c562-4e86-a104-9d16b83b8dc3-config-data\") pod \"ceilometer-0\" (UID: 
\"e675b2b1-c562-4e86-a104-9d16b83b8dc3\") " pod="openstack/ceilometer-0" Feb 02 11:31:00 crc kubenswrapper[4782]: I0202 11:31:00.610128 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/e675b2b1-c562-4e86-a104-9d16b83b8dc3-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"e675b2b1-c562-4e86-a104-9d16b83b8dc3\") " pod="openstack/ceilometer-0" Feb 02 11:31:00 crc kubenswrapper[4782]: I0202 11:31:00.610432 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e675b2b1-c562-4e86-a104-9d16b83b8dc3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e675b2b1-c562-4e86-a104-9d16b83b8dc3\") " pod="openstack/ceilometer-0" Feb 02 11:31:00 crc kubenswrapper[4782]: I0202 11:31:00.610563 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e675b2b1-c562-4e86-a104-9d16b83b8dc3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e675b2b1-c562-4e86-a104-9d16b83b8dc3\") " pod="openstack/ceilometer-0" Feb 02 11:31:00 crc kubenswrapper[4782]: I0202 11:31:00.610967 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e675b2b1-c562-4e86-a104-9d16b83b8dc3-scripts\") pod \"ceilometer-0\" (UID: \"e675b2b1-c562-4e86-a104-9d16b83b8dc3\") " pod="openstack/ceilometer-0" Feb 02 11:31:00 crc kubenswrapper[4782]: I0202 11:31:00.715861 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e675b2b1-c562-4e86-a104-9d16b83b8dc3-run-httpd\") pod \"ceilometer-0\" (UID: \"e675b2b1-c562-4e86-a104-9d16b83b8dc3\") " pod="openstack/ceilometer-0" Feb 02 11:31:00 crc kubenswrapper[4782]: I0202 11:31:00.715924 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e675b2b1-c562-4e86-a104-9d16b83b8dc3-config-data\") pod \"ceilometer-0\" (UID: \"e675b2b1-c562-4e86-a104-9d16b83b8dc3\") " pod="openstack/ceilometer-0" Feb 02 11:31:00 crc kubenswrapper[4782]: I0202 11:31:00.715987 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/e675b2b1-c562-4e86-a104-9d16b83b8dc3-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"e675b2b1-c562-4e86-a104-9d16b83b8dc3\") " pod="openstack/ceilometer-0" Feb 02 11:31:00 crc kubenswrapper[4782]: I0202 11:31:00.716087 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e675b2b1-c562-4e86-a104-9d16b83b8dc3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e675b2b1-c562-4e86-a104-9d16b83b8dc3\") " pod="openstack/ceilometer-0" Feb 02 11:31:00 crc kubenswrapper[4782]: I0202 11:31:00.716115 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e675b2b1-c562-4e86-a104-9d16b83b8dc3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e675b2b1-c562-4e86-a104-9d16b83b8dc3\") " pod="openstack/ceilometer-0" Feb 02 11:31:00 crc kubenswrapper[4782]: I0202 11:31:00.716148 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/e675b2b1-c562-4e86-a104-9d16b83b8dc3-scripts\") pod \"ceilometer-0\" (UID: \"e675b2b1-c562-4e86-a104-9d16b83b8dc3\") " pod="openstack/ceilometer-0" Feb 02 11:31:00 crc kubenswrapper[4782]: I0202 11:31:00.716188 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e675b2b1-c562-4e86-a104-9d16b83b8dc3-log-httpd\") pod \"ceilometer-0\" (UID: \"e675b2b1-c562-4e86-a104-9d16b83b8dc3\") " pod="openstack/ceilometer-0" Feb 02 11:31:00 crc kubenswrapper[4782]: I0202 11:31:00.716213 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7z4pj\" (UniqueName: \"kubernetes.io/projected/e675b2b1-c562-4e86-a104-9d16b83b8dc3-kube-api-access-7z4pj\") pod \"ceilometer-0\" (UID: \"e675b2b1-c562-4e86-a104-9d16b83b8dc3\") " pod="openstack/ceilometer-0" Feb 02 11:31:00 crc kubenswrapper[4782]: I0202 11:31:00.717335 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e675b2b1-c562-4e86-a104-9d16b83b8dc3-log-httpd\") pod \"ceilometer-0\" (UID: \"e675b2b1-c562-4e86-a104-9d16b83b8dc3\") " pod="openstack/ceilometer-0" Feb 02 11:31:00 crc kubenswrapper[4782]: I0202 11:31:00.717705 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e675b2b1-c562-4e86-a104-9d16b83b8dc3-run-httpd\") pod \"ceilometer-0\" (UID: \"e675b2b1-c562-4e86-a104-9d16b83b8dc3\") " pod="openstack/ceilometer-0" Feb 02 11:31:00 crc kubenswrapper[4782]: I0202 11:31:00.722139 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e675b2b1-c562-4e86-a104-9d16b83b8dc3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e675b2b1-c562-4e86-a104-9d16b83b8dc3\") " pod="openstack/ceilometer-0" Feb 02 11:31:00 crc kubenswrapper[4782]: I0202 11:31:00.725956 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e675b2b1-c562-4e86-a104-9d16b83b8dc3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e675b2b1-c562-4e86-a104-9d16b83b8dc3\") " pod="openstack/ceilometer-0" Feb 02 11:31:00 crc kubenswrapper[4782]: I0202 11:31:00.742368 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e675b2b1-c562-4e86-a104-9d16b83b8dc3-config-data\") pod \"ceilometer-0\" (UID: \"e675b2b1-c562-4e86-a104-9d16b83b8dc3\") " pod="openstack/ceilometer-0" Feb 02 11:31:00 crc kubenswrapper[4782]: I0202 11:31:00.752656 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e675b2b1-c562-4e86-a104-9d16b83b8dc3-scripts\") pod \"ceilometer-0\" (UID: \"e675b2b1-c562-4e86-a104-9d16b83b8dc3\") " pod="openstack/ceilometer-0" Feb 02 11:31:00 crc kubenswrapper[4782]: I0202 11:31:00.759341 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7z4pj\" (UniqueName: \"kubernetes.io/projected/e675b2b1-c562-4e86-a104-9d16b83b8dc3-kube-api-access-7z4pj\") pod \"ceilometer-0\" (UID: \"e675b2b1-c562-4e86-a104-9d16b83b8dc3\") " pod="openstack/ceilometer-0" Feb 02 11:31:00 crc kubenswrapper[4782]: I0202 11:31:00.759457 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/e675b2b1-c562-4e86-a104-9d16b83b8dc3-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"e675b2b1-c562-4e86-a104-9d16b83b8dc3\") " pod="openstack/ceilometer-0" Feb 02 11:31:00 crc kubenswrapper[4782]: I0202 11:31:00.844407 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="497f3642-7f3b-417c-aa52-2ed3ddbcac75" path="/var/lib/kubelet/pods/497f3642-7f3b-417c-aa52-2ed3ddbcac75/volumes" Feb 02 11:31:00 crc kubenswrapper[4782]: I0202 11:31:00.886223 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 11:31:00 crc kubenswrapper[4782]: I0202 11:31:00.901826 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79794c8ddf-x6sht"] Feb 02 11:31:00 crc kubenswrapper[4782]: I0202 11:31:00.907383 4782 scope.go:117] "RemoveContainer" containerID="46a450a4fb1e112f420ad3a53c0cc5db48370b4a72c1654cd86cff0553015607" Feb 02 11:31:00 crc kubenswrapper[4782]: I0202 11:31:00.916149 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-79794c8ddf-x6sht"] Feb 02 11:31:01 crc kubenswrapper[4782]: I0202 11:31:01.420584 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"be03de2e-2ddc-4cb1-b5be-7adb4add6582","Type":"ContainerStarted","Data":"8a44c5cbf74f9422a24df475561c5c4c5a1cf6d5939f12d2814da14061073213"} Feb 02 11:31:01 crc kubenswrapper[4782]: I0202 11:31:01.516457 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 02 11:31:02 crc kubenswrapper[4782]: I0202 11:31:02.445587 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"be03de2e-2ddc-4cb1-b5be-7adb4add6582","Type":"ContainerStarted","Data":"211219d562c4afafdd9cf2cd2a4262805a19e42c0355d453188162d50c32a834"} Feb 02 11:31:02 crc kubenswrapper[4782]: I0202 11:31:02.450678 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e675b2b1-c562-4e86-a104-9d16b83b8dc3","Type":"ContainerStarted","Data":"1a51c45b57e3fe68cc34a30ee9e80c20fa6fe136d2e6a3325715aaa301b5e1bb"} Feb 02 11:31:02 crc kubenswrapper[4782]: I0202 11:31:02.477140 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-share-share1-0" podStartSLOduration=4.37734577 podStartE2EDuration="17.477115566s" podCreationTimestamp="2026-02-02 11:30:45 +0000 UTC" firstStartedPulling="2026-02-02 11:30:46.660043326 +0000 UTC m=+3126.544236052" lastFinishedPulling="2026-02-02 11:30:59.759813132 +0000 UTC m=+3139.644005848" observedRunningTime="2026-02-02 11:31:02.47342976 +0000 UTC m=+3142.357622476" watchObservedRunningTime="2026-02-02 11:31:02.477115566 +0000 UTC m=+3142.361308282" Feb 02 11:31:02 crc kubenswrapper[4782]: I0202 11:31:02.838819 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b42d8a9-18c7-4a14-86b0-ab5fd02a39d1" path="/var/lib/kubelet/pods/2b42d8a9-18c7-4a14-86b0-ab5fd02a39d1/volumes" Feb 02 11:31:03 crc kubenswrapper[4782]: I0202 11:31:03.485184 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e675b2b1-c562-4e86-a104-9d16b83b8dc3","Type":"ContainerStarted","Data":"a83eaaad5041e1c388995b790d6d930c9e79d82d0046f3781afcaf3d01e30a1c"} Feb 02 11:31:03 crc kubenswrapper[4782]: I0202 11:31:03.485627 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"e675b2b1-c562-4e86-a104-9d16b83b8dc3","Type":"ContainerStarted","Data":"08e57690c4d909f0b5f036a03ea4a1f2836c02d9bbaf7d8489722bc25e1e66d3"} Feb 02 11:31:04 crc kubenswrapper[4782]: I0202 11:31:04.504192 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e675b2b1-c562-4e86-a104-9d16b83b8dc3","Type":"ContainerStarted","Data":"c25e68e145ab381717c489e5171a63c8989f000516f9f3ea8306406161e88ffa"} Feb 02 11:31:05 crc kubenswrapper[4782]: I0202 11:31:05.418657 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-share-share1-0" Feb 02 11:31:06 crc kubenswrapper[4782]: I0202 11:31:06.829796 4782 scope.go:117] "RemoveContainer" containerID="0f610e1fc5d774ae98e6427843ebdfbe622219e84034ddfd24bafe67b92e53a2" Feb 02 11:31:06 crc kubenswrapper[4782]: E0202 11:31:06.831046 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:31:07 crc kubenswrapper[4782]: I0202 11:31:07.531721 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e675b2b1-c562-4e86-a104-9d16b83b8dc3","Type":"ContainerStarted","Data":"ddbbc7d2a92ff63da784e8d9a314b4eca721d7467e7ab0b26dc2ddb295d90307"} Feb 02 11:31:07 crc kubenswrapper[4782]: I0202 11:31:07.532025 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 02 11:31:07 crc kubenswrapper[4782]: I0202 11:31:07.560184 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.354439471 podStartE2EDuration="7.560162724s" podCreationTimestamp="2026-02-02 11:31:00 +0000 UTC" firstStartedPulling="2026-02-02 11:31:01.524366781 +0000 UTC m=+3141.408559497" lastFinishedPulling="2026-02-02 11:31:06.730090034 +0000 UTC m=+3146.614282750" observedRunningTime="2026-02-02 11:31:07.551227237 +0000 UTC m=+3147.435419953" watchObservedRunningTime="2026-02-02 11:31:07.560162724 +0000 UTC m=+3147.444355440" Feb 02 11:31:08 crc kubenswrapper[4782]: I0202 11:31:08.605619 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-scheduler-0" Feb 02 11:31:08 crc kubenswrapper[4782]: I0202 11:31:08.627137 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-78d997b864-7sqws" Feb 02 11:31:08 crc kubenswrapper[4782]: I0202 11:31:08.627188 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-78d997b864-7sqws" Feb 02 11:31:08 crc kubenswrapper[4782]: I0202 11:31:08.648666 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-scheduler-0"] Feb 02 11:31:08 crc kubenswrapper[4782]: I0202 11:31:08.945472 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-5665456548-9x6qh" Feb 02 11:31:08 crc kubenswrapper[4782]: I0202 11:31:08.945922 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-5665456548-9x6qh" Feb 02 11:31:09 crc kubenswrapper[4782]: I0202 11:31:09.550193 4782 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/manila-scheduler-0" podUID="9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a" containerName="manila-scheduler" containerID="cri-o://b497c2954dd3a78c6953a3ffd64222499e19e8de5359aac6f81a3ec8c829fd03" gracePeriod=30 Feb 02 11:31:09 crc kubenswrapper[4782]: I0202 11:31:09.550976 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-scheduler-0" podUID="9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a" containerName="probe" containerID="cri-o://cc68c0f777fc5436c540b425a3326b6391ec6d6b6b3b5fe43f8e31bcd626fc43" gracePeriod=30 Feb 02 11:31:10 crc kubenswrapper[4782]: I0202 11:31:10.561897 4782 generic.go:334] "Generic (PLEG): container finished" podID="9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a" containerID="cc68c0f777fc5436c540b425a3326b6391ec6d6b6b3b5fe43f8e31bcd626fc43" exitCode=0 Feb 02 11:31:10 crc kubenswrapper[4782]: I0202 11:31:10.561975 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a","Type":"ContainerDied","Data":"cc68c0f777fc5436c540b425a3326b6391ec6d6b6b3b5fe43f8e31bcd626fc43"} Feb 02 11:31:11 crc kubenswrapper[4782]: I0202 11:31:11.618965 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 02 11:31:11 crc kubenswrapper[4782]: I0202 11:31:11.619295 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e675b2b1-c562-4e86-a104-9d16b83b8dc3" containerName="ceilometer-central-agent" containerID="cri-o://08e57690c4d909f0b5f036a03ea4a1f2836c02d9bbaf7d8489722bc25e1e66d3" gracePeriod=30 Feb 02 11:31:11 crc kubenswrapper[4782]: I0202 11:31:11.619378 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e675b2b1-c562-4e86-a104-9d16b83b8dc3" containerName="ceilometer-notification-agent" containerID="cri-o://a83eaaad5041e1c388995b790d6d930c9e79d82d0046f3781afcaf3d01e30a1c" gracePeriod=30 Feb 02 11:31:11 crc kubenswrapper[4782]: I0202 11:31:11.619394 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e675b2b1-c562-4e86-a104-9d16b83b8dc3" containerName="sg-core" containerID="cri-o://c25e68e145ab381717c489e5171a63c8989f000516f9f3ea8306406161e88ffa" gracePeriod=30 Feb 02 11:31:11 crc kubenswrapper[4782]: I0202 11:31:11.619567 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e675b2b1-c562-4e86-a104-9d16b83b8dc3" containerName="proxy-httpd" containerID="cri-o://ddbbc7d2a92ff63da784e8d9a314b4eca721d7467e7ab0b26dc2ddb295d90307" gracePeriod=30 Feb 02 11:31:12 crc kubenswrapper[4782]: I0202 11:31:12.587370 4782 generic.go:334] "Generic (PLEG): container finished" podID="9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a" containerID="b497c2954dd3a78c6953a3ffd64222499e19e8de5359aac6f81a3ec8c829fd03" exitCode=0 Feb 02 11:31:12 crc kubenswrapper[4782]: I0202 11:31:12.587764 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a","Type":"ContainerDied","Data":"b497c2954dd3a78c6953a3ffd64222499e19e8de5359aac6f81a3ec8c829fd03"} Feb 02 11:31:12 crc kubenswrapper[4782]: I0202 11:31:12.599859 4782 generic.go:334] "Generic (PLEG): container finished" podID="e675b2b1-c562-4e86-a104-9d16b83b8dc3" containerID="ddbbc7d2a92ff63da784e8d9a314b4eca721d7467e7ab0b26dc2ddb295d90307" exitCode=0 Feb 02 11:31:12 crc kubenswrapper[4782]: I0202 11:31:12.599906 4782 
Feb 02 11:31:12 crc kubenswrapper[4782]: I0202 11:31:12.599906 4782 generic.go:334] "Generic (PLEG): container finished" podID="e675b2b1-c562-4e86-a104-9d16b83b8dc3" containerID="c25e68e145ab381717c489e5171a63c8989f000516f9f3ea8306406161e88ffa" exitCode=2
Feb 02 11:31:12 crc kubenswrapper[4782]: I0202 11:31:12.599918 4782 generic.go:334] "Generic (PLEG): container finished" podID="e675b2b1-c562-4e86-a104-9d16b83b8dc3" containerID="a83eaaad5041e1c388995b790d6d930c9e79d82d0046f3781afcaf3d01e30a1c" exitCode=0
Feb 02 11:31:12 crc kubenswrapper[4782]: I0202 11:31:12.599932 4782 generic.go:334] "Generic (PLEG): container finished" podID="e675b2b1-c562-4e86-a104-9d16b83b8dc3" containerID="08e57690c4d909f0b5f036a03ea4a1f2836c02d9bbaf7d8489722bc25e1e66d3" exitCode=0
Feb 02 11:31:12 crc kubenswrapper[4782]: I0202 11:31:12.599956 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e675b2b1-c562-4e86-a104-9d16b83b8dc3","Type":"ContainerDied","Data":"ddbbc7d2a92ff63da784e8d9a314b4eca721d7467e7ab0b26dc2ddb295d90307"}
Feb 02 11:31:12 crc kubenswrapper[4782]: I0202 11:31:12.599985 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e675b2b1-c562-4e86-a104-9d16b83b8dc3","Type":"ContainerDied","Data":"c25e68e145ab381717c489e5171a63c8989f000516f9f3ea8306406161e88ffa"}
Feb 02 11:31:12 crc kubenswrapper[4782]: I0202 11:31:12.599999 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e675b2b1-c562-4e86-a104-9d16b83b8dc3","Type":"ContainerDied","Data":"a83eaaad5041e1c388995b790d6d930c9e79d82d0046f3781afcaf3d01e30a1c"}
Feb 02 11:31:12 crc kubenswrapper[4782]: I0202 11:31:12.600012 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e675b2b1-c562-4e86-a104-9d16b83b8dc3","Type":"ContainerDied","Data":"08e57690c4d909f0b5f036a03ea4a1f2836c02d9bbaf7d8489722bc25e1e66d3"}
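
The "container finished" lines record each container's exit code as the PLEG observes it (0 for the agents and proxy-httpd, 2 for sg-core). A sketch of how a supervisor recovers such a code from a finished process in Go:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// A child that exits non-zero, like the sg-core container above.
	err := exec.Command("sh", "-c", "exit 2").Run()
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		fmt.Printf("container finished exitCode=%d\n", ee.ExitCode()) // exitCode=2
	} else if err == nil {
		fmt.Println("container finished exitCode=0")
	}
}
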
Need to start a new one" pod="openstack/ceilometer-0" Feb 02 11:31:12 crc kubenswrapper[4782]: I0202 11:31:12.701409 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/e675b2b1-c562-4e86-a104-9d16b83b8dc3-ceilometer-tls-certs\") pod \"e675b2b1-c562-4e86-a104-9d16b83b8dc3\" (UID: \"e675b2b1-c562-4e86-a104-9d16b83b8dc3\") " Feb 02 11:31:12 crc kubenswrapper[4782]: I0202 11:31:12.701690 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e675b2b1-c562-4e86-a104-9d16b83b8dc3-sg-core-conf-yaml\") pod \"e675b2b1-c562-4e86-a104-9d16b83b8dc3\" (UID: \"e675b2b1-c562-4e86-a104-9d16b83b8dc3\") " Feb 02 11:31:12 crc kubenswrapper[4782]: I0202 11:31:12.701783 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7z4pj\" (UniqueName: \"kubernetes.io/projected/e675b2b1-c562-4e86-a104-9d16b83b8dc3-kube-api-access-7z4pj\") pod \"e675b2b1-c562-4e86-a104-9d16b83b8dc3\" (UID: \"e675b2b1-c562-4e86-a104-9d16b83b8dc3\") " Feb 02 11:31:12 crc kubenswrapper[4782]: I0202 11:31:12.701898 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e675b2b1-c562-4e86-a104-9d16b83b8dc3-combined-ca-bundle\") pod \"e675b2b1-c562-4e86-a104-9d16b83b8dc3\" (UID: \"e675b2b1-c562-4e86-a104-9d16b83b8dc3\") " Feb 02 11:31:12 crc kubenswrapper[4782]: I0202 11:31:12.701945 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e675b2b1-c562-4e86-a104-9d16b83b8dc3-scripts\") pod \"e675b2b1-c562-4e86-a104-9d16b83b8dc3\" (UID: \"e675b2b1-c562-4e86-a104-9d16b83b8dc3\") " Feb 02 11:31:12 crc kubenswrapper[4782]: I0202 11:31:12.702086 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e675b2b1-c562-4e86-a104-9d16b83b8dc3-run-httpd\") pod \"e675b2b1-c562-4e86-a104-9d16b83b8dc3\" (UID: \"e675b2b1-c562-4e86-a104-9d16b83b8dc3\") " Feb 02 11:31:12 crc kubenswrapper[4782]: I0202 11:31:12.702285 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e675b2b1-c562-4e86-a104-9d16b83b8dc3-log-httpd\") pod \"e675b2b1-c562-4e86-a104-9d16b83b8dc3\" (UID: \"e675b2b1-c562-4e86-a104-9d16b83b8dc3\") " Feb 02 11:31:12 crc kubenswrapper[4782]: I0202 11:31:12.702358 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e675b2b1-c562-4e86-a104-9d16b83b8dc3-config-data\") pod \"e675b2b1-c562-4e86-a104-9d16b83b8dc3\" (UID: \"e675b2b1-c562-4e86-a104-9d16b83b8dc3\") " Feb 02 11:31:12 crc kubenswrapper[4782]: I0202 11:31:12.705150 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e675b2b1-c562-4e86-a104-9d16b83b8dc3-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "e675b2b1-c562-4e86-a104-9d16b83b8dc3" (UID: "e675b2b1-c562-4e86-a104-9d16b83b8dc3"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:31:12 crc kubenswrapper[4782]: I0202 11:31:12.707012 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e675b2b1-c562-4e86-a104-9d16b83b8dc3-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "e675b2b1-c562-4e86-a104-9d16b83b8dc3" (UID: "e675b2b1-c562-4e86-a104-9d16b83b8dc3"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:31:12 crc kubenswrapper[4782]: I0202 11:31:12.718627 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e675b2b1-c562-4e86-a104-9d16b83b8dc3-scripts" (OuterVolumeSpecName: "scripts") pod "e675b2b1-c562-4e86-a104-9d16b83b8dc3" (UID: "e675b2b1-c562-4e86-a104-9d16b83b8dc3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:31:12 crc kubenswrapper[4782]: I0202 11:31:12.718819 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e675b2b1-c562-4e86-a104-9d16b83b8dc3-kube-api-access-7z4pj" (OuterVolumeSpecName: "kube-api-access-7z4pj") pod "e675b2b1-c562-4e86-a104-9d16b83b8dc3" (UID: "e675b2b1-c562-4e86-a104-9d16b83b8dc3"). InnerVolumeSpecName "kube-api-access-7z4pj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:31:12 crc kubenswrapper[4782]: I0202 11:31:12.856220 4782 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e675b2b1-c562-4e86-a104-9d16b83b8dc3-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 02 11:31:12 crc kubenswrapper[4782]: I0202 11:31:12.856261 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7z4pj\" (UniqueName: \"kubernetes.io/projected/e675b2b1-c562-4e86-a104-9d16b83b8dc3-kube-api-access-7z4pj\") on node \"crc\" DevicePath \"\"" Feb 02 11:31:12 crc kubenswrapper[4782]: I0202 11:31:12.856278 4782 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e675b2b1-c562-4e86-a104-9d16b83b8dc3-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 11:31:12 crc kubenswrapper[4782]: I0202 11:31:12.856288 4782 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e675b2b1-c562-4e86-a104-9d16b83b8dc3-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 02 11:31:12 crc kubenswrapper[4782]: I0202 11:31:12.882368 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e675b2b1-c562-4e86-a104-9d16b83b8dc3-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "e675b2b1-c562-4e86-a104-9d16b83b8dc3" (UID: "e675b2b1-c562-4e86-a104-9d16b83b8dc3"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:31:12 crc kubenswrapper[4782]: I0202 11:31:12.887497 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e675b2b1-c562-4e86-a104-9d16b83b8dc3-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "e675b2b1-c562-4e86-a104-9d16b83b8dc3" (UID: "e675b2b1-c562-4e86-a104-9d16b83b8dc3"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:31:12 crc kubenswrapper[4782]: I0202 11:31:12.958792 4782 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/e675b2b1-c562-4e86-a104-9d16b83b8dc3-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 11:31:12 crc kubenswrapper[4782]: I0202 11:31:12.959152 4782 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e675b2b1-c562-4e86-a104-9d16b83b8dc3-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 02 11:31:12 crc kubenswrapper[4782]: I0202 11:31:12.975588 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e675b2b1-c562-4e86-a104-9d16b83b8dc3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e675b2b1-c562-4e86-a104-9d16b83b8dc3" (UID: "e675b2b1-c562-4e86-a104-9d16b83b8dc3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:31:13 crc kubenswrapper[4782]: I0202 11:31:13.062027 4782 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e675b2b1-c562-4e86-a104-9d16b83b8dc3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 11:31:13 crc kubenswrapper[4782]: I0202 11:31:13.110522 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e675b2b1-c562-4e86-a104-9d16b83b8dc3-config-data" (OuterVolumeSpecName: "config-data") pod "e675b2b1-c562-4e86-a104-9d16b83b8dc3" (UID: "e675b2b1-c562-4e86-a104-9d16b83b8dc3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:31:13 crc kubenswrapper[4782]: I0202 11:31:13.163765 4782 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e675b2b1-c562-4e86-a104-9d16b83b8dc3-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 11:31:13 crc kubenswrapper[4782]: I0202 11:31:13.260793 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-scheduler-0" Feb 02 11:31:13 crc kubenswrapper[4782]: I0202 11:31:13.369707 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a-etc-machine-id\") pod \"9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a\" (UID: \"9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a\") " Feb 02 11:31:13 crc kubenswrapper[4782]: I0202 11:31:13.369789 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a-config-data\") pod \"9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a\" (UID: \"9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a\") " Feb 02 11:31:13 crc kubenswrapper[4782]: I0202 11:31:13.369844 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a-config-data-custom\") pod \"9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a\" (UID: \"9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a\") " Feb 02 11:31:13 crc kubenswrapper[4782]: I0202 11:31:13.369888 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a" (UID: "9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 11:31:13 crc kubenswrapper[4782]: I0202 11:31:13.370704 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a-scripts\") pod \"9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a\" (UID: \"9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a\") " Feb 02 11:31:13 crc kubenswrapper[4782]: I0202 11:31:13.370860 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6fjnn\" (UniqueName: \"kubernetes.io/projected/9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a-kube-api-access-6fjnn\") pod \"9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a\" (UID: \"9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a\") " Feb 02 11:31:13 crc kubenswrapper[4782]: I0202 11:31:13.370889 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a-combined-ca-bundle\") pod \"9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a\" (UID: \"9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a\") " Feb 02 11:31:13 crc kubenswrapper[4782]: I0202 11:31:13.371377 4782 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 02 11:31:13 crc kubenswrapper[4782]: I0202 11:31:13.378226 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a" (UID: "9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:31:13 crc kubenswrapper[4782]: I0202 11:31:13.379923 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a-scripts" (OuterVolumeSpecName: "scripts") pod "9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a" (UID: "9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:31:13 crc kubenswrapper[4782]: I0202 11:31:13.382912 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a-kube-api-access-6fjnn" (OuterVolumeSpecName: "kube-api-access-6fjnn") pod "9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a" (UID: "9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a"). InnerVolumeSpecName "kube-api-access-6fjnn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:31:13 crc kubenswrapper[4782]: I0202 11:31:13.439763 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a" (UID: "9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:31:13 crc kubenswrapper[4782]: I0202 11:31:13.476044 4782 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 11:31:13 crc kubenswrapper[4782]: I0202 11:31:13.476082 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6fjnn\" (UniqueName: \"kubernetes.io/projected/9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a-kube-api-access-6fjnn\") on node \"crc\" DevicePath \"\"" Feb 02 11:31:13 crc kubenswrapper[4782]: I0202 11:31:13.476092 4782 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 11:31:13 crc kubenswrapper[4782]: I0202 11:31:13.476100 4782 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 02 11:31:13 crc kubenswrapper[4782]: I0202 11:31:13.500865 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a-config-data" (OuterVolumeSpecName: "config-data") pod "9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a" (UID: "9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:31:13 crc kubenswrapper[4782]: I0202 11:31:13.578231 4782 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 11:31:13 crc kubenswrapper[4782]: I0202 11:31:13.610937 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a","Type":"ContainerDied","Data":"8540ed31a6d2b8e3e589043ccf8a1a2071b1ba7d96df1fa53995124ff3fbc8af"} Feb 02 11:31:13 crc kubenswrapper[4782]: I0202 11:31:13.611301 4782 scope.go:117] "RemoveContainer" containerID="cc68c0f777fc5436c540b425a3326b6391ec6d6b6b3b5fe43f8e31bcd626fc43" Feb 02 11:31:13 crc kubenswrapper[4782]: I0202 11:31:13.610975 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-scheduler-0" Feb 02 11:31:13 crc kubenswrapper[4782]: I0202 11:31:13.624832 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e675b2b1-c562-4e86-a104-9d16b83b8dc3","Type":"ContainerDied","Data":"1a51c45b57e3fe68cc34a30ee9e80c20fa6fe136d2e6a3325715aaa301b5e1bb"} Feb 02 11:31:13 crc kubenswrapper[4782]: I0202 11:31:13.624883 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 11:31:13 crc kubenswrapper[4782]: I0202 11:31:13.667825 4782 scope.go:117] "RemoveContainer" containerID="b497c2954dd3a78c6953a3ffd64222499e19e8de5359aac6f81a3ec8c829fd03" Feb 02 11:31:13 crc kubenswrapper[4782]: I0202 11:31:13.675241 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-scheduler-0"] Feb 02 11:31:13 crc kubenswrapper[4782]: I0202 11:31:13.690640 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-scheduler-0"] Feb 02 11:31:13 crc kubenswrapper[4782]: I0202 11:31:13.700437 4782 scope.go:117] "RemoveContainer" containerID="ddbbc7d2a92ff63da784e8d9a314b4eca721d7467e7ab0b26dc2ddb295d90307" Feb 02 11:31:13 crc kubenswrapper[4782]: I0202 11:31:13.710085 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 02 11:31:13 crc kubenswrapper[4782]: I0202 11:31:13.720367 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 02 11:31:13 crc kubenswrapper[4782]: I0202 11:31:13.729725 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-scheduler-0"] Feb 02 11:31:13 crc kubenswrapper[4782]: E0202 11:31:13.730223 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e675b2b1-c562-4e86-a104-9d16b83b8dc3" containerName="ceilometer-notification-agent" Feb 02 11:31:13 crc kubenswrapper[4782]: I0202 11:31:13.730243 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="e675b2b1-c562-4e86-a104-9d16b83b8dc3" containerName="ceilometer-notification-agent" Feb 02 11:31:13 crc kubenswrapper[4782]: E0202 11:31:13.730251 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e675b2b1-c562-4e86-a104-9d16b83b8dc3" containerName="proxy-httpd" Feb 02 11:31:13 crc kubenswrapper[4782]: I0202 11:31:13.730259 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="e675b2b1-c562-4e86-a104-9d16b83b8dc3" containerName="proxy-httpd" Feb 02 11:31:13 crc kubenswrapper[4782]: E0202 11:31:13.730286 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e675b2b1-c562-4e86-a104-9d16b83b8dc3" 
containerName="sg-core" Feb 02 11:31:13 crc kubenswrapper[4782]: I0202 11:31:13.730295 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="e675b2b1-c562-4e86-a104-9d16b83b8dc3" containerName="sg-core" Feb 02 11:31:13 crc kubenswrapper[4782]: E0202 11:31:13.730303 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a" containerName="probe" Feb 02 11:31:13 crc kubenswrapper[4782]: I0202 11:31:13.730310 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a" containerName="probe" Feb 02 11:31:13 crc kubenswrapper[4782]: E0202 11:31:13.730331 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a" containerName="manila-scheduler" Feb 02 11:31:13 crc kubenswrapper[4782]: I0202 11:31:13.730339 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a" containerName="manila-scheduler" Feb 02 11:31:13 crc kubenswrapper[4782]: E0202 11:31:13.730357 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e675b2b1-c562-4e86-a104-9d16b83b8dc3" containerName="ceilometer-central-agent" Feb 02 11:31:13 crc kubenswrapper[4782]: I0202 11:31:13.730364 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="e675b2b1-c562-4e86-a104-9d16b83b8dc3" containerName="ceilometer-central-agent" Feb 02 11:31:13 crc kubenswrapper[4782]: I0202 11:31:13.730571 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a" containerName="manila-scheduler" Feb 02 11:31:13 crc kubenswrapper[4782]: I0202 11:31:13.730589 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a" containerName="probe" Feb 02 11:31:13 crc kubenswrapper[4782]: I0202 11:31:13.730604 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="e675b2b1-c562-4e86-a104-9d16b83b8dc3" containerName="sg-core" Feb 02 11:31:13 crc kubenswrapper[4782]: I0202 11:31:13.730616 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="e675b2b1-c562-4e86-a104-9d16b83b8dc3" containerName="ceilometer-central-agent" Feb 02 11:31:13 crc kubenswrapper[4782]: I0202 11:31:13.730633 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="e675b2b1-c562-4e86-a104-9d16b83b8dc3" containerName="ceilometer-notification-agent" Feb 02 11:31:13 crc kubenswrapper[4782]: I0202 11:31:13.730649 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="e675b2b1-c562-4e86-a104-9d16b83b8dc3" containerName="proxy-httpd" Feb 02 11:31:13 crc kubenswrapper[4782]: I0202 11:31:13.731832 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-scheduler-0" Feb 02 11:31:13 crc kubenswrapper[4782]: I0202 11:31:13.743844 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"] Feb 02 11:31:13 crc kubenswrapper[4782]: I0202 11:31:13.744068 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-scheduler-config-data" Feb 02 11:31:13 crc kubenswrapper[4782]: I0202 11:31:13.752742 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 02 11:31:13 crc kubenswrapper[4782]: I0202 11:31:13.755644 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 02 11:31:13 crc kubenswrapper[4782]: I0202 11:31:13.762312 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 02 11:31:13 crc kubenswrapper[4782]: I0202 11:31:13.762385 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 02 11:31:13 crc kubenswrapper[4782]: I0202 11:31:13.762313 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 02 11:31:13 crc kubenswrapper[4782]: I0202 11:31:13.763702 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 02 11:31:13 crc kubenswrapper[4782]: I0202 11:31:13.780087 4782 scope.go:117] "RemoveContainer" containerID="c25e68e145ab381717c489e5171a63c8989f000516f9f3ea8306406161e88ffa" Feb 02 11:31:13 crc kubenswrapper[4782]: I0202 11:31:13.869733 4782 scope.go:117] "RemoveContainer" containerID="a83eaaad5041e1c388995b790d6d930c9e79d82d0046f3781afcaf3d01e30a1c" Feb 02 11:31:13 crc kubenswrapper[4782]: I0202 11:31:13.884488 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6e465ef3-3141-429f-927f-db1eabdff230-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"6e465ef3-3141-429f-927f-db1eabdff230\") " pod="openstack/manila-scheduler-0" Feb 02 11:31:13 crc kubenswrapper[4782]: I0202 11:31:13.884797 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5cbff496-9e10-4868-ab32-849a8b238474-scripts\") pod \"ceilometer-0\" (UID: \"5cbff496-9e10-4868-ab32-849a8b238474\") " pod="openstack/ceilometer-0" Feb 02 11:31:13 crc kubenswrapper[4782]: I0202 11:31:13.889843 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6e465ef3-3141-429f-927f-db1eabdff230-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"6e465ef3-3141-429f-927f-db1eabdff230\") " pod="openstack/manila-scheduler-0" Feb 02 11:31:13 crc kubenswrapper[4782]: I0202 11:31:13.889952 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4ssc\" (UniqueName: \"kubernetes.io/projected/5cbff496-9e10-4868-ab32-849a8b238474-kube-api-access-n4ssc\") pod \"ceilometer-0\" (UID: \"5cbff496-9e10-4868-ab32-849a8b238474\") " pod="openstack/ceilometer-0" Feb 02 11:31:13 crc kubenswrapper[4782]: I0202 11:31:13.890086 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5cbff496-9e10-4868-ab32-849a8b238474-config-data\") pod \"ceilometer-0\" (UID: \"5cbff496-9e10-4868-ab32-849a8b238474\") " pod="openstack/ceilometer-0" Feb 02 11:31:13 crc kubenswrapper[4782]: I0202 11:31:13.890191 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e465ef3-3141-429f-927f-db1eabdff230-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"6e465ef3-3141-429f-927f-db1eabdff230\") " pod="openstack/manila-scheduler-0" Feb 02 11:31:13 crc kubenswrapper[4782]: I0202 11:31:13.891216 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/5cbff496-9e10-4868-ab32-849a8b238474-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5cbff496-9e10-4868-ab32-849a8b238474\") " pod="openstack/ceilometer-0" Feb 02 11:31:13 crc kubenswrapper[4782]: I0202 11:31:13.891355 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5cbff496-9e10-4868-ab32-849a8b238474-run-httpd\") pod \"ceilometer-0\" (UID: \"5cbff496-9e10-4868-ab32-849a8b238474\") " pod="openstack/ceilometer-0" Feb 02 11:31:13 crc kubenswrapper[4782]: I0202 11:31:13.891481 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e465ef3-3141-429f-927f-db1eabdff230-scripts\") pod \"manila-scheduler-0\" (UID: \"6e465ef3-3141-429f-927f-db1eabdff230\") " pod="openstack/manila-scheduler-0" Feb 02 11:31:13 crc kubenswrapper[4782]: I0202 11:31:13.891587 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2qsl\" (UniqueName: \"kubernetes.io/projected/6e465ef3-3141-429f-927f-db1eabdff230-kube-api-access-m2qsl\") pod \"manila-scheduler-0\" (UID: \"6e465ef3-3141-429f-927f-db1eabdff230\") " pod="openstack/manila-scheduler-0" Feb 02 11:31:13 crc kubenswrapper[4782]: I0202 11:31:13.891689 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/5cbff496-9e10-4868-ab32-849a8b238474-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"5cbff496-9e10-4868-ab32-849a8b238474\") " pod="openstack/ceilometer-0" Feb 02 11:31:13 crc kubenswrapper[4782]: I0202 11:31:13.891887 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e465ef3-3141-429f-927f-db1eabdff230-config-data\") pod \"manila-scheduler-0\" (UID: \"6e465ef3-3141-429f-927f-db1eabdff230\") " pod="openstack/manila-scheduler-0" Feb 02 11:31:13 crc kubenswrapper[4782]: I0202 11:31:13.892011 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5cbff496-9e10-4868-ab32-849a8b238474-log-httpd\") pod \"ceilometer-0\" (UID: \"5cbff496-9e10-4868-ab32-849a8b238474\") " pod="openstack/ceilometer-0" Feb 02 11:31:13 crc kubenswrapper[4782]: I0202 11:31:13.892115 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5cbff496-9e10-4868-ab32-849a8b238474-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5cbff496-9e10-4868-ab32-849a8b238474\") " pod="openstack/ceilometer-0" Feb 02 11:31:13 crc kubenswrapper[4782]: I0202 11:31:13.904049 4782 scope.go:117] "RemoveContainer" containerID="08e57690c4d909f0b5f036a03ea4a1f2836c02d9bbaf7d8489722bc25e1e66d3" Feb 02 11:31:13 crc kubenswrapper[4782]: I0202 11:31:13.993904 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e465ef3-3141-429f-927f-db1eabdff230-config-data\") pod \"manila-scheduler-0\" (UID: \"6e465ef3-3141-429f-927f-db1eabdff230\") " pod="openstack/manila-scheduler-0" Feb 02 11:31:13 crc kubenswrapper[4782]: I0202 11:31:13.993974 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/5cbff496-9e10-4868-ab32-849a8b238474-log-httpd\") pod \"ceilometer-0\" (UID: \"5cbff496-9e10-4868-ab32-849a8b238474\") " pod="openstack/ceilometer-0" Feb 02 11:31:13 crc kubenswrapper[4782]: I0202 11:31:13.994000 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5cbff496-9e10-4868-ab32-849a8b238474-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5cbff496-9e10-4868-ab32-849a8b238474\") " pod="openstack/ceilometer-0" Feb 02 11:31:13 crc kubenswrapper[4782]: I0202 11:31:13.994040 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6e465ef3-3141-429f-927f-db1eabdff230-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"6e465ef3-3141-429f-927f-db1eabdff230\") " pod="openstack/manila-scheduler-0" Feb 02 11:31:13 crc kubenswrapper[4782]: I0202 11:31:13.994082 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5cbff496-9e10-4868-ab32-849a8b238474-scripts\") pod \"ceilometer-0\" (UID: \"5cbff496-9e10-4868-ab32-849a8b238474\") " pod="openstack/ceilometer-0" Feb 02 11:31:13 crc kubenswrapper[4782]: I0202 11:31:13.994096 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6e465ef3-3141-429f-927f-db1eabdff230-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"6e465ef3-3141-429f-927f-db1eabdff230\") " pod="openstack/manila-scheduler-0" Feb 02 11:31:13 crc kubenswrapper[4782]: I0202 11:31:13.994112 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n4ssc\" (UniqueName: \"kubernetes.io/projected/5cbff496-9e10-4868-ab32-849a8b238474-kube-api-access-n4ssc\") pod \"ceilometer-0\" (UID: \"5cbff496-9e10-4868-ab32-849a8b238474\") " pod="openstack/ceilometer-0" Feb 02 11:31:13 crc kubenswrapper[4782]: I0202 11:31:13.994139 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5cbff496-9e10-4868-ab32-849a8b238474-config-data\") pod \"ceilometer-0\" (UID: \"5cbff496-9e10-4868-ab32-849a8b238474\") " pod="openstack/ceilometer-0" Feb 02 11:31:13 crc kubenswrapper[4782]: I0202 11:31:13.994156 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e465ef3-3141-429f-927f-db1eabdff230-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"6e465ef3-3141-429f-927f-db1eabdff230\") " pod="openstack/manila-scheduler-0" Feb 02 11:31:13 crc kubenswrapper[4782]: I0202 11:31:13.994202 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5cbff496-9e10-4868-ab32-849a8b238474-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5cbff496-9e10-4868-ab32-849a8b238474\") " pod="openstack/ceilometer-0" Feb 02 11:31:13 crc kubenswrapper[4782]: I0202 11:31:13.994238 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5cbff496-9e10-4868-ab32-849a8b238474-run-httpd\") pod \"ceilometer-0\" (UID: \"5cbff496-9e10-4868-ab32-849a8b238474\") " pod="openstack/ceilometer-0" Feb 02 11:31:13 crc kubenswrapper[4782]: I0202 11:31:13.994270 4782 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e465ef3-3141-429f-927f-db1eabdff230-scripts\") pod \"manila-scheduler-0\" (UID: \"6e465ef3-3141-429f-927f-db1eabdff230\") " pod="openstack/manila-scheduler-0" Feb 02 11:31:13 crc kubenswrapper[4782]: I0202 11:31:13.994297 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m2qsl\" (UniqueName: \"kubernetes.io/projected/6e465ef3-3141-429f-927f-db1eabdff230-kube-api-access-m2qsl\") pod \"manila-scheduler-0\" (UID: \"6e465ef3-3141-429f-927f-db1eabdff230\") " pod="openstack/manila-scheduler-0" Feb 02 11:31:13 crc kubenswrapper[4782]: I0202 11:31:13.994324 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/5cbff496-9e10-4868-ab32-849a8b238474-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"5cbff496-9e10-4868-ab32-849a8b238474\") " pod="openstack/ceilometer-0" Feb 02 11:31:13 crc kubenswrapper[4782]: I0202 11:31:13.995114 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6e465ef3-3141-429f-927f-db1eabdff230-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"6e465ef3-3141-429f-927f-db1eabdff230\") " pod="openstack/manila-scheduler-0" Feb 02 11:31:13 crc kubenswrapper[4782]: I0202 11:31:13.997179 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5cbff496-9e10-4868-ab32-849a8b238474-run-httpd\") pod \"ceilometer-0\" (UID: \"5cbff496-9e10-4868-ab32-849a8b238474\") " pod="openstack/ceilometer-0" Feb 02 11:31:13 crc kubenswrapper[4782]: I0202 11:31:13.998336 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5cbff496-9e10-4868-ab32-849a8b238474-log-httpd\") pod \"ceilometer-0\" (UID: \"5cbff496-9e10-4868-ab32-849a8b238474\") " pod="openstack/ceilometer-0" Feb 02 11:31:14 crc kubenswrapper[4782]: I0202 11:31:14.000462 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5cbff496-9e10-4868-ab32-849a8b238474-config-data\") pod \"ceilometer-0\" (UID: \"5cbff496-9e10-4868-ab32-849a8b238474\") " pod="openstack/ceilometer-0" Feb 02 11:31:14 crc kubenswrapper[4782]: I0202 11:31:14.004753 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/5cbff496-9e10-4868-ab32-849a8b238474-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"5cbff496-9e10-4868-ab32-849a8b238474\") " pod="openstack/ceilometer-0" Feb 02 11:31:14 crc kubenswrapper[4782]: I0202 11:31:14.006400 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e465ef3-3141-429f-927f-db1eabdff230-scripts\") pod \"manila-scheduler-0\" (UID: \"6e465ef3-3141-429f-927f-db1eabdff230\") " pod="openstack/manila-scheduler-0" Feb 02 11:31:14 crc kubenswrapper[4782]: I0202 11:31:14.011990 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6e465ef3-3141-429f-927f-db1eabdff230-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"6e465ef3-3141-429f-927f-db1eabdff230\") " pod="openstack/manila-scheduler-0" Feb 02 11:31:14 crc kubenswrapper[4782]: I0202 11:31:14.012928 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5cbff496-9e10-4868-ab32-849a8b238474-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5cbff496-9e10-4868-ab32-849a8b238474\") " pod="openstack/ceilometer-0" Feb 02 11:31:14 crc kubenswrapper[4782]: I0202 11:31:14.015327 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5cbff496-9e10-4868-ab32-849a8b238474-scripts\") pod \"ceilometer-0\" (UID: \"5cbff496-9e10-4868-ab32-849a8b238474\") " pod="openstack/ceilometer-0" Feb 02 11:31:14 crc kubenswrapper[4782]: I0202 11:31:14.019457 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n4ssc\" (UniqueName: \"kubernetes.io/projected/5cbff496-9e10-4868-ab32-849a8b238474-kube-api-access-n4ssc\") pod \"ceilometer-0\" (UID: \"5cbff496-9e10-4868-ab32-849a8b238474\") " pod="openstack/ceilometer-0" Feb 02 11:31:14 crc kubenswrapper[4782]: I0202 11:31:14.019814 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5cbff496-9e10-4868-ab32-849a8b238474-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5cbff496-9e10-4868-ab32-849a8b238474\") " pod="openstack/ceilometer-0" Feb 02 11:31:14 crc kubenswrapper[4782]: I0202 11:31:14.023387 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e465ef3-3141-429f-927f-db1eabdff230-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"6e465ef3-3141-429f-927f-db1eabdff230\") " pod="openstack/manila-scheduler-0" Feb 02 11:31:14 crc kubenswrapper[4782]: I0202 11:31:14.033460 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e465ef3-3141-429f-927f-db1eabdff230-config-data\") pod \"manila-scheduler-0\" (UID: \"6e465ef3-3141-429f-927f-db1eabdff230\") " pod="openstack/manila-scheduler-0" Feb 02 11:31:14 crc kubenswrapper[4782]: I0202 11:31:14.053253 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m2qsl\" (UniqueName: \"kubernetes.io/projected/6e465ef3-3141-429f-927f-db1eabdff230-kube-api-access-m2qsl\") pod \"manila-scheduler-0\" (UID: \"6e465ef3-3141-429f-927f-db1eabdff230\") " pod="openstack/manila-scheduler-0" Feb 02 11:31:14 crc kubenswrapper[4782]: I0202 11:31:14.187461 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-scheduler-0" Feb 02 11:31:14 crc kubenswrapper[4782]: I0202 11:31:14.189250 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 02 11:31:14 crc kubenswrapper[4782]: I0202 11:31:14.530895 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/manila-api-0" Feb 02 11:31:14 crc kubenswrapper[4782]: I0202 11:31:14.799781 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"] Feb 02 11:31:14 crc kubenswrapper[4782]: I0202 11:31:14.845425 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a" path="/var/lib/kubelet/pods/9ce54744-5bb7-42ae-b7b5-ee23b1b9b03a/volumes" Feb 02 11:31:14 crc kubenswrapper[4782]: I0202 11:31:14.846252 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e675b2b1-c562-4e86-a104-9d16b83b8dc3" path="/var/lib/kubelet/pods/e675b2b1-c562-4e86-a104-9d16b83b8dc3/volumes" Feb 02 11:31:15 crc kubenswrapper[4782]: I0202 11:31:15.041475 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 02 11:31:15 crc kubenswrapper[4782]: W0202 11:31:15.056769 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5cbff496_9e10_4868_ab32_849a8b238474.slice/crio-9389122e851d2e1b8b9f2ec00e7eaf2feadd86602edf76eb19eea315fa8a27c2 WatchSource:0}: Error finding container 9389122e851d2e1b8b9f2ec00e7eaf2feadd86602edf76eb19eea315fa8a27c2: Status 404 returned error can't find the container with id 9389122e851d2e1b8b9f2ec00e7eaf2feadd86602edf76eb19eea315fa8a27c2 Feb 02 11:31:15 crc kubenswrapper[4782]: I0202 11:31:15.669854 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"6e465ef3-3141-429f-927f-db1eabdff230","Type":"ContainerStarted","Data":"6a232de3971540c0f74cae967f197fd2f2095afb9ccd175b9d5701fb39aefb84"} Feb 02 11:31:15 crc kubenswrapper[4782]: I0202 11:31:15.671795 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5cbff496-9e10-4868-ab32-849a8b238474","Type":"ContainerStarted","Data":"9389122e851d2e1b8b9f2ec00e7eaf2feadd86602edf76eb19eea315fa8a27c2"} Feb 02 11:31:16 crc kubenswrapper[4782]: I0202 11:31:16.684338 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"6e465ef3-3141-429f-927f-db1eabdff230","Type":"ContainerStarted","Data":"09f759fd2b011d23b48d6b79a4ac12bafd6286e720120fbe8ce6b8ecacc447e2"} Feb 02 11:31:16 crc kubenswrapper[4782]: I0202 11:31:16.684939 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"6e465ef3-3141-429f-927f-db1eabdff230","Type":"ContainerStarted","Data":"a4308919fc2017fbe6c4d0bd48ccd06e0007da63b0f58790c345fb7647ddd51b"} Feb 02 11:31:16 crc kubenswrapper[4782]: I0202 11:31:16.687198 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5cbff496-9e10-4868-ab32-849a8b238474","Type":"ContainerStarted","Data":"9c2b46a5fb6e243e563f4736d3a8daca867f5bcb670920c92e635f0899570290"} Feb 02 11:31:17 crc kubenswrapper[4782]: I0202 11:31:17.688127 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-share-share1-0" Feb 02 11:31:17 crc kubenswrapper[4782]: I0202 11:31:17.707135 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5cbff496-9e10-4868-ab32-849a8b238474","Type":"ContainerStarted","Data":"538429587e15dbebb413691b8fc1d30c67fd693f14506e6af4aa454f4e50ab8e"} Feb 
02 11:31:17 crc kubenswrapper[4782]: I0202 11:31:17.709383 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-scheduler-0" podStartSLOduration=4.709365503 podStartE2EDuration="4.709365503s" podCreationTimestamp="2026-02-02 11:31:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 11:31:16.714190879 +0000 UTC m=+3156.598383605" watchObservedRunningTime="2026-02-02 11:31:17.709365503 +0000 UTC m=+3157.593558219" Feb 02 11:31:17 crc kubenswrapper[4782]: I0202 11:31:17.770122 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-share-share1-0"] Feb 02 11:31:17 crc kubenswrapper[4782]: I0202 11:31:17.821638 4782 scope.go:117] "RemoveContainer" containerID="0f610e1fc5d774ae98e6427843ebdfbe622219e84034ddfd24bafe67b92e53a2" Feb 02 11:31:17 crc kubenswrapper[4782]: E0202 11:31:17.821962 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:31:18 crc kubenswrapper[4782]: I0202 11:31:18.629710 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-78d997b864-7sqws" podUID="62cd5c24-315a-45c1-bca8-08696f1080cd" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.242:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.242:8443: connect: connection refused" Feb 02 11:31:18 crc kubenswrapper[4782]: I0202 11:31:18.716914 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5cbff496-9e10-4868-ab32-849a8b238474","Type":"ContainerStarted","Data":"061a8f9cee308c41e41a30c18c574462297aa061337f4afd80fc92856b2ac5d1"} Feb 02 11:31:18 crc kubenswrapper[4782]: I0202 11:31:18.717121 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-share-share1-0" podUID="be03de2e-2ddc-4cb1-b5be-7adb4add6582" containerName="manila-share" containerID="cri-o://8a44c5cbf74f9422a24df475561c5c4c5a1cf6d5939f12d2814da14061073213" gracePeriod=30 Feb 02 11:31:18 crc kubenswrapper[4782]: I0202 11:31:18.717171 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-share-share1-0" podUID="be03de2e-2ddc-4cb1-b5be-7adb4add6582" containerName="probe" containerID="cri-o://211219d562c4afafdd9cf2cd2a4262805a19e42c0355d453188162d50c32a834" gracePeriod=30 Feb 02 11:31:18 crc kubenswrapper[4782]: I0202 11:31:18.947167 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5665456548-9x6qh" podUID="306e30f3-8fe7-427e-b8ff-309a561dda88" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.243:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.243:8443: connect: connection refused" Feb 02 11:31:19 crc kubenswrapper[4782]: I0202 11:31:19.737073 4782 generic.go:334] "Generic (PLEG): container finished" podID="be03de2e-2ddc-4cb1-b5be-7adb4add6582" containerID="211219d562c4afafdd9cf2cd2a4262805a19e42c0355d453188162d50c32a834" exitCode=0 Feb 02 11:31:19 crc kubenswrapper[4782]: I0202 11:31:19.737404 4782 generic.go:334] "Generic (PLEG): container finished" 
podID="be03de2e-2ddc-4cb1-b5be-7adb4add6582" containerID="8a44c5cbf74f9422a24df475561c5c4c5a1cf6d5939f12d2814da14061073213" exitCode=1 Feb 02 11:31:19 crc kubenswrapper[4782]: I0202 11:31:19.737429 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"be03de2e-2ddc-4cb1-b5be-7adb4add6582","Type":"ContainerDied","Data":"211219d562c4afafdd9cf2cd2a4262805a19e42c0355d453188162d50c32a834"} Feb 02 11:31:19 crc kubenswrapper[4782]: I0202 11:31:19.737458 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"be03de2e-2ddc-4cb1-b5be-7adb4add6582","Type":"ContainerDied","Data":"8a44c5cbf74f9422a24df475561c5c4c5a1cf6d5939f12d2814da14061073213"} Feb 02 11:31:20 crc kubenswrapper[4782]: I0202 11:31:20.209260 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-share-share1-0" Feb 02 11:31:20 crc kubenswrapper[4782]: I0202 11:31:20.359089 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/be03de2e-2ddc-4cb1-b5be-7adb4add6582-ceph\") pod \"be03de2e-2ddc-4cb1-b5be-7adb4add6582\" (UID: \"be03de2e-2ddc-4cb1-b5be-7adb4add6582\") " Feb 02 11:31:20 crc kubenswrapper[4782]: I0202 11:31:20.359200 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/be03de2e-2ddc-4cb1-b5be-7adb4add6582-config-data-custom\") pod \"be03de2e-2ddc-4cb1-b5be-7adb4add6582\" (UID: \"be03de2e-2ddc-4cb1-b5be-7adb4add6582\") " Feb 02 11:31:20 crc kubenswrapper[4782]: I0202 11:31:20.359232 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mhpfm\" (UniqueName: \"kubernetes.io/projected/be03de2e-2ddc-4cb1-b5be-7adb4add6582-kube-api-access-mhpfm\") pod \"be03de2e-2ddc-4cb1-b5be-7adb4add6582\" (UID: \"be03de2e-2ddc-4cb1-b5be-7adb4add6582\") " Feb 02 11:31:20 crc kubenswrapper[4782]: I0202 11:31:20.359440 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be03de2e-2ddc-4cb1-b5be-7adb4add6582-config-data\") pod \"be03de2e-2ddc-4cb1-b5be-7adb4add6582\" (UID: \"be03de2e-2ddc-4cb1-b5be-7adb4add6582\") " Feb 02 11:31:20 crc kubenswrapper[4782]: I0202 11:31:20.359501 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/be03de2e-2ddc-4cb1-b5be-7adb4add6582-var-lib-manila\") pod \"be03de2e-2ddc-4cb1-b5be-7adb4add6582\" (UID: \"be03de2e-2ddc-4cb1-b5be-7adb4add6582\") " Feb 02 11:31:20 crc kubenswrapper[4782]: I0202 11:31:20.359596 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/be03de2e-2ddc-4cb1-b5be-7adb4add6582-scripts\") pod \"be03de2e-2ddc-4cb1-b5be-7adb4add6582\" (UID: \"be03de2e-2ddc-4cb1-b5be-7adb4add6582\") " Feb 02 11:31:20 crc kubenswrapper[4782]: I0202 11:31:20.359632 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/be03de2e-2ddc-4cb1-b5be-7adb4add6582-etc-machine-id\") pod \"be03de2e-2ddc-4cb1-b5be-7adb4add6582\" (UID: \"be03de2e-2ddc-4cb1-b5be-7adb4add6582\") " Feb 02 11:31:20 crc kubenswrapper[4782]: I0202 11:31:20.359686 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/be03de2e-2ddc-4cb1-b5be-7adb4add6582-combined-ca-bundle\") pod \"be03de2e-2ddc-4cb1-b5be-7adb4add6582\" (UID: \"be03de2e-2ddc-4cb1-b5be-7adb4add6582\") " Feb 02 11:31:20 crc kubenswrapper[4782]: I0202 11:31:20.361097 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/be03de2e-2ddc-4cb1-b5be-7adb4add6582-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "be03de2e-2ddc-4cb1-b5be-7adb4add6582" (UID: "be03de2e-2ddc-4cb1-b5be-7adb4add6582"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 11:31:20 crc kubenswrapper[4782]: I0202 11:31:20.361179 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/be03de2e-2ddc-4cb1-b5be-7adb4add6582-var-lib-manila" (OuterVolumeSpecName: "var-lib-manila") pod "be03de2e-2ddc-4cb1-b5be-7adb4add6582" (UID: "be03de2e-2ddc-4cb1-b5be-7adb4add6582"). InnerVolumeSpecName "var-lib-manila". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 11:31:20 crc kubenswrapper[4782]: I0202 11:31:20.368921 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be03de2e-2ddc-4cb1-b5be-7adb4add6582-ceph" (OuterVolumeSpecName: "ceph") pod "be03de2e-2ddc-4cb1-b5be-7adb4add6582" (UID: "be03de2e-2ddc-4cb1-b5be-7adb4add6582"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:31:20 crc kubenswrapper[4782]: I0202 11:31:20.369064 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be03de2e-2ddc-4cb1-b5be-7adb4add6582-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "be03de2e-2ddc-4cb1-b5be-7adb4add6582" (UID: "be03de2e-2ddc-4cb1-b5be-7adb4add6582"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:31:20 crc kubenswrapper[4782]: I0202 11:31:20.371786 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be03de2e-2ddc-4cb1-b5be-7adb4add6582-kube-api-access-mhpfm" (OuterVolumeSpecName: "kube-api-access-mhpfm") pod "be03de2e-2ddc-4cb1-b5be-7adb4add6582" (UID: "be03de2e-2ddc-4cb1-b5be-7adb4add6582"). InnerVolumeSpecName "kube-api-access-mhpfm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:31:20 crc kubenswrapper[4782]: I0202 11:31:20.385977 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be03de2e-2ddc-4cb1-b5be-7adb4add6582-scripts" (OuterVolumeSpecName: "scripts") pod "be03de2e-2ddc-4cb1-b5be-7adb4add6582" (UID: "be03de2e-2ddc-4cb1-b5be-7adb4add6582"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:31:20 crc kubenswrapper[4782]: I0202 11:31:20.454897 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be03de2e-2ddc-4cb1-b5be-7adb4add6582-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "be03de2e-2ddc-4cb1-b5be-7adb4add6582" (UID: "be03de2e-2ddc-4cb1-b5be-7adb4add6582"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:31:20 crc kubenswrapper[4782]: I0202 11:31:20.461958 4782 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/be03de2e-2ddc-4cb1-b5be-7adb4add6582-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 11:31:20 crc kubenswrapper[4782]: I0202 11:31:20.461999 4782 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/be03de2e-2ddc-4cb1-b5be-7adb4add6582-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 02 11:31:20 crc kubenswrapper[4782]: I0202 11:31:20.462014 4782 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be03de2e-2ddc-4cb1-b5be-7adb4add6582-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 11:31:20 crc kubenswrapper[4782]: I0202 11:31:20.462028 4782 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/be03de2e-2ddc-4cb1-b5be-7adb4add6582-ceph\") on node \"crc\" DevicePath \"\"" Feb 02 11:31:20 crc kubenswrapper[4782]: I0202 11:31:20.462038 4782 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/be03de2e-2ddc-4cb1-b5be-7adb4add6582-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 02 11:31:20 crc kubenswrapper[4782]: I0202 11:31:20.462050 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mhpfm\" (UniqueName: \"kubernetes.io/projected/be03de2e-2ddc-4cb1-b5be-7adb4add6582-kube-api-access-mhpfm\") on node \"crc\" DevicePath \"\"" Feb 02 11:31:20 crc kubenswrapper[4782]: I0202 11:31:20.462064 4782 reconciler_common.go:293] "Volume detached for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/be03de2e-2ddc-4cb1-b5be-7adb4add6582-var-lib-manila\") on node \"crc\" DevicePath \"\"" Feb 02 11:31:20 crc kubenswrapper[4782]: I0202 11:31:20.520477 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be03de2e-2ddc-4cb1-b5be-7adb4add6582-config-data" (OuterVolumeSpecName: "config-data") pod "be03de2e-2ddc-4cb1-b5be-7adb4add6582" (UID: "be03de2e-2ddc-4cb1-b5be-7adb4add6582"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:31:20 crc kubenswrapper[4782]: I0202 11:31:20.564292 4782 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be03de2e-2ddc-4cb1-b5be-7adb4add6582-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 11:31:20 crc kubenswrapper[4782]: I0202 11:31:20.768376 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"be03de2e-2ddc-4cb1-b5be-7adb4add6582","Type":"ContainerDied","Data":"08c5105f53bbbb34b5fc28061ef06193852bf85c93b32e422897b4cbfd23205d"} Feb 02 11:31:20 crc kubenswrapper[4782]: I0202 11:31:20.769616 4782 scope.go:117] "RemoveContainer" containerID="211219d562c4afafdd9cf2cd2a4262805a19e42c0355d453188162d50c32a834" Feb 02 11:31:20 crc kubenswrapper[4782]: I0202 11:31:20.768393 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-share-share1-0" Feb 02 11:31:20 crc kubenswrapper[4782]: I0202 11:31:20.801125 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5cbff496-9e10-4868-ab32-849a8b238474","Type":"ContainerStarted","Data":"ccbf21f42532ea2654c6007d5b2985ad8c55bd36c3e95b5de1de9290a4a613c8"} Feb 02 11:31:20 crc kubenswrapper[4782]: I0202 11:31:20.801750 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 02 11:31:20 crc kubenswrapper[4782]: I0202 11:31:20.832981 4782 scope.go:117] "RemoveContainer" containerID="8a44c5cbf74f9422a24df475561c5c4c5a1cf6d5939f12d2814da14061073213" Feb 02 11:31:20 crc kubenswrapper[4782]: I0202 11:31:20.965074 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.3640915209999998 podStartE2EDuration="7.965057826s" podCreationTimestamp="2026-02-02 11:31:13 +0000 UTC" firstStartedPulling="2026-02-02 11:31:15.075608269 +0000 UTC m=+3154.959800995" lastFinishedPulling="2026-02-02 11:31:19.676574584 +0000 UTC m=+3159.560767300" observedRunningTime="2026-02-02 11:31:20.895160247 +0000 UTC m=+3160.779352973" watchObservedRunningTime="2026-02-02 11:31:20.965057826 +0000 UTC m=+3160.849250542" Feb 02 11:31:20 crc kubenswrapper[4782]: I0202 11:31:20.991328 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-share-share1-0"] Feb 02 11:31:21 crc kubenswrapper[4782]: I0202 11:31:21.053873 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-share-share1-0"] Feb 02 11:31:21 crc kubenswrapper[4782]: I0202 11:31:21.070715 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-share-share1-0"] Feb 02 11:31:21 crc kubenswrapper[4782]: E0202 11:31:21.071195 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be03de2e-2ddc-4cb1-b5be-7adb4add6582" containerName="manila-share" Feb 02 11:31:21 crc kubenswrapper[4782]: I0202 11:31:21.071217 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="be03de2e-2ddc-4cb1-b5be-7adb4add6582" containerName="manila-share" Feb 02 11:31:21 crc kubenswrapper[4782]: E0202 11:31:21.071239 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be03de2e-2ddc-4cb1-b5be-7adb4add6582" containerName="probe" Feb 02 11:31:21 crc kubenswrapper[4782]: I0202 11:31:21.071246 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="be03de2e-2ddc-4cb1-b5be-7adb4add6582" containerName="probe" Feb 02 11:31:21 crc kubenswrapper[4782]: I0202 11:31:21.071411 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="be03de2e-2ddc-4cb1-b5be-7adb4add6582" containerName="probe" Feb 02 11:31:21 crc kubenswrapper[4782]: I0202 11:31:21.071438 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="be03de2e-2ddc-4cb1-b5be-7adb4add6582" containerName="manila-share" Feb 02 11:31:21 crc kubenswrapper[4782]: I0202 11:31:21.072401 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-share-share1-0" Feb 02 11:31:21 crc kubenswrapper[4782]: I0202 11:31:21.080211 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-share-share1-config-data" Feb 02 11:31:21 crc kubenswrapper[4782]: I0202 11:31:21.082860 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"] Feb 02 11:31:21 crc kubenswrapper[4782]: I0202 11:31:21.207905 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04aa7a3f-6353-4317-8825-1447f8a88842-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"04aa7a3f-6353-4317-8825-1447f8a88842\") " pod="openstack/manila-share-share1-0" Feb 02 11:31:21 crc kubenswrapper[4782]: I0202 11:31:21.208282 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/04aa7a3f-6353-4317-8825-1447f8a88842-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"04aa7a3f-6353-4317-8825-1447f8a88842\") " pod="openstack/manila-share-share1-0" Feb 02 11:31:21 crc kubenswrapper[4782]: I0202 11:31:21.208315 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04aa7a3f-6353-4317-8825-1447f8a88842-config-data\") pod \"manila-share-share1-0\" (UID: \"04aa7a3f-6353-4317-8825-1447f8a88842\") " pod="openstack/manila-share-share1-0" Feb 02 11:31:21 crc kubenswrapper[4782]: I0202 11:31:21.208349 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/04aa7a3f-6353-4317-8825-1447f8a88842-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"04aa7a3f-6353-4317-8825-1447f8a88842\") " pod="openstack/manila-share-share1-0" Feb 02 11:31:21 crc kubenswrapper[4782]: I0202 11:31:21.208390 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/04aa7a3f-6353-4317-8825-1447f8a88842-scripts\") pod \"manila-share-share1-0\" (UID: \"04aa7a3f-6353-4317-8825-1447f8a88842\") " pod="openstack/manila-share-share1-0" Feb 02 11:31:21 crc kubenswrapper[4782]: I0202 11:31:21.208561 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nq2ss\" (UniqueName: \"kubernetes.io/projected/04aa7a3f-6353-4317-8825-1447f8a88842-kube-api-access-nq2ss\") pod \"manila-share-share1-0\" (UID: \"04aa7a3f-6353-4317-8825-1447f8a88842\") " pod="openstack/manila-share-share1-0" Feb 02 11:31:21 crc kubenswrapper[4782]: I0202 11:31:21.208597 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/04aa7a3f-6353-4317-8825-1447f8a88842-ceph\") pod \"manila-share-share1-0\" (UID: \"04aa7a3f-6353-4317-8825-1447f8a88842\") " pod="openstack/manila-share-share1-0" Feb 02 11:31:21 crc kubenswrapper[4782]: I0202 11:31:21.208622 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/04aa7a3f-6353-4317-8825-1447f8a88842-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"04aa7a3f-6353-4317-8825-1447f8a88842\") " pod="openstack/manila-share-share1-0" Feb 02 11:31:21 crc 
kubenswrapper[4782]: I0202 11:31:21.310642 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nq2ss\" (UniqueName: \"kubernetes.io/projected/04aa7a3f-6353-4317-8825-1447f8a88842-kube-api-access-nq2ss\") pod \"manila-share-share1-0\" (UID: \"04aa7a3f-6353-4317-8825-1447f8a88842\") " pod="openstack/manila-share-share1-0" Feb 02 11:31:21 crc kubenswrapper[4782]: I0202 11:31:21.311041 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/04aa7a3f-6353-4317-8825-1447f8a88842-ceph\") pod \"manila-share-share1-0\" (UID: \"04aa7a3f-6353-4317-8825-1447f8a88842\") " pod="openstack/manila-share-share1-0" Feb 02 11:31:21 crc kubenswrapper[4782]: I0202 11:31:21.311141 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/04aa7a3f-6353-4317-8825-1447f8a88842-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"04aa7a3f-6353-4317-8825-1447f8a88842\") " pod="openstack/manila-share-share1-0" Feb 02 11:31:21 crc kubenswrapper[4782]: I0202 11:31:21.311286 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04aa7a3f-6353-4317-8825-1447f8a88842-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"04aa7a3f-6353-4317-8825-1447f8a88842\") " pod="openstack/manila-share-share1-0" Feb 02 11:31:21 crc kubenswrapper[4782]: I0202 11:31:21.311375 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/04aa7a3f-6353-4317-8825-1447f8a88842-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"04aa7a3f-6353-4317-8825-1447f8a88842\") " pod="openstack/manila-share-share1-0" Feb 02 11:31:21 crc kubenswrapper[4782]: I0202 11:31:21.311461 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04aa7a3f-6353-4317-8825-1447f8a88842-config-data\") pod \"manila-share-share1-0\" (UID: \"04aa7a3f-6353-4317-8825-1447f8a88842\") " pod="openstack/manila-share-share1-0" Feb 02 11:31:21 crc kubenswrapper[4782]: I0202 11:31:21.311549 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/04aa7a3f-6353-4317-8825-1447f8a88842-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"04aa7a3f-6353-4317-8825-1447f8a88842\") " pod="openstack/manila-share-share1-0" Feb 02 11:31:21 crc kubenswrapper[4782]: I0202 11:31:21.311669 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/04aa7a3f-6353-4317-8825-1447f8a88842-scripts\") pod \"manila-share-share1-0\" (UID: \"04aa7a3f-6353-4317-8825-1447f8a88842\") " pod="openstack/manila-share-share1-0" Feb 02 11:31:21 crc kubenswrapper[4782]: I0202 11:31:21.312841 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/04aa7a3f-6353-4317-8825-1447f8a88842-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"04aa7a3f-6353-4317-8825-1447f8a88842\") " pod="openstack/manila-share-share1-0" Feb 02 11:31:21 crc kubenswrapper[4782]: I0202 11:31:21.312983 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-manila\" (UniqueName: 
\"kubernetes.io/host-path/04aa7a3f-6353-4317-8825-1447f8a88842-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"04aa7a3f-6353-4317-8825-1447f8a88842\") " pod="openstack/manila-share-share1-0" Feb 02 11:31:21 crc kubenswrapper[4782]: I0202 11:31:21.324467 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/04aa7a3f-6353-4317-8825-1447f8a88842-scripts\") pod \"manila-share-share1-0\" (UID: \"04aa7a3f-6353-4317-8825-1447f8a88842\") " pod="openstack/manila-share-share1-0" Feb 02 11:31:21 crc kubenswrapper[4782]: I0202 11:31:21.324902 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04aa7a3f-6353-4317-8825-1447f8a88842-config-data\") pod \"manila-share-share1-0\" (UID: \"04aa7a3f-6353-4317-8825-1447f8a88842\") " pod="openstack/manila-share-share1-0" Feb 02 11:31:21 crc kubenswrapper[4782]: I0202 11:31:21.325191 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/04aa7a3f-6353-4317-8825-1447f8a88842-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"04aa7a3f-6353-4317-8825-1447f8a88842\") " pod="openstack/manila-share-share1-0" Feb 02 11:31:21 crc kubenswrapper[4782]: I0202 11:31:21.325306 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/04aa7a3f-6353-4317-8825-1447f8a88842-ceph\") pod \"manila-share-share1-0\" (UID: \"04aa7a3f-6353-4317-8825-1447f8a88842\") " pod="openstack/manila-share-share1-0" Feb 02 11:31:21 crc kubenswrapper[4782]: I0202 11:31:21.336558 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04aa7a3f-6353-4317-8825-1447f8a88842-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"04aa7a3f-6353-4317-8825-1447f8a88842\") " pod="openstack/manila-share-share1-0" Feb 02 11:31:21 crc kubenswrapper[4782]: I0202 11:31:21.360244 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nq2ss\" (UniqueName: \"kubernetes.io/projected/04aa7a3f-6353-4317-8825-1447f8a88842-kube-api-access-nq2ss\") pod \"manila-share-share1-0\" (UID: \"04aa7a3f-6353-4317-8825-1447f8a88842\") " pod="openstack/manila-share-share1-0" Feb 02 11:31:21 crc kubenswrapper[4782]: I0202 11:31:21.400314 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-share-share1-0" Feb 02 11:31:21 crc kubenswrapper[4782]: I0202 11:31:21.937927 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"] Feb 02 11:31:22 crc kubenswrapper[4782]: I0202 11:31:22.845511 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be03de2e-2ddc-4cb1-b5be-7adb4add6582" path="/var/lib/kubelet/pods/be03de2e-2ddc-4cb1-b5be-7adb4add6582/volumes" Feb 02 11:31:22 crc kubenswrapper[4782]: I0202 11:31:22.854926 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"04aa7a3f-6353-4317-8825-1447f8a88842","Type":"ContainerStarted","Data":"49ae4d7efbdb39580241566cc9d6cdc3e54913589dec64371bc05d9dc27103b7"} Feb 02 11:31:22 crc kubenswrapper[4782]: I0202 11:31:22.854974 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"04aa7a3f-6353-4317-8825-1447f8a88842","Type":"ContainerStarted","Data":"dea5bebdf56912f1382c7285fb8e0288248bd9fc7aa62131e841b354c5b1e9b4"} Feb 02 11:31:23 crc kubenswrapper[4782]: I0202 11:31:23.865184 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"04aa7a3f-6353-4317-8825-1447f8a88842","Type":"ContainerStarted","Data":"43415fcef60a234b1660bdd058149b4207de7a075a7e69fc616a09a4c4884368"} Feb 02 11:31:23 crc kubenswrapper[4782]: I0202 11:31:23.905135 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-share-share1-0" podStartSLOduration=3.9051153899999997 podStartE2EDuration="3.90511539s" podCreationTimestamp="2026-02-02 11:31:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 11:31:23.896845422 +0000 UTC m=+3163.781038138" watchObservedRunningTime="2026-02-02 11:31:23.90511539 +0000 UTC m=+3163.789308106" Feb 02 11:31:24 crc kubenswrapper[4782]: I0202 11:31:24.189455 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-scheduler-0" Feb 02 11:31:28 crc kubenswrapper[4782]: I0202 11:31:28.627917 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-78d997b864-7sqws" podUID="62cd5c24-315a-45c1-bca8-08696f1080cd" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.242:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.242:8443: connect: connection refused" Feb 02 11:31:28 crc kubenswrapper[4782]: I0202 11:31:28.946732 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5665456548-9x6qh" podUID="306e30f3-8fe7-427e-b8ff-309a561dda88" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.243:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.243:8443: connect: connection refused" Feb 02 11:31:30 crc kubenswrapper[4782]: I0202 11:31:30.827522 4782 scope.go:117] "RemoveContainer" containerID="0f610e1fc5d774ae98e6427843ebdfbe622219e84034ddfd24bafe67b92e53a2" Feb 02 11:31:30 crc kubenswrapper[4782]: E0202 11:31:30.828015 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" 
podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:31:31 crc kubenswrapper[4782]: I0202 11:31:31.401343 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-share-share1-0" Feb 02 11:31:36 crc kubenswrapper[4782]: I0202 11:31:36.312115 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-scheduler-0" Feb 02 11:31:42 crc kubenswrapper[4782]: I0202 11:31:42.082295 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-5665456548-9x6qh" Feb 02 11:31:42 crc kubenswrapper[4782]: I0202 11:31:42.111817 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-78d997b864-7sqws" Feb 02 11:31:43 crc kubenswrapper[4782]: I0202 11:31:43.491005 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-share-share1-0" Feb 02 11:31:43 crc kubenswrapper[4782]: I0202 11:31:43.821201 4782 scope.go:117] "RemoveContainer" containerID="0f610e1fc5d774ae98e6427843ebdfbe622219e84034ddfd24bafe67b92e53a2" Feb 02 11:31:43 crc kubenswrapper[4782]: E0202 11:31:43.821487 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:31:44 crc kubenswrapper[4782]: I0202 11:31:44.110779 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-78d997b864-7sqws" Feb 02 11:31:44 crc kubenswrapper[4782]: I0202 11:31:44.119436 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-5665456548-9x6qh" Feb 02 11:31:44 crc kubenswrapper[4782]: I0202 11:31:44.222338 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-78d997b864-7sqws"] Feb 02 11:31:44 crc kubenswrapper[4782]: I0202 11:31:44.271002 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 02 11:31:45 crc kubenswrapper[4782]: I0202 11:31:45.065098 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-78d997b864-7sqws" podUID="62cd5c24-315a-45c1-bca8-08696f1080cd" containerName="horizon-log" containerID="cri-o://d69e181d159dbc6c08cd056aa1bf1d0f3003f078165449a5f856df54eed28770" gracePeriod=30 Feb 02 11:31:45 crc kubenswrapper[4782]: I0202 11:31:45.065219 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-78d997b864-7sqws" podUID="62cd5c24-315a-45c1-bca8-08696f1080cd" containerName="horizon" containerID="cri-o://68394bc34f7dce9e271ce3f95971bd209e0e3a798e5433df9c66b03578b88eae" gracePeriod=30 Feb 02 11:31:48 crc kubenswrapper[4782]: I0202 11:31:48.639383 4782 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-78d997b864-7sqws" podUID="62cd5c24-315a-45c1-bca8-08696f1080cd" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.242:8443/dashboard/auth/login/?next=/dashboard/\": read tcp 10.217.0.2:56172->10.217.0.242:8443: read: connection reset by peer" Feb 02 11:31:49 crc kubenswrapper[4782]: I0202 11:31:49.099094 4782 generic.go:334] "Generic (PLEG): container finished" 
podID="62cd5c24-315a-45c1-bca8-08696f1080cd" containerID="68394bc34f7dce9e271ce3f95971bd209e0e3a798e5433df9c66b03578b88eae" exitCode=0 Feb 02 11:31:49 crc kubenswrapper[4782]: I0202 11:31:49.099156 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-78d997b864-7sqws" event={"ID":"62cd5c24-315a-45c1-bca8-08696f1080cd","Type":"ContainerDied","Data":"68394bc34f7dce9e271ce3f95971bd209e0e3a798e5433df9c66b03578b88eae"} Feb 02 11:31:49 crc kubenswrapper[4782]: I0202 11:31:49.099484 4782 scope.go:117] "RemoveContainer" containerID="09295effad802ea8438e358847ecb01f49091fb80c4d58e17763b7d006278a11" Feb 02 11:31:57 crc kubenswrapper[4782]: I0202 11:31:57.821882 4782 scope.go:117] "RemoveContainer" containerID="0f610e1fc5d774ae98e6427843ebdfbe622219e84034ddfd24bafe67b92e53a2" Feb 02 11:31:57 crc kubenswrapper[4782]: E0202 11:31:57.822476 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:31:58 crc kubenswrapper[4782]: I0202 11:31:58.628342 4782 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-78d997b864-7sqws" podUID="62cd5c24-315a-45c1-bca8-08696f1080cd" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.242:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.242:8443: connect: connection refused" Feb 02 11:32:08 crc kubenswrapper[4782]: I0202 11:32:08.628088 4782 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-78d997b864-7sqws" podUID="62cd5c24-315a-45c1-bca8-08696f1080cd" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.242:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.242:8443: connect: connection refused" Feb 02 11:32:08 crc kubenswrapper[4782]: I0202 11:32:08.628742 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-78d997b864-7sqws" Feb 02 11:32:10 crc kubenswrapper[4782]: I0202 11:32:10.829560 4782 scope.go:117] "RemoveContainer" containerID="0f610e1fc5d774ae98e6427843ebdfbe622219e84034ddfd24bafe67b92e53a2" Feb 02 11:32:10 crc kubenswrapper[4782]: E0202 11:32:10.830119 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:32:15 crc kubenswrapper[4782]: I0202 11:32:15.409120 4782 generic.go:334] "Generic (PLEG): container finished" podID="62cd5c24-315a-45c1-bca8-08696f1080cd" containerID="d69e181d159dbc6c08cd056aa1bf1d0f3003f078165449a5f856df54eed28770" exitCode=137 Feb 02 11:32:15 crc kubenswrapper[4782]: I0202 11:32:15.409295 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-78d997b864-7sqws" event={"ID":"62cd5c24-315a-45c1-bca8-08696f1080cd","Type":"ContainerDied","Data":"d69e181d159dbc6c08cd056aa1bf1d0f3003f078165449a5f856df54eed28770"} Feb 02 11:32:15 crc kubenswrapper[4782]: I0202 11:32:15.519280 
4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-78d997b864-7sqws" Feb 02 11:32:15 crc kubenswrapper[4782]: I0202 11:32:15.682225 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/62cd5c24-315a-45c1-bca8-08696f1080cd-scripts\") pod \"62cd5c24-315a-45c1-bca8-08696f1080cd\" (UID: \"62cd5c24-315a-45c1-bca8-08696f1080cd\") " Feb 02 11:32:15 crc kubenswrapper[4782]: I0202 11:32:15.682293 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/62cd5c24-315a-45c1-bca8-08696f1080cd-logs\") pod \"62cd5c24-315a-45c1-bca8-08696f1080cd\" (UID: \"62cd5c24-315a-45c1-bca8-08696f1080cd\") " Feb 02 11:32:15 crc kubenswrapper[4782]: I0202 11:32:15.682363 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6xwsf\" (UniqueName: \"kubernetes.io/projected/62cd5c24-315a-45c1-bca8-08696f1080cd-kube-api-access-6xwsf\") pod \"62cd5c24-315a-45c1-bca8-08696f1080cd\" (UID: \"62cd5c24-315a-45c1-bca8-08696f1080cd\") " Feb 02 11:32:15 crc kubenswrapper[4782]: I0202 11:32:15.682538 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/62cd5c24-315a-45c1-bca8-08696f1080cd-horizon-tls-certs\") pod \"62cd5c24-315a-45c1-bca8-08696f1080cd\" (UID: \"62cd5c24-315a-45c1-bca8-08696f1080cd\") " Feb 02 11:32:15 crc kubenswrapper[4782]: I0202 11:32:15.682579 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/62cd5c24-315a-45c1-bca8-08696f1080cd-horizon-secret-key\") pod \"62cd5c24-315a-45c1-bca8-08696f1080cd\" (UID: \"62cd5c24-315a-45c1-bca8-08696f1080cd\") " Feb 02 11:32:15 crc kubenswrapper[4782]: I0202 11:32:15.682629 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/62cd5c24-315a-45c1-bca8-08696f1080cd-config-data\") pod \"62cd5c24-315a-45c1-bca8-08696f1080cd\" (UID: \"62cd5c24-315a-45c1-bca8-08696f1080cd\") " Feb 02 11:32:15 crc kubenswrapper[4782]: I0202 11:32:15.682691 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62cd5c24-315a-45c1-bca8-08696f1080cd-combined-ca-bundle\") pod \"62cd5c24-315a-45c1-bca8-08696f1080cd\" (UID: \"62cd5c24-315a-45c1-bca8-08696f1080cd\") " Feb 02 11:32:15 crc kubenswrapper[4782]: I0202 11:32:15.682840 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/62cd5c24-315a-45c1-bca8-08696f1080cd-logs" (OuterVolumeSpecName: "logs") pod "62cd5c24-315a-45c1-bca8-08696f1080cd" (UID: "62cd5c24-315a-45c1-bca8-08696f1080cd"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:32:15 crc kubenswrapper[4782]: I0202 11:32:15.683106 4782 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/62cd5c24-315a-45c1-bca8-08696f1080cd-logs\") on node \"crc\" DevicePath \"\"" Feb 02 11:32:15 crc kubenswrapper[4782]: I0202 11:32:15.689124 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62cd5c24-315a-45c1-bca8-08696f1080cd-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "62cd5c24-315a-45c1-bca8-08696f1080cd" (UID: "62cd5c24-315a-45c1-bca8-08696f1080cd"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:32:15 crc kubenswrapper[4782]: I0202 11:32:15.695489 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62cd5c24-315a-45c1-bca8-08696f1080cd-kube-api-access-6xwsf" (OuterVolumeSpecName: "kube-api-access-6xwsf") pod "62cd5c24-315a-45c1-bca8-08696f1080cd" (UID: "62cd5c24-315a-45c1-bca8-08696f1080cd"). InnerVolumeSpecName "kube-api-access-6xwsf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:32:15 crc kubenswrapper[4782]: I0202 11:32:15.742555 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62cd5c24-315a-45c1-bca8-08696f1080cd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "62cd5c24-315a-45c1-bca8-08696f1080cd" (UID: "62cd5c24-315a-45c1-bca8-08696f1080cd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:32:15 crc kubenswrapper[4782]: I0202 11:32:15.744283 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/62cd5c24-315a-45c1-bca8-08696f1080cd-config-data" (OuterVolumeSpecName: "config-data") pod "62cd5c24-315a-45c1-bca8-08696f1080cd" (UID: "62cd5c24-315a-45c1-bca8-08696f1080cd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:32:15 crc kubenswrapper[4782]: I0202 11:32:15.757608 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/62cd5c24-315a-45c1-bca8-08696f1080cd-scripts" (OuterVolumeSpecName: "scripts") pod "62cd5c24-315a-45c1-bca8-08696f1080cd" (UID: "62cd5c24-315a-45c1-bca8-08696f1080cd"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:32:15 crc kubenswrapper[4782]: I0202 11:32:15.786022 4782 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/62cd5c24-315a-45c1-bca8-08696f1080cd-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 11:32:15 crc kubenswrapper[4782]: I0202 11:32:15.786067 4782 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62cd5c24-315a-45c1-bca8-08696f1080cd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 11:32:15 crc kubenswrapper[4782]: I0202 11:32:15.786081 4782 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/62cd5c24-315a-45c1-bca8-08696f1080cd-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 11:32:15 crc kubenswrapper[4782]: I0202 11:32:15.786092 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6xwsf\" (UniqueName: \"kubernetes.io/projected/62cd5c24-315a-45c1-bca8-08696f1080cd-kube-api-access-6xwsf\") on node \"crc\" DevicePath \"\"" Feb 02 11:32:15 crc kubenswrapper[4782]: I0202 11:32:15.786103 4782 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/62cd5c24-315a-45c1-bca8-08696f1080cd-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 02 11:32:15 crc kubenswrapper[4782]: I0202 11:32:15.800874 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62cd5c24-315a-45c1-bca8-08696f1080cd-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "62cd5c24-315a-45c1-bca8-08696f1080cd" (UID: "62cd5c24-315a-45c1-bca8-08696f1080cd"). InnerVolumeSpecName "horizon-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:32:15 crc kubenswrapper[4782]: I0202 11:32:15.888286 4782 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/62cd5c24-315a-45c1-bca8-08696f1080cd-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 11:32:16 crc kubenswrapper[4782]: I0202 11:32:16.419846 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-78d997b864-7sqws" event={"ID":"62cd5c24-315a-45c1-bca8-08696f1080cd","Type":"ContainerDied","Data":"d24d9db9a798247d6fbcd136dc3f9d15a710d6aee5946c313b4ac9b4fb5bc96d"} Feb 02 11:32:16 crc kubenswrapper[4782]: I0202 11:32:16.419987 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-78d997b864-7sqws" Feb 02 11:32:16 crc kubenswrapper[4782]: I0202 11:32:16.420744 4782 scope.go:117] "RemoveContainer" containerID="68394bc34f7dce9e271ce3f95971bd209e0e3a798e5433df9c66b03578b88eae" Feb 02 11:32:16 crc kubenswrapper[4782]: I0202 11:32:16.459893 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-78d997b864-7sqws"] Feb 02 11:32:16 crc kubenswrapper[4782]: I0202 11:32:16.470530 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-78d997b864-7sqws"] Feb 02 11:32:16 crc kubenswrapper[4782]: I0202 11:32:16.611043 4782 scope.go:117] "RemoveContainer" containerID="d69e181d159dbc6c08cd056aa1bf1d0f3003f078165449a5f856df54eed28770" Feb 02 11:32:16 crc kubenswrapper[4782]: I0202 11:32:16.832320 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="62cd5c24-315a-45c1-bca8-08696f1080cd" path="/var/lib/kubelet/pods/62cd5c24-315a-45c1-bca8-08696f1080cd/volumes" Feb 02 11:32:24 crc kubenswrapper[4782]: I0202 11:32:24.820985 4782 scope.go:117] "RemoveContainer" containerID="0f610e1fc5d774ae98e6427843ebdfbe622219e84034ddfd24bafe67b92e53a2" Feb 02 11:32:24 crc kubenswrapper[4782]: E0202 11:32:24.821763 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:32:36 crc kubenswrapper[4782]: I0202 11:32:36.821497 4782 scope.go:117] "RemoveContainer" containerID="0f610e1fc5d774ae98e6427843ebdfbe622219e84034ddfd24bafe67b92e53a2" Feb 02 11:32:36 crc kubenswrapper[4782]: E0202 11:32:36.822374 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:32:47 crc kubenswrapper[4782]: I0202 11:32:47.416349 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Feb 02 11:32:47 crc kubenswrapper[4782]: E0202 11:32:47.423155 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62cd5c24-315a-45c1-bca8-08696f1080cd" containerName="horizon" Feb 02 11:32:47 crc kubenswrapper[4782]: I0202 11:32:47.423194 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="62cd5c24-315a-45c1-bca8-08696f1080cd" containerName="horizon" Feb 02 11:32:47 crc kubenswrapper[4782]: E0202 11:32:47.423218 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62cd5c24-315a-45c1-bca8-08696f1080cd" containerName="horizon" Feb 02 11:32:47 crc kubenswrapper[4782]: I0202 11:32:47.423227 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="62cd5c24-315a-45c1-bca8-08696f1080cd" containerName="horizon" Feb 02 11:32:47 crc kubenswrapper[4782]: E0202 11:32:47.423268 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62cd5c24-315a-45c1-bca8-08696f1080cd" containerName="horizon-log" Feb 02 11:32:47 crc kubenswrapper[4782]: I0202 11:32:47.423277 4782 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="62cd5c24-315a-45c1-bca8-08696f1080cd" containerName="horizon-log" Feb 02 11:32:47 crc kubenswrapper[4782]: I0202 11:32:47.423531 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="62cd5c24-315a-45c1-bca8-08696f1080cd" containerName="horizon" Feb 02 11:32:47 crc kubenswrapper[4782]: I0202 11:32:47.423548 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="62cd5c24-315a-45c1-bca8-08696f1080cd" containerName="horizon-log" Feb 02 11:32:47 crc kubenswrapper[4782]: I0202 11:32:47.423561 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="62cd5c24-315a-45c1-bca8-08696f1080cd" containerName="horizon" Feb 02 11:32:47 crc kubenswrapper[4782]: I0202 11:32:47.424433 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 02 11:32:47 crc kubenswrapper[4782]: I0202 11:32:47.429120 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Feb 02 11:32:47 crc kubenswrapper[4782]: I0202 11:32:47.429270 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Feb 02 11:32:47 crc kubenswrapper[4782]: I0202 11:32:47.430079 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Feb 02 11:32:47 crc kubenswrapper[4782]: I0202 11:32:47.431100 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-nvl62" Feb 02 11:32:47 crc kubenswrapper[4782]: I0202 11:32:47.434085 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Feb 02 11:32:47 crc kubenswrapper[4782]: I0202 11:32:47.506320 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a5a266a5-ac00-49e1-9443-def4cebe65ad-config-data\") pod \"tempest-tests-tempest\" (UID: \"a5a266a5-ac00-49e1-9443-def4cebe65ad\") " pod="openstack/tempest-tests-tempest" Feb 02 11:32:47 crc kubenswrapper[4782]: I0202 11:32:47.506716 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a5a266a5-ac00-49e1-9443-def4cebe65ad-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"a5a266a5-ac00-49e1-9443-def4cebe65ad\") " pod="openstack/tempest-tests-tempest" Feb 02 11:32:47 crc kubenswrapper[4782]: I0202 11:32:47.506809 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a5a266a5-ac00-49e1-9443-def4cebe65ad-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"a5a266a5-ac00-49e1-9443-def4cebe65ad\") " pod="openstack/tempest-tests-tempest" Feb 02 11:32:47 crc kubenswrapper[4782]: I0202 11:32:47.609270 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a5a266a5-ac00-49e1-9443-def4cebe65ad-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"a5a266a5-ac00-49e1-9443-def4cebe65ad\") " pod="openstack/tempest-tests-tempest" Feb 02 11:32:47 crc kubenswrapper[4782]: I0202 11:32:47.609491 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a5a266a5-ac00-49e1-9443-def4cebe65ad-config-data\") pod 
\"tempest-tests-tempest\" (UID: \"a5a266a5-ac00-49e1-9443-def4cebe65ad\") " pod="openstack/tempest-tests-tempest" Feb 02 11:32:47 crc kubenswrapper[4782]: I0202 11:32:47.609629 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"tempest-tests-tempest\" (UID: \"a5a266a5-ac00-49e1-9443-def4cebe65ad\") " pod="openstack/tempest-tests-tempest" Feb 02 11:32:47 crc kubenswrapper[4782]: I0202 11:32:47.609846 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8gp9\" (UniqueName: \"kubernetes.io/projected/a5a266a5-ac00-49e1-9443-def4cebe65ad-kube-api-access-r8gp9\") pod \"tempest-tests-tempest\" (UID: \"a5a266a5-ac00-49e1-9443-def4cebe65ad\") " pod="openstack/tempest-tests-tempest" Feb 02 11:32:47 crc kubenswrapper[4782]: I0202 11:32:47.609944 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a5a266a5-ac00-49e1-9443-def4cebe65ad-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"a5a266a5-ac00-49e1-9443-def4cebe65ad\") " pod="openstack/tempest-tests-tempest" Feb 02 11:32:47 crc kubenswrapper[4782]: I0202 11:32:47.610012 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a5a266a5-ac00-49e1-9443-def4cebe65ad-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"a5a266a5-ac00-49e1-9443-def4cebe65ad\") " pod="openstack/tempest-tests-tempest" Feb 02 11:32:47 crc kubenswrapper[4782]: I0202 11:32:47.610133 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/a5a266a5-ac00-49e1-9443-def4cebe65ad-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"a5a266a5-ac00-49e1-9443-def4cebe65ad\") " pod="openstack/tempest-tests-tempest" Feb 02 11:32:47 crc kubenswrapper[4782]: I0202 11:32:47.610169 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/a5a266a5-ac00-49e1-9443-def4cebe65ad-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"a5a266a5-ac00-49e1-9443-def4cebe65ad\") " pod="openstack/tempest-tests-tempest" Feb 02 11:32:47 crc kubenswrapper[4782]: I0202 11:32:47.610214 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/a5a266a5-ac00-49e1-9443-def4cebe65ad-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"a5a266a5-ac00-49e1-9443-def4cebe65ad\") " pod="openstack/tempest-tests-tempest" Feb 02 11:32:47 crc kubenswrapper[4782]: I0202 11:32:47.610970 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a5a266a5-ac00-49e1-9443-def4cebe65ad-config-data\") pod \"tempest-tests-tempest\" (UID: \"a5a266a5-ac00-49e1-9443-def4cebe65ad\") " pod="openstack/tempest-tests-tempest" Feb 02 11:32:47 crc kubenswrapper[4782]: I0202 11:32:47.611855 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a5a266a5-ac00-49e1-9443-def4cebe65ad-openstack-config\") pod \"tempest-tests-tempest\" (UID: 
\"a5a266a5-ac00-49e1-9443-def4cebe65ad\") " pod="openstack/tempest-tests-tempest" Feb 02 11:32:47 crc kubenswrapper[4782]: I0202 11:32:47.622000 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a5a266a5-ac00-49e1-9443-def4cebe65ad-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"a5a266a5-ac00-49e1-9443-def4cebe65ad\") " pod="openstack/tempest-tests-tempest" Feb 02 11:32:47 crc kubenswrapper[4782]: I0202 11:32:47.713572 4782 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"tempest-tests-tempest\" (UID: \"a5a266a5-ac00-49e1-9443-def4cebe65ad\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/tempest-tests-tempest" Feb 02 11:32:47 crc kubenswrapper[4782]: I0202 11:32:47.713784 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"tempest-tests-tempest\" (UID: \"a5a266a5-ac00-49e1-9443-def4cebe65ad\") " pod="openstack/tempest-tests-tempest" Feb 02 11:32:47 crc kubenswrapper[4782]: I0202 11:32:47.713900 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r8gp9\" (UniqueName: \"kubernetes.io/projected/a5a266a5-ac00-49e1-9443-def4cebe65ad-kube-api-access-r8gp9\") pod \"tempest-tests-tempest\" (UID: \"a5a266a5-ac00-49e1-9443-def4cebe65ad\") " pod="openstack/tempest-tests-tempest" Feb 02 11:32:47 crc kubenswrapper[4782]: I0202 11:32:47.714036 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/a5a266a5-ac00-49e1-9443-def4cebe65ad-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"a5a266a5-ac00-49e1-9443-def4cebe65ad\") " pod="openstack/tempest-tests-tempest" Feb 02 11:32:47 crc kubenswrapper[4782]: I0202 11:32:47.714068 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/a5a266a5-ac00-49e1-9443-def4cebe65ad-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"a5a266a5-ac00-49e1-9443-def4cebe65ad\") " pod="openstack/tempest-tests-tempest" Feb 02 11:32:47 crc kubenswrapper[4782]: I0202 11:32:47.714109 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/a5a266a5-ac00-49e1-9443-def4cebe65ad-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"a5a266a5-ac00-49e1-9443-def4cebe65ad\") " pod="openstack/tempest-tests-tempest" Feb 02 11:32:47 crc kubenswrapper[4782]: I0202 11:32:47.714172 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a5a266a5-ac00-49e1-9443-def4cebe65ad-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"a5a266a5-ac00-49e1-9443-def4cebe65ad\") " pod="openstack/tempest-tests-tempest" Feb 02 11:32:47 crc kubenswrapper[4782]: I0202 11:32:47.715075 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/a5a266a5-ac00-49e1-9443-def4cebe65ad-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"a5a266a5-ac00-49e1-9443-def4cebe65ad\") " pod="openstack/tempest-tests-tempest" Feb 02 11:32:47 
crc kubenswrapper[4782]: I0202 11:32:47.716270 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/a5a266a5-ac00-49e1-9443-def4cebe65ad-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"a5a266a5-ac00-49e1-9443-def4cebe65ad\") " pod="openstack/tempest-tests-tempest" Feb 02 11:32:47 crc kubenswrapper[4782]: I0202 11:32:47.718723 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a5a266a5-ac00-49e1-9443-def4cebe65ad-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"a5a266a5-ac00-49e1-9443-def4cebe65ad\") " pod="openstack/tempest-tests-tempest" Feb 02 11:32:47 crc kubenswrapper[4782]: I0202 11:32:47.719291 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/a5a266a5-ac00-49e1-9443-def4cebe65ad-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"a5a266a5-ac00-49e1-9443-def4cebe65ad\") " pod="openstack/tempest-tests-tempest" Feb 02 11:32:47 crc kubenswrapper[4782]: I0202 11:32:47.730583 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r8gp9\" (UniqueName: \"kubernetes.io/projected/a5a266a5-ac00-49e1-9443-def4cebe65ad-kube-api-access-r8gp9\") pod \"tempest-tests-tempest\" (UID: \"a5a266a5-ac00-49e1-9443-def4cebe65ad\") " pod="openstack/tempest-tests-tempest" Feb 02 11:32:47 crc kubenswrapper[4782]: I0202 11:32:47.746749 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"tempest-tests-tempest\" (UID: \"a5a266a5-ac00-49e1-9443-def4cebe65ad\") " pod="openstack/tempest-tests-tempest" Feb 02 11:32:47 crc kubenswrapper[4782]: I0202 11:32:47.821666 4782 scope.go:117] "RemoveContainer" containerID="0f610e1fc5d774ae98e6427843ebdfbe622219e84034ddfd24bafe67b92e53a2" Feb 02 11:32:47 crc kubenswrapper[4782]: E0202 11:32:47.822242 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:32:48 crc kubenswrapper[4782]: I0202 11:32:48.048905 4782 util.go:30] "No sandbox for pod can be found. 
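[Annotation] Note on local-storage12-crc above: it is the one tempest volume that goes through MountVolume.MountDevice before SetUp, because it is a local persistent volume backed by /mnt/openstack/pv12 on the node. A sketch of such a PV; the name and path come from the log, while the node affinity (which every local PV must carry) is assumed to pin it to node crc:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Local PV behind "local-storage12-crc". Path from the MountDevice entry
	// above; the node affinity block is an assumption.
	pv := corev1.PersistentVolume{
		ObjectMeta: metav1.ObjectMeta{Name: "local-storage12-crc"},
		Spec: corev1.PersistentVolumeSpec{
			PersistentVolumeSource: corev1.PersistentVolumeSource{
				Local: &corev1.LocalVolumeSource{Path: "/mnt/openstack/pv12"},
			},
			NodeAffinity: &corev1.VolumeNodeAffinity{
				Required: &corev1.NodeSelector{
					NodeSelectorTerms: []corev1.NodeSelectorTerm{{
						MatchExpressions: []corev1.NodeSelectorRequirement{{
							Key:      "kubernetes.io/hostname",
							Operator: corev1.NodeSelectorOpIn,
							Values:   []string{"crc"},
						}},
					}},
				},
			},
		},
	}
	fmt.Println(pv.Name, pv.Spec.Local.Path)
}
```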
Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 02 11:32:48 crc kubenswrapper[4782]: I0202 11:32:48.499042 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Feb 02 11:32:48 crc kubenswrapper[4782]: I0202 11:32:48.692048 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"a5a266a5-ac00-49e1-9443-def4cebe65ad","Type":"ContainerStarted","Data":"165e18b6fa145b8da7ea6159f1baabb2c9b6bfa2ecbd382cecfe714965ca36c1"} Feb 02 11:32:58 crc kubenswrapper[4782]: I0202 11:32:58.824052 4782 scope.go:117] "RemoveContainer" containerID="0f610e1fc5d774ae98e6427843ebdfbe622219e84034ddfd24bafe67b92e53a2" Feb 02 11:32:58 crc kubenswrapper[4782]: E0202 11:32:58.825132 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:33:09 crc kubenswrapper[4782]: I0202 11:33:09.821796 4782 scope.go:117] "RemoveContainer" containerID="0f610e1fc5d774ae98e6427843ebdfbe622219e84034ddfd24bafe67b92e53a2" Feb 02 11:33:09 crc kubenswrapper[4782]: E0202 11:33:09.823022 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:33:21 crc kubenswrapper[4782]: I0202 11:33:21.823163 4782 scope.go:117] "RemoveContainer" containerID="0f610e1fc5d774ae98e6427843ebdfbe622219e84034ddfd24bafe67b92e53a2" Feb 02 11:33:21 crc kubenswrapper[4782]: E0202 11:33:21.825010 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:33:29 crc kubenswrapper[4782]: E0202 11:33:29.258919 4782 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified" Feb 02 11:33:29 crc kubenswrapper[4782]: E0202 11:33:29.262708 4782 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r8gp9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(a5a266a5-ac00-49e1-9443-def4cebe65ad): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 02 11:33:29 crc kubenswrapper[4782]: E0202 11:33:29.263984 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" 
podUID="a5a266a5-ac00-49e1-9443-def4cebe65ad" Feb 02 11:33:30 crc kubenswrapper[4782]: E0202 11:33:30.137939 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest" podUID="a5a266a5-ac00-49e1-9443-def4cebe65ad" Feb 02 11:33:34 crc kubenswrapper[4782]: I0202 11:33:34.821285 4782 scope.go:117] "RemoveContainer" containerID="0f610e1fc5d774ae98e6427843ebdfbe622219e84034ddfd24bafe67b92e53a2" Feb 02 11:33:34 crc kubenswrapper[4782]: E0202 11:33:34.822058 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:33:44 crc kubenswrapper[4782]: I0202 11:33:44.049951 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Feb 02 11:33:46 crc kubenswrapper[4782]: I0202 11:33:46.274871 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"a5a266a5-ac00-49e1-9443-def4cebe65ad","Type":"ContainerStarted","Data":"762b4b5b8e241a2a3f60ed6176e6adc0554b048edbcf9782fa78026e47e66f14"} Feb 02 11:33:46 crc kubenswrapper[4782]: I0202 11:33:46.297047 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=4.757753717 podStartE2EDuration="1m0.297026302s" podCreationTimestamp="2026-02-02 11:32:46 +0000 UTC" firstStartedPulling="2026-02-02 11:32:48.507980546 +0000 UTC m=+3248.392173252" lastFinishedPulling="2026-02-02 11:33:44.047253121 +0000 UTC m=+3303.931445837" observedRunningTime="2026-02-02 11:33:46.295617081 +0000 UTC m=+3306.179809797" watchObservedRunningTime="2026-02-02 11:33:46.297026302 +0000 UTC m=+3306.181219018" Feb 02 11:33:48 crc kubenswrapper[4782]: I0202 11:33:48.821941 4782 scope.go:117] "RemoveContainer" containerID="0f610e1fc5d774ae98e6427843ebdfbe622219e84034ddfd24bafe67b92e53a2" Feb 02 11:33:48 crc kubenswrapper[4782]: E0202 11:33:48.823250 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:33:50 crc kubenswrapper[4782]: I0202 11:33:50.123905 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-ff29q"] Feb 02 11:33:50 crc kubenswrapper[4782]: I0202 11:33:50.126582 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ff29q" Feb 02 11:33:50 crc kubenswrapper[4782]: I0202 11:33:50.146422 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ff29q"] Feb 02 11:33:50 crc kubenswrapper[4782]: I0202 11:33:50.250216 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/84d1feed-8d12-41b5-8606-4ac037256f14-utilities\") pod \"redhat-marketplace-ff29q\" (UID: \"84d1feed-8d12-41b5-8606-4ac037256f14\") " pod="openshift-marketplace/redhat-marketplace-ff29q" Feb 02 11:33:50 crc kubenswrapper[4782]: I0202 11:33:50.250330 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxh7n\" (UniqueName: \"kubernetes.io/projected/84d1feed-8d12-41b5-8606-4ac037256f14-kube-api-access-bxh7n\") pod \"redhat-marketplace-ff29q\" (UID: \"84d1feed-8d12-41b5-8606-4ac037256f14\") " pod="openshift-marketplace/redhat-marketplace-ff29q" Feb 02 11:33:50 crc kubenswrapper[4782]: I0202 11:33:50.250602 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/84d1feed-8d12-41b5-8606-4ac037256f14-catalog-content\") pod \"redhat-marketplace-ff29q\" (UID: \"84d1feed-8d12-41b5-8606-4ac037256f14\") " pod="openshift-marketplace/redhat-marketplace-ff29q" Feb 02 11:33:50 crc kubenswrapper[4782]: I0202 11:33:50.353046 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/84d1feed-8d12-41b5-8606-4ac037256f14-utilities\") pod \"redhat-marketplace-ff29q\" (UID: \"84d1feed-8d12-41b5-8606-4ac037256f14\") " pod="openshift-marketplace/redhat-marketplace-ff29q" Feb 02 11:33:50 crc kubenswrapper[4782]: I0202 11:33:50.353171 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bxh7n\" (UniqueName: \"kubernetes.io/projected/84d1feed-8d12-41b5-8606-4ac037256f14-kube-api-access-bxh7n\") pod \"redhat-marketplace-ff29q\" (UID: \"84d1feed-8d12-41b5-8606-4ac037256f14\") " pod="openshift-marketplace/redhat-marketplace-ff29q" Feb 02 11:33:50 crc kubenswrapper[4782]: I0202 11:33:50.353243 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/84d1feed-8d12-41b5-8606-4ac037256f14-catalog-content\") pod \"redhat-marketplace-ff29q\" (UID: \"84d1feed-8d12-41b5-8606-4ac037256f14\") " pod="openshift-marketplace/redhat-marketplace-ff29q" Feb 02 11:33:50 crc kubenswrapper[4782]: I0202 11:33:50.353884 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/84d1feed-8d12-41b5-8606-4ac037256f14-catalog-content\") pod \"redhat-marketplace-ff29q\" (UID: \"84d1feed-8d12-41b5-8606-4ac037256f14\") " pod="openshift-marketplace/redhat-marketplace-ff29q" Feb 02 11:33:50 crc kubenswrapper[4782]: I0202 11:33:50.354206 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/84d1feed-8d12-41b5-8606-4ac037256f14-utilities\") pod \"redhat-marketplace-ff29q\" (UID: \"84d1feed-8d12-41b5-8606-4ac037256f14\") " pod="openshift-marketplace/redhat-marketplace-ff29q" Feb 02 11:33:50 crc kubenswrapper[4782]: I0202 11:33:50.375570 4782 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-bxh7n\" (UniqueName: \"kubernetes.io/projected/84d1feed-8d12-41b5-8606-4ac037256f14-kube-api-access-bxh7n\") pod \"redhat-marketplace-ff29q\" (UID: \"84d1feed-8d12-41b5-8606-4ac037256f14\") " pod="openshift-marketplace/redhat-marketplace-ff29q" Feb 02 11:33:50 crc kubenswrapper[4782]: I0202 11:33:50.452163 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ff29q" Feb 02 11:33:51 crc kubenswrapper[4782]: I0202 11:33:51.044596 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ff29q"] Feb 02 11:33:51 crc kubenswrapper[4782]: I0202 11:33:51.330229 4782 generic.go:334] "Generic (PLEG): container finished" podID="84d1feed-8d12-41b5-8606-4ac037256f14" containerID="7f0df0cc98cdbc6a9bc514d157a3eb78ba3e663f6d31914c5345e85f394b6bc7" exitCode=0 Feb 02 11:33:51 crc kubenswrapper[4782]: I0202 11:33:51.330275 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ff29q" event={"ID":"84d1feed-8d12-41b5-8606-4ac037256f14","Type":"ContainerDied","Data":"7f0df0cc98cdbc6a9bc514d157a3eb78ba3e663f6d31914c5345e85f394b6bc7"} Feb 02 11:33:51 crc kubenswrapper[4782]: I0202 11:33:51.330335 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ff29q" event={"ID":"84d1feed-8d12-41b5-8606-4ac037256f14","Type":"ContainerStarted","Data":"df451c3fc192c5378c3dad8d8c95604469154b7e53f7269d58d6c31fac3aa873"} Feb 02 11:33:52 crc kubenswrapper[4782]: I0202 11:33:52.341036 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ff29q" event={"ID":"84d1feed-8d12-41b5-8606-4ac037256f14","Type":"ContainerStarted","Data":"e5b200893c2ca101197ca33df78d0350b12bef2fcb7964b0b0af6912ad962a9d"} Feb 02 11:33:53 crc kubenswrapper[4782]: I0202 11:33:53.353743 4782 generic.go:334] "Generic (PLEG): container finished" podID="84d1feed-8d12-41b5-8606-4ac037256f14" containerID="e5b200893c2ca101197ca33df78d0350b12bef2fcb7964b0b0af6912ad962a9d" exitCode=0 Feb 02 11:33:53 crc kubenswrapper[4782]: I0202 11:33:53.353803 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ff29q" event={"ID":"84d1feed-8d12-41b5-8606-4ac037256f14","Type":"ContainerDied","Data":"e5b200893c2ca101197ca33df78d0350b12bef2fcb7964b0b0af6912ad962a9d"} Feb 02 11:33:54 crc kubenswrapper[4782]: I0202 11:33:54.365034 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ff29q" event={"ID":"84d1feed-8d12-41b5-8606-4ac037256f14","Type":"ContainerStarted","Data":"677c502ab7a7b40227f23e295754903936a463b6096e11cf42e0fef2c8bbd769"} Feb 02 11:33:54 crc kubenswrapper[4782]: I0202 11:33:54.395365 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-ff29q" podStartSLOduration=1.968060234 podStartE2EDuration="4.395338805s" podCreationTimestamp="2026-02-02 11:33:50 +0000 UTC" firstStartedPulling="2026-02-02 11:33:51.332876343 +0000 UTC m=+3311.217069059" lastFinishedPulling="2026-02-02 11:33:53.760154914 +0000 UTC m=+3313.644347630" observedRunningTime="2026-02-02 11:33:54.388096546 +0000 UTC m=+3314.272289262" watchObservedRunningTime="2026-02-02 11:33:54.395338805 +0000 UTC m=+3314.279531521" Feb 02 11:34:00 crc kubenswrapper[4782]: I0202 11:34:00.453241 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-marketplace-ff29q" Feb 02 11:34:00 crc kubenswrapper[4782]: I0202 11:34:00.453858 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-ff29q" Feb 02 11:34:00 crc kubenswrapper[4782]: I0202 11:34:00.504322 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-ff29q" Feb 02 11:34:01 crc kubenswrapper[4782]: I0202 11:34:01.499022 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-ff29q" Feb 02 11:34:01 crc kubenswrapper[4782]: I0202 11:34:01.560615 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ff29q"] Feb 02 11:34:01 crc kubenswrapper[4782]: I0202 11:34:01.822165 4782 scope.go:117] "RemoveContainer" containerID="0f610e1fc5d774ae98e6427843ebdfbe622219e84034ddfd24bafe67b92e53a2" Feb 02 11:34:01 crc kubenswrapper[4782]: E0202 11:34:01.822518 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:34:03 crc kubenswrapper[4782]: I0202 11:34:03.460759 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-ff29q" podUID="84d1feed-8d12-41b5-8606-4ac037256f14" containerName="registry-server" containerID="cri-o://677c502ab7a7b40227f23e295754903936a463b6096e11cf42e0fef2c8bbd769" gracePeriod=2 Feb 02 11:34:03 crc kubenswrapper[4782]: I0202 11:34:03.932710 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ff29q" Feb 02 11:34:04 crc kubenswrapper[4782]: I0202 11:34:04.050618 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/84d1feed-8d12-41b5-8606-4ac037256f14-utilities\") pod \"84d1feed-8d12-41b5-8606-4ac037256f14\" (UID: \"84d1feed-8d12-41b5-8606-4ac037256f14\") " Feb 02 11:34:04 crc kubenswrapper[4782]: I0202 11:34:04.050994 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bxh7n\" (UniqueName: \"kubernetes.io/projected/84d1feed-8d12-41b5-8606-4ac037256f14-kube-api-access-bxh7n\") pod \"84d1feed-8d12-41b5-8606-4ac037256f14\" (UID: \"84d1feed-8d12-41b5-8606-4ac037256f14\") " Feb 02 11:34:04 crc kubenswrapper[4782]: I0202 11:34:04.051119 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/84d1feed-8d12-41b5-8606-4ac037256f14-catalog-content\") pod \"84d1feed-8d12-41b5-8606-4ac037256f14\" (UID: \"84d1feed-8d12-41b5-8606-4ac037256f14\") " Feb 02 11:34:04 crc kubenswrapper[4782]: I0202 11:34:04.051360 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/84d1feed-8d12-41b5-8606-4ac037256f14-utilities" (OuterVolumeSpecName: "utilities") pod "84d1feed-8d12-41b5-8606-4ac037256f14" (UID: "84d1feed-8d12-41b5-8606-4ac037256f14"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:34:04 crc kubenswrapper[4782]: I0202 11:34:04.052303 4782 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/84d1feed-8d12-41b5-8606-4ac037256f14-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 11:34:04 crc kubenswrapper[4782]: I0202 11:34:04.059902 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84d1feed-8d12-41b5-8606-4ac037256f14-kube-api-access-bxh7n" (OuterVolumeSpecName: "kube-api-access-bxh7n") pod "84d1feed-8d12-41b5-8606-4ac037256f14" (UID: "84d1feed-8d12-41b5-8606-4ac037256f14"). InnerVolumeSpecName "kube-api-access-bxh7n". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:34:04 crc kubenswrapper[4782]: I0202 11:34:04.072389 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/84d1feed-8d12-41b5-8606-4ac037256f14-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "84d1feed-8d12-41b5-8606-4ac037256f14" (UID: "84d1feed-8d12-41b5-8606-4ac037256f14"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:34:04 crc kubenswrapper[4782]: I0202 11:34:04.154015 4782 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/84d1feed-8d12-41b5-8606-4ac037256f14-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 11:34:04 crc kubenswrapper[4782]: I0202 11:34:04.154249 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bxh7n\" (UniqueName: \"kubernetes.io/projected/84d1feed-8d12-41b5-8606-4ac037256f14-kube-api-access-bxh7n\") on node \"crc\" DevicePath \"\"" Feb 02 11:34:04 crc kubenswrapper[4782]: I0202 11:34:04.470866 4782 generic.go:334] "Generic (PLEG): container finished" podID="84d1feed-8d12-41b5-8606-4ac037256f14" containerID="677c502ab7a7b40227f23e295754903936a463b6096e11cf42e0fef2c8bbd769" exitCode=0 Feb 02 11:34:04 crc kubenswrapper[4782]: I0202 11:34:04.470906 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ff29q" event={"ID":"84d1feed-8d12-41b5-8606-4ac037256f14","Type":"ContainerDied","Data":"677c502ab7a7b40227f23e295754903936a463b6096e11cf42e0fef2c8bbd769"} Feb 02 11:34:04 crc kubenswrapper[4782]: I0202 11:34:04.470932 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ff29q" event={"ID":"84d1feed-8d12-41b5-8606-4ac037256f14","Type":"ContainerDied","Data":"df451c3fc192c5378c3dad8d8c95604469154b7e53f7269d58d6c31fac3aa873"} Feb 02 11:34:04 crc kubenswrapper[4782]: I0202 11:34:04.470960 4782 scope.go:117] "RemoveContainer" containerID="677c502ab7a7b40227f23e295754903936a463b6096e11cf42e0fef2c8bbd769" Feb 02 11:34:04 crc kubenswrapper[4782]: I0202 11:34:04.470958 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ff29q" Feb 02 11:34:04 crc kubenswrapper[4782]: I0202 11:34:04.491770 4782 scope.go:117] "RemoveContainer" containerID="e5b200893c2ca101197ca33df78d0350b12bef2fcb7964b0b0af6912ad962a9d" Feb 02 11:34:04 crc kubenswrapper[4782]: I0202 11:34:04.508367 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ff29q"] Feb 02 11:34:04 crc kubenswrapper[4782]: I0202 11:34:04.521422 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-ff29q"] Feb 02 11:34:04 crc kubenswrapper[4782]: I0202 11:34:04.532695 4782 scope.go:117] "RemoveContainer" containerID="7f0df0cc98cdbc6a9bc514d157a3eb78ba3e663f6d31914c5345e85f394b6bc7" Feb 02 11:34:04 crc kubenswrapper[4782]: I0202 11:34:04.566738 4782 scope.go:117] "RemoveContainer" containerID="677c502ab7a7b40227f23e295754903936a463b6096e11cf42e0fef2c8bbd769" Feb 02 11:34:04 crc kubenswrapper[4782]: E0202 11:34:04.567236 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"677c502ab7a7b40227f23e295754903936a463b6096e11cf42e0fef2c8bbd769\": container with ID starting with 677c502ab7a7b40227f23e295754903936a463b6096e11cf42e0fef2c8bbd769 not found: ID does not exist" containerID="677c502ab7a7b40227f23e295754903936a463b6096e11cf42e0fef2c8bbd769" Feb 02 11:34:04 crc kubenswrapper[4782]: I0202 11:34:04.567266 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"677c502ab7a7b40227f23e295754903936a463b6096e11cf42e0fef2c8bbd769"} err="failed to get container status \"677c502ab7a7b40227f23e295754903936a463b6096e11cf42e0fef2c8bbd769\": rpc error: code = NotFound desc = could not find container \"677c502ab7a7b40227f23e295754903936a463b6096e11cf42e0fef2c8bbd769\": container with ID starting with 677c502ab7a7b40227f23e295754903936a463b6096e11cf42e0fef2c8bbd769 not found: ID does not exist" Feb 02 11:34:04 crc kubenswrapper[4782]: I0202 11:34:04.567301 4782 scope.go:117] "RemoveContainer" containerID="e5b200893c2ca101197ca33df78d0350b12bef2fcb7964b0b0af6912ad962a9d" Feb 02 11:34:04 crc kubenswrapper[4782]: E0202 11:34:04.567818 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e5b200893c2ca101197ca33df78d0350b12bef2fcb7964b0b0af6912ad962a9d\": container with ID starting with e5b200893c2ca101197ca33df78d0350b12bef2fcb7964b0b0af6912ad962a9d not found: ID does not exist" containerID="e5b200893c2ca101197ca33df78d0350b12bef2fcb7964b0b0af6912ad962a9d" Feb 02 11:34:04 crc kubenswrapper[4782]: I0202 11:34:04.567837 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e5b200893c2ca101197ca33df78d0350b12bef2fcb7964b0b0af6912ad962a9d"} err="failed to get container status \"e5b200893c2ca101197ca33df78d0350b12bef2fcb7964b0b0af6912ad962a9d\": rpc error: code = NotFound desc = could not find container \"e5b200893c2ca101197ca33df78d0350b12bef2fcb7964b0b0af6912ad962a9d\": container with ID starting with e5b200893c2ca101197ca33df78d0350b12bef2fcb7964b0b0af6912ad962a9d not found: ID does not exist" Feb 02 11:34:04 crc kubenswrapper[4782]: I0202 11:34:04.567849 4782 scope.go:117] "RemoveContainer" containerID="7f0df0cc98cdbc6a9bc514d157a3eb78ba3e663f6d31914c5345e85f394b6bc7" Feb 02 11:34:04 crc kubenswrapper[4782]: E0202 11:34:04.568035 4782 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"7f0df0cc98cdbc6a9bc514d157a3eb78ba3e663f6d31914c5345e85f394b6bc7\": container with ID starting with 7f0df0cc98cdbc6a9bc514d157a3eb78ba3e663f6d31914c5345e85f394b6bc7 not found: ID does not exist" containerID="7f0df0cc98cdbc6a9bc514d157a3eb78ba3e663f6d31914c5345e85f394b6bc7" Feb 02 11:34:04 crc kubenswrapper[4782]: I0202 11:34:04.568055 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f0df0cc98cdbc6a9bc514d157a3eb78ba3e663f6d31914c5345e85f394b6bc7"} err="failed to get container status \"7f0df0cc98cdbc6a9bc514d157a3eb78ba3e663f6d31914c5345e85f394b6bc7\": rpc error: code = NotFound desc = could not find container \"7f0df0cc98cdbc6a9bc514d157a3eb78ba3e663f6d31914c5345e85f394b6bc7\": container with ID starting with 7f0df0cc98cdbc6a9bc514d157a3eb78ba3e663f6d31914c5345e85f394b6bc7 not found: ID does not exist" Feb 02 11:34:04 crc kubenswrapper[4782]: I0202 11:34:04.832658 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="84d1feed-8d12-41b5-8606-4ac037256f14" path="/var/lib/kubelet/pods/84d1feed-8d12-41b5-8606-4ac037256f14/volumes" Feb 02 11:34:13 crc kubenswrapper[4782]: I0202 11:34:13.822616 4782 scope.go:117] "RemoveContainer" containerID="0f610e1fc5d774ae98e6427843ebdfbe622219e84034ddfd24bafe67b92e53a2" Feb 02 11:34:13 crc kubenswrapper[4782]: E0202 11:34:13.824022 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:34:25 crc kubenswrapper[4782]: I0202 11:34:25.832293 4782 scope.go:117] "RemoveContainer" containerID="0f610e1fc5d774ae98e6427843ebdfbe622219e84034ddfd24bafe67b92e53a2" Feb 02 11:34:25 crc kubenswrapper[4782]: E0202 11:34:25.833423 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:34:39 crc kubenswrapper[4782]: I0202 11:34:39.821407 4782 scope.go:117] "RemoveContainer" containerID="0f610e1fc5d774ae98e6427843ebdfbe622219e84034ddfd24bafe67b92e53a2" Feb 02 11:34:39 crc kubenswrapper[4782]: E0202 11:34:39.822160 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:34:51 crc kubenswrapper[4782]: I0202 11:34:51.821313 4782 scope.go:117] "RemoveContainer" containerID="0f610e1fc5d774ae98e6427843ebdfbe622219e84034ddfd24bafe67b92e53a2" Feb 02 11:34:51 crc kubenswrapper[4782]: E0202 11:34:51.822095 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:35:04 crc kubenswrapper[4782]: I0202 11:35:04.822008 4782 scope.go:117] "RemoveContainer" containerID="0f610e1fc5d774ae98e6427843ebdfbe622219e84034ddfd24bafe67b92e53a2" Feb 02 11:35:04 crc kubenswrapper[4782]: E0202 11:35:04.823879 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:35:19 crc kubenswrapper[4782]: I0202 11:35:19.821018 4782 scope.go:117] "RemoveContainer" containerID="0f610e1fc5d774ae98e6427843ebdfbe622219e84034ddfd24bafe67b92e53a2" Feb 02 11:35:19 crc kubenswrapper[4782]: E0202 11:35:19.821848 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:35:32 crc kubenswrapper[4782]: I0202 11:35:32.821112 4782 scope.go:117] "RemoveContainer" containerID="0f610e1fc5d774ae98e6427843ebdfbe622219e84034ddfd24bafe67b92e53a2" Feb 02 11:35:32 crc kubenswrapper[4782]: E0202 11:35:32.822151 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:35:35 crc kubenswrapper[4782]: I0202 11:35:35.849781 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-qpwft"] Feb 02 11:35:35 crc kubenswrapper[4782]: E0202 11:35:35.850833 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84d1feed-8d12-41b5-8606-4ac037256f14" containerName="extract-content" Feb 02 11:35:35 crc kubenswrapper[4782]: I0202 11:35:35.850850 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="84d1feed-8d12-41b5-8606-4ac037256f14" containerName="extract-content" Feb 02 11:35:35 crc kubenswrapper[4782]: E0202 11:35:35.850864 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84d1feed-8d12-41b5-8606-4ac037256f14" containerName="registry-server" Feb 02 11:35:35 crc kubenswrapper[4782]: I0202 11:35:35.850871 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="84d1feed-8d12-41b5-8606-4ac037256f14" containerName="registry-server" Feb 02 11:35:35 crc kubenswrapper[4782]: E0202 11:35:35.850933 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84d1feed-8d12-41b5-8606-4ac037256f14" containerName="extract-utilities" Feb 02 11:35:35 crc 
kubenswrapper[4782]: I0202 11:35:35.850944 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="84d1feed-8d12-41b5-8606-4ac037256f14" containerName="extract-utilities" Feb 02 11:35:35 crc kubenswrapper[4782]: I0202 11:35:35.851185 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="84d1feed-8d12-41b5-8606-4ac037256f14" containerName="registry-server" Feb 02 11:35:35 crc kubenswrapper[4782]: I0202 11:35:35.854278 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qpwft" Feb 02 11:35:35 crc kubenswrapper[4782]: I0202 11:35:35.860554 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qpwft"] Feb 02 11:35:36 crc kubenswrapper[4782]: I0202 11:35:36.007147 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb841e7c-9074-4a9a-92e7-9e65398d733f-catalog-content\") pod \"redhat-operators-qpwft\" (UID: \"cb841e7c-9074-4a9a-92e7-9e65398d733f\") " pod="openshift-marketplace/redhat-operators-qpwft" Feb 02 11:35:36 crc kubenswrapper[4782]: I0202 11:35:36.007192 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rsk5m\" (UniqueName: \"kubernetes.io/projected/cb841e7c-9074-4a9a-92e7-9e65398d733f-kube-api-access-rsk5m\") pod \"redhat-operators-qpwft\" (UID: \"cb841e7c-9074-4a9a-92e7-9e65398d733f\") " pod="openshift-marketplace/redhat-operators-qpwft" Feb 02 11:35:36 crc kubenswrapper[4782]: I0202 11:35:36.007252 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb841e7c-9074-4a9a-92e7-9e65398d733f-utilities\") pod \"redhat-operators-qpwft\" (UID: \"cb841e7c-9074-4a9a-92e7-9e65398d733f\") " pod="openshift-marketplace/redhat-operators-qpwft" Feb 02 11:35:36 crc kubenswrapper[4782]: I0202 11:35:36.109951 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb841e7c-9074-4a9a-92e7-9e65398d733f-catalog-content\") pod \"redhat-operators-qpwft\" (UID: \"cb841e7c-9074-4a9a-92e7-9e65398d733f\") " pod="openshift-marketplace/redhat-operators-qpwft" Feb 02 11:35:36 crc kubenswrapper[4782]: I0202 11:35:36.110038 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rsk5m\" (UniqueName: \"kubernetes.io/projected/cb841e7c-9074-4a9a-92e7-9e65398d733f-kube-api-access-rsk5m\") pod \"redhat-operators-qpwft\" (UID: \"cb841e7c-9074-4a9a-92e7-9e65398d733f\") " pod="openshift-marketplace/redhat-operators-qpwft" Feb 02 11:35:36 crc kubenswrapper[4782]: I0202 11:35:36.110111 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb841e7c-9074-4a9a-92e7-9e65398d733f-utilities\") pod \"redhat-operators-qpwft\" (UID: \"cb841e7c-9074-4a9a-92e7-9e65398d733f\") " pod="openshift-marketplace/redhat-operators-qpwft" Feb 02 11:35:36 crc kubenswrapper[4782]: I0202 11:35:36.110743 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb841e7c-9074-4a9a-92e7-9e65398d733f-catalog-content\") pod \"redhat-operators-qpwft\" (UID: \"cb841e7c-9074-4a9a-92e7-9e65398d733f\") " pod="openshift-marketplace/redhat-operators-qpwft" Feb 02 11:35:36 crc 
kubenswrapper[4782]: I0202 11:35:36.110783 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb841e7c-9074-4a9a-92e7-9e65398d733f-utilities\") pod \"redhat-operators-qpwft\" (UID: \"cb841e7c-9074-4a9a-92e7-9e65398d733f\") " pod="openshift-marketplace/redhat-operators-qpwft" Feb 02 11:35:36 crc kubenswrapper[4782]: I0202 11:35:36.140722 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rsk5m\" (UniqueName: \"kubernetes.io/projected/cb841e7c-9074-4a9a-92e7-9e65398d733f-kube-api-access-rsk5m\") pod \"redhat-operators-qpwft\" (UID: \"cb841e7c-9074-4a9a-92e7-9e65398d733f\") " pod="openshift-marketplace/redhat-operators-qpwft" Feb 02 11:35:36 crc kubenswrapper[4782]: I0202 11:35:36.177011 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qpwft" Feb 02 11:35:36 crc kubenswrapper[4782]: I0202 11:35:36.813983 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qpwft"] Feb 02 11:35:37 crc kubenswrapper[4782]: I0202 11:35:37.329268 4782 generic.go:334] "Generic (PLEG): container finished" podID="cb841e7c-9074-4a9a-92e7-9e65398d733f" containerID="757570cedf3cf80d21c0bd11c652aecdd008fffebea48e6f26fad0a56a22b370" exitCode=0 Feb 02 11:35:37 crc kubenswrapper[4782]: I0202 11:35:37.329449 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qpwft" event={"ID":"cb841e7c-9074-4a9a-92e7-9e65398d733f","Type":"ContainerDied","Data":"757570cedf3cf80d21c0bd11c652aecdd008fffebea48e6f26fad0a56a22b370"} Feb 02 11:35:37 crc kubenswrapper[4782]: I0202 11:35:37.329688 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qpwft" event={"ID":"cb841e7c-9074-4a9a-92e7-9e65398d733f","Type":"ContainerStarted","Data":"8908b2e444a0e08f2bac365aa9b5a6bf0976250e99f6e5024b4db88b333fe053"} Feb 02 11:35:37 crc kubenswrapper[4782]: I0202 11:35:37.331761 4782 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 02 11:35:39 crc kubenswrapper[4782]: I0202 11:35:39.349438 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qpwft" event={"ID":"cb841e7c-9074-4a9a-92e7-9e65398d733f","Type":"ContainerStarted","Data":"376c77af45c37d87a76379e7ef70491a167b9a3177519ad671dda580a11ee378"} Feb 02 11:35:45 crc kubenswrapper[4782]: I0202 11:35:45.399003 4782 generic.go:334] "Generic (PLEG): container finished" podID="cb841e7c-9074-4a9a-92e7-9e65398d733f" containerID="376c77af45c37d87a76379e7ef70491a167b9a3177519ad671dda580a11ee378" exitCode=0 Feb 02 11:35:45 crc kubenswrapper[4782]: I0202 11:35:45.399105 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qpwft" event={"ID":"cb841e7c-9074-4a9a-92e7-9e65398d733f","Type":"ContainerDied","Data":"376c77af45c37d87a76379e7ef70491a167b9a3177519ad671dda580a11ee378"} Feb 02 11:35:46 crc kubenswrapper[4782]: I0202 11:35:46.410401 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qpwft" event={"ID":"cb841e7c-9074-4a9a-92e7-9e65398d733f","Type":"ContainerStarted","Data":"43a51739fe7883c3335a2e48b5d26c6e4cee825dc87cb3038b52a6c3381231fd"} Feb 02 11:35:46 crc kubenswrapper[4782]: I0202 11:35:46.444949 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/redhat-operators-qpwft" podStartSLOduration=2.777708008 podStartE2EDuration="11.444924326s" podCreationTimestamp="2026-02-02 11:35:35 +0000 UTC" firstStartedPulling="2026-02-02 11:35:37.331443676 +0000 UTC m=+3417.215636392" lastFinishedPulling="2026-02-02 11:35:45.998659994 +0000 UTC m=+3425.882852710" observedRunningTime="2026-02-02 11:35:46.434062954 +0000 UTC m=+3426.318255680" watchObservedRunningTime="2026-02-02 11:35:46.444924326 +0000 UTC m=+3426.329117042" Feb 02 11:35:46 crc kubenswrapper[4782]: I0202 11:35:46.820831 4782 scope.go:117] "RemoveContainer" containerID="0f610e1fc5d774ae98e6427843ebdfbe622219e84034ddfd24bafe67b92e53a2" Feb 02 11:35:46 crc kubenswrapper[4782]: E0202 11:35:46.821301 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:35:56 crc kubenswrapper[4782]: I0202 11:35:56.177849 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-qpwft" Feb 02 11:35:56 crc kubenswrapper[4782]: I0202 11:35:56.178473 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-qpwft" Feb 02 11:35:57 crc kubenswrapper[4782]: I0202 11:35:57.233363 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-qpwft" podUID="cb841e7c-9074-4a9a-92e7-9e65398d733f" containerName="registry-server" probeResult="failure" output=< Feb 02 11:35:57 crc kubenswrapper[4782]: timeout: failed to connect service ":50051" within 1s Feb 02 11:35:57 crc kubenswrapper[4782]: > Feb 02 11:35:59 crc kubenswrapper[4782]: I0202 11:35:59.822345 4782 scope.go:117] "RemoveContainer" containerID="0f610e1fc5d774ae98e6427843ebdfbe622219e84034ddfd24bafe67b92e53a2" Feb 02 11:36:00 crc kubenswrapper[4782]: I0202 11:36:00.528875 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" event={"ID":"7919e98f-cc47-4f3c-9c53-6313850ea7b8","Type":"ContainerStarted","Data":"e4146ee7483fb1b799415c8adb5be7703998921bbdca2692a753a4e0f1072257"} Feb 02 11:36:07 crc kubenswrapper[4782]: I0202 11:36:07.222830 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-qpwft" podUID="cb841e7c-9074-4a9a-92e7-9e65398d733f" containerName="registry-server" probeResult="failure" output=< Feb 02 11:36:07 crc kubenswrapper[4782]: timeout: failed to connect service ":50051" within 1s Feb 02 11:36:07 crc kubenswrapper[4782]: > Feb 02 11:36:17 crc kubenswrapper[4782]: I0202 11:36:17.225391 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-qpwft" podUID="cb841e7c-9074-4a9a-92e7-9e65398d733f" containerName="registry-server" probeResult="failure" output=< Feb 02 11:36:17 crc kubenswrapper[4782]: timeout: failed to connect service ":50051" within 1s Feb 02 11:36:17 crc kubenswrapper[4782]: > Feb 02 11:36:26 crc kubenswrapper[4782]: I0202 11:36:26.223759 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-qpwft" Feb 02 11:36:26 crc kubenswrapper[4782]: I0202 
11:36:26.282320 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-qpwft" Feb 02 11:36:26 crc kubenswrapper[4782]: I0202 11:36:26.458737 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qpwft"] Feb 02 11:36:27 crc kubenswrapper[4782]: I0202 11:36:27.762424 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-qpwft" podUID="cb841e7c-9074-4a9a-92e7-9e65398d733f" containerName="registry-server" containerID="cri-o://43a51739fe7883c3335a2e48b5d26c6e4cee825dc87cb3038b52a6c3381231fd" gracePeriod=2 Feb 02 11:36:28 crc kubenswrapper[4782]: I0202 11:36:28.294757 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qpwft" Feb 02 11:36:28 crc kubenswrapper[4782]: I0202 11:36:28.494977 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb841e7c-9074-4a9a-92e7-9e65398d733f-catalog-content\") pod \"cb841e7c-9074-4a9a-92e7-9e65398d733f\" (UID: \"cb841e7c-9074-4a9a-92e7-9e65398d733f\") " Feb 02 11:36:28 crc kubenswrapper[4782]: I0202 11:36:28.495186 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rsk5m\" (UniqueName: \"kubernetes.io/projected/cb841e7c-9074-4a9a-92e7-9e65398d733f-kube-api-access-rsk5m\") pod \"cb841e7c-9074-4a9a-92e7-9e65398d733f\" (UID: \"cb841e7c-9074-4a9a-92e7-9e65398d733f\") " Feb 02 11:36:28 crc kubenswrapper[4782]: I0202 11:36:28.495241 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb841e7c-9074-4a9a-92e7-9e65398d733f-utilities\") pod \"cb841e7c-9074-4a9a-92e7-9e65398d733f\" (UID: \"cb841e7c-9074-4a9a-92e7-9e65398d733f\") " Feb 02 11:36:28 crc kubenswrapper[4782]: I0202 11:36:28.495952 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cb841e7c-9074-4a9a-92e7-9e65398d733f-utilities" (OuterVolumeSpecName: "utilities") pod "cb841e7c-9074-4a9a-92e7-9e65398d733f" (UID: "cb841e7c-9074-4a9a-92e7-9e65398d733f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:36:28 crc kubenswrapper[4782]: I0202 11:36:28.597276 4782 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb841e7c-9074-4a9a-92e7-9e65398d733f-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 11:36:29 crc kubenswrapper[4782]: I0202 11:36:29.065380 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb841e7c-9074-4a9a-92e7-9e65398d733f-kube-api-access-rsk5m" (OuterVolumeSpecName: "kube-api-access-rsk5m") pod "cb841e7c-9074-4a9a-92e7-9e65398d733f" (UID: "cb841e7c-9074-4a9a-92e7-9e65398d733f"). InnerVolumeSpecName "kube-api-access-rsk5m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:36:29 crc kubenswrapper[4782]: I0202 11:36:29.082350 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rsk5m\" (UniqueName: \"kubernetes.io/projected/cb841e7c-9074-4a9a-92e7-9e65398d733f-kube-api-access-rsk5m\") on node \"crc\" DevicePath \"\"" Feb 02 11:36:29 crc kubenswrapper[4782]: I0202 11:36:29.103467 4782 generic.go:334] "Generic (PLEG): container finished" podID="cb841e7c-9074-4a9a-92e7-9e65398d733f" containerID="43a51739fe7883c3335a2e48b5d26c6e4cee825dc87cb3038b52a6c3381231fd" exitCode=0 Feb 02 11:36:29 crc kubenswrapper[4782]: I0202 11:36:29.103852 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qpwft" Feb 02 11:36:29 crc kubenswrapper[4782]: I0202 11:36:29.142055 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qpwft" event={"ID":"cb841e7c-9074-4a9a-92e7-9e65398d733f","Type":"ContainerDied","Data":"43a51739fe7883c3335a2e48b5d26c6e4cee825dc87cb3038b52a6c3381231fd"} Feb 02 11:36:29 crc kubenswrapper[4782]: I0202 11:36:29.142122 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qpwft" event={"ID":"cb841e7c-9074-4a9a-92e7-9e65398d733f","Type":"ContainerDied","Data":"8908b2e444a0e08f2bac365aa9b5a6bf0976250e99f6e5024b4db88b333fe053"} Feb 02 11:36:29 crc kubenswrapper[4782]: I0202 11:36:29.142167 4782 scope.go:117] "RemoveContainer" containerID="43a51739fe7883c3335a2e48b5d26c6e4cee825dc87cb3038b52a6c3381231fd" Feb 02 11:36:29 crc kubenswrapper[4782]: I0202 11:36:29.181392 4782 scope.go:117] "RemoveContainer" containerID="376c77af45c37d87a76379e7ef70491a167b9a3177519ad671dda580a11ee378" Feb 02 11:36:29 crc kubenswrapper[4782]: I0202 11:36:29.235479 4782 scope.go:117] "RemoveContainer" containerID="757570cedf3cf80d21c0bd11c652aecdd008fffebea48e6f26fad0a56a22b370" Feb 02 11:36:29 crc kubenswrapper[4782]: I0202 11:36:29.256249 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cb841e7c-9074-4a9a-92e7-9e65398d733f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cb841e7c-9074-4a9a-92e7-9e65398d733f" (UID: "cb841e7c-9074-4a9a-92e7-9e65398d733f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:36:29 crc kubenswrapper[4782]: I0202 11:36:29.272930 4782 scope.go:117] "RemoveContainer" containerID="43a51739fe7883c3335a2e48b5d26c6e4cee825dc87cb3038b52a6c3381231fd" Feb 02 11:36:29 crc kubenswrapper[4782]: E0202 11:36:29.273356 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"43a51739fe7883c3335a2e48b5d26c6e4cee825dc87cb3038b52a6c3381231fd\": container with ID starting with 43a51739fe7883c3335a2e48b5d26c6e4cee825dc87cb3038b52a6c3381231fd not found: ID does not exist" containerID="43a51739fe7883c3335a2e48b5d26c6e4cee825dc87cb3038b52a6c3381231fd" Feb 02 11:36:29 crc kubenswrapper[4782]: I0202 11:36:29.273490 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43a51739fe7883c3335a2e48b5d26c6e4cee825dc87cb3038b52a6c3381231fd"} err="failed to get container status \"43a51739fe7883c3335a2e48b5d26c6e4cee825dc87cb3038b52a6c3381231fd\": rpc error: code = NotFound desc = could not find container \"43a51739fe7883c3335a2e48b5d26c6e4cee825dc87cb3038b52a6c3381231fd\": container with ID starting with 43a51739fe7883c3335a2e48b5d26c6e4cee825dc87cb3038b52a6c3381231fd not found: ID does not exist" Feb 02 11:36:29 crc kubenswrapper[4782]: I0202 11:36:29.273581 4782 scope.go:117] "RemoveContainer" containerID="376c77af45c37d87a76379e7ef70491a167b9a3177519ad671dda580a11ee378" Feb 02 11:36:29 crc kubenswrapper[4782]: E0202 11:36:29.274264 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"376c77af45c37d87a76379e7ef70491a167b9a3177519ad671dda580a11ee378\": container with ID starting with 376c77af45c37d87a76379e7ef70491a167b9a3177519ad671dda580a11ee378 not found: ID does not exist" containerID="376c77af45c37d87a76379e7ef70491a167b9a3177519ad671dda580a11ee378" Feb 02 11:36:29 crc kubenswrapper[4782]: I0202 11:36:29.274306 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"376c77af45c37d87a76379e7ef70491a167b9a3177519ad671dda580a11ee378"} err="failed to get container status \"376c77af45c37d87a76379e7ef70491a167b9a3177519ad671dda580a11ee378\": rpc error: code = NotFound desc = could not find container \"376c77af45c37d87a76379e7ef70491a167b9a3177519ad671dda580a11ee378\": container with ID starting with 376c77af45c37d87a76379e7ef70491a167b9a3177519ad671dda580a11ee378 not found: ID does not exist" Feb 02 11:36:29 crc kubenswrapper[4782]: I0202 11:36:29.274334 4782 scope.go:117] "RemoveContainer" containerID="757570cedf3cf80d21c0bd11c652aecdd008fffebea48e6f26fad0a56a22b370" Feb 02 11:36:29 crc kubenswrapper[4782]: E0202 11:36:29.274555 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"757570cedf3cf80d21c0bd11c652aecdd008fffebea48e6f26fad0a56a22b370\": container with ID starting with 757570cedf3cf80d21c0bd11c652aecdd008fffebea48e6f26fad0a56a22b370 not found: ID does not exist" containerID="757570cedf3cf80d21c0bd11c652aecdd008fffebea48e6f26fad0a56a22b370" Feb 02 11:36:29 crc kubenswrapper[4782]: I0202 11:36:29.274583 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"757570cedf3cf80d21c0bd11c652aecdd008fffebea48e6f26fad0a56a22b370"} err="failed to get container status \"757570cedf3cf80d21c0bd11c652aecdd008fffebea48e6f26fad0a56a22b370\": rpc error: code = NotFound desc = could not 
find container \"757570cedf3cf80d21c0bd11c652aecdd008fffebea48e6f26fad0a56a22b370\": container with ID starting with 757570cedf3cf80d21c0bd11c652aecdd008fffebea48e6f26fad0a56a22b370 not found: ID does not exist" Feb 02 11:36:29 crc kubenswrapper[4782]: I0202 11:36:29.295896 4782 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb841e7c-9074-4a9a-92e7-9e65398d733f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 11:36:29 crc kubenswrapper[4782]: I0202 11:36:29.439590 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qpwft"] Feb 02 11:36:29 crc kubenswrapper[4782]: I0202 11:36:29.449607 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-qpwft"] Feb 02 11:36:30 crc kubenswrapper[4782]: I0202 11:36:30.832255 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb841e7c-9074-4a9a-92e7-9e65398d733f" path="/var/lib/kubelet/pods/cb841e7c-9074-4a9a-92e7-9e65398d733f/volumes" Feb 02 11:37:43 crc kubenswrapper[4782]: I0202 11:37:43.618221 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-86gp4"] Feb 02 11:37:43 crc kubenswrapper[4782]: E0202 11:37:43.619321 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb841e7c-9074-4a9a-92e7-9e65398d733f" containerName="extract-content" Feb 02 11:37:43 crc kubenswrapper[4782]: I0202 11:37:43.619338 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb841e7c-9074-4a9a-92e7-9e65398d733f" containerName="extract-content" Feb 02 11:37:43 crc kubenswrapper[4782]: E0202 11:37:43.619377 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb841e7c-9074-4a9a-92e7-9e65398d733f" containerName="registry-server" Feb 02 11:37:43 crc kubenswrapper[4782]: I0202 11:37:43.619386 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb841e7c-9074-4a9a-92e7-9e65398d733f" containerName="registry-server" Feb 02 11:37:43 crc kubenswrapper[4782]: E0202 11:37:43.619401 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb841e7c-9074-4a9a-92e7-9e65398d733f" containerName="extract-utilities" Feb 02 11:37:43 crc kubenswrapper[4782]: I0202 11:37:43.619409 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb841e7c-9074-4a9a-92e7-9e65398d733f" containerName="extract-utilities" Feb 02 11:37:43 crc kubenswrapper[4782]: I0202 11:37:43.619643 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb841e7c-9074-4a9a-92e7-9e65398d733f" containerName="registry-server" Feb 02 11:37:43 crc kubenswrapper[4782]: I0202 11:37:43.621371 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-86gp4" Feb 02 11:37:43 crc kubenswrapper[4782]: I0202 11:37:43.646552 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-86gp4"] Feb 02 11:37:43 crc kubenswrapper[4782]: I0202 11:37:43.720840 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/22bd520b-44a1-48ea-8b4e-dc5aff206551-utilities\") pod \"certified-operators-86gp4\" (UID: \"22bd520b-44a1-48ea-8b4e-dc5aff206551\") " pod="openshift-marketplace/certified-operators-86gp4" Feb 02 11:37:43 crc kubenswrapper[4782]: I0202 11:37:43.720971 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/22bd520b-44a1-48ea-8b4e-dc5aff206551-catalog-content\") pod \"certified-operators-86gp4\" (UID: \"22bd520b-44a1-48ea-8b4e-dc5aff206551\") " pod="openshift-marketplace/certified-operators-86gp4" Feb 02 11:37:43 crc kubenswrapper[4782]: I0202 11:37:43.721033 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xpnzv\" (UniqueName: \"kubernetes.io/projected/22bd520b-44a1-48ea-8b4e-dc5aff206551-kube-api-access-xpnzv\") pod \"certified-operators-86gp4\" (UID: \"22bd520b-44a1-48ea-8b4e-dc5aff206551\") " pod="openshift-marketplace/certified-operators-86gp4" Feb 02 11:37:43 crc kubenswrapper[4782]: I0202 11:37:43.822497 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xpnzv\" (UniqueName: \"kubernetes.io/projected/22bd520b-44a1-48ea-8b4e-dc5aff206551-kube-api-access-xpnzv\") pod \"certified-operators-86gp4\" (UID: \"22bd520b-44a1-48ea-8b4e-dc5aff206551\") " pod="openshift-marketplace/certified-operators-86gp4" Feb 02 11:37:43 crc kubenswrapper[4782]: I0202 11:37:43.822580 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/22bd520b-44a1-48ea-8b4e-dc5aff206551-utilities\") pod \"certified-operators-86gp4\" (UID: \"22bd520b-44a1-48ea-8b4e-dc5aff206551\") " pod="openshift-marketplace/certified-operators-86gp4" Feb 02 11:37:43 crc kubenswrapper[4782]: I0202 11:37:43.822680 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/22bd520b-44a1-48ea-8b4e-dc5aff206551-catalog-content\") pod \"certified-operators-86gp4\" (UID: \"22bd520b-44a1-48ea-8b4e-dc5aff206551\") " pod="openshift-marketplace/certified-operators-86gp4" Feb 02 11:37:43 crc kubenswrapper[4782]: I0202 11:37:43.823168 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/22bd520b-44a1-48ea-8b4e-dc5aff206551-catalog-content\") pod \"certified-operators-86gp4\" (UID: \"22bd520b-44a1-48ea-8b4e-dc5aff206551\") " pod="openshift-marketplace/certified-operators-86gp4" Feb 02 11:37:43 crc kubenswrapper[4782]: I0202 11:37:43.823635 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/22bd520b-44a1-48ea-8b4e-dc5aff206551-utilities\") pod \"certified-operators-86gp4\" (UID: \"22bd520b-44a1-48ea-8b4e-dc5aff206551\") " pod="openshift-marketplace/certified-operators-86gp4" Feb 02 11:37:43 crc kubenswrapper[4782]: I0202 11:37:43.847889 4782 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-xpnzv\" (UniqueName: \"kubernetes.io/projected/22bd520b-44a1-48ea-8b4e-dc5aff206551-kube-api-access-xpnzv\") pod \"certified-operators-86gp4\" (UID: \"22bd520b-44a1-48ea-8b4e-dc5aff206551\") " pod="openshift-marketplace/certified-operators-86gp4" Feb 02 11:37:43 crc kubenswrapper[4782]: I0202 11:37:43.944289 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-86gp4" Feb 02 11:37:44 crc kubenswrapper[4782]: I0202 11:37:44.533749 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-86gp4"] Feb 02 11:37:44 crc kubenswrapper[4782]: I0202 11:37:44.765872 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-86gp4" event={"ID":"22bd520b-44a1-48ea-8b4e-dc5aff206551","Type":"ContainerStarted","Data":"008ddb2ede48bb5ad7005f0fac43d52dcb03f84ec679fd585460dfe98e417393"} Feb 02 11:37:44 crc kubenswrapper[4782]: I0202 11:37:44.766285 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-86gp4" event={"ID":"22bd520b-44a1-48ea-8b4e-dc5aff206551","Type":"ContainerStarted","Data":"54a25b0eb13a9b35776a71dc8e3b2edb4139cd89b91639b3e51b9a49d169da6d"} Feb 02 11:37:45 crc kubenswrapper[4782]: I0202 11:37:45.777384 4782 generic.go:334] "Generic (PLEG): container finished" podID="22bd520b-44a1-48ea-8b4e-dc5aff206551" containerID="008ddb2ede48bb5ad7005f0fac43d52dcb03f84ec679fd585460dfe98e417393" exitCode=0 Feb 02 11:37:45 crc kubenswrapper[4782]: I0202 11:37:45.777428 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-86gp4" event={"ID":"22bd520b-44a1-48ea-8b4e-dc5aff206551","Type":"ContainerDied","Data":"008ddb2ede48bb5ad7005f0fac43d52dcb03f84ec679fd585460dfe98e417393"} Feb 02 11:37:46 crc kubenswrapper[4782]: I0202 11:37:46.790453 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-86gp4" event={"ID":"22bd520b-44a1-48ea-8b4e-dc5aff206551","Type":"ContainerStarted","Data":"c7d2723522e8f782f70311f6d9425e3b4581662baab51c038f5aa09cd7ac14d1"} Feb 02 11:37:48 crc kubenswrapper[4782]: I0202 11:37:48.814741 4782 generic.go:334] "Generic (PLEG): container finished" podID="22bd520b-44a1-48ea-8b4e-dc5aff206551" containerID="c7d2723522e8f782f70311f6d9425e3b4581662baab51c038f5aa09cd7ac14d1" exitCode=0 Feb 02 11:37:48 crc kubenswrapper[4782]: I0202 11:37:48.814814 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-86gp4" event={"ID":"22bd520b-44a1-48ea-8b4e-dc5aff206551","Type":"ContainerDied","Data":"c7d2723522e8f782f70311f6d9425e3b4581662baab51c038f5aa09cd7ac14d1"} Feb 02 11:37:49 crc kubenswrapper[4782]: I0202 11:37:49.840712 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-86gp4" event={"ID":"22bd520b-44a1-48ea-8b4e-dc5aff206551","Type":"ContainerStarted","Data":"c35111a03e5cee29e68dd26e7e8926691546f2ad453cff51a2e09705bd3c0cdd"} Feb 02 11:37:49 crc kubenswrapper[4782]: I0202 11:37:49.869197 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-86gp4" podStartSLOduration=3.430284627 podStartE2EDuration="6.869173441s" podCreationTimestamp="2026-02-02 11:37:43 +0000 UTC" firstStartedPulling="2026-02-02 11:37:45.78089126 +0000 UTC m=+3545.665083976" lastFinishedPulling="2026-02-02 
11:37:49.219780074 +0000 UTC m=+3549.103972790" observedRunningTime="2026-02-02 11:37:49.864023433 +0000 UTC m=+3549.748216169" watchObservedRunningTime="2026-02-02 11:37:49.869173441 +0000 UTC m=+3549.753366157" Feb 02 11:37:53 crc kubenswrapper[4782]: I0202 11:37:53.944713 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-86gp4" Feb 02 11:37:53 crc kubenswrapper[4782]: I0202 11:37:53.945338 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-86gp4" Feb 02 11:37:54 crc kubenswrapper[4782]: I0202 11:37:54.002767 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-86gp4" Feb 02 11:37:54 crc kubenswrapper[4782]: I0202 11:37:54.940183 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-86gp4" Feb 02 11:37:54 crc kubenswrapper[4782]: I0202 11:37:54.991450 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-86gp4"] Feb 02 11:37:56 crc kubenswrapper[4782]: I0202 11:37:56.900151 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-86gp4" podUID="22bd520b-44a1-48ea-8b4e-dc5aff206551" containerName="registry-server" containerID="cri-o://c35111a03e5cee29e68dd26e7e8926691546f2ad453cff51a2e09705bd3c0cdd" gracePeriod=2 Feb 02 11:37:57 crc kubenswrapper[4782]: I0202 11:37:57.466821 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-86gp4" Feb 02 11:37:57 crc kubenswrapper[4782]: I0202 11:37:57.515859 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/22bd520b-44a1-48ea-8b4e-dc5aff206551-utilities\") pod \"22bd520b-44a1-48ea-8b4e-dc5aff206551\" (UID: \"22bd520b-44a1-48ea-8b4e-dc5aff206551\") " Feb 02 11:37:57 crc kubenswrapper[4782]: I0202 11:37:57.515990 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/22bd520b-44a1-48ea-8b4e-dc5aff206551-catalog-content\") pod \"22bd520b-44a1-48ea-8b4e-dc5aff206551\" (UID: \"22bd520b-44a1-48ea-8b4e-dc5aff206551\") " Feb 02 11:37:57 crc kubenswrapper[4782]: I0202 11:37:57.516158 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xpnzv\" (UniqueName: \"kubernetes.io/projected/22bd520b-44a1-48ea-8b4e-dc5aff206551-kube-api-access-xpnzv\") pod \"22bd520b-44a1-48ea-8b4e-dc5aff206551\" (UID: \"22bd520b-44a1-48ea-8b4e-dc5aff206551\") " Feb 02 11:37:57 crc kubenswrapper[4782]: I0202 11:37:57.518858 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/22bd520b-44a1-48ea-8b4e-dc5aff206551-utilities" (OuterVolumeSpecName: "utilities") pod "22bd520b-44a1-48ea-8b4e-dc5aff206551" (UID: "22bd520b-44a1-48ea-8b4e-dc5aff206551"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:37:57 crc kubenswrapper[4782]: I0202 11:37:57.550916 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22bd520b-44a1-48ea-8b4e-dc5aff206551-kube-api-access-xpnzv" (OuterVolumeSpecName: "kube-api-access-xpnzv") pod "22bd520b-44a1-48ea-8b4e-dc5aff206551" (UID: "22bd520b-44a1-48ea-8b4e-dc5aff206551"). InnerVolumeSpecName "kube-api-access-xpnzv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:37:57 crc kubenswrapper[4782]: I0202 11:37:57.591500 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/22bd520b-44a1-48ea-8b4e-dc5aff206551-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "22bd520b-44a1-48ea-8b4e-dc5aff206551" (UID: "22bd520b-44a1-48ea-8b4e-dc5aff206551"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:37:57 crc kubenswrapper[4782]: I0202 11:37:57.618568 4782 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/22bd520b-44a1-48ea-8b4e-dc5aff206551-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 11:37:57 crc kubenswrapper[4782]: I0202 11:37:57.618606 4782 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/22bd520b-44a1-48ea-8b4e-dc5aff206551-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 11:37:57 crc kubenswrapper[4782]: I0202 11:37:57.618617 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xpnzv\" (UniqueName: \"kubernetes.io/projected/22bd520b-44a1-48ea-8b4e-dc5aff206551-kube-api-access-xpnzv\") on node \"crc\" DevicePath \"\"" Feb 02 11:37:57 crc kubenswrapper[4782]: I0202 11:37:57.910130 4782 generic.go:334] "Generic (PLEG): container finished" podID="22bd520b-44a1-48ea-8b4e-dc5aff206551" containerID="c35111a03e5cee29e68dd26e7e8926691546f2ad453cff51a2e09705bd3c0cdd" exitCode=0 Feb 02 11:37:57 crc kubenswrapper[4782]: I0202 11:37:57.910176 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-86gp4" event={"ID":"22bd520b-44a1-48ea-8b4e-dc5aff206551","Type":"ContainerDied","Data":"c35111a03e5cee29e68dd26e7e8926691546f2ad453cff51a2e09705bd3c0cdd"} Feb 02 11:37:57 crc kubenswrapper[4782]: I0202 11:37:57.910215 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-86gp4" event={"ID":"22bd520b-44a1-48ea-8b4e-dc5aff206551","Type":"ContainerDied","Data":"54a25b0eb13a9b35776a71dc8e3b2edb4139cd89b91639b3e51b9a49d169da6d"} Feb 02 11:37:57 crc kubenswrapper[4782]: I0202 11:37:57.910186 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-86gp4" Feb 02 11:37:57 crc kubenswrapper[4782]: I0202 11:37:57.910243 4782 scope.go:117] "RemoveContainer" containerID="c35111a03e5cee29e68dd26e7e8926691546f2ad453cff51a2e09705bd3c0cdd" Feb 02 11:37:57 crc kubenswrapper[4782]: I0202 11:37:57.936914 4782 scope.go:117] "RemoveContainer" containerID="c7d2723522e8f782f70311f6d9425e3b4581662baab51c038f5aa09cd7ac14d1" Feb 02 11:37:57 crc kubenswrapper[4782]: I0202 11:37:57.971980 4782 scope.go:117] "RemoveContainer" containerID="008ddb2ede48bb5ad7005f0fac43d52dcb03f84ec679fd585460dfe98e417393" Feb 02 11:37:57 crc kubenswrapper[4782]: I0202 11:37:57.975051 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-86gp4"] Feb 02 11:37:57 crc kubenswrapper[4782]: I0202 11:37:57.986591 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-86gp4"] Feb 02 11:37:58 crc kubenswrapper[4782]: I0202 11:37:58.021032 4782 scope.go:117] "RemoveContainer" containerID="c35111a03e5cee29e68dd26e7e8926691546f2ad453cff51a2e09705bd3c0cdd" Feb 02 11:37:58 crc kubenswrapper[4782]: E0202 11:37:58.024060 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c35111a03e5cee29e68dd26e7e8926691546f2ad453cff51a2e09705bd3c0cdd\": container with ID starting with c35111a03e5cee29e68dd26e7e8926691546f2ad453cff51a2e09705bd3c0cdd not found: ID does not exist" containerID="c35111a03e5cee29e68dd26e7e8926691546f2ad453cff51a2e09705bd3c0cdd" Feb 02 11:37:58 crc kubenswrapper[4782]: I0202 11:37:58.024213 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c35111a03e5cee29e68dd26e7e8926691546f2ad453cff51a2e09705bd3c0cdd"} err="failed to get container status \"c35111a03e5cee29e68dd26e7e8926691546f2ad453cff51a2e09705bd3c0cdd\": rpc error: code = NotFound desc = could not find container \"c35111a03e5cee29e68dd26e7e8926691546f2ad453cff51a2e09705bd3c0cdd\": container with ID starting with c35111a03e5cee29e68dd26e7e8926691546f2ad453cff51a2e09705bd3c0cdd not found: ID does not exist" Feb 02 11:37:58 crc kubenswrapper[4782]: I0202 11:37:58.024377 4782 scope.go:117] "RemoveContainer" containerID="c7d2723522e8f782f70311f6d9425e3b4581662baab51c038f5aa09cd7ac14d1" Feb 02 11:37:58 crc kubenswrapper[4782]: E0202 11:37:58.024996 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c7d2723522e8f782f70311f6d9425e3b4581662baab51c038f5aa09cd7ac14d1\": container with ID starting with c7d2723522e8f782f70311f6d9425e3b4581662baab51c038f5aa09cd7ac14d1 not found: ID does not exist" containerID="c7d2723522e8f782f70311f6d9425e3b4581662baab51c038f5aa09cd7ac14d1" Feb 02 11:37:58 crc kubenswrapper[4782]: I0202 11:37:58.025034 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c7d2723522e8f782f70311f6d9425e3b4581662baab51c038f5aa09cd7ac14d1"} err="failed to get container status \"c7d2723522e8f782f70311f6d9425e3b4581662baab51c038f5aa09cd7ac14d1\": rpc error: code = NotFound desc = could not find container \"c7d2723522e8f782f70311f6d9425e3b4581662baab51c038f5aa09cd7ac14d1\": container with ID starting with c7d2723522e8f782f70311f6d9425e3b4581662baab51c038f5aa09cd7ac14d1 not found: ID does not exist" Feb 02 11:37:58 crc kubenswrapper[4782]: I0202 11:37:58.025063 4782 scope.go:117] "RemoveContainer" 
containerID="008ddb2ede48bb5ad7005f0fac43d52dcb03f84ec679fd585460dfe98e417393" Feb 02 11:37:58 crc kubenswrapper[4782]: E0202 11:37:58.025540 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"008ddb2ede48bb5ad7005f0fac43d52dcb03f84ec679fd585460dfe98e417393\": container with ID starting with 008ddb2ede48bb5ad7005f0fac43d52dcb03f84ec679fd585460dfe98e417393 not found: ID does not exist" containerID="008ddb2ede48bb5ad7005f0fac43d52dcb03f84ec679fd585460dfe98e417393" Feb 02 11:37:58 crc kubenswrapper[4782]: I0202 11:37:58.025619 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"008ddb2ede48bb5ad7005f0fac43d52dcb03f84ec679fd585460dfe98e417393"} err="failed to get container status \"008ddb2ede48bb5ad7005f0fac43d52dcb03f84ec679fd585460dfe98e417393\": rpc error: code = NotFound desc = could not find container \"008ddb2ede48bb5ad7005f0fac43d52dcb03f84ec679fd585460dfe98e417393\": container with ID starting with 008ddb2ede48bb5ad7005f0fac43d52dcb03f84ec679fd585460dfe98e417393 not found: ID does not exist" Feb 02 11:37:58 crc kubenswrapper[4782]: I0202 11:37:58.844321 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22bd520b-44a1-48ea-8b4e-dc5aff206551" path="/var/lib/kubelet/pods/22bd520b-44a1-48ea-8b4e-dc5aff206551/volumes" Feb 02 11:38:22 crc kubenswrapper[4782]: I0202 11:38:22.951590 4782 patch_prober.go:28] interesting pod/machine-config-daemon-bhdgk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 11:38:22 crc kubenswrapper[4782]: I0202 11:38:22.952317 4782 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 11:38:52 crc kubenswrapper[4782]: I0202 11:38:52.951567 4782 patch_prober.go:28] interesting pod/machine-config-daemon-bhdgk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 11:38:52 crc kubenswrapper[4782]: I0202 11:38:52.952848 4782 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 11:39:22 crc kubenswrapper[4782]: I0202 11:39:22.951295 4782 patch_prober.go:28] interesting pod/machine-config-daemon-bhdgk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 11:39:22 crc kubenswrapper[4782]: I0202 11:39:22.951983 4782 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 11:39:22 crc kubenswrapper[4782]: I0202 11:39:22.952040 4782 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" Feb 02 11:39:22 crc kubenswrapper[4782]: I0202 11:39:22.952906 4782 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e4146ee7483fb1b799415c8adb5be7703998921bbdca2692a753a4e0f1072257"} pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 11:39:22 crc kubenswrapper[4782]: I0202 11:39:22.952965 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" containerName="machine-config-daemon" containerID="cri-o://e4146ee7483fb1b799415c8adb5be7703998921bbdca2692a753a4e0f1072257" gracePeriod=600 Feb 02 11:39:23 crc kubenswrapper[4782]: I0202 11:39:23.724250 4782 generic.go:334] "Generic (PLEG): container finished" podID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" containerID="e4146ee7483fb1b799415c8adb5be7703998921bbdca2692a753a4e0f1072257" exitCode=0 Feb 02 11:39:23 crc kubenswrapper[4782]: I0202 11:39:23.724806 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" event={"ID":"7919e98f-cc47-4f3c-9c53-6313850ea7b8","Type":"ContainerDied","Data":"e4146ee7483fb1b799415c8adb5be7703998921bbdca2692a753a4e0f1072257"} Feb 02 11:39:23 crc kubenswrapper[4782]: I0202 11:39:23.724840 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" event={"ID":"7919e98f-cc47-4f3c-9c53-6313850ea7b8","Type":"ContainerStarted","Data":"0b5a1dc843aa5e29d94449712e54fcb7833201c00028ee85179759aa66981ec6"} Feb 02 11:39:23 crc kubenswrapper[4782]: I0202 11:39:23.724858 4782 scope.go:117] "RemoveContainer" containerID="0f610e1fc5d774ae98e6427843ebdfbe622219e84034ddfd24bafe67b92e53a2" Feb 02 11:39:50 crc kubenswrapper[4782]: I0202 11:39:50.049693 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-db-create-88lt6"] Feb 02 11:39:50 crc kubenswrapper[4782]: I0202 11:39:50.061480 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-61e9-account-create-update-vjlvv"] Feb 02 11:39:50 crc kubenswrapper[4782]: I0202 11:39:50.070831 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-61e9-account-create-update-vjlvv"] Feb 02 11:39:50 crc kubenswrapper[4782]: I0202 11:39:50.079071 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-db-create-88lt6"] Feb 02 11:39:50 crc kubenswrapper[4782]: I0202 11:39:50.833369 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7260512c-a397-4b18-ab4d-a97e7dbf50d9" path="/var/lib/kubelet/pods/7260512c-a397-4b18-ab4d-a97e7dbf50d9/volumes" Feb 02 11:39:50 crc kubenswrapper[4782]: I0202 11:39:50.835409 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d9a2fa32-7949-4dbe-8e51-49627e08f051" path="/var/lib/kubelet/pods/d9a2fa32-7949-4dbe-8e51-49627e08f051/volumes" Feb 02 11:39:55 crc kubenswrapper[4782]: I0202 11:39:55.098800 4782 scope.go:117] "RemoveContainer" 
containerID="18235f2d52d1acb53abcc5d69239ea08135f49af22250cec7c915e6b6af27b05" Feb 02 11:39:55 crc kubenswrapper[4782]: I0202 11:39:55.155899 4782 scope.go:117] "RemoveContainer" containerID="22ee95619a6ae6669166c5388f7644833a8f50918632409a8660abf992c5c5da" Feb 02 11:40:44 crc kubenswrapper[4782]: I0202 11:40:44.040231 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-db-sync-p6nkb"] Feb 02 11:40:44 crc kubenswrapper[4782]: I0202 11:40:44.051350 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-db-sync-p6nkb"] Feb 02 11:40:44 crc kubenswrapper[4782]: I0202 11:40:44.833447 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f45fc51f-4efe-4cbf-9539-d858ac3c2e73" path="/var/lib/kubelet/pods/f45fc51f-4efe-4cbf-9539-d858ac3c2e73/volumes" Feb 02 11:40:55 crc kubenswrapper[4782]: I0202 11:40:55.265309 4782 scope.go:117] "RemoveContainer" containerID="c033e06590ed48930855476f355d38330fd5900d1d6d3cdf6a14188571b721f2" Feb 02 11:41:52 crc kubenswrapper[4782]: I0202 11:41:52.950950 4782 patch_prober.go:28] interesting pod/machine-config-daemon-bhdgk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 11:41:52 crc kubenswrapper[4782]: I0202 11:41:52.951386 4782 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 11:42:22 crc kubenswrapper[4782]: I0202 11:42:22.951126 4782 patch_prober.go:28] interesting pod/machine-config-daemon-bhdgk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 11:42:22 crc kubenswrapper[4782]: I0202 11:42:22.951692 4782 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 11:42:52 crc kubenswrapper[4782]: I0202 11:42:52.951972 4782 patch_prober.go:28] interesting pod/machine-config-daemon-bhdgk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 11:42:52 crc kubenswrapper[4782]: I0202 11:42:52.953022 4782 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 11:42:52 crc kubenswrapper[4782]: I0202 11:42:52.953410 4782 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" Feb 02 11:42:52 crc kubenswrapper[4782]: I0202 11:42:52.954800 4782 kuberuntime_manager.go:1027] "Message 
for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0b5a1dc843aa5e29d94449712e54fcb7833201c00028ee85179759aa66981ec6"} pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 11:42:52 crc kubenswrapper[4782]: I0202 11:42:52.954876 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" containerName="machine-config-daemon" containerID="cri-o://0b5a1dc843aa5e29d94449712e54fcb7833201c00028ee85179759aa66981ec6" gracePeriod=600 Feb 02 11:42:53 crc kubenswrapper[4782]: E0202 11:42:53.096002 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:42:53 crc kubenswrapper[4782]: I0202 11:42:53.520350 4782 generic.go:334] "Generic (PLEG): container finished" podID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" containerID="0b5a1dc843aa5e29d94449712e54fcb7833201c00028ee85179759aa66981ec6" exitCode=0 Feb 02 11:42:53 crc kubenswrapper[4782]: I0202 11:42:53.520419 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" event={"ID":"7919e98f-cc47-4f3c-9c53-6313850ea7b8","Type":"ContainerDied","Data":"0b5a1dc843aa5e29d94449712e54fcb7833201c00028ee85179759aa66981ec6"} Feb 02 11:42:53 crc kubenswrapper[4782]: I0202 11:42:53.520991 4782 scope.go:117] "RemoveContainer" containerID="e4146ee7483fb1b799415c8adb5be7703998921bbdca2692a753a4e0f1072257" Feb 02 11:42:53 crc kubenswrapper[4782]: I0202 11:42:53.521681 4782 scope.go:117] "RemoveContainer" containerID="0b5a1dc843aa5e29d94449712e54fcb7833201c00028ee85179759aa66981ec6" Feb 02 11:42:53 crc kubenswrapper[4782]: E0202 11:42:53.521983 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:43:06 crc kubenswrapper[4782]: I0202 11:43:06.821978 4782 scope.go:117] "RemoveContainer" containerID="0b5a1dc843aa5e29d94449712e54fcb7833201c00028ee85179759aa66981ec6" Feb 02 11:43:06 crc kubenswrapper[4782]: E0202 11:43:06.822803 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:43:19 crc kubenswrapper[4782]: I0202 11:43:19.821380 4782 scope.go:117] "RemoveContainer" containerID="0b5a1dc843aa5e29d94449712e54fcb7833201c00028ee85179759aa66981ec6" Feb 02 11:43:19 crc kubenswrapper[4782]: E0202 
11:43:19.823048 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:43:31 crc kubenswrapper[4782]: I0202 11:43:31.821277 4782 scope.go:117] "RemoveContainer" containerID="0b5a1dc843aa5e29d94449712e54fcb7833201c00028ee85179759aa66981ec6" Feb 02 11:43:31 crc kubenswrapper[4782]: E0202 11:43:31.821942 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:43:44 crc kubenswrapper[4782]: I0202 11:43:44.821663 4782 scope.go:117] "RemoveContainer" containerID="0b5a1dc843aa5e29d94449712e54fcb7833201c00028ee85179759aa66981ec6" Feb 02 11:43:44 crc kubenswrapper[4782]: E0202 11:43:44.822520 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:43:56 crc kubenswrapper[4782]: I0202 11:43:56.820922 4782 scope.go:117] "RemoveContainer" containerID="0b5a1dc843aa5e29d94449712e54fcb7833201c00028ee85179759aa66981ec6" Feb 02 11:43:56 crc kubenswrapper[4782]: E0202 11:43:56.821786 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:44:08 crc kubenswrapper[4782]: I0202 11:44:08.827979 4782 scope.go:117] "RemoveContainer" containerID="0b5a1dc843aa5e29d94449712e54fcb7833201c00028ee85179759aa66981ec6" Feb 02 11:44:08 crc kubenswrapper[4782]: E0202 11:44:08.828822 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:44:21 crc kubenswrapper[4782]: I0202 11:44:21.822481 4782 scope.go:117] "RemoveContainer" containerID="0b5a1dc843aa5e29d94449712e54fcb7833201c00028ee85179759aa66981ec6" Feb 02 11:44:21 crc kubenswrapper[4782]: E0202 11:44:21.823570 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:44:34 crc kubenswrapper[4782]: I0202 11:44:34.821553 4782 scope.go:117] "RemoveContainer" containerID="0b5a1dc843aa5e29d94449712e54fcb7833201c00028ee85179759aa66981ec6" Feb 02 11:44:34 crc kubenswrapper[4782]: E0202 11:44:34.822499 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:44:36 crc kubenswrapper[4782]: I0202 11:44:36.998280 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-xj6cn"] Feb 02 11:44:36 crc kubenswrapper[4782]: E0202 11:44:36.999051 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22bd520b-44a1-48ea-8b4e-dc5aff206551" containerName="extract-utilities" Feb 02 11:44:36 crc kubenswrapper[4782]: I0202 11:44:36.999064 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="22bd520b-44a1-48ea-8b4e-dc5aff206551" containerName="extract-utilities" Feb 02 11:44:36 crc kubenswrapper[4782]: E0202 11:44:36.999090 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22bd520b-44a1-48ea-8b4e-dc5aff206551" containerName="extract-content" Feb 02 11:44:36 crc kubenswrapper[4782]: I0202 11:44:36.999096 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="22bd520b-44a1-48ea-8b4e-dc5aff206551" containerName="extract-content" Feb 02 11:44:36 crc kubenswrapper[4782]: E0202 11:44:36.999103 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22bd520b-44a1-48ea-8b4e-dc5aff206551" containerName="registry-server" Feb 02 11:44:36 crc kubenswrapper[4782]: I0202 11:44:36.999109 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="22bd520b-44a1-48ea-8b4e-dc5aff206551" containerName="registry-server" Feb 02 11:44:36 crc kubenswrapper[4782]: I0202 11:44:36.999287 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="22bd520b-44a1-48ea-8b4e-dc5aff206551" containerName="registry-server" Feb 02 11:44:37 crc kubenswrapper[4782]: I0202 11:44:37.000773 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xj6cn" Feb 02 11:44:37 crc kubenswrapper[4782]: I0202 11:44:37.018963 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xj6cn"] Feb 02 11:44:37 crc kubenswrapper[4782]: I0202 11:44:37.114347 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nv7bg\" (UniqueName: \"kubernetes.io/projected/6102134b-c682-49ac-abbb-1303c639d46b-kube-api-access-nv7bg\") pod \"redhat-marketplace-xj6cn\" (UID: \"6102134b-c682-49ac-abbb-1303c639d46b\") " pod="openshift-marketplace/redhat-marketplace-xj6cn" Feb 02 11:44:37 crc kubenswrapper[4782]: I0202 11:44:37.114404 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6102134b-c682-49ac-abbb-1303c639d46b-catalog-content\") pod \"redhat-marketplace-xj6cn\" (UID: \"6102134b-c682-49ac-abbb-1303c639d46b\") " pod="openshift-marketplace/redhat-marketplace-xj6cn" Feb 02 11:44:37 crc kubenswrapper[4782]: I0202 11:44:37.114487 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6102134b-c682-49ac-abbb-1303c639d46b-utilities\") pod \"redhat-marketplace-xj6cn\" (UID: \"6102134b-c682-49ac-abbb-1303c639d46b\") " pod="openshift-marketplace/redhat-marketplace-xj6cn" Feb 02 11:44:37 crc kubenswrapper[4782]: I0202 11:44:37.216684 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nv7bg\" (UniqueName: \"kubernetes.io/projected/6102134b-c682-49ac-abbb-1303c639d46b-kube-api-access-nv7bg\") pod \"redhat-marketplace-xj6cn\" (UID: \"6102134b-c682-49ac-abbb-1303c639d46b\") " pod="openshift-marketplace/redhat-marketplace-xj6cn" Feb 02 11:44:37 crc kubenswrapper[4782]: I0202 11:44:37.216724 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6102134b-c682-49ac-abbb-1303c639d46b-catalog-content\") pod \"redhat-marketplace-xj6cn\" (UID: \"6102134b-c682-49ac-abbb-1303c639d46b\") " pod="openshift-marketplace/redhat-marketplace-xj6cn" Feb 02 11:44:37 crc kubenswrapper[4782]: I0202 11:44:37.216762 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6102134b-c682-49ac-abbb-1303c639d46b-utilities\") pod \"redhat-marketplace-xj6cn\" (UID: \"6102134b-c682-49ac-abbb-1303c639d46b\") " pod="openshift-marketplace/redhat-marketplace-xj6cn" Feb 02 11:44:37 crc kubenswrapper[4782]: I0202 11:44:37.217262 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6102134b-c682-49ac-abbb-1303c639d46b-catalog-content\") pod \"redhat-marketplace-xj6cn\" (UID: \"6102134b-c682-49ac-abbb-1303c639d46b\") " pod="openshift-marketplace/redhat-marketplace-xj6cn" Feb 02 11:44:37 crc kubenswrapper[4782]: I0202 11:44:37.217350 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6102134b-c682-49ac-abbb-1303c639d46b-utilities\") pod \"redhat-marketplace-xj6cn\" (UID: \"6102134b-c682-49ac-abbb-1303c639d46b\") " pod="openshift-marketplace/redhat-marketplace-xj6cn" Feb 02 11:44:37 crc kubenswrapper[4782]: I0202 11:44:37.239087 4782 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-nv7bg\" (UniqueName: \"kubernetes.io/projected/6102134b-c682-49ac-abbb-1303c639d46b-kube-api-access-nv7bg\") pod \"redhat-marketplace-xj6cn\" (UID: \"6102134b-c682-49ac-abbb-1303c639d46b\") " pod="openshift-marketplace/redhat-marketplace-xj6cn" Feb 02 11:44:37 crc kubenswrapper[4782]: I0202 11:44:37.319489 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xj6cn" Feb 02 11:44:37 crc kubenswrapper[4782]: I0202 11:44:37.921959 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xj6cn"] Feb 02 11:44:38 crc kubenswrapper[4782]: I0202 11:44:38.423165 4782 generic.go:334] "Generic (PLEG): container finished" podID="6102134b-c682-49ac-abbb-1303c639d46b" containerID="0b6b96af2ed3c898e70824a286e7e6c137b73b475c3eb7d20ff748e4eb854e70" exitCode=0 Feb 02 11:44:38 crc kubenswrapper[4782]: I0202 11:44:38.423261 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xj6cn" event={"ID":"6102134b-c682-49ac-abbb-1303c639d46b","Type":"ContainerDied","Data":"0b6b96af2ed3c898e70824a286e7e6c137b73b475c3eb7d20ff748e4eb854e70"} Feb 02 11:44:38 crc kubenswrapper[4782]: I0202 11:44:38.423450 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xj6cn" event={"ID":"6102134b-c682-49ac-abbb-1303c639d46b","Type":"ContainerStarted","Data":"ab539b6f75c611534c7284d3bea63be36fdfd3b06fe7625af9cda86dd231d758"} Feb 02 11:44:38 crc kubenswrapper[4782]: I0202 11:44:38.425423 4782 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 02 11:44:40 crc kubenswrapper[4782]: I0202 11:44:40.440111 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xj6cn" event={"ID":"6102134b-c682-49ac-abbb-1303c639d46b","Type":"ContainerStarted","Data":"61d22aa41947367ff8ffcb32d18745f71a2d75e4cacb4eae25ce9a4f4c30f11a"} Feb 02 11:44:41 crc kubenswrapper[4782]: I0202 11:44:41.461126 4782 generic.go:334] "Generic (PLEG): container finished" podID="6102134b-c682-49ac-abbb-1303c639d46b" containerID="61d22aa41947367ff8ffcb32d18745f71a2d75e4cacb4eae25ce9a4f4c30f11a" exitCode=0 Feb 02 11:44:41 crc kubenswrapper[4782]: I0202 11:44:41.461445 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xj6cn" event={"ID":"6102134b-c682-49ac-abbb-1303c639d46b","Type":"ContainerDied","Data":"61d22aa41947367ff8ffcb32d18745f71a2d75e4cacb4eae25ce9a4f4c30f11a"} Feb 02 11:44:42 crc kubenswrapper[4782]: I0202 11:44:42.471684 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xj6cn" event={"ID":"6102134b-c682-49ac-abbb-1303c639d46b","Type":"ContainerStarted","Data":"4c5e74498f064cb2c2f120d2ee93a91656d1706ed3936cf7861793a1488b0caf"} Feb 02 11:44:42 crc kubenswrapper[4782]: I0202 11:44:42.496102 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-xj6cn" podStartSLOduration=3.0859154 podStartE2EDuration="6.496081951s" podCreationTimestamp="2026-02-02 11:44:36 +0000 UTC" firstStartedPulling="2026-02-02 11:44:38.425136591 +0000 UTC m=+3958.309329297" lastFinishedPulling="2026-02-02 11:44:41.835303132 +0000 UTC m=+3961.719495848" observedRunningTime="2026-02-02 11:44:42.492252051 +0000 UTC m=+3962.376444787" watchObservedRunningTime="2026-02-02 11:44:42.496081951 +0000 UTC 
m=+3962.380274667" Feb 02 11:44:47 crc kubenswrapper[4782]: I0202 11:44:47.320797 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-xj6cn" Feb 02 11:44:47 crc kubenswrapper[4782]: I0202 11:44:47.321437 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-xj6cn" Feb 02 11:44:47 crc kubenswrapper[4782]: I0202 11:44:47.381889 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-xj6cn" Feb 02 11:44:47 crc kubenswrapper[4782]: I0202 11:44:47.565405 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-xj6cn" Feb 02 11:44:47 crc kubenswrapper[4782]: I0202 11:44:47.630191 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xj6cn"] Feb 02 11:44:47 crc kubenswrapper[4782]: I0202 11:44:47.821977 4782 scope.go:117] "RemoveContainer" containerID="0b5a1dc843aa5e29d94449712e54fcb7833201c00028ee85179759aa66981ec6" Feb 02 11:44:47 crc kubenswrapper[4782]: E0202 11:44:47.822229 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:44:49 crc kubenswrapper[4782]: I0202 11:44:49.528329 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-xj6cn" podUID="6102134b-c682-49ac-abbb-1303c639d46b" containerName="registry-server" containerID="cri-o://4c5e74498f064cb2c2f120d2ee93a91656d1706ed3936cf7861793a1488b0caf" gracePeriod=2 Feb 02 11:44:50 crc kubenswrapper[4782]: I0202 11:44:50.239863 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xj6cn" Feb 02 11:44:50 crc kubenswrapper[4782]: I0202 11:44:50.351013 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nv7bg\" (UniqueName: \"kubernetes.io/projected/6102134b-c682-49ac-abbb-1303c639d46b-kube-api-access-nv7bg\") pod \"6102134b-c682-49ac-abbb-1303c639d46b\" (UID: \"6102134b-c682-49ac-abbb-1303c639d46b\") " Feb 02 11:44:50 crc kubenswrapper[4782]: I0202 11:44:50.351101 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6102134b-c682-49ac-abbb-1303c639d46b-utilities\") pod \"6102134b-c682-49ac-abbb-1303c639d46b\" (UID: \"6102134b-c682-49ac-abbb-1303c639d46b\") " Feb 02 11:44:50 crc kubenswrapper[4782]: I0202 11:44:50.351172 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6102134b-c682-49ac-abbb-1303c639d46b-catalog-content\") pod \"6102134b-c682-49ac-abbb-1303c639d46b\" (UID: \"6102134b-c682-49ac-abbb-1303c639d46b\") " Feb 02 11:44:50 crc kubenswrapper[4782]: I0202 11:44:50.352174 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6102134b-c682-49ac-abbb-1303c639d46b-utilities" (OuterVolumeSpecName: "utilities") pod "6102134b-c682-49ac-abbb-1303c639d46b" (UID: "6102134b-c682-49ac-abbb-1303c639d46b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:44:50 crc kubenswrapper[4782]: I0202 11:44:50.356880 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6102134b-c682-49ac-abbb-1303c639d46b-kube-api-access-nv7bg" (OuterVolumeSpecName: "kube-api-access-nv7bg") pod "6102134b-c682-49ac-abbb-1303c639d46b" (UID: "6102134b-c682-49ac-abbb-1303c639d46b"). InnerVolumeSpecName "kube-api-access-nv7bg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:44:50 crc kubenswrapper[4782]: I0202 11:44:50.454109 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nv7bg\" (UniqueName: \"kubernetes.io/projected/6102134b-c682-49ac-abbb-1303c639d46b-kube-api-access-nv7bg\") on node \"crc\" DevicePath \"\"" Feb 02 11:44:50 crc kubenswrapper[4782]: I0202 11:44:50.454152 4782 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6102134b-c682-49ac-abbb-1303c639d46b-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 11:44:50 crc kubenswrapper[4782]: I0202 11:44:50.540293 4782 generic.go:334] "Generic (PLEG): container finished" podID="6102134b-c682-49ac-abbb-1303c639d46b" containerID="4c5e74498f064cb2c2f120d2ee93a91656d1706ed3936cf7861793a1488b0caf" exitCode=0 Feb 02 11:44:50 crc kubenswrapper[4782]: I0202 11:44:50.540356 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xj6cn" Feb 02 11:44:50 crc kubenswrapper[4782]: I0202 11:44:50.540371 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xj6cn" event={"ID":"6102134b-c682-49ac-abbb-1303c639d46b","Type":"ContainerDied","Data":"4c5e74498f064cb2c2f120d2ee93a91656d1706ed3936cf7861793a1488b0caf"} Feb 02 11:44:50 crc kubenswrapper[4782]: I0202 11:44:50.541884 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xj6cn" event={"ID":"6102134b-c682-49ac-abbb-1303c639d46b","Type":"ContainerDied","Data":"ab539b6f75c611534c7284d3bea63be36fdfd3b06fe7625af9cda86dd231d758"} Feb 02 11:44:50 crc kubenswrapper[4782]: I0202 11:44:50.541916 4782 scope.go:117] "RemoveContainer" containerID="4c5e74498f064cb2c2f120d2ee93a91656d1706ed3936cf7861793a1488b0caf" Feb 02 11:44:50 crc kubenswrapper[4782]: I0202 11:44:50.567795 4782 scope.go:117] "RemoveContainer" containerID="61d22aa41947367ff8ffcb32d18745f71a2d75e4cacb4eae25ce9a4f4c30f11a" Feb 02 11:44:50 crc kubenswrapper[4782]: I0202 11:44:50.587586 4782 scope.go:117] "RemoveContainer" containerID="0b6b96af2ed3c898e70824a286e7e6c137b73b475c3eb7d20ff748e4eb854e70" Feb 02 11:44:50 crc kubenswrapper[4782]: I0202 11:44:50.634027 4782 scope.go:117] "RemoveContainer" containerID="4c5e74498f064cb2c2f120d2ee93a91656d1706ed3936cf7861793a1488b0caf" Feb 02 11:44:50 crc kubenswrapper[4782]: E0202 11:44:50.635169 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c5e74498f064cb2c2f120d2ee93a91656d1706ed3936cf7861793a1488b0caf\": container with ID starting with 4c5e74498f064cb2c2f120d2ee93a91656d1706ed3936cf7861793a1488b0caf not found: ID does not exist" containerID="4c5e74498f064cb2c2f120d2ee93a91656d1706ed3936cf7861793a1488b0caf" Feb 02 11:44:50 crc kubenswrapper[4782]: I0202 11:44:50.635273 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c5e74498f064cb2c2f120d2ee93a91656d1706ed3936cf7861793a1488b0caf"} err="failed to get container status \"4c5e74498f064cb2c2f120d2ee93a91656d1706ed3936cf7861793a1488b0caf\": rpc error: code = NotFound desc = could not find container \"4c5e74498f064cb2c2f120d2ee93a91656d1706ed3936cf7861793a1488b0caf\": container with ID starting with 4c5e74498f064cb2c2f120d2ee93a91656d1706ed3936cf7861793a1488b0caf not found: ID does not exist" Feb 02 11:44:50 crc kubenswrapper[4782]: I0202 11:44:50.635353 4782 scope.go:117] "RemoveContainer" containerID="61d22aa41947367ff8ffcb32d18745f71a2d75e4cacb4eae25ce9a4f4c30f11a" Feb 02 11:44:50 crc kubenswrapper[4782]: E0202 11:44:50.635798 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"61d22aa41947367ff8ffcb32d18745f71a2d75e4cacb4eae25ce9a4f4c30f11a\": container with ID starting with 61d22aa41947367ff8ffcb32d18745f71a2d75e4cacb4eae25ce9a4f4c30f11a not found: ID does not exist" containerID="61d22aa41947367ff8ffcb32d18745f71a2d75e4cacb4eae25ce9a4f4c30f11a" Feb 02 11:44:50 crc kubenswrapper[4782]: I0202 11:44:50.635874 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"61d22aa41947367ff8ffcb32d18745f71a2d75e4cacb4eae25ce9a4f4c30f11a"} err="failed to get container status \"61d22aa41947367ff8ffcb32d18745f71a2d75e4cacb4eae25ce9a4f4c30f11a\": rpc error: code = NotFound desc = could not find container 
\"61d22aa41947367ff8ffcb32d18745f71a2d75e4cacb4eae25ce9a4f4c30f11a\": container with ID starting with 61d22aa41947367ff8ffcb32d18745f71a2d75e4cacb4eae25ce9a4f4c30f11a not found: ID does not exist" Feb 02 11:44:50 crc kubenswrapper[4782]: I0202 11:44:50.635936 4782 scope.go:117] "RemoveContainer" containerID="0b6b96af2ed3c898e70824a286e7e6c137b73b475c3eb7d20ff748e4eb854e70" Feb 02 11:44:50 crc kubenswrapper[4782]: E0202 11:44:50.636406 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0b6b96af2ed3c898e70824a286e7e6c137b73b475c3eb7d20ff748e4eb854e70\": container with ID starting with 0b6b96af2ed3c898e70824a286e7e6c137b73b475c3eb7d20ff748e4eb854e70 not found: ID does not exist" containerID="0b6b96af2ed3c898e70824a286e7e6c137b73b475c3eb7d20ff748e4eb854e70" Feb 02 11:44:50 crc kubenswrapper[4782]: I0202 11:44:50.636493 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b6b96af2ed3c898e70824a286e7e6c137b73b475c3eb7d20ff748e4eb854e70"} err="failed to get container status \"0b6b96af2ed3c898e70824a286e7e6c137b73b475c3eb7d20ff748e4eb854e70\": rpc error: code = NotFound desc = could not find container \"0b6b96af2ed3c898e70824a286e7e6c137b73b475c3eb7d20ff748e4eb854e70\": container with ID starting with 0b6b96af2ed3c898e70824a286e7e6c137b73b475c3eb7d20ff748e4eb854e70 not found: ID does not exist" Feb 02 11:44:50 crc kubenswrapper[4782]: I0202 11:44:50.683236 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6102134b-c682-49ac-abbb-1303c639d46b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6102134b-c682-49ac-abbb-1303c639d46b" (UID: "6102134b-c682-49ac-abbb-1303c639d46b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:44:50 crc kubenswrapper[4782]: I0202 11:44:50.760939 4782 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6102134b-c682-49ac-abbb-1303c639d46b-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 11:44:50 crc kubenswrapper[4782]: I0202 11:44:50.865298 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xj6cn"] Feb 02 11:44:50 crc kubenswrapper[4782]: I0202 11:44:50.874724 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-xj6cn"] Feb 02 11:44:52 crc kubenswrapper[4782]: I0202 11:44:52.829949 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6102134b-c682-49ac-abbb-1303c639d46b" path="/var/lib/kubelet/pods/6102134b-c682-49ac-abbb-1303c639d46b/volumes" Feb 02 11:44:58 crc kubenswrapper[4782]: I0202 11:44:58.822209 4782 scope.go:117] "RemoveContainer" containerID="0b5a1dc843aa5e29d94449712e54fcb7833201c00028ee85179759aa66981ec6" Feb 02 11:44:58 crc kubenswrapper[4782]: E0202 11:44:58.823325 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:45:00 crc kubenswrapper[4782]: I0202 11:45:00.206608 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500545-zknsh"] Feb 02 11:45:00 crc kubenswrapper[4782]: E0202 11:45:00.207472 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6102134b-c682-49ac-abbb-1303c639d46b" containerName="extract-content" Feb 02 11:45:00 crc kubenswrapper[4782]: I0202 11:45:00.207489 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="6102134b-c682-49ac-abbb-1303c639d46b" containerName="extract-content" Feb 02 11:45:00 crc kubenswrapper[4782]: E0202 11:45:00.207507 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6102134b-c682-49ac-abbb-1303c639d46b" containerName="registry-server" Feb 02 11:45:00 crc kubenswrapper[4782]: I0202 11:45:00.207514 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="6102134b-c682-49ac-abbb-1303c639d46b" containerName="registry-server" Feb 02 11:45:00 crc kubenswrapper[4782]: E0202 11:45:00.207534 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6102134b-c682-49ac-abbb-1303c639d46b" containerName="extract-utilities" Feb 02 11:45:00 crc kubenswrapper[4782]: I0202 11:45:00.207542 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="6102134b-c682-49ac-abbb-1303c639d46b" containerName="extract-utilities" Feb 02 11:45:00 crc kubenswrapper[4782]: I0202 11:45:00.207798 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="6102134b-c682-49ac-abbb-1303c639d46b" containerName="registry-server" Feb 02 11:45:00 crc kubenswrapper[4782]: I0202 11:45:00.208656 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500545-zknsh" Feb 02 11:45:00 crc kubenswrapper[4782]: I0202 11:45:00.217583 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500545-zknsh"] Feb 02 11:45:00 crc kubenswrapper[4782]: I0202 11:45:00.219217 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 02 11:45:00 crc kubenswrapper[4782]: I0202 11:45:00.220565 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 02 11:45:00 crc kubenswrapper[4782]: I0202 11:45:00.257499 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/469bd464-b4f1-401d-be7b-da5ac0b089d2-secret-volume\") pod \"collect-profiles-29500545-zknsh\" (UID: \"469bd464-b4f1-401d-be7b-da5ac0b089d2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500545-zknsh" Feb 02 11:45:00 crc kubenswrapper[4782]: I0202 11:45:00.257564 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/469bd464-b4f1-401d-be7b-da5ac0b089d2-config-volume\") pod \"collect-profiles-29500545-zknsh\" (UID: \"469bd464-b4f1-401d-be7b-da5ac0b089d2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500545-zknsh" Feb 02 11:45:00 crc kubenswrapper[4782]: I0202 11:45:00.257629 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jll4n\" (UniqueName: \"kubernetes.io/projected/469bd464-b4f1-401d-be7b-da5ac0b089d2-kube-api-access-jll4n\") pod \"collect-profiles-29500545-zknsh\" (UID: \"469bd464-b4f1-401d-be7b-da5ac0b089d2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500545-zknsh" Feb 02 11:45:00 crc kubenswrapper[4782]: I0202 11:45:00.359757 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/469bd464-b4f1-401d-be7b-da5ac0b089d2-secret-volume\") pod \"collect-profiles-29500545-zknsh\" (UID: \"469bd464-b4f1-401d-be7b-da5ac0b089d2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500545-zknsh" Feb 02 11:45:00 crc kubenswrapper[4782]: I0202 11:45:00.360113 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/469bd464-b4f1-401d-be7b-da5ac0b089d2-config-volume\") pod \"collect-profiles-29500545-zknsh\" (UID: \"469bd464-b4f1-401d-be7b-da5ac0b089d2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500545-zknsh" Feb 02 11:45:00 crc kubenswrapper[4782]: I0202 11:45:00.361357 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jll4n\" (UniqueName: \"kubernetes.io/projected/469bd464-b4f1-401d-be7b-da5ac0b089d2-kube-api-access-jll4n\") pod \"collect-profiles-29500545-zknsh\" (UID: \"469bd464-b4f1-401d-be7b-da5ac0b089d2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500545-zknsh" Feb 02 11:45:00 crc kubenswrapper[4782]: I0202 11:45:00.361237 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/469bd464-b4f1-401d-be7b-da5ac0b089d2-config-volume\") pod 
\"collect-profiles-29500545-zknsh\" (UID: \"469bd464-b4f1-401d-be7b-da5ac0b089d2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500545-zknsh" Feb 02 11:45:00 crc kubenswrapper[4782]: I0202 11:45:00.367353 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/469bd464-b4f1-401d-be7b-da5ac0b089d2-secret-volume\") pod \"collect-profiles-29500545-zknsh\" (UID: \"469bd464-b4f1-401d-be7b-da5ac0b089d2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500545-zknsh" Feb 02 11:45:00 crc kubenswrapper[4782]: I0202 11:45:00.379332 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jll4n\" (UniqueName: \"kubernetes.io/projected/469bd464-b4f1-401d-be7b-da5ac0b089d2-kube-api-access-jll4n\") pod \"collect-profiles-29500545-zknsh\" (UID: \"469bd464-b4f1-401d-be7b-da5ac0b089d2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500545-zknsh" Feb 02 11:45:00 crc kubenswrapper[4782]: I0202 11:45:00.545761 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500545-zknsh" Feb 02 11:45:01 crc kubenswrapper[4782]: I0202 11:45:01.049873 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500545-zknsh"] Feb 02 11:45:01 crc kubenswrapper[4782]: I0202 11:45:01.635378 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500545-zknsh" event={"ID":"469bd464-b4f1-401d-be7b-da5ac0b089d2","Type":"ContainerStarted","Data":"07b7c95a2fa05599c5b60c9b1ce739b2f37878424c049f33791b40d9ef8205ed"} Feb 02 11:45:01 crc kubenswrapper[4782]: I0202 11:45:01.635435 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500545-zknsh" event={"ID":"469bd464-b4f1-401d-be7b-da5ac0b089d2","Type":"ContainerStarted","Data":"49c5d4d36480270abd39a76802588a8854bb59f652df61c4eb71175652142494"} Feb 02 11:45:01 crc kubenswrapper[4782]: I0202 11:45:01.664558 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29500545-zknsh" podStartSLOduration=1.664531824 podStartE2EDuration="1.664531824s" podCreationTimestamp="2026-02-02 11:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 11:45:01.655427832 +0000 UTC m=+3981.539620548" watchObservedRunningTime="2026-02-02 11:45:01.664531824 +0000 UTC m=+3981.548724540" Feb 02 11:45:02 crc kubenswrapper[4782]: I0202 11:45:02.646167 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500545-zknsh" event={"ID":"469bd464-b4f1-401d-be7b-da5ac0b089d2","Type":"ContainerDied","Data":"07b7c95a2fa05599c5b60c9b1ce739b2f37878424c049f33791b40d9ef8205ed"} Feb 02 11:45:02 crc kubenswrapper[4782]: I0202 11:45:02.646035 4782 generic.go:334] "Generic (PLEG): container finished" podID="469bd464-b4f1-401d-be7b-da5ac0b089d2" containerID="07b7c95a2fa05599c5b60c9b1ce739b2f37878424c049f33791b40d9ef8205ed" exitCode=0 Feb 02 11:45:04 crc kubenswrapper[4782]: I0202 11:45:04.304674 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500545-zknsh" Feb 02 11:45:04 crc kubenswrapper[4782]: I0202 11:45:04.349854 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/469bd464-b4f1-401d-be7b-da5ac0b089d2-secret-volume\") pod \"469bd464-b4f1-401d-be7b-da5ac0b089d2\" (UID: \"469bd464-b4f1-401d-be7b-da5ac0b089d2\") " Feb 02 11:45:04 crc kubenswrapper[4782]: I0202 11:45:04.349938 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/469bd464-b4f1-401d-be7b-da5ac0b089d2-config-volume\") pod \"469bd464-b4f1-401d-be7b-da5ac0b089d2\" (UID: \"469bd464-b4f1-401d-be7b-da5ac0b089d2\") " Feb 02 11:45:04 crc kubenswrapper[4782]: I0202 11:45:04.350042 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jll4n\" (UniqueName: \"kubernetes.io/projected/469bd464-b4f1-401d-be7b-da5ac0b089d2-kube-api-access-jll4n\") pod \"469bd464-b4f1-401d-be7b-da5ac0b089d2\" (UID: \"469bd464-b4f1-401d-be7b-da5ac0b089d2\") " Feb 02 11:45:04 crc kubenswrapper[4782]: I0202 11:45:04.351322 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/469bd464-b4f1-401d-be7b-da5ac0b089d2-config-volume" (OuterVolumeSpecName: "config-volume") pod "469bd464-b4f1-401d-be7b-da5ac0b089d2" (UID: "469bd464-b4f1-401d-be7b-da5ac0b089d2"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:45:04 crc kubenswrapper[4782]: I0202 11:45:04.358945 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/469bd464-b4f1-401d-be7b-da5ac0b089d2-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "469bd464-b4f1-401d-be7b-da5ac0b089d2" (UID: "469bd464-b4f1-401d-be7b-da5ac0b089d2"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:45:04 crc kubenswrapper[4782]: I0202 11:45:04.359272 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/469bd464-b4f1-401d-be7b-da5ac0b089d2-kube-api-access-jll4n" (OuterVolumeSpecName: "kube-api-access-jll4n") pod "469bd464-b4f1-401d-be7b-da5ac0b089d2" (UID: "469bd464-b4f1-401d-be7b-da5ac0b089d2"). InnerVolumeSpecName "kube-api-access-jll4n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:45:04 crc kubenswrapper[4782]: I0202 11:45:04.451955 4782 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/469bd464-b4f1-401d-be7b-da5ac0b089d2-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 02 11:45:04 crc kubenswrapper[4782]: I0202 11:45:04.451998 4782 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/469bd464-b4f1-401d-be7b-da5ac0b089d2-config-volume\") on node \"crc\" DevicePath \"\"" Feb 02 11:45:04 crc kubenswrapper[4782]: I0202 11:45:04.452009 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jll4n\" (UniqueName: \"kubernetes.io/projected/469bd464-b4f1-401d-be7b-da5ac0b089d2-kube-api-access-jll4n\") on node \"crc\" DevicePath \"\"" Feb 02 11:45:04 crc kubenswrapper[4782]: I0202 11:45:04.664613 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500545-zknsh" event={"ID":"469bd464-b4f1-401d-be7b-da5ac0b089d2","Type":"ContainerDied","Data":"49c5d4d36480270abd39a76802588a8854bb59f652df61c4eb71175652142494"} Feb 02 11:45:04 crc kubenswrapper[4782]: I0202 11:45:04.665021 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="49c5d4d36480270abd39a76802588a8854bb59f652df61c4eb71175652142494" Feb 02 11:45:04 crc kubenswrapper[4782]: I0202 11:45:04.664684 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500545-zknsh" Feb 02 11:45:04 crc kubenswrapper[4782]: I0202 11:45:04.741269 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500500-5d8bv"] Feb 02 11:45:04 crc kubenswrapper[4782]: I0202 11:45:04.749846 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500500-5d8bv"] Feb 02 11:45:04 crc kubenswrapper[4782]: I0202 11:45:04.835537 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="62ac376d-42fd-424f-a1bf-281bd9c9d31f" path="/var/lib/kubelet/pods/62ac376d-42fd-424f-a1bf-281bd9c9d31f/volumes" Feb 02 11:45:11 crc kubenswrapper[4782]: I0202 11:45:11.821262 4782 scope.go:117] "RemoveContainer" containerID="0b5a1dc843aa5e29d94449712e54fcb7833201c00028ee85179759aa66981ec6" Feb 02 11:45:11 crc kubenswrapper[4782]: E0202 11:45:11.822224 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:45:23 crc kubenswrapper[4782]: I0202 11:45:23.821135 4782 scope.go:117] "RemoveContainer" containerID="0b5a1dc843aa5e29d94449712e54fcb7833201c00028ee85179759aa66981ec6" Feb 02 11:45:23 crc kubenswrapper[4782]: E0202 11:45:23.821827 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:45:27 crc kubenswrapper[4782]: I0202 11:45:27.869133 4782 generic.go:334] "Generic (PLEG): container finished" podID="a5a266a5-ac00-49e1-9443-def4cebe65ad" containerID="762b4b5b8e241a2a3f60ed6176e6adc0554b048edbcf9782fa78026e47e66f14" exitCode=0 Feb 02 11:45:27 crc kubenswrapper[4782]: I0202 11:45:27.869215 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"a5a266a5-ac00-49e1-9443-def4cebe65ad","Type":"ContainerDied","Data":"762b4b5b8e241a2a3f60ed6176e6adc0554b048edbcf9782fa78026e47e66f14"} Feb 02 11:45:29 crc kubenswrapper[4782]: I0202 11:45:29.325470 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 02 11:45:29 crc kubenswrapper[4782]: I0202 11:45:29.357721 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/a5a266a5-ac00-49e1-9443-def4cebe65ad-test-operator-ephemeral-temporary\") pod \"a5a266a5-ac00-49e1-9443-def4cebe65ad\" (UID: \"a5a266a5-ac00-49e1-9443-def4cebe65ad\") " Feb 02 11:45:29 crc kubenswrapper[4782]: I0202 11:45:29.357803 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a5a266a5-ac00-49e1-9443-def4cebe65ad-config-data\") pod \"a5a266a5-ac00-49e1-9443-def4cebe65ad\" (UID: \"a5a266a5-ac00-49e1-9443-def4cebe65ad\") " Feb 02 11:45:29 crc kubenswrapper[4782]: I0202 11:45:29.357859 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a5a266a5-ac00-49e1-9443-def4cebe65ad-openstack-config\") pod \"a5a266a5-ac00-49e1-9443-def4cebe65ad\" (UID: \"a5a266a5-ac00-49e1-9443-def4cebe65ad\") " Feb 02 11:45:29 crc kubenswrapper[4782]: I0202 11:45:29.357917 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"a5a266a5-ac00-49e1-9443-def4cebe65ad\" (UID: \"a5a266a5-ac00-49e1-9443-def4cebe65ad\") " Feb 02 11:45:29 crc kubenswrapper[4782]: I0202 11:45:29.358022 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a5a266a5-ac00-49e1-9443-def4cebe65ad-openstack-config-secret\") pod \"a5a266a5-ac00-49e1-9443-def4cebe65ad\" (UID: \"a5a266a5-ac00-49e1-9443-def4cebe65ad\") " Feb 02 11:45:29 crc kubenswrapper[4782]: I0202 11:45:29.358065 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/a5a266a5-ac00-49e1-9443-def4cebe65ad-ca-certs\") pod \"a5a266a5-ac00-49e1-9443-def4cebe65ad\" (UID: \"a5a266a5-ac00-49e1-9443-def4cebe65ad\") " Feb 02 11:45:29 crc kubenswrapper[4782]: I0202 11:45:29.358102 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r8gp9\" (UniqueName: \"kubernetes.io/projected/a5a266a5-ac00-49e1-9443-def4cebe65ad-kube-api-access-r8gp9\") pod \"a5a266a5-ac00-49e1-9443-def4cebe65ad\" (UID: \"a5a266a5-ac00-49e1-9443-def4cebe65ad\") " Feb 02 11:45:29 crc kubenswrapper[4782]: I0202 11:45:29.358156 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" 
(UniqueName: \"kubernetes.io/empty-dir/a5a266a5-ac00-49e1-9443-def4cebe65ad-test-operator-ephemeral-workdir\") pod \"a5a266a5-ac00-49e1-9443-def4cebe65ad\" (UID: \"a5a266a5-ac00-49e1-9443-def4cebe65ad\") " Feb 02 11:45:29 crc kubenswrapper[4782]: I0202 11:45:29.358183 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a5a266a5-ac00-49e1-9443-def4cebe65ad-ssh-key\") pod \"a5a266a5-ac00-49e1-9443-def4cebe65ad\" (UID: \"a5a266a5-ac00-49e1-9443-def4cebe65ad\") " Feb 02 11:45:29 crc kubenswrapper[4782]: I0202 11:45:29.370460 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a5a266a5-ac00-49e1-9443-def4cebe65ad-config-data" (OuterVolumeSpecName: "config-data") pod "a5a266a5-ac00-49e1-9443-def4cebe65ad" (UID: "a5a266a5-ac00-49e1-9443-def4cebe65ad"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:45:29 crc kubenswrapper[4782]: I0202 11:45:29.371040 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a5a266a5-ac00-49e1-9443-def4cebe65ad-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "a5a266a5-ac00-49e1-9443-def4cebe65ad" (UID: "a5a266a5-ac00-49e1-9443-def4cebe65ad"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:45:29 crc kubenswrapper[4782]: I0202 11:45:29.380376 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a5a266a5-ac00-49e1-9443-def4cebe65ad-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "a5a266a5-ac00-49e1-9443-def4cebe65ad" (UID: "a5a266a5-ac00-49e1-9443-def4cebe65ad"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:45:29 crc kubenswrapper[4782]: I0202 11:45:29.384176 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage12-crc" (OuterVolumeSpecName: "test-operator-logs") pod "a5a266a5-ac00-49e1-9443-def4cebe65ad" (UID: "a5a266a5-ac00-49e1-9443-def4cebe65ad"). InnerVolumeSpecName "local-storage12-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 02 11:45:29 crc kubenswrapper[4782]: I0202 11:45:29.394535 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5a266a5-ac00-49e1-9443-def4cebe65ad-kube-api-access-r8gp9" (OuterVolumeSpecName: "kube-api-access-r8gp9") pod "a5a266a5-ac00-49e1-9443-def4cebe65ad" (UID: "a5a266a5-ac00-49e1-9443-def4cebe65ad"). InnerVolumeSpecName "kube-api-access-r8gp9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:45:29 crc kubenswrapper[4782]: I0202 11:45:29.400299 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5a266a5-ac00-49e1-9443-def4cebe65ad-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "a5a266a5-ac00-49e1-9443-def4cebe65ad" (UID: "a5a266a5-ac00-49e1-9443-def4cebe65ad"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:45:29 crc kubenswrapper[4782]: I0202 11:45:29.406951 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5a266a5-ac00-49e1-9443-def4cebe65ad-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "a5a266a5-ac00-49e1-9443-def4cebe65ad" (UID: "a5a266a5-ac00-49e1-9443-def4cebe65ad"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:45:29 crc kubenswrapper[4782]: I0202 11:45:29.409253 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5a266a5-ac00-49e1-9443-def4cebe65ad-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "a5a266a5-ac00-49e1-9443-def4cebe65ad" (UID: "a5a266a5-ac00-49e1-9443-def4cebe65ad"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 11:45:29 crc kubenswrapper[4782]: I0202 11:45:29.438435 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a5a266a5-ac00-49e1-9443-def4cebe65ad-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "a5a266a5-ac00-49e1-9443-def4cebe65ad" (UID: "a5a266a5-ac00-49e1-9443-def4cebe65ad"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 11:45:29 crc kubenswrapper[4782]: I0202 11:45:29.460854 4782 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a5a266a5-ac00-49e1-9443-def4cebe65ad-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 11:45:29 crc kubenswrapper[4782]: I0202 11:45:29.460905 4782 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a5a266a5-ac00-49e1-9443-def4cebe65ad-openstack-config\") on node \"crc\" DevicePath \"\"" Feb 02 11:45:29 crc kubenswrapper[4782]: I0202 11:45:29.463782 4782 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" " Feb 02 11:45:29 crc kubenswrapper[4782]: I0202 11:45:29.463820 4782 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a5a266a5-ac00-49e1-9443-def4cebe65ad-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Feb 02 11:45:29 crc kubenswrapper[4782]: I0202 11:45:29.463835 4782 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/a5a266a5-ac00-49e1-9443-def4cebe65ad-ca-certs\") on node \"crc\" DevicePath \"\"" Feb 02 11:45:29 crc kubenswrapper[4782]: I0202 11:45:29.463847 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r8gp9\" (UniqueName: \"kubernetes.io/projected/a5a266a5-ac00-49e1-9443-def4cebe65ad-kube-api-access-r8gp9\") on node \"crc\" DevicePath \"\"" Feb 02 11:45:29 crc kubenswrapper[4782]: I0202 11:45:29.463863 4782 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/a5a266a5-ac00-49e1-9443-def4cebe65ad-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Feb 02 11:45:29 crc kubenswrapper[4782]: I0202 11:45:29.463874 4782 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a5a266a5-ac00-49e1-9443-def4cebe65ad-ssh-key\") on node \"crc\" DevicePath \"\"" Feb 02 
11:45:29 crc kubenswrapper[4782]: I0202 11:45:29.463887 4782 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/a5a266a5-ac00-49e1-9443-def4cebe65ad-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Feb 02 11:45:29 crc kubenswrapper[4782]: I0202 11:45:29.483960 4782 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage12-crc" (UniqueName: "kubernetes.io/local-volume/local-storage12-crc") on node "crc" Feb 02 11:45:29 crc kubenswrapper[4782]: I0202 11:45:29.565111 4782 reconciler_common.go:293] "Volume detached for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" DevicePath \"\"" Feb 02 11:45:29 crc kubenswrapper[4782]: I0202 11:45:29.886405 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"a5a266a5-ac00-49e1-9443-def4cebe65ad","Type":"ContainerDied","Data":"165e18b6fa145b8da7ea6159f1baabb2c9b6bfa2ecbd382cecfe714965ca36c1"} Feb 02 11:45:29 crc kubenswrapper[4782]: I0202 11:45:29.886442 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="165e18b6fa145b8da7ea6159f1baabb2c9b6bfa2ecbd382cecfe714965ca36c1" Feb 02 11:45:29 crc kubenswrapper[4782]: I0202 11:45:29.886471 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 02 11:45:35 crc kubenswrapper[4782]: I0202 11:45:35.822329 4782 scope.go:117] "RemoveContainer" containerID="0b5a1dc843aa5e29d94449712e54fcb7833201c00028ee85179759aa66981ec6" Feb 02 11:45:35 crc kubenswrapper[4782]: E0202 11:45:35.823109 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:45:38 crc kubenswrapper[4782]: I0202 11:45:38.132872 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Feb 02 11:45:38 crc kubenswrapper[4782]: E0202 11:45:38.133673 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="469bd464-b4f1-401d-be7b-da5ac0b089d2" containerName="collect-profiles" Feb 02 11:45:38 crc kubenswrapper[4782]: I0202 11:45:38.133691 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="469bd464-b4f1-401d-be7b-da5ac0b089d2" containerName="collect-profiles" Feb 02 11:45:38 crc kubenswrapper[4782]: E0202 11:45:38.133712 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5a266a5-ac00-49e1-9443-def4cebe65ad" containerName="tempest-tests-tempest-tests-runner" Feb 02 11:45:38 crc kubenswrapper[4782]: I0202 11:45:38.133722 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5a266a5-ac00-49e1-9443-def4cebe65ad" containerName="tempest-tests-tempest-tests-runner" Feb 02 11:45:38 crc kubenswrapper[4782]: I0202 11:45:38.133942 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="469bd464-b4f1-401d-be7b-da5ac0b089d2" containerName="collect-profiles" Feb 02 11:45:38 crc kubenswrapper[4782]: I0202 11:45:38.133964 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5a266a5-ac00-49e1-9443-def4cebe65ad" 
containerName="tempest-tests-tempest-tests-runner" Feb 02 11:45:38 crc kubenswrapper[4782]: I0202 11:45:38.134751 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 02 11:45:38 crc kubenswrapper[4782]: I0202 11:45:38.137758 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-nvl62" Feb 02 11:45:38 crc kubenswrapper[4782]: I0202 11:45:38.143822 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Feb 02 11:45:38 crc kubenswrapper[4782]: I0202 11:45:38.238270 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"0a460d0d-7c4a-473e-9df8-ca1b1979cb25\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 02 11:45:38 crc kubenswrapper[4782]: I0202 11:45:38.238411 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9t94\" (UniqueName: \"kubernetes.io/projected/0a460d0d-7c4a-473e-9df8-ca1b1979cb25-kube-api-access-j9t94\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"0a460d0d-7c4a-473e-9df8-ca1b1979cb25\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 02 11:45:38 crc kubenswrapper[4782]: I0202 11:45:38.339501 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j9t94\" (UniqueName: \"kubernetes.io/projected/0a460d0d-7c4a-473e-9df8-ca1b1979cb25-kube-api-access-j9t94\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"0a460d0d-7c4a-473e-9df8-ca1b1979cb25\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 02 11:45:38 crc kubenswrapper[4782]: I0202 11:45:38.339687 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"0a460d0d-7c4a-473e-9df8-ca1b1979cb25\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 02 11:45:38 crc kubenswrapper[4782]: I0202 11:45:38.340210 4782 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"0a460d0d-7c4a-473e-9df8-ca1b1979cb25\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 02 11:45:38 crc kubenswrapper[4782]: I0202 11:45:38.361726 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j9t94\" (UniqueName: \"kubernetes.io/projected/0a460d0d-7c4a-473e-9df8-ca1b1979cb25-kube-api-access-j9t94\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"0a460d0d-7c4a-473e-9df8-ca1b1979cb25\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 02 11:45:38 crc kubenswrapper[4782]: I0202 11:45:38.365382 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod 
\"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"0a460d0d-7c4a-473e-9df8-ca1b1979cb25\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 02 11:45:38 crc kubenswrapper[4782]: I0202 11:45:38.465986 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 02 11:45:38 crc kubenswrapper[4782]: I0202 11:45:38.941110 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Feb 02 11:45:38 crc kubenswrapper[4782]: I0202 11:45:38.983755 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"0a460d0d-7c4a-473e-9df8-ca1b1979cb25","Type":"ContainerStarted","Data":"1a87b80a2722e3883477d932a43fa5d226c4d3362eed3de1f3bdaabe855b8647"} Feb 02 11:45:41 crc kubenswrapper[4782]: I0202 11:45:41.005343 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"0a460d0d-7c4a-473e-9df8-ca1b1979cb25","Type":"ContainerStarted","Data":"4bfeaf59f150cc47de2c37f54aa1da64348bb0f6d81b685a8aefaf8621e99b95"} Feb 02 11:45:41 crc kubenswrapper[4782]: I0202 11:45:41.024482 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=1.536651175 podStartE2EDuration="3.024465047s" podCreationTimestamp="2026-02-02 11:45:38 +0000 UTC" firstStartedPulling="2026-02-02 11:45:38.951582995 +0000 UTC m=+4018.835775711" lastFinishedPulling="2026-02-02 11:45:40.439396867 +0000 UTC m=+4020.323589583" observedRunningTime="2026-02-02 11:45:41.021486701 +0000 UTC m=+4020.905679427" watchObservedRunningTime="2026-02-02 11:45:41.024465047 +0000 UTC m=+4020.908657763" Feb 02 11:45:48 crc kubenswrapper[4782]: I0202 11:45:48.820844 4782 scope.go:117] "RemoveContainer" containerID="0b5a1dc843aa5e29d94449712e54fcb7833201c00028ee85179759aa66981ec6" Feb 02 11:45:48 crc kubenswrapper[4782]: E0202 11:45:48.821592 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:45:55 crc kubenswrapper[4782]: I0202 11:45:55.428299 4782 scope.go:117] "RemoveContainer" containerID="a290ebd90dc2cdcb55f14cdbbbcabca2eb0ae3e2b4fabd92e76c199c11dd8634" Feb 02 11:46:02 crc kubenswrapper[4782]: I0202 11:46:02.821231 4782 scope.go:117] "RemoveContainer" containerID="0b5a1dc843aa5e29d94449712e54fcb7833201c00028ee85179759aa66981ec6" Feb 02 11:46:02 crc kubenswrapper[4782]: E0202 11:46:02.822060 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:46:05 crc kubenswrapper[4782]: I0202 11:46:05.098391 4782 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-must-gather-jxb7c/must-gather-z6fv2"] Feb 02 11:46:05 crc kubenswrapper[4782]: I0202 11:46:05.100806 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-jxb7c/must-gather-z6fv2" Feb 02 11:46:05 crc kubenswrapper[4782]: I0202 11:46:05.111128 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-jxb7c/must-gather-z6fv2"] Feb 02 11:46:05 crc kubenswrapper[4782]: I0202 11:46:05.118969 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-jxb7c"/"kube-root-ca.crt" Feb 02 11:46:05 crc kubenswrapper[4782]: I0202 11:46:05.119275 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-jxb7c"/"openshift-service-ca.crt" Feb 02 11:46:05 crc kubenswrapper[4782]: I0202 11:46:05.123461 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-jxb7c"/"default-dockercfg-l85z4" Feb 02 11:46:05 crc kubenswrapper[4782]: I0202 11:46:05.215903 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6nkqd\" (UniqueName: \"kubernetes.io/projected/2e9c98b4-2bbb-4602-895c-b5e75a84008e-kube-api-access-6nkqd\") pod \"must-gather-z6fv2\" (UID: \"2e9c98b4-2bbb-4602-895c-b5e75a84008e\") " pod="openshift-must-gather-jxb7c/must-gather-z6fv2" Feb 02 11:46:05 crc kubenswrapper[4782]: I0202 11:46:05.216081 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/2e9c98b4-2bbb-4602-895c-b5e75a84008e-must-gather-output\") pod \"must-gather-z6fv2\" (UID: \"2e9c98b4-2bbb-4602-895c-b5e75a84008e\") " pod="openshift-must-gather-jxb7c/must-gather-z6fv2" Feb 02 11:46:05 crc kubenswrapper[4782]: I0202 11:46:05.318510 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6nkqd\" (UniqueName: \"kubernetes.io/projected/2e9c98b4-2bbb-4602-895c-b5e75a84008e-kube-api-access-6nkqd\") pod \"must-gather-z6fv2\" (UID: \"2e9c98b4-2bbb-4602-895c-b5e75a84008e\") " pod="openshift-must-gather-jxb7c/must-gather-z6fv2" Feb 02 11:46:05 crc kubenswrapper[4782]: I0202 11:46:05.318695 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/2e9c98b4-2bbb-4602-895c-b5e75a84008e-must-gather-output\") pod \"must-gather-z6fv2\" (UID: \"2e9c98b4-2bbb-4602-895c-b5e75a84008e\") " pod="openshift-must-gather-jxb7c/must-gather-z6fv2" Feb 02 11:46:05 crc kubenswrapper[4782]: I0202 11:46:05.319169 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/2e9c98b4-2bbb-4602-895c-b5e75a84008e-must-gather-output\") pod \"must-gather-z6fv2\" (UID: \"2e9c98b4-2bbb-4602-895c-b5e75a84008e\") " pod="openshift-must-gather-jxb7c/must-gather-z6fv2" Feb 02 11:46:05 crc kubenswrapper[4782]: I0202 11:46:05.346185 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6nkqd\" (UniqueName: \"kubernetes.io/projected/2e9c98b4-2bbb-4602-895c-b5e75a84008e-kube-api-access-6nkqd\") pod \"must-gather-z6fv2\" (UID: \"2e9c98b4-2bbb-4602-895c-b5e75a84008e\") " pod="openshift-must-gather-jxb7c/must-gather-z6fv2" Feb 02 11:46:05 crc kubenswrapper[4782]: I0202 11:46:05.422241 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-jxb7c/must-gather-z6fv2" Feb 02 11:46:05 crc kubenswrapper[4782]: I0202 11:46:05.907850 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-jxb7c/must-gather-z6fv2"] Feb 02 11:46:06 crc kubenswrapper[4782]: I0202 11:46:06.226888 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jxb7c/must-gather-z6fv2" event={"ID":"2e9c98b4-2bbb-4602-895c-b5e75a84008e","Type":"ContainerStarted","Data":"350a73414a44139cf56d3f302a04e19f6fa7172fa0d1cce7dcf590751826e7f0"} Feb 02 11:46:12 crc kubenswrapper[4782]: I0202 11:46:12.307007 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jxb7c/must-gather-z6fv2" event={"ID":"2e9c98b4-2bbb-4602-895c-b5e75a84008e","Type":"ContainerStarted","Data":"834b1f7bf9c6e2dad97c21bc571b1ce20a5f5b1236b41db9e26abc8b7d95977d"} Feb 02 11:46:12 crc kubenswrapper[4782]: I0202 11:46:12.307633 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jxb7c/must-gather-z6fv2" event={"ID":"2e9c98b4-2bbb-4602-895c-b5e75a84008e","Type":"ContainerStarted","Data":"6e04e2bb498c17a3f4826768bd688615b9dc22330d5283bef168294c5a44a394"} Feb 02 11:46:12 crc kubenswrapper[4782]: I0202 11:46:12.333247 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-jxb7c/must-gather-z6fv2" podStartSLOduration=1.9516449420000002 podStartE2EDuration="7.333218773s" podCreationTimestamp="2026-02-02 11:46:05 +0000 UTC" firstStartedPulling="2026-02-02 11:46:05.915266142 +0000 UTC m=+4045.799458858" lastFinishedPulling="2026-02-02 11:46:11.296839973 +0000 UTC m=+4051.181032689" observedRunningTime="2026-02-02 11:46:12.321949628 +0000 UTC m=+4052.206142364" watchObservedRunningTime="2026-02-02 11:46:12.333218773 +0000 UTC m=+4052.217411489" Feb 02 11:46:13 crc kubenswrapper[4782]: I0202 11:46:13.820921 4782 scope.go:117] "RemoveContainer" containerID="0b5a1dc843aa5e29d94449712e54fcb7833201c00028ee85179759aa66981ec6" Feb 02 11:46:13 crc kubenswrapper[4782]: E0202 11:46:13.821541 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:46:16 crc kubenswrapper[4782]: E0202 11:46:16.863149 4782 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 38.102.83.147:57964->38.102.83.147:40373: read tcp 38.102.83.147:57964->38.102.83.147:40373: read: connection reset by peer Feb 02 11:46:19 crc kubenswrapper[4782]: I0202 11:46:19.171808 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-jxb7c/crc-debug-phbvf"] Feb 02 11:46:19 crc kubenswrapper[4782]: I0202 11:46:19.176858 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-jxb7c/crc-debug-phbvf" Feb 02 11:46:19 crc kubenswrapper[4782]: I0202 11:46:19.237448 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7ab47ea7-89b6-4fb1-b663-5e4e26a19975-host\") pod \"crc-debug-phbvf\" (UID: \"7ab47ea7-89b6-4fb1-b663-5e4e26a19975\") " pod="openshift-must-gather-jxb7c/crc-debug-phbvf" Feb 02 11:46:19 crc kubenswrapper[4782]: I0202 11:46:19.238160 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhtrj\" (UniqueName: \"kubernetes.io/projected/7ab47ea7-89b6-4fb1-b663-5e4e26a19975-kube-api-access-dhtrj\") pod \"crc-debug-phbvf\" (UID: \"7ab47ea7-89b6-4fb1-b663-5e4e26a19975\") " pod="openshift-must-gather-jxb7c/crc-debug-phbvf" Feb 02 11:46:19 crc kubenswrapper[4782]: I0202 11:46:19.340535 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dhtrj\" (UniqueName: \"kubernetes.io/projected/7ab47ea7-89b6-4fb1-b663-5e4e26a19975-kube-api-access-dhtrj\") pod \"crc-debug-phbvf\" (UID: \"7ab47ea7-89b6-4fb1-b663-5e4e26a19975\") " pod="openshift-must-gather-jxb7c/crc-debug-phbvf" Feb 02 11:46:19 crc kubenswrapper[4782]: I0202 11:46:19.340703 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7ab47ea7-89b6-4fb1-b663-5e4e26a19975-host\") pod \"crc-debug-phbvf\" (UID: \"7ab47ea7-89b6-4fb1-b663-5e4e26a19975\") " pod="openshift-must-gather-jxb7c/crc-debug-phbvf" Feb 02 11:46:19 crc kubenswrapper[4782]: I0202 11:46:19.341102 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7ab47ea7-89b6-4fb1-b663-5e4e26a19975-host\") pod \"crc-debug-phbvf\" (UID: \"7ab47ea7-89b6-4fb1-b663-5e4e26a19975\") " pod="openshift-must-gather-jxb7c/crc-debug-phbvf" Feb 02 11:46:19 crc kubenswrapper[4782]: I0202 11:46:19.371798 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dhtrj\" (UniqueName: \"kubernetes.io/projected/7ab47ea7-89b6-4fb1-b663-5e4e26a19975-kube-api-access-dhtrj\") pod \"crc-debug-phbvf\" (UID: \"7ab47ea7-89b6-4fb1-b663-5e4e26a19975\") " pod="openshift-must-gather-jxb7c/crc-debug-phbvf" Feb 02 11:46:19 crc kubenswrapper[4782]: I0202 11:46:19.503947 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-jxb7c/crc-debug-phbvf" Feb 02 11:46:20 crc kubenswrapper[4782]: I0202 11:46:20.372346 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jxb7c/crc-debug-phbvf" event={"ID":"7ab47ea7-89b6-4fb1-b663-5e4e26a19975","Type":"ContainerStarted","Data":"dec97e17aa8cc0816aab9519446752a1eb1800085d33b772d42030a1969af361"} Feb 02 11:46:25 crc kubenswrapper[4782]: I0202 11:46:25.582444 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-ghfkv"] Feb 02 11:46:25 crc kubenswrapper[4782]: I0202 11:46:25.585926 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ghfkv" Feb 02 11:46:25 crc kubenswrapper[4782]: I0202 11:46:25.627549 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ghfkv"] Feb 02 11:46:25 crc kubenswrapper[4782]: I0202 11:46:25.700856 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jnrs\" (UniqueName: \"kubernetes.io/projected/46a2322e-7ca2-41f9-90af-bca1a4a7c157-kube-api-access-4jnrs\") pod \"redhat-operators-ghfkv\" (UID: \"46a2322e-7ca2-41f9-90af-bca1a4a7c157\") " pod="openshift-marketplace/redhat-operators-ghfkv" Feb 02 11:46:25 crc kubenswrapper[4782]: I0202 11:46:25.700930 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46a2322e-7ca2-41f9-90af-bca1a4a7c157-utilities\") pod \"redhat-operators-ghfkv\" (UID: \"46a2322e-7ca2-41f9-90af-bca1a4a7c157\") " pod="openshift-marketplace/redhat-operators-ghfkv" Feb 02 11:46:25 crc kubenswrapper[4782]: I0202 11:46:25.701120 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46a2322e-7ca2-41f9-90af-bca1a4a7c157-catalog-content\") pod \"redhat-operators-ghfkv\" (UID: \"46a2322e-7ca2-41f9-90af-bca1a4a7c157\") " pod="openshift-marketplace/redhat-operators-ghfkv" Feb 02 11:46:25 crc kubenswrapper[4782]: I0202 11:46:25.802817 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4jnrs\" (UniqueName: \"kubernetes.io/projected/46a2322e-7ca2-41f9-90af-bca1a4a7c157-kube-api-access-4jnrs\") pod \"redhat-operators-ghfkv\" (UID: \"46a2322e-7ca2-41f9-90af-bca1a4a7c157\") " pod="openshift-marketplace/redhat-operators-ghfkv" Feb 02 11:46:25 crc kubenswrapper[4782]: I0202 11:46:25.802892 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46a2322e-7ca2-41f9-90af-bca1a4a7c157-utilities\") pod \"redhat-operators-ghfkv\" (UID: \"46a2322e-7ca2-41f9-90af-bca1a4a7c157\") " pod="openshift-marketplace/redhat-operators-ghfkv" Feb 02 11:46:25 crc kubenswrapper[4782]: I0202 11:46:25.803142 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46a2322e-7ca2-41f9-90af-bca1a4a7c157-catalog-content\") pod \"redhat-operators-ghfkv\" (UID: \"46a2322e-7ca2-41f9-90af-bca1a4a7c157\") " pod="openshift-marketplace/redhat-operators-ghfkv" Feb 02 11:46:25 crc kubenswrapper[4782]: I0202 11:46:25.805235 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46a2322e-7ca2-41f9-90af-bca1a4a7c157-catalog-content\") pod \"redhat-operators-ghfkv\" (UID: \"46a2322e-7ca2-41f9-90af-bca1a4a7c157\") " pod="openshift-marketplace/redhat-operators-ghfkv" Feb 02 11:46:25 crc kubenswrapper[4782]: I0202 11:46:25.807936 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46a2322e-7ca2-41f9-90af-bca1a4a7c157-utilities\") pod \"redhat-operators-ghfkv\" (UID: \"46a2322e-7ca2-41f9-90af-bca1a4a7c157\") " pod="openshift-marketplace/redhat-operators-ghfkv" Feb 02 11:46:25 crc kubenswrapper[4782]: I0202 11:46:25.836746 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-4jnrs\" (UniqueName: \"kubernetes.io/projected/46a2322e-7ca2-41f9-90af-bca1a4a7c157-kube-api-access-4jnrs\") pod \"redhat-operators-ghfkv\" (UID: \"46a2322e-7ca2-41f9-90af-bca1a4a7c157\") " pod="openshift-marketplace/redhat-operators-ghfkv" Feb 02 11:46:25 crc kubenswrapper[4782]: I0202 11:46:25.907153 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ghfkv" Feb 02 11:46:27 crc kubenswrapper[4782]: I0202 11:46:27.821023 4782 scope.go:117] "RemoveContainer" containerID="0b5a1dc843aa5e29d94449712e54fcb7833201c00028ee85179759aa66981ec6" Feb 02 11:46:27 crc kubenswrapper[4782]: E0202 11:46:27.821816 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:46:34 crc kubenswrapper[4782]: I0202 11:46:34.257491 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ghfkv"] Feb 02 11:46:34 crc kubenswrapper[4782]: W0202 11:46:34.263480 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod46a2322e_7ca2_41f9_90af_bca1a4a7c157.slice/crio-c06defc817045873e2732bd8892085168788497887185e9fb0e07e7f62b48ec3 WatchSource:0}: Error finding container c06defc817045873e2732bd8892085168788497887185e9fb0e07e7f62b48ec3: Status 404 returned error can't find the container with id c06defc817045873e2732bd8892085168788497887185e9fb0e07e7f62b48ec3 Feb 02 11:46:34 crc kubenswrapper[4782]: I0202 11:46:34.532277 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jxb7c/crc-debug-phbvf" event={"ID":"7ab47ea7-89b6-4fb1-b663-5e4e26a19975","Type":"ContainerStarted","Data":"703fb5905ac3fac00de5179c176a56d6c1ad31055520af949a74d491249a03d8"} Feb 02 11:46:34 crc kubenswrapper[4782]: I0202 11:46:34.534497 4782 generic.go:334] "Generic (PLEG): container finished" podID="46a2322e-7ca2-41f9-90af-bca1a4a7c157" containerID="7a907be3c666b2b37f0f47822c1f99e06b350871caf9dd81a1bd3196d758a463" exitCode=0 Feb 02 11:46:34 crc kubenswrapper[4782]: I0202 11:46:34.534563 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ghfkv" event={"ID":"46a2322e-7ca2-41f9-90af-bca1a4a7c157","Type":"ContainerDied","Data":"7a907be3c666b2b37f0f47822c1f99e06b350871caf9dd81a1bd3196d758a463"} Feb 02 11:46:34 crc kubenswrapper[4782]: I0202 11:46:34.534606 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ghfkv" event={"ID":"46a2322e-7ca2-41f9-90af-bca1a4a7c157","Type":"ContainerStarted","Data":"c06defc817045873e2732bd8892085168788497887185e9fb0e07e7f62b48ec3"} Feb 02 11:46:34 crc kubenswrapper[4782]: I0202 11:46:34.554591 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-jxb7c/crc-debug-phbvf" podStartSLOduration=1.430717584 podStartE2EDuration="15.554569485s" podCreationTimestamp="2026-02-02 11:46:19 +0000 UTC" firstStartedPulling="2026-02-02 11:46:19.54877563 +0000 UTC m=+4059.432968346" lastFinishedPulling="2026-02-02 11:46:33.672627521 +0000 UTC m=+4073.556820247" observedRunningTime="2026-02-02 
11:46:34.545630898 +0000 UTC m=+4074.429823624" watchObservedRunningTime="2026-02-02 11:46:34.554569485 +0000 UTC m=+4074.438762201" Feb 02 11:46:36 crc kubenswrapper[4782]: I0202 11:46:36.554965 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ghfkv" event={"ID":"46a2322e-7ca2-41f9-90af-bca1a4a7c157","Type":"ContainerStarted","Data":"1aeda40de7d8e1240fab34ca169576dbc672ec2fdd84a6c2c6ea829662df6243"} Feb 02 11:46:42 crc kubenswrapper[4782]: I0202 11:46:42.619455 4782 generic.go:334] "Generic (PLEG): container finished" podID="46a2322e-7ca2-41f9-90af-bca1a4a7c157" containerID="1aeda40de7d8e1240fab34ca169576dbc672ec2fdd84a6c2c6ea829662df6243" exitCode=0 Feb 02 11:46:42 crc kubenswrapper[4782]: I0202 11:46:42.619535 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ghfkv" event={"ID":"46a2322e-7ca2-41f9-90af-bca1a4a7c157","Type":"ContainerDied","Data":"1aeda40de7d8e1240fab34ca169576dbc672ec2fdd84a6c2c6ea829662df6243"} Feb 02 11:46:42 crc kubenswrapper[4782]: I0202 11:46:42.822552 4782 scope.go:117] "RemoveContainer" containerID="0b5a1dc843aa5e29d94449712e54fcb7833201c00028ee85179759aa66981ec6" Feb 02 11:46:42 crc kubenswrapper[4782]: E0202 11:46:42.822877 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:46:49 crc kubenswrapper[4782]: I0202 11:46:49.701501 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ghfkv" event={"ID":"46a2322e-7ca2-41f9-90af-bca1a4a7c157","Type":"ContainerStarted","Data":"1af8e65d8c1625f20711969808b519726cb7dbec4573639b60716444c22f1ce8"} Feb 02 11:46:49 crc kubenswrapper[4782]: I0202 11:46:49.728226 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-ghfkv" podStartSLOduration=10.704945434999999 podStartE2EDuration="24.728200579s" podCreationTimestamp="2026-02-02 11:46:25 +0000 UTC" firstStartedPulling="2026-02-02 11:46:34.540236903 +0000 UTC m=+4074.424429619" lastFinishedPulling="2026-02-02 11:46:48.563492047 +0000 UTC m=+4088.447684763" observedRunningTime="2026-02-02 11:46:49.724228265 +0000 UTC m=+4089.608420991" watchObservedRunningTime="2026-02-02 11:46:49.728200579 +0000 UTC m=+4089.612393305" Feb 02 11:46:54 crc kubenswrapper[4782]: I0202 11:46:54.822485 4782 scope.go:117] "RemoveContainer" containerID="0b5a1dc843aa5e29d94449712e54fcb7833201c00028ee85179759aa66981ec6" Feb 02 11:46:54 crc kubenswrapper[4782]: E0202 11:46:54.824004 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:46:55 crc kubenswrapper[4782]: I0202 11:46:55.909935 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-ghfkv" Feb 02 11:46:55 crc 
kubenswrapper[4782]: I0202 11:46:55.910329 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-ghfkv" Feb 02 11:46:57 crc kubenswrapper[4782]: I0202 11:46:57.334473 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-ghfkv" podUID="46a2322e-7ca2-41f9-90af-bca1a4a7c157" containerName="registry-server" probeResult="failure" output=< Feb 02 11:46:57 crc kubenswrapper[4782]: timeout: failed to connect service ":50051" within 1s Feb 02 11:46:57 crc kubenswrapper[4782]: > Feb 02 11:47:06 crc kubenswrapper[4782]: I0202 11:47:06.984599 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-ghfkv" podUID="46a2322e-7ca2-41f9-90af-bca1a4a7c157" containerName="registry-server" probeResult="failure" output=< Feb 02 11:47:06 crc kubenswrapper[4782]: timeout: failed to connect service ":50051" within 1s Feb 02 11:47:06 crc kubenswrapper[4782]: > Feb 02 11:47:09 crc kubenswrapper[4782]: I0202 11:47:09.821064 4782 scope.go:117] "RemoveContainer" containerID="0b5a1dc843aa5e29d94449712e54fcb7833201c00028ee85179759aa66981ec6" Feb 02 11:47:09 crc kubenswrapper[4782]: E0202 11:47:09.821924 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:47:14 crc kubenswrapper[4782]: I0202 11:47:14.531941 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-hxxbq"] Feb 02 11:47:14 crc kubenswrapper[4782]: I0202 11:47:14.534241 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hxxbq" Feb 02 11:47:14 crc kubenswrapper[4782]: I0202 11:47:14.555873 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hxxbq"] Feb 02 11:47:14 crc kubenswrapper[4782]: I0202 11:47:14.627203 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97658eff-1922-4b30-b7b4-edc0b5bc31e8-catalog-content\") pod \"community-operators-hxxbq\" (UID: \"97658eff-1922-4b30-b7b4-edc0b5bc31e8\") " pod="openshift-marketplace/community-operators-hxxbq" Feb 02 11:47:14 crc kubenswrapper[4782]: I0202 11:47:14.627269 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kgcsk\" (UniqueName: \"kubernetes.io/projected/97658eff-1922-4b30-b7b4-edc0b5bc31e8-kube-api-access-kgcsk\") pod \"community-operators-hxxbq\" (UID: \"97658eff-1922-4b30-b7b4-edc0b5bc31e8\") " pod="openshift-marketplace/community-operators-hxxbq" Feb 02 11:47:14 crc kubenswrapper[4782]: I0202 11:47:14.627377 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97658eff-1922-4b30-b7b4-edc0b5bc31e8-utilities\") pod \"community-operators-hxxbq\" (UID: \"97658eff-1922-4b30-b7b4-edc0b5bc31e8\") " pod="openshift-marketplace/community-operators-hxxbq" Feb 02 11:47:14 crc kubenswrapper[4782]: I0202 11:47:14.729140 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97658eff-1922-4b30-b7b4-edc0b5bc31e8-catalog-content\") pod \"community-operators-hxxbq\" (UID: \"97658eff-1922-4b30-b7b4-edc0b5bc31e8\") " pod="openshift-marketplace/community-operators-hxxbq" Feb 02 11:47:14 crc kubenswrapper[4782]: I0202 11:47:14.729480 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kgcsk\" (UniqueName: \"kubernetes.io/projected/97658eff-1922-4b30-b7b4-edc0b5bc31e8-kube-api-access-kgcsk\") pod \"community-operators-hxxbq\" (UID: \"97658eff-1922-4b30-b7b4-edc0b5bc31e8\") " pod="openshift-marketplace/community-operators-hxxbq" Feb 02 11:47:14 crc kubenswrapper[4782]: I0202 11:47:14.729610 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97658eff-1922-4b30-b7b4-edc0b5bc31e8-utilities\") pod \"community-operators-hxxbq\" (UID: \"97658eff-1922-4b30-b7b4-edc0b5bc31e8\") " pod="openshift-marketplace/community-operators-hxxbq" Feb 02 11:47:14 crc kubenswrapper[4782]: I0202 11:47:14.729840 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97658eff-1922-4b30-b7b4-edc0b5bc31e8-catalog-content\") pod \"community-operators-hxxbq\" (UID: \"97658eff-1922-4b30-b7b4-edc0b5bc31e8\") " pod="openshift-marketplace/community-operators-hxxbq" Feb 02 11:47:14 crc kubenswrapper[4782]: I0202 11:47:14.730112 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97658eff-1922-4b30-b7b4-edc0b5bc31e8-utilities\") pod \"community-operators-hxxbq\" (UID: \"97658eff-1922-4b30-b7b4-edc0b5bc31e8\") " pod="openshift-marketplace/community-operators-hxxbq" Feb 02 11:47:14 crc kubenswrapper[4782]: I0202 11:47:14.764461 4782 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-kgcsk\" (UniqueName: \"kubernetes.io/projected/97658eff-1922-4b30-b7b4-edc0b5bc31e8-kube-api-access-kgcsk\") pod \"community-operators-hxxbq\" (UID: \"97658eff-1922-4b30-b7b4-edc0b5bc31e8\") " pod="openshift-marketplace/community-operators-hxxbq" Feb 02 11:47:14 crc kubenswrapper[4782]: I0202 11:47:14.856309 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hxxbq" Feb 02 11:47:15 crc kubenswrapper[4782]: I0202 11:47:15.566073 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hxxbq"] Feb 02 11:47:15 crc kubenswrapper[4782]: I0202 11:47:15.973702 4782 generic.go:334] "Generic (PLEG): container finished" podID="97658eff-1922-4b30-b7b4-edc0b5bc31e8" containerID="b3701d118465d1da24b26430b890081c39992d316821ccffb985ea29413a049f" exitCode=0 Feb 02 11:47:15 crc kubenswrapper[4782]: I0202 11:47:15.973871 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hxxbq" event={"ID":"97658eff-1922-4b30-b7b4-edc0b5bc31e8","Type":"ContainerDied","Data":"b3701d118465d1da24b26430b890081c39992d316821ccffb985ea29413a049f"} Feb 02 11:47:15 crc kubenswrapper[4782]: I0202 11:47:15.974001 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hxxbq" event={"ID":"97658eff-1922-4b30-b7b4-edc0b5bc31e8","Type":"ContainerStarted","Data":"b170233de7d0c6ed8e5161b038fbec63ac00d1c9bc9cf57ca0a7cd8f776a230b"} Feb 02 11:47:16 crc kubenswrapper[4782]: I0202 11:47:16.986532 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-ghfkv" podUID="46a2322e-7ca2-41f9-90af-bca1a4a7c157" containerName="registry-server" probeResult="failure" output=< Feb 02 11:47:16 crc kubenswrapper[4782]: timeout: failed to connect service ":50051" within 1s Feb 02 11:47:16 crc kubenswrapper[4782]: > Feb 02 11:47:17 crc kubenswrapper[4782]: I0202 11:47:17.994928 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hxxbq" event={"ID":"97658eff-1922-4b30-b7b4-edc0b5bc31e8","Type":"ContainerStarted","Data":"1c3496aa31f0a716eeaa0e4a451ceb1e0c8382e8c166d215d6ede5e9aa49f3dc"} Feb 02 11:47:20 crc kubenswrapper[4782]: I0202 11:47:20.024297 4782 generic.go:334] "Generic (PLEG): container finished" podID="97658eff-1922-4b30-b7b4-edc0b5bc31e8" containerID="1c3496aa31f0a716eeaa0e4a451ceb1e0c8382e8c166d215d6ede5e9aa49f3dc" exitCode=0 Feb 02 11:47:20 crc kubenswrapper[4782]: I0202 11:47:20.024490 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hxxbq" event={"ID":"97658eff-1922-4b30-b7b4-edc0b5bc31e8","Type":"ContainerDied","Data":"1c3496aa31f0a716eeaa0e4a451ceb1e0c8382e8c166d215d6ede5e9aa49f3dc"} Feb 02 11:47:20 crc kubenswrapper[4782]: I0202 11:47:20.830378 4782 scope.go:117] "RemoveContainer" containerID="0b5a1dc843aa5e29d94449712e54fcb7833201c00028ee85179759aa66981ec6" Feb 02 11:47:20 crc kubenswrapper[4782]: E0202 11:47:20.831002 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" 
podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:47:22 crc kubenswrapper[4782]: I0202 11:47:22.061792 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hxxbq" event={"ID":"97658eff-1922-4b30-b7b4-edc0b5bc31e8","Type":"ContainerStarted","Data":"205d8687bdfbda89057d5a1e1a951568315f6a2ae62187160d08cabe73be07e5"} Feb 02 11:47:22 crc kubenswrapper[4782]: I0202 11:47:22.110477 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-hxxbq" podStartSLOduration=3.298884194 podStartE2EDuration="8.11044398s" podCreationTimestamp="2026-02-02 11:47:14 +0000 UTC" firstStartedPulling="2026-02-02 11:47:15.975810354 +0000 UTC m=+4115.860003070" lastFinishedPulling="2026-02-02 11:47:20.78737014 +0000 UTC m=+4120.671562856" observedRunningTime="2026-02-02 11:47:22.095618333 +0000 UTC m=+4121.979811069" watchObservedRunningTime="2026-02-02 11:47:22.11044398 +0000 UTC m=+4121.994636696" Feb 02 11:47:24 crc kubenswrapper[4782]: I0202 11:47:24.856867 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-hxxbq" Feb 02 11:47:24 crc kubenswrapper[4782]: I0202 11:47:24.857231 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-hxxbq" Feb 02 11:47:25 crc kubenswrapper[4782]: I0202 11:47:25.927231 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-hxxbq" podUID="97658eff-1922-4b30-b7b4-edc0b5bc31e8" containerName="registry-server" probeResult="failure" output=< Feb 02 11:47:25 crc kubenswrapper[4782]: timeout: failed to connect service ":50051" within 1s Feb 02 11:47:25 crc kubenswrapper[4782]: > Feb 02 11:47:26 crc kubenswrapper[4782]: I0202 11:47:26.963560 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-ghfkv" podUID="46a2322e-7ca2-41f9-90af-bca1a4a7c157" containerName="registry-server" probeResult="failure" output=< Feb 02 11:47:26 crc kubenswrapper[4782]: timeout: failed to connect service ":50051" within 1s Feb 02 11:47:26 crc kubenswrapper[4782]: > Feb 02 11:47:30 crc kubenswrapper[4782]: I0202 11:47:30.180524 4782 generic.go:334] "Generic (PLEG): container finished" podID="7ab47ea7-89b6-4fb1-b663-5e4e26a19975" containerID="703fb5905ac3fac00de5179c176a56d6c1ad31055520af949a74d491249a03d8" exitCode=0 Feb 02 11:47:30 crc kubenswrapper[4782]: I0202 11:47:30.181076 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jxb7c/crc-debug-phbvf" event={"ID":"7ab47ea7-89b6-4fb1-b663-5e4e26a19975","Type":"ContainerDied","Data":"703fb5905ac3fac00de5179c176a56d6c1ad31055520af949a74d491249a03d8"} Feb 02 11:47:31 crc kubenswrapper[4782]: I0202 11:47:31.320206 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-jxb7c/crc-debug-phbvf" Feb 02 11:47:31 crc kubenswrapper[4782]: I0202 11:47:31.358086 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-jxb7c/crc-debug-phbvf"] Feb 02 11:47:31 crc kubenswrapper[4782]: I0202 11:47:31.366425 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-jxb7c/crc-debug-phbvf"] Feb 02 11:47:31 crc kubenswrapper[4782]: I0202 11:47:31.420738 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dhtrj\" (UniqueName: \"kubernetes.io/projected/7ab47ea7-89b6-4fb1-b663-5e4e26a19975-kube-api-access-dhtrj\") pod \"7ab47ea7-89b6-4fb1-b663-5e4e26a19975\" (UID: \"7ab47ea7-89b6-4fb1-b663-5e4e26a19975\") " Feb 02 11:47:31 crc kubenswrapper[4782]: I0202 11:47:31.421055 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7ab47ea7-89b6-4fb1-b663-5e4e26a19975-host\") pod \"7ab47ea7-89b6-4fb1-b663-5e4e26a19975\" (UID: \"7ab47ea7-89b6-4fb1-b663-5e4e26a19975\") " Feb 02 11:47:31 crc kubenswrapper[4782]: I0202 11:47:31.421145 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7ab47ea7-89b6-4fb1-b663-5e4e26a19975-host" (OuterVolumeSpecName: "host") pod "7ab47ea7-89b6-4fb1-b663-5e4e26a19975" (UID: "7ab47ea7-89b6-4fb1-b663-5e4e26a19975"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 11:47:31 crc kubenswrapper[4782]: I0202 11:47:31.421666 4782 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7ab47ea7-89b6-4fb1-b663-5e4e26a19975-host\") on node \"crc\" DevicePath \"\"" Feb 02 11:47:31 crc kubenswrapper[4782]: I0202 11:47:31.427779 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ab47ea7-89b6-4fb1-b663-5e4e26a19975-kube-api-access-dhtrj" (OuterVolumeSpecName: "kube-api-access-dhtrj") pod "7ab47ea7-89b6-4fb1-b663-5e4e26a19975" (UID: "7ab47ea7-89b6-4fb1-b663-5e4e26a19975"). InnerVolumeSpecName "kube-api-access-dhtrj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:47:31 crc kubenswrapper[4782]: I0202 11:47:31.523472 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dhtrj\" (UniqueName: \"kubernetes.io/projected/7ab47ea7-89b6-4fb1-b663-5e4e26a19975-kube-api-access-dhtrj\") on node \"crc\" DevicePath \"\"" Feb 02 11:47:31 crc kubenswrapper[4782]: I0202 11:47:31.821518 4782 scope.go:117] "RemoveContainer" containerID="0b5a1dc843aa5e29d94449712e54fcb7833201c00028ee85179759aa66981ec6" Feb 02 11:47:31 crc kubenswrapper[4782]: E0202 11:47:31.822107 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:47:32 crc kubenswrapper[4782]: I0202 11:47:32.204026 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dec97e17aa8cc0816aab9519446752a1eb1800085d33b772d42030a1969af361" Feb 02 11:47:32 crc kubenswrapper[4782]: I0202 11:47:32.204134 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-jxb7c/crc-debug-phbvf" Feb 02 11:47:32 crc kubenswrapper[4782]: I0202 11:47:32.593193 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-jxb7c/crc-debug-68p8l"] Feb 02 11:47:32 crc kubenswrapper[4782]: E0202 11:47:32.593682 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ab47ea7-89b6-4fb1-b663-5e4e26a19975" containerName="container-00" Feb 02 11:47:32 crc kubenswrapper[4782]: I0202 11:47:32.593699 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ab47ea7-89b6-4fb1-b663-5e4e26a19975" containerName="container-00" Feb 02 11:47:32 crc kubenswrapper[4782]: I0202 11:47:32.593933 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ab47ea7-89b6-4fb1-b663-5e4e26a19975" containerName="container-00" Feb 02 11:47:32 crc kubenswrapper[4782]: I0202 11:47:32.594770 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-jxb7c/crc-debug-68p8l" Feb 02 11:47:32 crc kubenswrapper[4782]: I0202 11:47:32.647882 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1b8d9241-1979-4510-b308-3fb134dc12fe-host\") pod \"crc-debug-68p8l\" (UID: \"1b8d9241-1979-4510-b308-3fb134dc12fe\") " pod="openshift-must-gather-jxb7c/crc-debug-68p8l" Feb 02 11:47:32 crc kubenswrapper[4782]: I0202 11:47:32.648186 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwk2s\" (UniqueName: \"kubernetes.io/projected/1b8d9241-1979-4510-b308-3fb134dc12fe-kube-api-access-jwk2s\") pod \"crc-debug-68p8l\" (UID: \"1b8d9241-1979-4510-b308-3fb134dc12fe\") " pod="openshift-must-gather-jxb7c/crc-debug-68p8l" Feb 02 11:47:32 crc kubenswrapper[4782]: I0202 11:47:32.750493 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1b8d9241-1979-4510-b308-3fb134dc12fe-host\") pod \"crc-debug-68p8l\" (UID: \"1b8d9241-1979-4510-b308-3fb134dc12fe\") " pod="openshift-must-gather-jxb7c/crc-debug-68p8l" Feb 02 11:47:32 crc kubenswrapper[4782]: I0202 11:47:32.750622 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jwk2s\" (UniqueName: \"kubernetes.io/projected/1b8d9241-1979-4510-b308-3fb134dc12fe-kube-api-access-jwk2s\") pod \"crc-debug-68p8l\" (UID: \"1b8d9241-1979-4510-b308-3fb134dc12fe\") " pod="openshift-must-gather-jxb7c/crc-debug-68p8l" Feb 02 11:47:32 crc kubenswrapper[4782]: I0202 11:47:32.750823 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1b8d9241-1979-4510-b308-3fb134dc12fe-host\") pod \"crc-debug-68p8l\" (UID: \"1b8d9241-1979-4510-b308-3fb134dc12fe\") " pod="openshift-must-gather-jxb7c/crc-debug-68p8l" Feb 02 11:47:32 crc kubenswrapper[4782]: I0202 11:47:32.777885 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jwk2s\" (UniqueName: \"kubernetes.io/projected/1b8d9241-1979-4510-b308-3fb134dc12fe-kube-api-access-jwk2s\") pod \"crc-debug-68p8l\" (UID: \"1b8d9241-1979-4510-b308-3fb134dc12fe\") " pod="openshift-must-gather-jxb7c/crc-debug-68p8l" Feb 02 11:47:32 crc kubenswrapper[4782]: I0202 11:47:32.833204 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ab47ea7-89b6-4fb1-b663-5e4e26a19975" 
path="/var/lib/kubelet/pods/7ab47ea7-89b6-4fb1-b663-5e4e26a19975/volumes" Feb 02 11:47:32 crc kubenswrapper[4782]: I0202 11:47:32.918822 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-jxb7c/crc-debug-68p8l" Feb 02 11:47:32 crc kubenswrapper[4782]: W0202 11:47:32.961003 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1b8d9241_1979_4510_b308_3fb134dc12fe.slice/crio-86f19fb9a70e9e0c17e41758e8179772d99c0dbe300bb76056aed4a3b587c2aa WatchSource:0}: Error finding container 86f19fb9a70e9e0c17e41758e8179772d99c0dbe300bb76056aed4a3b587c2aa: Status 404 returned error can't find the container with id 86f19fb9a70e9e0c17e41758e8179772d99c0dbe300bb76056aed4a3b587c2aa Feb 02 11:47:33 crc kubenswrapper[4782]: I0202 11:47:33.213970 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jxb7c/crc-debug-68p8l" event={"ID":"1b8d9241-1979-4510-b308-3fb134dc12fe","Type":"ContainerStarted","Data":"c42adbb6d99a6a080380d109dd83de68904cf76a54f046bc87430e3aee33f292"} Feb 02 11:47:33 crc kubenswrapper[4782]: I0202 11:47:33.214306 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jxb7c/crc-debug-68p8l" event={"ID":"1b8d9241-1979-4510-b308-3fb134dc12fe","Type":"ContainerStarted","Data":"86f19fb9a70e9e0c17e41758e8179772d99c0dbe300bb76056aed4a3b587c2aa"} Feb 02 11:47:33 crc kubenswrapper[4782]: I0202 11:47:33.234530 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-jxb7c/crc-debug-68p8l" podStartSLOduration=1.234506191 podStartE2EDuration="1.234506191s" podCreationTimestamp="2026-02-02 11:47:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 11:47:33.227303294 +0000 UTC m=+4133.111496020" watchObservedRunningTime="2026-02-02 11:47:33.234506191 +0000 UTC m=+4133.118698907" Feb 02 11:47:34 crc kubenswrapper[4782]: I0202 11:47:34.230245 4782 generic.go:334] "Generic (PLEG): container finished" podID="1b8d9241-1979-4510-b308-3fb134dc12fe" containerID="c42adbb6d99a6a080380d109dd83de68904cf76a54f046bc87430e3aee33f292" exitCode=0 Feb 02 11:47:34 crc kubenswrapper[4782]: I0202 11:47:34.230284 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jxb7c/crc-debug-68p8l" event={"ID":"1b8d9241-1979-4510-b308-3fb134dc12fe","Type":"ContainerDied","Data":"c42adbb6d99a6a080380d109dd83de68904cf76a54f046bc87430e3aee33f292"} Feb 02 11:47:35 crc kubenswrapper[4782]: I0202 11:47:35.335402 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-jxb7c/crc-debug-68p8l" Feb 02 11:47:35 crc kubenswrapper[4782]: I0202 11:47:35.366591 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-jxb7c/crc-debug-68p8l"] Feb 02 11:47:35 crc kubenswrapper[4782]: I0202 11:47:35.381579 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-jxb7c/crc-debug-68p8l"] Feb 02 11:47:35 crc kubenswrapper[4782]: I0202 11:47:35.407940 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1b8d9241-1979-4510-b308-3fb134dc12fe-host\") pod \"1b8d9241-1979-4510-b308-3fb134dc12fe\" (UID: \"1b8d9241-1979-4510-b308-3fb134dc12fe\") " Feb 02 11:47:35 crc kubenswrapper[4782]: I0202 11:47:35.408040 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1b8d9241-1979-4510-b308-3fb134dc12fe-host" (OuterVolumeSpecName: "host") pod "1b8d9241-1979-4510-b308-3fb134dc12fe" (UID: "1b8d9241-1979-4510-b308-3fb134dc12fe"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 11:47:35 crc kubenswrapper[4782]: I0202 11:47:35.408171 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jwk2s\" (UniqueName: \"kubernetes.io/projected/1b8d9241-1979-4510-b308-3fb134dc12fe-kube-api-access-jwk2s\") pod \"1b8d9241-1979-4510-b308-3fb134dc12fe\" (UID: \"1b8d9241-1979-4510-b308-3fb134dc12fe\") " Feb 02 11:47:35 crc kubenswrapper[4782]: I0202 11:47:35.408730 4782 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1b8d9241-1979-4510-b308-3fb134dc12fe-host\") on node \"crc\" DevicePath \"\"" Feb 02 11:47:35 crc kubenswrapper[4782]: I0202 11:47:35.415146 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b8d9241-1979-4510-b308-3fb134dc12fe-kube-api-access-jwk2s" (OuterVolumeSpecName: "kube-api-access-jwk2s") pod "1b8d9241-1979-4510-b308-3fb134dc12fe" (UID: "1b8d9241-1979-4510-b308-3fb134dc12fe"). InnerVolumeSpecName "kube-api-access-jwk2s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:47:35 crc kubenswrapper[4782]: I0202 11:47:35.510965 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jwk2s\" (UniqueName: \"kubernetes.io/projected/1b8d9241-1979-4510-b308-3fb134dc12fe-kube-api-access-jwk2s\") on node \"crc\" DevicePath \"\"" Feb 02 11:47:35 crc kubenswrapper[4782]: I0202 11:47:35.914143 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-hxxbq" podUID="97658eff-1922-4b30-b7b4-edc0b5bc31e8" containerName="registry-server" probeResult="failure" output=< Feb 02 11:47:35 crc kubenswrapper[4782]: timeout: failed to connect service ":50051" within 1s Feb 02 11:47:35 crc kubenswrapper[4782]: > Feb 02 11:47:36 crc kubenswrapper[4782]: I0202 11:47:36.004006 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-ghfkv" Feb 02 11:47:36 crc kubenswrapper[4782]: I0202 11:47:36.079073 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-ghfkv" Feb 02 11:47:36 crc kubenswrapper[4782]: I0202 11:47:36.246463 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="86f19fb9a70e9e0c17e41758e8179772d99c0dbe300bb76056aed4a3b587c2aa" Feb 02 11:47:36 crc kubenswrapper[4782]: I0202 11:47:36.246476 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-jxb7c/crc-debug-68p8l" Feb 02 11:47:36 crc kubenswrapper[4782]: I0202 11:47:36.253736 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ghfkv"] Feb 02 11:47:36 crc kubenswrapper[4782]: I0202 11:47:36.782618 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-jxb7c/crc-debug-4m2mp"] Feb 02 11:47:36 crc kubenswrapper[4782]: E0202 11:47:36.784847 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b8d9241-1979-4510-b308-3fb134dc12fe" containerName="container-00" Feb 02 11:47:36 crc kubenswrapper[4782]: I0202 11:47:36.784980 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b8d9241-1979-4510-b308-3fb134dc12fe" containerName="container-00" Feb 02 11:47:36 crc kubenswrapper[4782]: I0202 11:47:36.785336 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b8d9241-1979-4510-b308-3fb134dc12fe" containerName="container-00" Feb 02 11:47:36 crc kubenswrapper[4782]: I0202 11:47:36.786281 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-jxb7c/crc-debug-4m2mp" Feb 02 11:47:36 crc kubenswrapper[4782]: I0202 11:47:36.832428 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b8d9241-1979-4510-b308-3fb134dc12fe" path="/var/lib/kubelet/pods/1b8d9241-1979-4510-b308-3fb134dc12fe/volumes" Feb 02 11:47:36 crc kubenswrapper[4782]: I0202 11:47:36.837110 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmvgq\" (UniqueName: \"kubernetes.io/projected/b4cb3d8a-7317-496a-9944-ecca54fd2e5c-kube-api-access-tmvgq\") pod \"crc-debug-4m2mp\" (UID: \"b4cb3d8a-7317-496a-9944-ecca54fd2e5c\") " pod="openshift-must-gather-jxb7c/crc-debug-4m2mp" Feb 02 11:47:36 crc kubenswrapper[4782]: I0202 11:47:36.837346 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b4cb3d8a-7317-496a-9944-ecca54fd2e5c-host\") pod \"crc-debug-4m2mp\" (UID: \"b4cb3d8a-7317-496a-9944-ecca54fd2e5c\") " pod="openshift-must-gather-jxb7c/crc-debug-4m2mp" Feb 02 11:47:36 crc kubenswrapper[4782]: I0202 11:47:36.939477 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tmvgq\" (UniqueName: \"kubernetes.io/projected/b4cb3d8a-7317-496a-9944-ecca54fd2e5c-kube-api-access-tmvgq\") pod \"crc-debug-4m2mp\" (UID: \"b4cb3d8a-7317-496a-9944-ecca54fd2e5c\") " pod="openshift-must-gather-jxb7c/crc-debug-4m2mp" Feb 02 11:47:36 crc kubenswrapper[4782]: I0202 11:47:36.939532 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b4cb3d8a-7317-496a-9944-ecca54fd2e5c-host\") pod \"crc-debug-4m2mp\" (UID: \"b4cb3d8a-7317-496a-9944-ecca54fd2e5c\") " pod="openshift-must-gather-jxb7c/crc-debug-4m2mp" Feb 02 11:47:36 crc kubenswrapper[4782]: I0202 11:47:36.940712 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b4cb3d8a-7317-496a-9944-ecca54fd2e5c-host\") pod \"crc-debug-4m2mp\" (UID: \"b4cb3d8a-7317-496a-9944-ecca54fd2e5c\") " pod="openshift-must-gather-jxb7c/crc-debug-4m2mp" Feb 02 11:47:37 crc kubenswrapper[4782]: I0202 11:47:37.254339 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-ghfkv" podUID="46a2322e-7ca2-41f9-90af-bca1a4a7c157" containerName="registry-server" containerID="cri-o://1af8e65d8c1625f20711969808b519726cb7dbec4573639b60716444c22f1ce8" gracePeriod=2 Feb 02 11:47:37 crc kubenswrapper[4782]: I0202 11:47:37.466693 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tmvgq\" (UniqueName: \"kubernetes.io/projected/b4cb3d8a-7317-496a-9944-ecca54fd2e5c-kube-api-access-tmvgq\") pod \"crc-debug-4m2mp\" (UID: \"b4cb3d8a-7317-496a-9944-ecca54fd2e5c\") " pod="openshift-must-gather-jxb7c/crc-debug-4m2mp" Feb 02 11:47:37 crc kubenswrapper[4782]: I0202 11:47:37.705006 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-jxb7c/crc-debug-4m2mp" Feb 02 11:47:38 crc kubenswrapper[4782]: I0202 11:47:38.348618 4782 generic.go:334] "Generic (PLEG): container finished" podID="46a2322e-7ca2-41f9-90af-bca1a4a7c157" containerID="1af8e65d8c1625f20711969808b519726cb7dbec4573639b60716444c22f1ce8" exitCode=0 Feb 02 11:47:38 crc kubenswrapper[4782]: I0202 11:47:38.348765 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ghfkv" event={"ID":"46a2322e-7ca2-41f9-90af-bca1a4a7c157","Type":"ContainerDied","Data":"1af8e65d8c1625f20711969808b519726cb7dbec4573639b60716444c22f1ce8"} Feb 02 11:47:38 crc kubenswrapper[4782]: I0202 11:47:38.362767 4782 generic.go:334] "Generic (PLEG): container finished" podID="b4cb3d8a-7317-496a-9944-ecca54fd2e5c" containerID="c9a3d75498664651067c9e5dff908f0096c2e0acbb1581811fb18a849226abca" exitCode=0 Feb 02 11:47:38 crc kubenswrapper[4782]: I0202 11:47:38.362806 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jxb7c/crc-debug-4m2mp" event={"ID":"b4cb3d8a-7317-496a-9944-ecca54fd2e5c","Type":"ContainerDied","Data":"c9a3d75498664651067c9e5dff908f0096c2e0acbb1581811fb18a849226abca"} Feb 02 11:47:38 crc kubenswrapper[4782]: I0202 11:47:38.362831 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jxb7c/crc-debug-4m2mp" event={"ID":"b4cb3d8a-7317-496a-9944-ecca54fd2e5c","Type":"ContainerStarted","Data":"a8bb992f7f3bded0ed4dd05e59e214691573d431a9a3ac0ef86213ff790227cc"} Feb 02 11:47:38 crc kubenswrapper[4782]: I0202 11:47:38.427815 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-jxb7c/crc-debug-4m2mp"] Feb 02 11:47:38 crc kubenswrapper[4782]: I0202 11:47:38.444211 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-jxb7c/crc-debug-4m2mp"] Feb 02 11:47:38 crc kubenswrapper[4782]: I0202 11:47:38.673266 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ghfkv" Feb 02 11:47:38 crc kubenswrapper[4782]: I0202 11:47:38.838881 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4jnrs\" (UniqueName: \"kubernetes.io/projected/46a2322e-7ca2-41f9-90af-bca1a4a7c157-kube-api-access-4jnrs\") pod \"46a2322e-7ca2-41f9-90af-bca1a4a7c157\" (UID: \"46a2322e-7ca2-41f9-90af-bca1a4a7c157\") " Feb 02 11:47:38 crc kubenswrapper[4782]: I0202 11:47:38.839023 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46a2322e-7ca2-41f9-90af-bca1a4a7c157-catalog-content\") pod \"46a2322e-7ca2-41f9-90af-bca1a4a7c157\" (UID: \"46a2322e-7ca2-41f9-90af-bca1a4a7c157\") " Feb 02 11:47:38 crc kubenswrapper[4782]: I0202 11:47:38.839053 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46a2322e-7ca2-41f9-90af-bca1a4a7c157-utilities\") pod \"46a2322e-7ca2-41f9-90af-bca1a4a7c157\" (UID: \"46a2322e-7ca2-41f9-90af-bca1a4a7c157\") " Feb 02 11:47:38 crc kubenswrapper[4782]: I0202 11:47:38.840271 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/46a2322e-7ca2-41f9-90af-bca1a4a7c157-utilities" (OuterVolumeSpecName: "utilities") pod "46a2322e-7ca2-41f9-90af-bca1a4a7c157" (UID: "46a2322e-7ca2-41f9-90af-bca1a4a7c157"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:47:38 crc kubenswrapper[4782]: I0202 11:47:38.847347 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46a2322e-7ca2-41f9-90af-bca1a4a7c157-kube-api-access-4jnrs" (OuterVolumeSpecName: "kube-api-access-4jnrs") pod "46a2322e-7ca2-41f9-90af-bca1a4a7c157" (UID: "46a2322e-7ca2-41f9-90af-bca1a4a7c157"). InnerVolumeSpecName "kube-api-access-4jnrs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:47:38 crc kubenswrapper[4782]: I0202 11:47:38.943502 4782 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46a2322e-7ca2-41f9-90af-bca1a4a7c157-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 11:47:38 crc kubenswrapper[4782]: I0202 11:47:38.943800 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4jnrs\" (UniqueName: \"kubernetes.io/projected/46a2322e-7ca2-41f9-90af-bca1a4a7c157-kube-api-access-4jnrs\") on node \"crc\" DevicePath \"\"" Feb 02 11:47:38 crc kubenswrapper[4782]: I0202 11:47:38.973394 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/46a2322e-7ca2-41f9-90af-bca1a4a7c157-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "46a2322e-7ca2-41f9-90af-bca1a4a7c157" (UID: "46a2322e-7ca2-41f9-90af-bca1a4a7c157"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:47:39 crc kubenswrapper[4782]: I0202 11:47:39.045401 4782 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46a2322e-7ca2-41f9-90af-bca1a4a7c157-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 11:47:39 crc kubenswrapper[4782]: I0202 11:47:39.374860 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ghfkv" event={"ID":"46a2322e-7ca2-41f9-90af-bca1a4a7c157","Type":"ContainerDied","Data":"c06defc817045873e2732bd8892085168788497887185e9fb0e07e7f62b48ec3"} Feb 02 11:47:39 crc kubenswrapper[4782]: I0202 11:47:39.374925 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ghfkv" Feb 02 11:47:39 crc kubenswrapper[4782]: I0202 11:47:39.374951 4782 scope.go:117] "RemoveContainer" containerID="1af8e65d8c1625f20711969808b519726cb7dbec4573639b60716444c22f1ce8" Feb 02 11:47:39 crc kubenswrapper[4782]: I0202 11:47:39.414800 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ghfkv"] Feb 02 11:47:39 crc kubenswrapper[4782]: I0202 11:47:39.424347 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-ghfkv"] Feb 02 11:47:39 crc kubenswrapper[4782]: I0202 11:47:39.488814 4782 scope.go:117] "RemoveContainer" containerID="1aeda40de7d8e1240fab34ca169576dbc672ec2fdd84a6c2c6ea829662df6243" Feb 02 11:47:39 crc kubenswrapper[4782]: I0202 11:47:39.597554 4782 scope.go:117] "RemoveContainer" containerID="7a907be3c666b2b37f0f47822c1f99e06b350871caf9dd81a1bd3196d758a463" Feb 02 11:47:39 crc kubenswrapper[4782]: I0202 11:47:39.628628 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-jxb7c/crc-debug-4m2mp" Feb 02 11:47:39 crc kubenswrapper[4782]: I0202 11:47:39.757216 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b4cb3d8a-7317-496a-9944-ecca54fd2e5c-host\") pod \"b4cb3d8a-7317-496a-9944-ecca54fd2e5c\" (UID: \"b4cb3d8a-7317-496a-9944-ecca54fd2e5c\") " Feb 02 11:47:39 crc kubenswrapper[4782]: I0202 11:47:39.757490 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tmvgq\" (UniqueName: \"kubernetes.io/projected/b4cb3d8a-7317-496a-9944-ecca54fd2e5c-kube-api-access-tmvgq\") pod \"b4cb3d8a-7317-496a-9944-ecca54fd2e5c\" (UID: \"b4cb3d8a-7317-496a-9944-ecca54fd2e5c\") " Feb 02 11:47:39 crc kubenswrapper[4782]: I0202 11:47:39.757556 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b4cb3d8a-7317-496a-9944-ecca54fd2e5c-host" (OuterVolumeSpecName: "host") pod "b4cb3d8a-7317-496a-9944-ecca54fd2e5c" (UID: "b4cb3d8a-7317-496a-9944-ecca54fd2e5c"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 11:47:39 crc kubenswrapper[4782]: I0202 11:47:39.758070 4782 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b4cb3d8a-7317-496a-9944-ecca54fd2e5c-host\") on node \"crc\" DevicePath \"\"" Feb 02 11:47:39 crc kubenswrapper[4782]: I0202 11:47:39.762103 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4cb3d8a-7317-496a-9944-ecca54fd2e5c-kube-api-access-tmvgq" (OuterVolumeSpecName: "kube-api-access-tmvgq") pod "b4cb3d8a-7317-496a-9944-ecca54fd2e5c" (UID: "b4cb3d8a-7317-496a-9944-ecca54fd2e5c"). InnerVolumeSpecName "kube-api-access-tmvgq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:47:39 crc kubenswrapper[4782]: I0202 11:47:39.859147 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tmvgq\" (UniqueName: \"kubernetes.io/projected/b4cb3d8a-7317-496a-9944-ecca54fd2e5c-kube-api-access-tmvgq\") on node \"crc\" DevicePath \"\"" Feb 02 11:47:40 crc kubenswrapper[4782]: I0202 11:47:40.384278 4782 scope.go:117] "RemoveContainer" containerID="c9a3d75498664651067c9e5dff908f0096c2e0acbb1581811fb18a849226abca" Feb 02 11:47:40 crc kubenswrapper[4782]: I0202 11:47:40.384736 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-jxb7c/crc-debug-4m2mp" Feb 02 11:47:40 crc kubenswrapper[4782]: I0202 11:47:40.834854 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46a2322e-7ca2-41f9-90af-bca1a4a7c157" path="/var/lib/kubelet/pods/46a2322e-7ca2-41f9-90af-bca1a4a7c157/volumes" Feb 02 11:47:40 crc kubenswrapper[4782]: I0202 11:47:40.836022 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4cb3d8a-7317-496a-9944-ecca54fd2e5c" path="/var/lib/kubelet/pods/b4cb3d8a-7317-496a-9944-ecca54fd2e5c/volumes" Feb 02 11:47:44 crc kubenswrapper[4782]: I0202 11:47:44.917862 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-hxxbq" Feb 02 11:47:44 crc kubenswrapper[4782]: I0202 11:47:44.972094 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-hxxbq" Feb 02 11:47:45 crc kubenswrapper[4782]: I0202 11:47:45.160832 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hxxbq"] Feb 02 11:47:45 crc kubenswrapper[4782]: I0202 11:47:45.820859 4782 scope.go:117] "RemoveContainer" containerID="0b5a1dc843aa5e29d94449712e54fcb7833201c00028ee85179759aa66981ec6" Feb 02 11:47:45 crc kubenswrapper[4782]: E0202 11:47:45.821441 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:47:46 crc kubenswrapper[4782]: I0202 11:47:46.435463 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-hxxbq" podUID="97658eff-1922-4b30-b7b4-edc0b5bc31e8" containerName="registry-server" containerID="cri-o://205d8687bdfbda89057d5a1e1a951568315f6a2ae62187160d08cabe73be07e5" gracePeriod=2 Feb 02 11:47:46 crc kubenswrapper[4782]: I0202 11:47:46.972905 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hxxbq" Feb 02 11:47:47 crc kubenswrapper[4782]: I0202 11:47:47.016520 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97658eff-1922-4b30-b7b4-edc0b5bc31e8-utilities\") pod \"97658eff-1922-4b30-b7b4-edc0b5bc31e8\" (UID: \"97658eff-1922-4b30-b7b4-edc0b5bc31e8\") " Feb 02 11:47:47 crc kubenswrapper[4782]: I0202 11:47:47.016707 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97658eff-1922-4b30-b7b4-edc0b5bc31e8-catalog-content\") pod \"97658eff-1922-4b30-b7b4-edc0b5bc31e8\" (UID: \"97658eff-1922-4b30-b7b4-edc0b5bc31e8\") " Feb 02 11:47:47 crc kubenswrapper[4782]: I0202 11:47:47.016822 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kgcsk\" (UniqueName: \"kubernetes.io/projected/97658eff-1922-4b30-b7b4-edc0b5bc31e8-kube-api-access-kgcsk\") pod \"97658eff-1922-4b30-b7b4-edc0b5bc31e8\" (UID: \"97658eff-1922-4b30-b7b4-edc0b5bc31e8\") " Feb 02 11:47:47 crc kubenswrapper[4782]: I0202 11:47:47.018421 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/97658eff-1922-4b30-b7b4-edc0b5bc31e8-utilities" (OuterVolumeSpecName: "utilities") pod "97658eff-1922-4b30-b7b4-edc0b5bc31e8" (UID: "97658eff-1922-4b30-b7b4-edc0b5bc31e8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:47:47 crc kubenswrapper[4782]: I0202 11:47:47.030156 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97658eff-1922-4b30-b7b4-edc0b5bc31e8-kube-api-access-kgcsk" (OuterVolumeSpecName: "kube-api-access-kgcsk") pod "97658eff-1922-4b30-b7b4-edc0b5bc31e8" (UID: "97658eff-1922-4b30-b7b4-edc0b5bc31e8"). InnerVolumeSpecName "kube-api-access-kgcsk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:47:47 crc kubenswrapper[4782]: I0202 11:47:47.081308 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/97658eff-1922-4b30-b7b4-edc0b5bc31e8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "97658eff-1922-4b30-b7b4-edc0b5bc31e8" (UID: "97658eff-1922-4b30-b7b4-edc0b5bc31e8"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:47:47 crc kubenswrapper[4782]: I0202 11:47:47.119761 4782 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97658eff-1922-4b30-b7b4-edc0b5bc31e8-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 11:47:47 crc kubenswrapper[4782]: I0202 11:47:47.119806 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kgcsk\" (UniqueName: \"kubernetes.io/projected/97658eff-1922-4b30-b7b4-edc0b5bc31e8-kube-api-access-kgcsk\") on node \"crc\" DevicePath \"\"" Feb 02 11:47:47 crc kubenswrapper[4782]: I0202 11:47:47.119819 4782 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97658eff-1922-4b30-b7b4-edc0b5bc31e8-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 11:47:47 crc kubenswrapper[4782]: I0202 11:47:47.446167 4782 generic.go:334] "Generic (PLEG): container finished" podID="97658eff-1922-4b30-b7b4-edc0b5bc31e8" containerID="205d8687bdfbda89057d5a1e1a951568315f6a2ae62187160d08cabe73be07e5" exitCode=0 Feb 02 11:47:47 crc kubenswrapper[4782]: I0202 11:47:47.446214 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hxxbq" event={"ID":"97658eff-1922-4b30-b7b4-edc0b5bc31e8","Type":"ContainerDied","Data":"205d8687bdfbda89057d5a1e1a951568315f6a2ae62187160d08cabe73be07e5"} Feb 02 11:47:47 crc kubenswrapper[4782]: I0202 11:47:47.446240 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hxxbq" event={"ID":"97658eff-1922-4b30-b7b4-edc0b5bc31e8","Type":"ContainerDied","Data":"b170233de7d0c6ed8e5161b038fbec63ac00d1c9bc9cf57ca0a7cd8f776a230b"} Feb 02 11:47:47 crc kubenswrapper[4782]: I0202 11:47:47.446256 4782 scope.go:117] "RemoveContainer" containerID="205d8687bdfbda89057d5a1e1a951568315f6a2ae62187160d08cabe73be07e5" Feb 02 11:47:47 crc kubenswrapper[4782]: I0202 11:47:47.446401 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hxxbq" Feb 02 11:47:47 crc kubenswrapper[4782]: I0202 11:47:47.472207 4782 scope.go:117] "RemoveContainer" containerID="1c3496aa31f0a716eeaa0e4a451ceb1e0c8382e8c166d215d6ede5e9aa49f3dc" Feb 02 11:47:47 crc kubenswrapper[4782]: I0202 11:47:47.488369 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hxxbq"] Feb 02 11:47:47 crc kubenswrapper[4782]: I0202 11:47:47.499223 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-hxxbq"] Feb 02 11:47:48 crc kubenswrapper[4782]: I0202 11:47:48.081930 4782 scope.go:117] "RemoveContainer" containerID="b3701d118465d1da24b26430b890081c39992d316821ccffb985ea29413a049f" Feb 02 11:47:48 crc kubenswrapper[4782]: I0202 11:47:48.130982 4782 scope.go:117] "RemoveContainer" containerID="205d8687bdfbda89057d5a1e1a951568315f6a2ae62187160d08cabe73be07e5" Feb 02 11:47:48 crc kubenswrapper[4782]: E0202 11:47:48.131538 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"205d8687bdfbda89057d5a1e1a951568315f6a2ae62187160d08cabe73be07e5\": container with ID starting with 205d8687bdfbda89057d5a1e1a951568315f6a2ae62187160d08cabe73be07e5 not found: ID does not exist" containerID="205d8687bdfbda89057d5a1e1a951568315f6a2ae62187160d08cabe73be07e5" Feb 02 11:47:48 crc kubenswrapper[4782]: I0202 11:47:48.131565 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"205d8687bdfbda89057d5a1e1a951568315f6a2ae62187160d08cabe73be07e5"} err="failed to get container status \"205d8687bdfbda89057d5a1e1a951568315f6a2ae62187160d08cabe73be07e5\": rpc error: code = NotFound desc = could not find container \"205d8687bdfbda89057d5a1e1a951568315f6a2ae62187160d08cabe73be07e5\": container with ID starting with 205d8687bdfbda89057d5a1e1a951568315f6a2ae62187160d08cabe73be07e5 not found: ID does not exist" Feb 02 11:47:48 crc kubenswrapper[4782]: I0202 11:47:48.131587 4782 scope.go:117] "RemoveContainer" containerID="1c3496aa31f0a716eeaa0e4a451ceb1e0c8382e8c166d215d6ede5e9aa49f3dc" Feb 02 11:47:48 crc kubenswrapper[4782]: E0202 11:47:48.132068 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1c3496aa31f0a716eeaa0e4a451ceb1e0c8382e8c166d215d6ede5e9aa49f3dc\": container with ID starting with 1c3496aa31f0a716eeaa0e4a451ceb1e0c8382e8c166d215d6ede5e9aa49f3dc not found: ID does not exist" containerID="1c3496aa31f0a716eeaa0e4a451ceb1e0c8382e8c166d215d6ede5e9aa49f3dc" Feb 02 11:47:48 crc kubenswrapper[4782]: I0202 11:47:48.132105 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c3496aa31f0a716eeaa0e4a451ceb1e0c8382e8c166d215d6ede5e9aa49f3dc"} err="failed to get container status \"1c3496aa31f0a716eeaa0e4a451ceb1e0c8382e8c166d215d6ede5e9aa49f3dc\": rpc error: code = NotFound desc = could not find container \"1c3496aa31f0a716eeaa0e4a451ceb1e0c8382e8c166d215d6ede5e9aa49f3dc\": container with ID starting with 1c3496aa31f0a716eeaa0e4a451ceb1e0c8382e8c166d215d6ede5e9aa49f3dc not found: ID does not exist" Feb 02 11:47:48 crc kubenswrapper[4782]: I0202 11:47:48.132148 4782 scope.go:117] "RemoveContainer" containerID="b3701d118465d1da24b26430b890081c39992d316821ccffb985ea29413a049f" Feb 02 11:47:48 crc kubenswrapper[4782]: E0202 11:47:48.134179 4782 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"b3701d118465d1da24b26430b890081c39992d316821ccffb985ea29413a049f\": container with ID starting with b3701d118465d1da24b26430b890081c39992d316821ccffb985ea29413a049f not found: ID does not exist" containerID="b3701d118465d1da24b26430b890081c39992d316821ccffb985ea29413a049f" Feb 02 11:47:48 crc kubenswrapper[4782]: I0202 11:47:48.134205 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b3701d118465d1da24b26430b890081c39992d316821ccffb985ea29413a049f"} err="failed to get container status \"b3701d118465d1da24b26430b890081c39992d316821ccffb985ea29413a049f\": rpc error: code = NotFound desc = could not find container \"b3701d118465d1da24b26430b890081c39992d316821ccffb985ea29413a049f\": container with ID starting with b3701d118465d1da24b26430b890081c39992d316821ccffb985ea29413a049f not found: ID does not exist" Feb 02 11:47:48 crc kubenswrapper[4782]: I0202 11:47:48.832421 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="97658eff-1922-4b30-b7b4-edc0b5bc31e8" path="/var/lib/kubelet/pods/97658eff-1922-4b30-b7b4-edc0b5bc31e8/volumes" Feb 02 11:47:59 crc kubenswrapper[4782]: I0202 11:47:59.821185 4782 scope.go:117] "RemoveContainer" containerID="0b5a1dc843aa5e29d94449712e54fcb7833201c00028ee85179759aa66981ec6" Feb 02 11:48:00 crc kubenswrapper[4782]: I0202 11:48:00.578150 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" event={"ID":"7919e98f-cc47-4f3c-9c53-6313850ea7b8","Type":"ContainerStarted","Data":"81c1275290d3d86dfddaf08310d12fc19619e616245897929ae3beaa237553d0"} Feb 02 11:48:20 crc kubenswrapper[4782]: I0202 11:48:20.425726 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-rcmff"] Feb 02 11:48:20 crc kubenswrapper[4782]: E0202 11:48:20.426773 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4cb3d8a-7317-496a-9944-ecca54fd2e5c" containerName="container-00" Feb 02 11:48:20 crc kubenswrapper[4782]: I0202 11:48:20.426794 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4cb3d8a-7317-496a-9944-ecca54fd2e5c" containerName="container-00" Feb 02 11:48:20 crc kubenswrapper[4782]: E0202 11:48:20.426813 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46a2322e-7ca2-41f9-90af-bca1a4a7c157" containerName="extract-utilities" Feb 02 11:48:20 crc kubenswrapper[4782]: I0202 11:48:20.426823 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="46a2322e-7ca2-41f9-90af-bca1a4a7c157" containerName="extract-utilities" Feb 02 11:48:20 crc kubenswrapper[4782]: E0202 11:48:20.426842 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46a2322e-7ca2-41f9-90af-bca1a4a7c157" containerName="extract-content" Feb 02 11:48:20 crc kubenswrapper[4782]: I0202 11:48:20.426850 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="46a2322e-7ca2-41f9-90af-bca1a4a7c157" containerName="extract-content" Feb 02 11:48:20 crc kubenswrapper[4782]: E0202 11:48:20.426861 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97658eff-1922-4b30-b7b4-edc0b5bc31e8" containerName="registry-server" Feb 02 11:48:20 crc kubenswrapper[4782]: I0202 11:48:20.426869 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="97658eff-1922-4b30-b7b4-edc0b5bc31e8" containerName="registry-server" Feb 02 11:48:20 crc kubenswrapper[4782]: E0202 11:48:20.426892 4782 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="97658eff-1922-4b30-b7b4-edc0b5bc31e8" containerName="extract-utilities" Feb 02 11:48:20 crc kubenswrapper[4782]: I0202 11:48:20.426901 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="97658eff-1922-4b30-b7b4-edc0b5bc31e8" containerName="extract-utilities" Feb 02 11:48:20 crc kubenswrapper[4782]: E0202 11:48:20.426917 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97658eff-1922-4b30-b7b4-edc0b5bc31e8" containerName="extract-content" Feb 02 11:48:20 crc kubenswrapper[4782]: I0202 11:48:20.426925 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="97658eff-1922-4b30-b7b4-edc0b5bc31e8" containerName="extract-content" Feb 02 11:48:20 crc kubenswrapper[4782]: E0202 11:48:20.426940 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46a2322e-7ca2-41f9-90af-bca1a4a7c157" containerName="registry-server" Feb 02 11:48:20 crc kubenswrapper[4782]: I0202 11:48:20.426948 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="46a2322e-7ca2-41f9-90af-bca1a4a7c157" containerName="registry-server" Feb 02 11:48:20 crc kubenswrapper[4782]: I0202 11:48:20.427164 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="46a2322e-7ca2-41f9-90af-bca1a4a7c157" containerName="registry-server" Feb 02 11:48:20 crc kubenswrapper[4782]: I0202 11:48:20.427193 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="97658eff-1922-4b30-b7b4-edc0b5bc31e8" containerName="registry-server" Feb 02 11:48:20 crc kubenswrapper[4782]: I0202 11:48:20.427211 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4cb3d8a-7317-496a-9944-ecca54fd2e5c" containerName="container-00" Feb 02 11:48:20 crc kubenswrapper[4782]: I0202 11:48:20.428882 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rcmff" Feb 02 11:48:20 crc kubenswrapper[4782]: I0202 11:48:20.444793 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rcmff"] Feb 02 11:48:20 crc kubenswrapper[4782]: I0202 11:48:20.517200 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47nsg\" (UniqueName: \"kubernetes.io/projected/e30d6af5-b4e0-4b72-b18b-d9c2daa0983e-kube-api-access-47nsg\") pod \"certified-operators-rcmff\" (UID: \"e30d6af5-b4e0-4b72-b18b-d9c2daa0983e\") " pod="openshift-marketplace/certified-operators-rcmff" Feb 02 11:48:20 crc kubenswrapper[4782]: I0202 11:48:20.517291 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e30d6af5-b4e0-4b72-b18b-d9c2daa0983e-catalog-content\") pod \"certified-operators-rcmff\" (UID: \"e30d6af5-b4e0-4b72-b18b-d9c2daa0983e\") " pod="openshift-marketplace/certified-operators-rcmff" Feb 02 11:48:20 crc kubenswrapper[4782]: I0202 11:48:20.517338 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e30d6af5-b4e0-4b72-b18b-d9c2daa0983e-utilities\") pod \"certified-operators-rcmff\" (UID: \"e30d6af5-b4e0-4b72-b18b-d9c2daa0983e\") " pod="openshift-marketplace/certified-operators-rcmff" Feb 02 11:48:20 crc kubenswrapper[4782]: I0202 11:48:20.619421 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-47nsg\" (UniqueName: \"kubernetes.io/projected/e30d6af5-b4e0-4b72-b18b-d9c2daa0983e-kube-api-access-47nsg\") pod \"certified-operators-rcmff\" (UID: \"e30d6af5-b4e0-4b72-b18b-d9c2daa0983e\") " pod="openshift-marketplace/certified-operators-rcmff" Feb 02 11:48:20 crc kubenswrapper[4782]: I0202 11:48:20.619514 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e30d6af5-b4e0-4b72-b18b-d9c2daa0983e-catalog-content\") pod \"certified-operators-rcmff\" (UID: \"e30d6af5-b4e0-4b72-b18b-d9c2daa0983e\") " pod="openshift-marketplace/certified-operators-rcmff" Feb 02 11:48:20 crc kubenswrapper[4782]: I0202 11:48:20.619557 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e30d6af5-b4e0-4b72-b18b-d9c2daa0983e-utilities\") pod \"certified-operators-rcmff\" (UID: \"e30d6af5-b4e0-4b72-b18b-d9c2daa0983e\") " pod="openshift-marketplace/certified-operators-rcmff" Feb 02 11:48:20 crc kubenswrapper[4782]: I0202 11:48:20.620262 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e30d6af5-b4e0-4b72-b18b-d9c2daa0983e-utilities\") pod \"certified-operators-rcmff\" (UID: \"e30d6af5-b4e0-4b72-b18b-d9c2daa0983e\") " pod="openshift-marketplace/certified-operators-rcmff" Feb 02 11:48:20 crc kubenswrapper[4782]: I0202 11:48:20.620570 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e30d6af5-b4e0-4b72-b18b-d9c2daa0983e-catalog-content\") pod \"certified-operators-rcmff\" (UID: \"e30d6af5-b4e0-4b72-b18b-d9c2daa0983e\") " pod="openshift-marketplace/certified-operators-rcmff" Feb 02 11:48:20 crc kubenswrapper[4782]: I0202 11:48:20.643590 4782 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-47nsg\" (UniqueName: \"kubernetes.io/projected/e30d6af5-b4e0-4b72-b18b-d9c2daa0983e-kube-api-access-47nsg\") pod \"certified-operators-rcmff\" (UID: \"e30d6af5-b4e0-4b72-b18b-d9c2daa0983e\") " pod="openshift-marketplace/certified-operators-rcmff" Feb 02 11:48:20 crc kubenswrapper[4782]: I0202 11:48:20.807024 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rcmff" Feb 02 11:48:21 crc kubenswrapper[4782]: I0202 11:48:21.598949 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rcmff"] Feb 02 11:48:21 crc kubenswrapper[4782]: I0202 11:48:21.782720 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rcmff" event={"ID":"e30d6af5-b4e0-4b72-b18b-d9c2daa0983e","Type":"ContainerStarted","Data":"399714c1ef96a9767b84c1c51a6986d4169addb87d97fd2f2e12e04ef6793e13"} Feb 02 11:48:22 crc kubenswrapper[4782]: I0202 11:48:22.804531 4782 generic.go:334] "Generic (PLEG): container finished" podID="e30d6af5-b4e0-4b72-b18b-d9c2daa0983e" containerID="6e72b28356a28e581d86e1c0a3f8ec522a87ce9ffe1f01e8cfd5f601b73d221a" exitCode=0 Feb 02 11:48:22 crc kubenswrapper[4782]: I0202 11:48:22.804982 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rcmff" event={"ID":"e30d6af5-b4e0-4b72-b18b-d9c2daa0983e","Type":"ContainerDied","Data":"6e72b28356a28e581d86e1c0a3f8ec522a87ce9ffe1f01e8cfd5f601b73d221a"} Feb 02 11:48:23 crc kubenswrapper[4782]: I0202 11:48:23.815415 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rcmff" event={"ID":"e30d6af5-b4e0-4b72-b18b-d9c2daa0983e","Type":"ContainerStarted","Data":"9f51f583643ec8c1d5ec29db514112e41c01030e506c0a2a6588de2cc4faf43b"} Feb 02 11:48:25 crc kubenswrapper[4782]: I0202 11:48:25.835665 4782 generic.go:334] "Generic (PLEG): container finished" podID="e30d6af5-b4e0-4b72-b18b-d9c2daa0983e" containerID="9f51f583643ec8c1d5ec29db514112e41c01030e506c0a2a6588de2cc4faf43b" exitCode=0 Feb 02 11:48:25 crc kubenswrapper[4782]: I0202 11:48:25.835704 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rcmff" event={"ID":"e30d6af5-b4e0-4b72-b18b-d9c2daa0983e","Type":"ContainerDied","Data":"9f51f583643ec8c1d5ec29db514112e41c01030e506c0a2a6588de2cc4faf43b"} Feb 02 11:48:26 crc kubenswrapper[4782]: I0202 11:48:26.847301 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rcmff" event={"ID":"e30d6af5-b4e0-4b72-b18b-d9c2daa0983e","Type":"ContainerStarted","Data":"000d80fb387e085b977dd34a41f0843fcd70d510044d1a77f38dafc386ca03ac"} Feb 02 11:48:26 crc kubenswrapper[4782]: I0202 11:48:26.875608 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-rcmff" podStartSLOduration=3.436281966 podStartE2EDuration="6.875587745s" podCreationTimestamp="2026-02-02 11:48:20 +0000 UTC" firstStartedPulling="2026-02-02 11:48:22.806461198 +0000 UTC m=+4182.690653914" lastFinishedPulling="2026-02-02 11:48:26.245766977 +0000 UTC m=+4186.129959693" observedRunningTime="2026-02-02 11:48:26.866870404 +0000 UTC m=+4186.751063120" watchObservedRunningTime="2026-02-02 11:48:26.875587745 +0000 UTC m=+4186.759780461" Feb 02 11:48:30 crc kubenswrapper[4782]: I0202 11:48:30.808174 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/certified-operators-rcmff" Feb 02 11:48:30 crc kubenswrapper[4782]: I0202 11:48:30.808764 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-rcmff" Feb 02 11:48:31 crc kubenswrapper[4782]: I0202 11:48:31.005368 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-rcmff" Feb 02 11:48:31 crc kubenswrapper[4782]: I0202 11:48:31.066094 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-rcmff" Feb 02 11:48:31 crc kubenswrapper[4782]: I0202 11:48:31.242942 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rcmff"] Feb 02 11:48:32 crc kubenswrapper[4782]: I0202 11:48:32.896480 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-rcmff" podUID="e30d6af5-b4e0-4b72-b18b-d9c2daa0983e" containerName="registry-server" containerID="cri-o://000d80fb387e085b977dd34a41f0843fcd70d510044d1a77f38dafc386ca03ac" gracePeriod=2 Feb 02 11:48:33 crc kubenswrapper[4782]: I0202 11:48:33.426748 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rcmff" Feb 02 11:48:33 crc kubenswrapper[4782]: I0202 11:48:33.600570 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e30d6af5-b4e0-4b72-b18b-d9c2daa0983e-utilities\") pod \"e30d6af5-b4e0-4b72-b18b-d9c2daa0983e\" (UID: \"e30d6af5-b4e0-4b72-b18b-d9c2daa0983e\") " Feb 02 11:48:33 crc kubenswrapper[4782]: I0202 11:48:33.601073 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-47nsg\" (UniqueName: \"kubernetes.io/projected/e30d6af5-b4e0-4b72-b18b-d9c2daa0983e-kube-api-access-47nsg\") pod \"e30d6af5-b4e0-4b72-b18b-d9c2daa0983e\" (UID: \"e30d6af5-b4e0-4b72-b18b-d9c2daa0983e\") " Feb 02 11:48:33 crc kubenswrapper[4782]: I0202 11:48:33.601100 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e30d6af5-b4e0-4b72-b18b-d9c2daa0983e-catalog-content\") pod \"e30d6af5-b4e0-4b72-b18b-d9c2daa0983e\" (UID: \"e30d6af5-b4e0-4b72-b18b-d9c2daa0983e\") " Feb 02 11:48:33 crc kubenswrapper[4782]: I0202 11:48:33.601561 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e30d6af5-b4e0-4b72-b18b-d9c2daa0983e-utilities" (OuterVolumeSpecName: "utilities") pod "e30d6af5-b4e0-4b72-b18b-d9c2daa0983e" (UID: "e30d6af5-b4e0-4b72-b18b-d9c2daa0983e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:48:33 crc kubenswrapper[4782]: I0202 11:48:33.602168 4782 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e30d6af5-b4e0-4b72-b18b-d9c2daa0983e-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 11:48:33 crc kubenswrapper[4782]: I0202 11:48:33.607499 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e30d6af5-b4e0-4b72-b18b-d9c2daa0983e-kube-api-access-47nsg" (OuterVolumeSpecName: "kube-api-access-47nsg") pod "e30d6af5-b4e0-4b72-b18b-d9c2daa0983e" (UID: "e30d6af5-b4e0-4b72-b18b-d9c2daa0983e"). InnerVolumeSpecName "kube-api-access-47nsg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:48:33 crc kubenswrapper[4782]: I0202 11:48:33.656769 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e30d6af5-b4e0-4b72-b18b-d9c2daa0983e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e30d6af5-b4e0-4b72-b18b-d9c2daa0983e" (UID: "e30d6af5-b4e0-4b72-b18b-d9c2daa0983e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:48:33 crc kubenswrapper[4782]: I0202 11:48:33.704291 4782 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e30d6af5-b4e0-4b72-b18b-d9c2daa0983e-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 11:48:33 crc kubenswrapper[4782]: I0202 11:48:33.704343 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-47nsg\" (UniqueName: \"kubernetes.io/projected/e30d6af5-b4e0-4b72-b18b-d9c2daa0983e-kube-api-access-47nsg\") on node \"crc\" DevicePath \"\"" Feb 02 11:48:33 crc kubenswrapper[4782]: I0202 11:48:33.909068 4782 generic.go:334] "Generic (PLEG): container finished" podID="e30d6af5-b4e0-4b72-b18b-d9c2daa0983e" containerID="000d80fb387e085b977dd34a41f0843fcd70d510044d1a77f38dafc386ca03ac" exitCode=0 Feb 02 11:48:33 crc kubenswrapper[4782]: I0202 11:48:33.909134 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rcmff" event={"ID":"e30d6af5-b4e0-4b72-b18b-d9c2daa0983e","Type":"ContainerDied","Data":"000d80fb387e085b977dd34a41f0843fcd70d510044d1a77f38dafc386ca03ac"} Feb 02 11:48:33 crc kubenswrapper[4782]: I0202 11:48:33.909146 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rcmff" Feb 02 11:48:33 crc kubenswrapper[4782]: I0202 11:48:33.909184 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rcmff" event={"ID":"e30d6af5-b4e0-4b72-b18b-d9c2daa0983e","Type":"ContainerDied","Data":"399714c1ef96a9767b84c1c51a6986d4169addb87d97fd2f2e12e04ef6793e13"} Feb 02 11:48:33 crc kubenswrapper[4782]: I0202 11:48:33.909212 4782 scope.go:117] "RemoveContainer" containerID="000d80fb387e085b977dd34a41f0843fcd70d510044d1a77f38dafc386ca03ac" Feb 02 11:48:33 crc kubenswrapper[4782]: I0202 11:48:33.932522 4782 scope.go:117] "RemoveContainer" containerID="9f51f583643ec8c1d5ec29db514112e41c01030e506c0a2a6588de2cc4faf43b" Feb 02 11:48:33 crc kubenswrapper[4782]: I0202 11:48:33.999774 4782 scope.go:117] "RemoveContainer" containerID="6e72b28356a28e581d86e1c0a3f8ec522a87ce9ffe1f01e8cfd5f601b73d221a" Feb 02 11:48:34 crc kubenswrapper[4782]: I0202 11:48:33.999974 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rcmff"] Feb 02 11:48:34 crc kubenswrapper[4782]: I0202 11:48:34.012883 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-rcmff"] Feb 02 11:48:34 crc kubenswrapper[4782]: I0202 11:48:34.032125 4782 scope.go:117] "RemoveContainer" containerID="000d80fb387e085b977dd34a41f0843fcd70d510044d1a77f38dafc386ca03ac" Feb 02 11:48:34 crc kubenswrapper[4782]: E0202 11:48:34.032862 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"000d80fb387e085b977dd34a41f0843fcd70d510044d1a77f38dafc386ca03ac\": container with ID starting with 
000d80fb387e085b977dd34a41f0843fcd70d510044d1a77f38dafc386ca03ac not found: ID does not exist" containerID="000d80fb387e085b977dd34a41f0843fcd70d510044d1a77f38dafc386ca03ac" Feb 02 11:48:34 crc kubenswrapper[4782]: I0202 11:48:34.032890 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"000d80fb387e085b977dd34a41f0843fcd70d510044d1a77f38dafc386ca03ac"} err="failed to get container status \"000d80fb387e085b977dd34a41f0843fcd70d510044d1a77f38dafc386ca03ac\": rpc error: code = NotFound desc = could not find container \"000d80fb387e085b977dd34a41f0843fcd70d510044d1a77f38dafc386ca03ac\": container with ID starting with 000d80fb387e085b977dd34a41f0843fcd70d510044d1a77f38dafc386ca03ac not found: ID does not exist" Feb 02 11:48:34 crc kubenswrapper[4782]: I0202 11:48:34.032912 4782 scope.go:117] "RemoveContainer" containerID="9f51f583643ec8c1d5ec29db514112e41c01030e506c0a2a6588de2cc4faf43b" Feb 02 11:48:34 crc kubenswrapper[4782]: E0202 11:48:34.033283 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9f51f583643ec8c1d5ec29db514112e41c01030e506c0a2a6588de2cc4faf43b\": container with ID starting with 9f51f583643ec8c1d5ec29db514112e41c01030e506c0a2a6588de2cc4faf43b not found: ID does not exist" containerID="9f51f583643ec8c1d5ec29db514112e41c01030e506c0a2a6588de2cc4faf43b" Feb 02 11:48:34 crc kubenswrapper[4782]: I0202 11:48:34.033300 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f51f583643ec8c1d5ec29db514112e41c01030e506c0a2a6588de2cc4faf43b"} err="failed to get container status \"9f51f583643ec8c1d5ec29db514112e41c01030e506c0a2a6588de2cc4faf43b\": rpc error: code = NotFound desc = could not find container \"9f51f583643ec8c1d5ec29db514112e41c01030e506c0a2a6588de2cc4faf43b\": container with ID starting with 9f51f583643ec8c1d5ec29db514112e41c01030e506c0a2a6588de2cc4faf43b not found: ID does not exist" Feb 02 11:48:34 crc kubenswrapper[4782]: I0202 11:48:34.033316 4782 scope.go:117] "RemoveContainer" containerID="6e72b28356a28e581d86e1c0a3f8ec522a87ce9ffe1f01e8cfd5f601b73d221a" Feb 02 11:48:34 crc kubenswrapper[4782]: E0202 11:48:34.033753 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6e72b28356a28e581d86e1c0a3f8ec522a87ce9ffe1f01e8cfd5f601b73d221a\": container with ID starting with 6e72b28356a28e581d86e1c0a3f8ec522a87ce9ffe1f01e8cfd5f601b73d221a not found: ID does not exist" containerID="6e72b28356a28e581d86e1c0a3f8ec522a87ce9ffe1f01e8cfd5f601b73d221a" Feb 02 11:48:34 crc kubenswrapper[4782]: I0202 11:48:34.033777 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6e72b28356a28e581d86e1c0a3f8ec522a87ce9ffe1f01e8cfd5f601b73d221a"} err="failed to get container status \"6e72b28356a28e581d86e1c0a3f8ec522a87ce9ffe1f01e8cfd5f601b73d221a\": rpc error: code = NotFound desc = could not find container \"6e72b28356a28e581d86e1c0a3f8ec522a87ce9ffe1f01e8cfd5f601b73d221a\": container with ID starting with 6e72b28356a28e581d86e1c0a3f8ec522a87ce9ffe1f01e8cfd5f601b73d221a not found: ID does not exist" Feb 02 11:48:34 crc kubenswrapper[4782]: I0202 11:48:34.832901 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e30d6af5-b4e0-4b72-b18b-d9c2daa0983e" path="/var/lib/kubelet/pods/e30d6af5-b4e0-4b72-b18b-d9c2daa0983e/volumes" Feb 02 11:48:36 crc kubenswrapper[4782]: I0202 11:48:36.463678 
4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-77c4d8f8d8-7qmjv_52b9ad9f-f95d-4839-9531-4f0f11ca86ff/barbican-api/0.log" Feb 02 11:48:36 crc kubenswrapper[4782]: I0202 11:48:36.678496 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-77c4d8f8d8-7qmjv_52b9ad9f-f95d-4839-9531-4f0f11ca86ff/barbican-api-log/0.log" Feb 02 11:48:36 crc kubenswrapper[4782]: I0202 11:48:36.834628 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-6b54d776c6-xrdvf_ea0f5849-bbf6-4184-8b8c-8e11cd8da661/barbican-keystone-listener/0.log" Feb 02 11:48:36 crc kubenswrapper[4782]: I0202 11:48:36.940234 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-6b54d776c6-xrdvf_ea0f5849-bbf6-4184-8b8c-8e11cd8da661/barbican-keystone-listener-log/0.log" Feb 02 11:48:37 crc kubenswrapper[4782]: I0202 11:48:37.090300 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-5bbfd966d5-c6jc5_141e9d68-e6ef-441d-aede-3bb1fdcc4d5f/barbican-worker/0.log" Feb 02 11:48:37 crc kubenswrapper[4782]: I0202 11:48:37.107556 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-5bbfd966d5-c6jc5_141e9d68-e6ef-441d-aede-3bb1fdcc4d5f/barbican-worker-log/0.log" Feb 02 11:48:37 crc kubenswrapper[4782]: I0202 11:48:37.222707 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-pdnch_14dddbe2-21a7-417a-8d21-ab97f18aef5d/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Feb 02 11:48:37 crc kubenswrapper[4782]: I0202 11:48:37.468611 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_5cbff496-9e10-4868-ab32-849a8b238474/ceilometer-notification-agent/0.log" Feb 02 11:48:37 crc kubenswrapper[4782]: I0202 11:48:37.507678 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_5cbff496-9e10-4868-ab32-849a8b238474/ceilometer-central-agent/0.log" Feb 02 11:48:37 crc kubenswrapper[4782]: I0202 11:48:37.513974 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_5cbff496-9e10-4868-ab32-849a8b238474/proxy-httpd/0.log" Feb 02 11:48:37 crc kubenswrapper[4782]: I0202 11:48:37.621163 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_5cbff496-9e10-4868-ab32-849a8b238474/sg-core/0.log" Feb 02 11:48:37 crc kubenswrapper[4782]: I0202 11:48:37.764172 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceph-client-edpm-deployment-openstack-edpm-ipam-s65zb_c0c31114-71d7-4d0b-9ad7-74945ed819e3/ceph-client-edpm-deployment-openstack-edpm-ipam/0.log" Feb 02 11:48:37 crc kubenswrapper[4782]: I0202 11:48:37.873766 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-tp529_df6c52bb-3b4a-4f78-94d0-edee0f68400c/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam/0.log" Feb 02 11:48:38 crc kubenswrapper[4782]: I0202 11:48:38.079160 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_3d71c3db-1389-4568-bb5e-c87dc6a60ddd/cinder-api/0.log" Feb 02 11:48:38 crc kubenswrapper[4782]: I0202 11:48:38.097394 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_3d71c3db-1389-4568-bb5e-c87dc6a60ddd/cinder-api-log/0.log" Feb 02 11:48:38 crc kubenswrapper[4782]: I0202 11:48:38.360160 4782 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a/probe/0.log" Feb 02 11:48:38 crc kubenswrapper[4782]: I0202 11:48:38.411936 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a/cinder-backup/0.log" Feb 02 11:48:38 crc kubenswrapper[4782]: I0202 11:48:38.802866 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_c35672ba-9e13-4e6d-945a-74b4cf3ee0ff/cinder-scheduler/0.log" Feb 02 11:48:38 crc kubenswrapper[4782]: I0202 11:48:38.968249 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_c35672ba-9e13-4e6d-945a-74b4cf3ee0ff/probe/0.log" Feb 02 11:48:39 crc kubenswrapper[4782]: I0202 11:48:39.389394 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-volume1-0_5d7df751-5d4d-4ce4-83c9-70abd18fc7c7/probe/0.log" Feb 02 11:48:39 crc kubenswrapper[4782]: I0202 11:48:39.447376 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-qq7xf_23a1d5dc-9cfd-4c8a-8534-db3075d99574/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Feb 02 11:48:39 crc kubenswrapper[4782]: I0202 11:48:39.478918 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-volume1-0_5d7df751-5d4d-4ce4-83c9-70abd18fc7c7/cinder-volume/0.log" Feb 02 11:48:39 crc kubenswrapper[4782]: I0202 11:48:39.690242 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-v56zg_6dbc340f-2b20-49aa-8358-26223d367e34/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 02 11:48:39 crc kubenswrapper[4782]: I0202 11:48:39.807418 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-7d98f8586f-f76zz_cfe77ae5-55f0-440b-b0af-ef3eb1637800/init/0.log" Feb 02 11:48:39 crc kubenswrapper[4782]: I0202 11:48:39.992237 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-7d98f8586f-f76zz_cfe77ae5-55f0-440b-b0af-ef3eb1637800/init/0.log" Feb 02 11:48:40 crc kubenswrapper[4782]: I0202 11:48:40.126945 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_fdc86717-3e71-440c-a8f4-9cd4480e46d2/glance-httpd/0.log" Feb 02 11:48:40 crc kubenswrapper[4782]: I0202 11:48:40.150946 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-7d98f8586f-f76zz_cfe77ae5-55f0-440b-b0af-ef3eb1637800/dnsmasq-dns/0.log" Feb 02 11:48:40 crc kubenswrapper[4782]: I0202 11:48:40.386571 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_fdc86717-3e71-440c-a8f4-9cd4480e46d2/glance-log/0.log" Feb 02 11:48:40 crc kubenswrapper[4782]: I0202 11:48:40.425984 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_6c11a274-b189-4a4e-9a21-1c1d8fcc7f13/glance-httpd/0.log" Feb 02 11:48:40 crc kubenswrapper[4782]: I0202 11:48:40.462563 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_6c11a274-b189-4a4e-9a21-1c1d8fcc7f13/glance-log/0.log" Feb 02 11:48:41 crc kubenswrapper[4782]: I0202 11:48:41.213742 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-5665456548-9x6qh_306e30f3-8fe7-427e-b8ff-309a561dda88/horizon/1.log" Feb 02 11:48:41 crc 
kubenswrapper[4782]: I0202 11:48:41.345580 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-5665456548-9x6qh_306e30f3-8fe7-427e-b8ff-309a561dda88/horizon/0.log" Feb 02 11:48:41 crc kubenswrapper[4782]: I0202 11:48:41.402517 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-5665456548-9x6qh_306e30f3-8fe7-427e-b8ff-309a561dda88/horizon-log/0.log" Feb 02 11:48:41 crc kubenswrapper[4782]: I0202 11:48:41.589796 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-4jg96_ae3151c2-1646-4d94-93d0-df34ad53d344/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Feb 02 11:48:41 crc kubenswrapper[4782]: I0202 11:48:41.813071 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-h4png_fbf31fe9-54a8-4cc8-b0ef-a8076cf87c52/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 02 11:48:42 crc kubenswrapper[4782]: I0202 11:48:42.053982 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29500501-wcsmz_9e752213-09b8-4c8e-a5b6-9cfbf9cea168/keystone-cron/0.log" Feb 02 11:48:42 crc kubenswrapper[4782]: I0202 11:48:42.059333 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-79d66b847-whsks_df4aa6a3-22bf-459c-becf-3685a170ae22/keystone-api/0.log" Feb 02 11:48:42 crc kubenswrapper[4782]: I0202 11:48:42.204026 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_6953ab25-8ddb-4ab3-b006-116f6ad534db/kube-state-metrics/0.log" Feb 02 11:48:42 crc kubenswrapper[4782]: I0202 11:48:42.423509 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-fjczj_9b66a766-dc87-45dd-a611-d9a30c3f327e/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Feb 02 11:48:42 crc kubenswrapper[4782]: I0202 11:48:42.521971 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-api-0_2af78116-7ef2-4447-b552-7b0d2eaedf90/manila-api-log/0.log" Feb 02 11:48:42 crc kubenswrapper[4782]: I0202 11:48:42.529074 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-api-0_2af78116-7ef2-4447-b552-7b0d2eaedf90/manila-api/0.log" Feb 02 11:48:42 crc kubenswrapper[4782]: I0202 11:48:42.794265 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-scheduler-0_6e465ef3-3141-429f-927f-db1eabdff230/manila-scheduler/0.log" Feb 02 11:48:42 crc kubenswrapper[4782]: I0202 11:48:42.867473 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-share-share1-0_04aa7a3f-6353-4317-8825-1447f8a88842/manila-share/0.log" Feb 02 11:48:42 crc kubenswrapper[4782]: I0202 11:48:42.885549 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-scheduler-0_6e465ef3-3141-429f-927f-db1eabdff230/probe/0.log" Feb 02 11:48:43 crc kubenswrapper[4782]: I0202 11:48:43.057748 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-share-share1-0_04aa7a3f-6353-4317-8825-1447f8a88842/probe/0.log" Feb 02 11:48:43 crc kubenswrapper[4782]: I0202 11:48:43.331114 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-5bdf8f4745-82ddm_ab6192fa-a576-411f-8083-2d6bfa57c39f/neutron-api/0.log" Feb 02 11:48:43 crc kubenswrapper[4782]: I0202 11:48:43.400081 4782 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_neutron-5bdf8f4745-82ddm_ab6192fa-a576-411f-8083-2d6bfa57c39f/neutron-httpd/0.log" Feb 02 11:48:43 crc kubenswrapper[4782]: I0202 11:48:43.561499 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-c2sf7_e6849945-28f4-4218-97c1-6047c2d0c368/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Feb 02 11:48:44 crc kubenswrapper[4782]: I0202 11:48:44.335467 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_c3797650-67c5-417c-9b38-52a581a6bbd3/nova-api-log/0.log" Feb 02 11:48:44 crc kubenswrapper[4782]: I0202 11:48:44.455396 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_ea60fa1f-5751-4f93-8726-ce0c4be54577/nova-cell0-conductor-conductor/0.log" Feb 02 11:48:44 crc kubenswrapper[4782]: I0202 11:48:44.684597 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_c8598880-0557-414a-bbb1-b5d0cdce0738/nova-cell1-conductor-conductor/0.log" Feb 02 11:48:44 crc kubenswrapper[4782]: I0202 11:48:44.778142 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_c3797650-67c5-417c-9b38-52a581a6bbd3/nova-api-api/0.log" Feb 02 11:48:44 crc kubenswrapper[4782]: I0202 11:48:44.839739 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_16441e1e-4564-492e-bdce-40eb2652687a/nova-cell1-novncproxy-novncproxy/0.log" Feb 02 11:48:45 crc kubenswrapper[4782]: I0202 11:48:45.061223 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-8gbpp_dc15a3e1-ea96-499f-a268-b633c15ec75b/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam/0.log" Feb 02 11:48:45 crc kubenswrapper[4782]: I0202 11:48:45.307917 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_ffbaaa30-f515-494a-94af-a7a83fb44ada/nova-metadata-log/0.log" Feb 02 11:48:45 crc kubenswrapper[4782]: I0202 11:48:45.584056 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_47aff64c-0afc-4b3c-9e90-cbe926943170/nova-scheduler-scheduler/0.log" Feb 02 11:48:45 crc kubenswrapper[4782]: I0202 11:48:45.789405 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_8c2fe596-a023-4206-979f-7f2e7bc81d0e/mysql-bootstrap/0.log" Feb 02 11:48:46 crc kubenswrapper[4782]: I0202 11:48:46.062296 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_8c2fe596-a023-4206-979f-7f2e7bc81d0e/galera/0.log" Feb 02 11:48:46 crc kubenswrapper[4782]: I0202 11:48:46.119443 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_8c2fe596-a023-4206-979f-7f2e7bc81d0e/mysql-bootstrap/0.log" Feb 02 11:48:46 crc kubenswrapper[4782]: I0202 11:48:46.343018 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_827c472d-1762-4e1c-a096-2d48ca9af689/mysql-bootstrap/0.log" Feb 02 11:48:46 crc kubenswrapper[4782]: I0202 11:48:46.557520 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_827c472d-1762-4e1c-a096-2d48ca9af689/mysql-bootstrap/0.log" Feb 02 11:48:46 crc kubenswrapper[4782]: I0202 11:48:46.567496 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_827c472d-1762-4e1c-a096-2d48ca9af689/galera/0.log" Feb 
02 11:48:46 crc kubenswrapper[4782]: I0202 11:48:46.861635 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_ffbaaa30-f515-494a-94af-a7a83fb44ada/nova-metadata-metadata/0.log" Feb 02 11:48:46 crc kubenswrapper[4782]: I0202 11:48:46.897183 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_7ed19b68-33c0-45b1-acbc-b6e9def4e565/openstackclient/0.log" Feb 02 11:48:47 crc kubenswrapper[4782]: I0202 11:48:47.019444 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-kv4h8_c9cb1af6-ff01-4474-ad02-56938ef7e5a1/openstack-network-exporter/0.log" Feb 02 11:48:47 crc kubenswrapper[4782]: I0202 11:48:47.140291 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-zs65k_e91c0f3d-db81-453d-ad0e-30aeadb66206/ovsdb-server-init/0.log" Feb 02 11:48:47 crc kubenswrapper[4782]: I0202 11:48:47.413687 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-zs65k_e91c0f3d-db81-453d-ad0e-30aeadb66206/ovsdb-server/0.log" Feb 02 11:48:47 crc kubenswrapper[4782]: I0202 11:48:47.427508 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-zs65k_e91c0f3d-db81-453d-ad0e-30aeadb66206/ovsdb-server-init/0.log" Feb 02 11:48:47 crc kubenswrapper[4782]: I0202 11:48:47.480272 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-zs65k_e91c0f3d-db81-453d-ad0e-30aeadb66206/ovs-vswitchd/0.log" Feb 02 11:48:47 crc kubenswrapper[4782]: I0202 11:48:47.742274 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-sv8l5_b009ca1c-fc93-4724-9275-c44039256469/ovn-controller/0.log" Feb 02 11:48:47 crc kubenswrapper[4782]: I0202 11:48:47.860262 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-sffk6_4a473fb4-7a3c-4103-bad5-570b683e6222/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Feb 02 11:48:47 crc kubenswrapper[4782]: I0202 11:48:47.995547 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_7a65af67-822b-44b8-a2be-a132de866a2e/ovn-northd/0.log" Feb 02 11:48:48 crc kubenswrapper[4782]: I0202 11:48:48.075338 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_7a65af67-822b-44b8-a2be-a132de866a2e/openstack-network-exporter/0.log" Feb 02 11:48:48 crc kubenswrapper[4782]: I0202 11:48:48.296840 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_d8169f65-2d63-4127-8d23-ba6d56af1156/openstack-network-exporter/0.log" Feb 02 11:48:48 crc kubenswrapper[4782]: I0202 11:48:48.345458 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_d8169f65-2d63-4127-8d23-ba6d56af1156/ovsdbserver-nb/0.log" Feb 02 11:48:49 crc kubenswrapper[4782]: I0202 11:48:49.100512 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_572fc7c8-9560-43d0-ba3e-d3f098494878/openstack-network-exporter/0.log" Feb 02 11:48:49 crc kubenswrapper[4782]: I0202 11:48:49.200118 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_572fc7c8-9560-43d0-ba3e-d3f098494878/ovsdbserver-sb/0.log" Feb 02 11:48:49 crc kubenswrapper[4782]: I0202 11:48:49.323066 4782 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_placement-555cfb6c68-sntkc_9040c71d-579d-4f4e-99cf-bb76289b9aa3/placement-api/0.log" Feb 02 11:48:49 crc kubenswrapper[4782]: I0202 11:48:49.576587 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-555cfb6c68-sntkc_9040c71d-579d-4f4e-99cf-bb76289b9aa3/placement-log/0.log" Feb 02 11:48:49 crc kubenswrapper[4782]: I0202 11:48:49.706090 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_8d450a8e-fd5c-40fe-a4ff-ab265dab04df/setup-container/0.log" Feb 02 11:48:49 crc kubenswrapper[4782]: I0202 11:48:49.876996 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_8d450a8e-fd5c-40fe-a4ff-ab265dab04df/setup-container/0.log" Feb 02 11:48:49 crc kubenswrapper[4782]: I0202 11:48:49.949748 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_b5c627ac-51a8-46a5-9ccd-62072de19909/setup-container/0.log" Feb 02 11:48:50 crc kubenswrapper[4782]: I0202 11:48:50.035384 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_8d450a8e-fd5c-40fe-a4ff-ab265dab04df/rabbitmq/0.log" Feb 02 11:48:50 crc kubenswrapper[4782]: I0202 11:48:50.746343 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_b5c627ac-51a8-46a5-9ccd-62072de19909/setup-container/0.log" Feb 02 11:48:50 crc kubenswrapper[4782]: I0202 11:48:50.781701 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_b5c627ac-51a8-46a5-9ccd-62072de19909/rabbitmq/0.log" Feb 02 11:48:50 crc kubenswrapper[4782]: I0202 11:48:50.825251 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-72nh6_cfbbb165-d7b2-48c8-b778-5c66afa9c34d/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 02 11:48:51 crc kubenswrapper[4782]: I0202 11:48:51.133251 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-59ngw_6cede59e-7f51-455a-8405-3ae76f40e348/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Feb 02 11:48:51 crc kubenswrapper[4782]: I0202 11:48:51.158584 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-7pvt6_e25dd29c-ad04-40c3-a682-352af21186fe/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 02 11:48:51 crc kubenswrapper[4782]: I0202 11:48:51.530953 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-j5858_c80c4993-adf6-44f8-a084-21920191de7f/ssh-known-hosts-edpm-deployment/0.log" Feb 02 11:48:51 crc kubenswrapper[4782]: I0202 11:48:51.541387 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_a5a266a5-ac00-49e1-9443-def4cebe65ad/tempest-tests-tempest-tests-runner/0.log" Feb 02 11:48:51 crc kubenswrapper[4782]: I0202 11:48:51.755851 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_0a460d0d-7c4a-473e-9df8-ca1b1979cb25/test-operator-logs-container/0.log" Feb 02 11:48:52 crc kubenswrapper[4782]: I0202 11:48:52.344909 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-dg4qr_03fa384d-760c-4c0a-b58f-91a876eeb3d7/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Feb 02 11:49:05 crc kubenswrapper[4782]: 
Feb 02 11:49:05 crc kubenswrapper[4782]: I0202 11:49:05.050504 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_17f9dd31-25b9-4b3f-82a6-12096f36308a/memcached/0.log"
Feb 02 11:49:30 crc kubenswrapper[4782]: I0202 11:49:30.442183 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_930f346981e03e3c24c980e9ea06410e1be418518114e573bd8f4b33ab2ckrh_120b307b-b163-4e00-be79-cacf3e7e84e1/util/0.log"
Feb 02 11:49:30 crc kubenswrapper[4782]: I0202 11:49:30.687258 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_930f346981e03e3c24c980e9ea06410e1be418518114e573bd8f4b33ab2ckrh_120b307b-b163-4e00-be79-cacf3e7e84e1/util/0.log"
Feb 02 11:49:30 crc kubenswrapper[4782]: I0202 11:49:30.732578 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_930f346981e03e3c24c980e9ea06410e1be418518114e573bd8f4b33ab2ckrh_120b307b-b163-4e00-be79-cacf3e7e84e1/pull/0.log"
Feb 02 11:49:30 crc kubenswrapper[4782]: I0202 11:49:30.761916 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_930f346981e03e3c24c980e9ea06410e1be418518114e573bd8f4b33ab2ckrh_120b307b-b163-4e00-be79-cacf3e7e84e1/pull/0.log"
Feb 02 11:49:31 crc kubenswrapper[4782]: I0202 11:49:31.092584 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_930f346981e03e3c24c980e9ea06410e1be418518114e573bd8f4b33ab2ckrh_120b307b-b163-4e00-be79-cacf3e7e84e1/pull/0.log"
Feb 02 11:49:31 crc kubenswrapper[4782]: I0202 11:49:31.119053 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_930f346981e03e3c24c980e9ea06410e1be418518114e573bd8f4b33ab2ckrh_120b307b-b163-4e00-be79-cacf3e7e84e1/extract/0.log"
Feb 02 11:49:31 crc kubenswrapper[4782]: I0202 11:49:31.142342 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_930f346981e03e3c24c980e9ea06410e1be418518114e573bd8f4b33ab2ckrh_120b307b-b163-4e00-be79-cacf3e7e84e1/util/0.log"
Feb 02 11:49:31 crc kubenswrapper[4782]: I0202 11:49:31.428092 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7b6c4d8c5f-5ngrn_0aa487d3-a703-4ed6-a44c-bc40eb8272ce/manager/0.log"
Feb 02 11:49:31 crc kubenswrapper[4782]: I0202 11:49:31.437269 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-8d874c8fc-vj4sh_bfafd643-4798-4519-934d-8ec3e2e677d9/manager/0.log"
Feb 02 11:49:31 crc kubenswrapper[4782]: I0202 11:49:31.677705 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-6d9697b7f4-5vj4j_9ba082c6-4f91-48d6-b5ec-198f46abc135/manager/0.log"
Feb 02 11:49:32 crc kubenswrapper[4782]: I0202 11:49:32.142619 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-8886f4c47-v7tzl_b03fe987-deab-47e7-829a-b822ab061f20/manager/0.log"
Feb 02 11:49:32 crc kubenswrapper[4782]: I0202 11:49:32.296323 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-69d6db494d-fkwh5_7fa679ab-d8ad-4dae-9488-c9bbc93ae5d7/manager/0.log"
Feb 02 11:49:32 crc kubenswrapper[4782]: I0202 11:49:32.399223 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5fb775575f-7z5k7_224f30b2-1084-4934-8d06-67975a9776ad/manager/0.log"
Feb 02 11:49:32 crc kubenswrapper[4782]: I0202
11:49:32.692018 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-5f4b8bd54d-v94dv_6a74bdcf-4aaf-4fd7-b24d-7cb1d47d1f27/manager/0.log" Feb 02 11:49:32 crc kubenswrapper[4782]: I0202 11:49:32.772712 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-79955696d6-nsx4j_009bc68d-5c70-42ca-9008-152206fd954d/manager/0.log" Feb 02 11:49:33 crc kubenswrapper[4782]: I0202 11:49:33.226973 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-7dd968899f-scr7v_f44c1b55-d189-42dd-9187-90d9e0713790/manager/0.log" Feb 02 11:49:33 crc kubenswrapper[4782]: I0202 11:49:33.250259 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-84f48565d4-w7gld_6b276ac2-533f-43c9-94a1-f0d0e4eb6993/manager/0.log" Feb 02 11:49:33 crc kubenswrapper[4782]: I0202 11:49:33.324102 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-67bf948998-n88d6_3624e93f-9208-4f82-9f55-12381a637262/manager/0.log" Feb 02 11:49:33 crc kubenswrapper[4782]: I0202 11:49:33.540055 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-585dbc889-l9q78_216a79cc-1b33-43f7-81ff-400a3b6f3d00/manager/0.log" Feb 02 11:49:33 crc kubenswrapper[4782]: I0202 11:49:33.713756 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-55bff696bd-v8zfh_ab3a96ec-3e51-4147-9a58-6596f2c3ad5c/manager/0.log" Feb 02 11:49:33 crc kubenswrapper[4782]: I0202 11:49:33.834890 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-6687f8d877-r9dkb_7e19a281-abaa-462e-abc7-add4acff7865/manager/0.log" Feb 02 11:49:33 crc kubenswrapper[4782]: I0202 11:49:33.998022 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-59c4b45c4dtmpbf_6c7ac81b-49d3-493d-a794-1cffe78eba5e/manager/0.log" Feb 02 11:49:34 crc kubenswrapper[4782]: I0202 11:49:34.287747 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-68b945c8c7-jwf5m_c12a72da-af7d-4f2e-b15d-bb90fa6bd818/operator/0.log" Feb 02 11:49:34 crc kubenswrapper[4782]: I0202 11:49:34.519429 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-ml428_504a2863-da7c-4a03-b973-0f687ca20746/registry-server/0.log" Feb 02 11:49:35 crc kubenswrapper[4782]: I0202 11:49:35.033981 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-788c46999f-9ls2x_2f8b3b48-0c03-4922-8966-a3aaca8ebce3/manager/0.log" Feb 02 11:49:35 crc kubenswrapper[4782]: I0202 11:49:35.162431 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5b964cf4cd-dmncd_6ac6c6b4-9123-4c39-b26f-b07880c1a6c6/manager/0.log" Feb 02 11:49:35 crc kubenswrapper[4782]: I0202 11:49:35.728286 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-6b655fd757-r6hxp_5844bcff-6d6e-4cf4-89af-dfecfc748869/manager/0.log" Feb 02 11:49:35 crc kubenswrapper[4782]: I0202 11:49:35.871330 4782 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-jjztq_83a0d24e-3e0c-4d9a-b735-77c74ceec664/operator/0.log" Feb 02 11:49:36 crc kubenswrapper[4782]: I0202 11:49:36.117173 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-68fc8c869-xnzl4_1661d177-41b5-4df5-886f-f3cb7abd1047/manager/0.log" Feb 02 11:49:36 crc kubenswrapper[4782]: I0202 11:49:36.322012 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-64b5b76f97-ckl5m_c617a97c-fec4-418c-818a-250919ea6882/manager/0.log" Feb 02 11:49:36 crc kubenswrapper[4782]: I0202 11:49:36.381067 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-56f8bfcd9f-82nk8_0fd2f609-78f1-4f82-b405-35b5312baf0d/manager/0.log" Feb 02 11:49:36 crc kubenswrapper[4782]: I0202 11:49:36.593394 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-564965969-k7t28_127c9a45-7187-4afb-bb45-c34a45e67e4e/manager/0.log" Feb 02 11:50:00 crc kubenswrapper[4782]: I0202 11:50:00.090602 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-wqm6f_2fdb9068-c8eb-4a1d-b4ab-c3f2ed70e4c1/control-plane-machine-set-operator/0.log" Feb 02 11:50:00 crc kubenswrapper[4782]: I0202 11:50:00.260970 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-5br4b_063dd8d0-356e-4c11-96fd-6ecee1f28da8/kube-rbac-proxy/0.log" Feb 02 11:50:00 crc kubenswrapper[4782]: I0202 11:50:00.379508 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-5br4b_063dd8d0-356e-4c11-96fd-6ecee1f28da8/machine-api-operator/0.log" Feb 02 11:50:16 crc kubenswrapper[4782]: I0202 11:50:16.270919 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-vcnls_9890a2a1-2fba-4553-87eb-0b70bdc93730/cert-manager-controller/0.log" Feb 02 11:50:16 crc kubenswrapper[4782]: I0202 11:50:16.466566 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-jdfqk_49141326-2954-4715-aaa9-86641ac21fa9/cert-manager-cainjector/0.log" Feb 02 11:50:16 crc kubenswrapper[4782]: I0202 11:50:16.570811 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-9h9rr_d3ae0a8e-231d-4be5-aa1e-ac35dfbabe4a/cert-manager-webhook/0.log" Feb 02 11:50:22 crc kubenswrapper[4782]: I0202 11:50:22.951835 4782 patch_prober.go:28] interesting pod/machine-config-daemon-bhdgk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 11:50:22 crc kubenswrapper[4782]: I0202 11:50:22.953586 4782 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 11:50:29 crc kubenswrapper[4782]: I0202 11:50:29.369763 4782 patch_prober.go:28] interesting 
Feb 02 11:50:29 crc kubenswrapper[4782]: I0202 11:50:29.369763 4782 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Liveness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 02 11:50:29 crc kubenswrapper[4782]: I0202 11:50:29.370383 4782 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 02 11:50:32 crc kubenswrapper[4782]: I0202 11:50:32.338979 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-5zmc7_00048f8e-9669-413d-b215-6a787d5270c0/nmstate-console-plugin/0.log"
Feb 02 11:50:32 crc kubenswrapper[4782]: I0202 11:50:32.493820 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-wjctm_3cf88c2a-32c2-4bd3-8832-b480fbfd1afe/nmstate-handler/0.log"
Feb 02 11:50:32 crc kubenswrapper[4782]: I0202 11:50:32.635788 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-djhxz_a30862c2-daa1-42d6-8815-aabc8387e789/kube-rbac-proxy/0.log"
Feb 02 11:50:32 crc kubenswrapper[4782]: I0202 11:50:32.638275 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-djhxz_a30862c2-daa1-42d6-8815-aabc8387e789/nmstate-metrics/0.log"
Feb 02 11:50:32 crc kubenswrapper[4782]: I0202 11:50:32.817179 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-pfjs6_371da653-9a38-424f-9069-14e251c45e1b/nmstate-operator/0.log"
Feb 02 11:50:32 crc kubenswrapper[4782]: I0202 11:50:32.893057 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-jpc2k_cbf5ad9f-00e3-4b3b-b9b3-37b49e909c7a/nmstate-webhook/0.log"
Feb 02 11:50:52 crc kubenswrapper[4782]: I0202 11:50:52.951628 4782 patch_prober.go:28] interesting pod/machine-config-daemon-bhdgk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 02 11:50:52 crc kubenswrapper[4782]: I0202 11:50:52.952229 4782 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 02 11:51:04 crc kubenswrapper[4782]: I0202 11:51:04.170893 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-wxfg2_1d7526eb-b4a4-4ba7-917c-cef512d2dc6a/kube-rbac-proxy/0.log"
Feb 02 11:51:04 crc kubenswrapper[4782]: I0202 11:51:04.287189 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-wxfg2_1d7526eb-b4a4-4ba7-917c-cef512d2dc6a/controller/0.log"
Feb 02 11:51:04 crc kubenswrapper[4782]: I0202 11:51:04.471757 4782 log.go:25] "Finished parsing log file"
path="/var/log/pods/metallb-system_frr-k8s-s297l_ef8673fb-6fdf-4c32-a573-3583f4188d97/cp-frr-files/0.log" Feb 02 11:51:05 crc kubenswrapper[4782]: I0202 11:51:05.224248 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-s297l_ef8673fb-6fdf-4c32-a573-3583f4188d97/cp-frr-files/0.log" Feb 02 11:51:05 crc kubenswrapper[4782]: I0202 11:51:05.269956 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-s297l_ef8673fb-6fdf-4c32-a573-3583f4188d97/cp-reloader/0.log" Feb 02 11:51:05 crc kubenswrapper[4782]: I0202 11:51:05.321749 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-s297l_ef8673fb-6fdf-4c32-a573-3583f4188d97/cp-metrics/0.log" Feb 02 11:51:05 crc kubenswrapper[4782]: I0202 11:51:05.383508 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-s297l_ef8673fb-6fdf-4c32-a573-3583f4188d97/cp-reloader/0.log" Feb 02 11:51:05 crc kubenswrapper[4782]: I0202 11:51:05.494345 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-s297l_ef8673fb-6fdf-4c32-a573-3583f4188d97/cp-frr-files/0.log" Feb 02 11:51:05 crc kubenswrapper[4782]: I0202 11:51:05.570086 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-s297l_ef8673fb-6fdf-4c32-a573-3583f4188d97/cp-metrics/0.log" Feb 02 11:51:05 crc kubenswrapper[4782]: I0202 11:51:05.582147 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-s297l_ef8673fb-6fdf-4c32-a573-3583f4188d97/cp-reloader/0.log" Feb 02 11:51:05 crc kubenswrapper[4782]: I0202 11:51:05.639592 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-s297l_ef8673fb-6fdf-4c32-a573-3583f4188d97/cp-metrics/0.log" Feb 02 11:51:05 crc kubenswrapper[4782]: I0202 11:51:05.825632 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-s297l_ef8673fb-6fdf-4c32-a573-3583f4188d97/cp-frr-files/0.log" Feb 02 11:51:05 crc kubenswrapper[4782]: I0202 11:51:05.864913 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-s297l_ef8673fb-6fdf-4c32-a573-3583f4188d97/cp-metrics/0.log" Feb 02 11:51:05 crc kubenswrapper[4782]: I0202 11:51:05.965093 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-s297l_ef8673fb-6fdf-4c32-a573-3583f4188d97/cp-reloader/0.log" Feb 02 11:51:05 crc kubenswrapper[4782]: I0202 11:51:05.967861 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-s297l_ef8673fb-6fdf-4c32-a573-3583f4188d97/controller/0.log" Feb 02 11:51:06 crc kubenswrapper[4782]: I0202 11:51:06.243936 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-s297l_ef8673fb-6fdf-4c32-a573-3583f4188d97/kube-rbac-proxy/0.log" Feb 02 11:51:06 crc kubenswrapper[4782]: I0202 11:51:06.252932 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-s297l_ef8673fb-6fdf-4c32-a573-3583f4188d97/frr-metrics/0.log" Feb 02 11:51:06 crc kubenswrapper[4782]: I0202 11:51:06.267058 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-s297l_ef8673fb-6fdf-4c32-a573-3583f4188d97/kube-rbac-proxy-frr/0.log" Feb 02 11:51:06 crc kubenswrapper[4782]: I0202 11:51:06.508382 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-s297l_ef8673fb-6fdf-4c32-a573-3583f4188d97/reloader/0.log" Feb 02 11:51:06 crc kubenswrapper[4782]: I0202 
11:51:06.571573 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-8zl72_a3b12ebe-32d3-4d07-b723-64cd83951d38/frr-k8s-webhook-server/0.log" Feb 02 11:51:06 crc kubenswrapper[4782]: I0202 11:51:06.919943 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-75c875dcc7-xxjwm_46c800cc-f0c4-4bb1-9714-0f9e5f904bc9/manager/0.log" Feb 02 11:51:07 crc kubenswrapper[4782]: I0202 11:51:07.023841 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-758b4c4d7b-vvspt_78f09d2d-237b-4474-b4b8-f59f49997e44/webhook-server/0.log" Feb 02 11:51:07 crc kubenswrapper[4782]: I0202 11:51:07.497311 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-w7rg8_7dcb22a8-d257-446a-8264-63b33c40e24a/kube-rbac-proxy/0.log" Feb 02 11:51:07 crc kubenswrapper[4782]: I0202 11:51:07.737903 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-s297l_ef8673fb-6fdf-4c32-a573-3583f4188d97/frr/0.log" Feb 02 11:51:07 crc kubenswrapper[4782]: I0202 11:51:07.815240 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-w7rg8_7dcb22a8-d257-446a-8264-63b33c40e24a/speaker/0.log" Feb 02 11:51:22 crc kubenswrapper[4782]: I0202 11:51:22.951660 4782 patch_prober.go:28] interesting pod/machine-config-daemon-bhdgk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 11:51:22 crc kubenswrapper[4782]: I0202 11:51:22.952158 4782 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 11:51:22 crc kubenswrapper[4782]: I0202 11:51:22.952204 4782 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" Feb 02 11:51:22 crc kubenswrapper[4782]: I0202 11:51:22.953285 4782 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"81c1275290d3d86dfddaf08310d12fc19619e616245897929ae3beaa237553d0"} pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 11:51:22 crc kubenswrapper[4782]: I0202 11:51:22.953362 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" containerName="machine-config-daemon" containerID="cri-o://81c1275290d3d86dfddaf08310d12fc19619e616245897929ae3beaa237553d0" gracePeriod=600 Feb 02 11:51:23 crc kubenswrapper[4782]: I0202 11:51:23.840576 4782 generic.go:334] "Generic (PLEG): container finished" podID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" containerID="81c1275290d3d86dfddaf08310d12fc19619e616245897929ae3beaa237553d0" exitCode=0 Feb 02 11:51:23 crc kubenswrapper[4782]: I0202 11:51:23.840768 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" 
event={"ID":"7919e98f-cc47-4f3c-9c53-6313850ea7b8","Type":"ContainerDied","Data":"81c1275290d3d86dfddaf08310d12fc19619e616245897929ae3beaa237553d0"} Feb 02 11:51:23 crc kubenswrapper[4782]: I0202 11:51:23.841500 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" event={"ID":"7919e98f-cc47-4f3c-9c53-6313850ea7b8","Type":"ContainerStarted","Data":"c4afb04fcac6d851963a75d7989d5b1b2023415817f09615bbb44452a14cc85d"} Feb 02 11:51:23 crc kubenswrapper[4782]: I0202 11:51:23.841660 4782 scope.go:117] "RemoveContainer" containerID="0b5a1dc843aa5e29d94449712e54fcb7833201c00028ee85179759aa66981ec6" Feb 02 11:51:24 crc kubenswrapper[4782]: I0202 11:51:24.382502 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcgnpv6_499d9fd2-e479-4774-ad4b-aaefa3ac9026/util/0.log" Feb 02 11:51:24 crc kubenswrapper[4782]: I0202 11:51:24.599743 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcgnpv6_499d9fd2-e479-4774-ad4b-aaefa3ac9026/util/0.log" Feb 02 11:51:24 crc kubenswrapper[4782]: I0202 11:51:24.611952 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcgnpv6_499d9fd2-e479-4774-ad4b-aaefa3ac9026/pull/0.log" Feb 02 11:51:24 crc kubenswrapper[4782]: I0202 11:51:24.638269 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcgnpv6_499d9fd2-e479-4774-ad4b-aaefa3ac9026/pull/0.log" Feb 02 11:51:24 crc kubenswrapper[4782]: I0202 11:51:24.847028 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcgnpv6_499d9fd2-e479-4774-ad4b-aaefa3ac9026/util/0.log" Feb 02 11:51:24 crc kubenswrapper[4782]: I0202 11:51:24.894051 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcgnpv6_499d9fd2-e479-4774-ad4b-aaefa3ac9026/pull/0.log" Feb 02 11:51:25 crc kubenswrapper[4782]: I0202 11:51:25.040665 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcgnpv6_499d9fd2-e479-4774-ad4b-aaefa3ac9026/extract/0.log" Feb 02 11:51:25 crc kubenswrapper[4782]: I0202 11:51:25.162467 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713l46mq_c86f666c-8701-45f8-a488-85b4052a02db/util/0.log" Feb 02 11:51:25 crc kubenswrapper[4782]: I0202 11:51:25.319972 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713l46mq_c86f666c-8701-45f8-a488-85b4052a02db/util/0.log" Feb 02 11:51:25 crc kubenswrapper[4782]: I0202 11:51:25.366738 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713l46mq_c86f666c-8701-45f8-a488-85b4052a02db/pull/0.log" Feb 02 11:51:25 crc kubenswrapper[4782]: I0202 11:51:25.383399 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713l46mq_c86f666c-8701-45f8-a488-85b4052a02db/pull/0.log" Feb 
02 11:51:25 crc kubenswrapper[4782]: I0202 11:51:25.581233 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713l46mq_c86f666c-8701-45f8-a488-85b4052a02db/extract/0.log" Feb 02 11:51:25 crc kubenswrapper[4782]: I0202 11:51:25.585256 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713l46mq_c86f666c-8701-45f8-a488-85b4052a02db/pull/0.log" Feb 02 11:51:25 crc kubenswrapper[4782]: I0202 11:51:25.613659 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713l46mq_c86f666c-8701-45f8-a488-85b4052a02db/util/0.log" Feb 02 11:51:26 crc kubenswrapper[4782]: I0202 11:51:26.373346 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-vnt75_c80d3e09-03c8-40f0-a4dd-474da2b5d31d/extract-utilities/0.log" Feb 02 11:51:26 crc kubenswrapper[4782]: I0202 11:51:26.622718 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-vnt75_c80d3e09-03c8-40f0-a4dd-474da2b5d31d/extract-content/0.log" Feb 02 11:51:26 crc kubenswrapper[4782]: I0202 11:51:26.624686 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-vnt75_c80d3e09-03c8-40f0-a4dd-474da2b5d31d/extract-content/0.log" Feb 02 11:51:26 crc kubenswrapper[4782]: I0202 11:51:26.668228 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-vnt75_c80d3e09-03c8-40f0-a4dd-474da2b5d31d/extract-utilities/0.log" Feb 02 11:51:26 crc kubenswrapper[4782]: I0202 11:51:26.837059 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-vnt75_c80d3e09-03c8-40f0-a4dd-474da2b5d31d/extract-utilities/0.log" Feb 02 11:51:26 crc kubenswrapper[4782]: I0202 11:51:26.901016 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-vnt75_c80d3e09-03c8-40f0-a4dd-474da2b5d31d/extract-content/0.log" Feb 02 11:51:27 crc kubenswrapper[4782]: I0202 11:51:27.136599 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qsk6j_a435172a-875e-47e1-8c17-fad9fe2a0baf/extract-utilities/0.log" Feb 02 11:51:27 crc kubenswrapper[4782]: I0202 11:51:27.451938 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-vnt75_c80d3e09-03c8-40f0-a4dd-474da2b5d31d/registry-server/0.log" Feb 02 11:51:27 crc kubenswrapper[4782]: I0202 11:51:27.507884 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qsk6j_a435172a-875e-47e1-8c17-fad9fe2a0baf/extract-content/0.log" Feb 02 11:51:27 crc kubenswrapper[4782]: I0202 11:51:27.529086 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qsk6j_a435172a-875e-47e1-8c17-fad9fe2a0baf/extract-content/0.log" Feb 02 11:51:27 crc kubenswrapper[4782]: I0202 11:51:27.583732 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qsk6j_a435172a-875e-47e1-8c17-fad9fe2a0baf/extract-utilities/0.log" Feb 02 11:51:27 crc kubenswrapper[4782]: I0202 11:51:27.760781 4782 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-qsk6j_a435172a-875e-47e1-8c17-fad9fe2a0baf/extract-utilities/0.log" Feb 02 11:51:27 crc kubenswrapper[4782]: I0202 11:51:27.765945 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qsk6j_a435172a-875e-47e1-8c17-fad9fe2a0baf/extract-content/0.log" Feb 02 11:51:28 crc kubenswrapper[4782]: I0202 11:51:28.160589 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-wv9v8_a044a9d0-6c97-46c4-980a-e5d9940e9f74/marketplace-operator/0.log" Feb 02 11:51:28 crc kubenswrapper[4782]: I0202 11:51:28.357043 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-g864k_9e046c0e-cea4-45b0-8952-1fc5edb01ff5/extract-utilities/0.log" Feb 02 11:51:28 crc kubenswrapper[4782]: I0202 11:51:28.524441 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-g864k_9e046c0e-cea4-45b0-8952-1fc5edb01ff5/extract-utilities/0.log" Feb 02 11:51:28 crc kubenswrapper[4782]: I0202 11:51:28.548764 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-g864k_9e046c0e-cea4-45b0-8952-1fc5edb01ff5/extract-content/0.log" Feb 02 11:51:28 crc kubenswrapper[4782]: I0202 11:51:28.548860 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-g864k_9e046c0e-cea4-45b0-8952-1fc5edb01ff5/extract-content/0.log" Feb 02 11:51:28 crc kubenswrapper[4782]: I0202 11:51:28.630609 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qsk6j_a435172a-875e-47e1-8c17-fad9fe2a0baf/registry-server/0.log" Feb 02 11:51:28 crc kubenswrapper[4782]: I0202 11:51:28.829792 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-g864k_9e046c0e-cea4-45b0-8952-1fc5edb01ff5/extract-content/0.log" Feb 02 11:51:28 crc kubenswrapper[4782]: I0202 11:51:28.860198 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-g864k_9e046c0e-cea4-45b0-8952-1fc5edb01ff5/extract-utilities/0.log" Feb 02 11:51:28 crc kubenswrapper[4782]: I0202 11:51:28.920886 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-h2hxh_fe57942f-8b6f-4400-8ed5-6fb054a514bf/extract-utilities/0.log" Feb 02 11:51:28 crc kubenswrapper[4782]: I0202 11:51:28.965084 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-g864k_9e046c0e-cea4-45b0-8952-1fc5edb01ff5/registry-server/0.log" Feb 02 11:51:29 crc kubenswrapper[4782]: I0202 11:51:29.146778 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-h2hxh_fe57942f-8b6f-4400-8ed5-6fb054a514bf/extract-utilities/0.log" Feb 02 11:51:29 crc kubenswrapper[4782]: I0202 11:51:29.157652 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-h2hxh_fe57942f-8b6f-4400-8ed5-6fb054a514bf/extract-content/0.log" Feb 02 11:51:29 crc kubenswrapper[4782]: I0202 11:51:29.252809 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-h2hxh_fe57942f-8b6f-4400-8ed5-6fb054a514bf/extract-content/0.log" Feb 02 11:51:29 crc kubenswrapper[4782]: I0202 11:51:29.396519 4782 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-h2hxh_fe57942f-8b6f-4400-8ed5-6fb054a514bf/extract-utilities/0.log" Feb 02 11:51:29 crc kubenswrapper[4782]: I0202 11:51:29.441492 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-h2hxh_fe57942f-8b6f-4400-8ed5-6fb054a514bf/extract-content/0.log" Feb 02 11:51:30 crc kubenswrapper[4782]: I0202 11:51:30.077472 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-h2hxh_fe57942f-8b6f-4400-8ed5-6fb054a514bf/registry-server/0.log" Feb 02 11:52:56 crc kubenswrapper[4782]: I0202 11:52:56.124287 4782 scope.go:117] "RemoveContainer" containerID="703fb5905ac3fac00de5179c176a56d6c1ad31055520af949a74d491249a03d8" Feb 02 11:53:52 crc kubenswrapper[4782]: I0202 11:53:52.951867 4782 patch_prober.go:28] interesting pod/machine-config-daemon-bhdgk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 11:53:52 crc kubenswrapper[4782]: I0202 11:53:52.952488 4782 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 11:53:56 crc kubenswrapper[4782]: I0202 11:53:56.185272 4782 scope.go:117] "RemoveContainer" containerID="c42adbb6d99a6a080380d109dd83de68904cf76a54f046bc87430e3aee33f292" Feb 02 11:54:07 crc kubenswrapper[4782]: I0202 11:54:07.389455 4782 generic.go:334] "Generic (PLEG): container finished" podID="2e9c98b4-2bbb-4602-895c-b5e75a84008e" containerID="6e04e2bb498c17a3f4826768bd688615b9dc22330d5283bef168294c5a44a394" exitCode=0 Feb 02 11:54:07 crc kubenswrapper[4782]: I0202 11:54:07.389502 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jxb7c/must-gather-z6fv2" event={"ID":"2e9c98b4-2bbb-4602-895c-b5e75a84008e","Type":"ContainerDied","Data":"6e04e2bb498c17a3f4826768bd688615b9dc22330d5283bef168294c5a44a394"} Feb 02 11:54:07 crc kubenswrapper[4782]: I0202 11:54:07.390914 4782 scope.go:117] "RemoveContainer" containerID="6e04e2bb498c17a3f4826768bd688615b9dc22330d5283bef168294c5a44a394" Feb 02 11:54:08 crc kubenswrapper[4782]: I0202 11:54:08.021706 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-jxb7c_must-gather-z6fv2_2e9c98b4-2bbb-4602-895c-b5e75a84008e/gather/0.log" Feb 02 11:54:16 crc kubenswrapper[4782]: I0202 11:54:16.170575 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-jxb7c/must-gather-z6fv2"] Feb 02 11:54:16 crc kubenswrapper[4782]: I0202 11:54:16.176975 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-jxb7c/must-gather-z6fv2" podUID="2e9c98b4-2bbb-4602-895c-b5e75a84008e" containerName="copy" containerID="cri-o://834b1f7bf9c6e2dad97c21bc571b1ce20a5f5b1236b41db9e26abc8b7d95977d" gracePeriod=2 Feb 02 11:54:16 crc kubenswrapper[4782]: I0202 11:54:16.191624 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-jxb7c/must-gather-z6fv2"] Feb 02 11:54:16 crc kubenswrapper[4782]: I0202 11:54:16.472249 4782 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-must-gather-jxb7c_must-gather-z6fv2_2e9c98b4-2bbb-4602-895c-b5e75a84008e/copy/0.log" Feb 02 11:54:16 crc kubenswrapper[4782]: I0202 11:54:16.473280 4782 generic.go:334] "Generic (PLEG): container finished" podID="2e9c98b4-2bbb-4602-895c-b5e75a84008e" containerID="834b1f7bf9c6e2dad97c21bc571b1ce20a5f5b1236b41db9e26abc8b7d95977d" exitCode=143 Feb 02 11:54:17 crc kubenswrapper[4782]: I0202 11:54:17.557185 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-jxb7c_must-gather-z6fv2_2e9c98b4-2bbb-4602-895c-b5e75a84008e/copy/0.log" Feb 02 11:54:17 crc kubenswrapper[4782]: I0202 11:54:17.558000 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-jxb7c/must-gather-z6fv2" Feb 02 11:54:17 crc kubenswrapper[4782]: I0202 11:54:17.653335 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/2e9c98b4-2bbb-4602-895c-b5e75a84008e-must-gather-output\") pod \"2e9c98b4-2bbb-4602-895c-b5e75a84008e\" (UID: \"2e9c98b4-2bbb-4602-895c-b5e75a84008e\") " Feb 02 11:54:17 crc kubenswrapper[4782]: I0202 11:54:17.653491 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6nkqd\" (UniqueName: \"kubernetes.io/projected/2e9c98b4-2bbb-4602-895c-b5e75a84008e-kube-api-access-6nkqd\") pod \"2e9c98b4-2bbb-4602-895c-b5e75a84008e\" (UID: \"2e9c98b4-2bbb-4602-895c-b5e75a84008e\") " Feb 02 11:54:17 crc kubenswrapper[4782]: I0202 11:54:17.671518 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e9c98b4-2bbb-4602-895c-b5e75a84008e-kube-api-access-6nkqd" (OuterVolumeSpecName: "kube-api-access-6nkqd") pod "2e9c98b4-2bbb-4602-895c-b5e75a84008e" (UID: "2e9c98b4-2bbb-4602-895c-b5e75a84008e"). InnerVolumeSpecName "kube-api-access-6nkqd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:54:17 crc kubenswrapper[4782]: I0202 11:54:17.756359 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6nkqd\" (UniqueName: \"kubernetes.io/projected/2e9c98b4-2bbb-4602-895c-b5e75a84008e-kube-api-access-6nkqd\") on node \"crc\" DevicePath \"\"" Feb 02 11:54:17 crc kubenswrapper[4782]: I0202 11:54:17.863490 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e9c98b4-2bbb-4602-895c-b5e75a84008e-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "2e9c98b4-2bbb-4602-895c-b5e75a84008e" (UID: "2e9c98b4-2bbb-4602-895c-b5e75a84008e"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:54:17 crc kubenswrapper[4782]: I0202 11:54:17.960126 4782 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/2e9c98b4-2bbb-4602-895c-b5e75a84008e-must-gather-output\") on node \"crc\" DevicePath \"\"" Feb 02 11:54:18 crc kubenswrapper[4782]: I0202 11:54:18.492332 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-jxb7c_must-gather-z6fv2_2e9c98b4-2bbb-4602-895c-b5e75a84008e/copy/0.log" Feb 02 11:54:18 crc kubenswrapper[4782]: I0202 11:54:18.492668 4782 scope.go:117] "RemoveContainer" containerID="834b1f7bf9c6e2dad97c21bc571b1ce20a5f5b1236b41db9e26abc8b7d95977d" Feb 02 11:54:18 crc kubenswrapper[4782]: I0202 11:54:18.492822 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-jxb7c/must-gather-z6fv2"
Feb 02 11:54:18 crc kubenswrapper[4782]: I0202 11:54:18.520612 4782 scope.go:117] "RemoveContainer" containerID="6e04e2bb498c17a3f4826768bd688615b9dc22330d5283bef168294c5a44a394"
Feb 02 11:54:18 crc kubenswrapper[4782]: I0202 11:54:18.845100 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e9c98b4-2bbb-4602-895c-b5e75a84008e" path="/var/lib/kubelet/pods/2e9c98b4-2bbb-4602-895c-b5e75a84008e/volumes"
Feb 02 11:54:22 crc kubenswrapper[4782]: I0202 11:54:22.950824 4782 patch_prober.go:28] interesting pod/machine-config-daemon-bhdgk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 02 11:54:22 crc kubenswrapper[4782]: I0202 11:54:22.951184 4782 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 02 11:54:52 crc kubenswrapper[4782]: I0202 11:54:52.951925 4782 patch_prober.go:28] interesting pod/machine-config-daemon-bhdgk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 02 11:54:52 crc kubenswrapper[4782]: I0202 11:54:52.952489 4782 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 02 11:54:52 crc kubenswrapper[4782]: I0202 11:54:52.952542 4782 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk"
Feb 02 11:54:52 crc kubenswrapper[4782]: I0202 11:54:52.953395 4782 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c4afb04fcac6d851963a75d7989d5b1b2023415817f09615bbb44452a14cc85d"} pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 02 11:54:52 crc kubenswrapper[4782]: I0202 11:54:52.953452 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" containerName="machine-config-daemon" containerID="cri-o://c4afb04fcac6d851963a75d7989d5b1b2023415817f09615bbb44452a14cc85d" gracePeriod=600
Feb 02 11:54:53 crc kubenswrapper[4782]: E0202 11:54:53.071879 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8"
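The "back-off 5m0s" in the record above means the restart backoff for this container has reached its ceiling: when a container keeps failing, the kubelet delays each restart with an exponential backoff, by default starting at 10s and doubling per crash up to a 5-minute cap (resetting after the container runs cleanly for 10 minutes). A minimal Go sketch of that arithmetic, under those documented defaults and not taken from kubelet source:

package main

import (
	"fmt"
	"time"
)

// restartDelay models the CrashLoopBackOff wait before the Nth restart:
// 10s doubling per crash, capped at 5m (assumed Kubernetes defaults).
func restartDelay(restarts int) time.Duration {
	const (
		initial  = 10 * time.Second
		maxDelay = 5 * time.Minute
	)
	d := initial
	for i := 0; i < restarts; i++ {
		d *= 2
		if d >= maxDelay {
			return maxDelay
		}
	}
	return d
}

func main() {
	for r := 0; r <= 6; r++ {
		fmt.Printf("restart %d -> wait %v\n", r, restartDelay(r))
	}
	// Prints 10s, 20s, 40s, 1m20s, 2m40s, 5m0s, 5m0s: after roughly five
	// crashes the delay pins at the cap, hence the repeated "back-off 5m0s".
}

This is why, in the records that follow, the kubelet keeps emitting "RemoveContainer" / "Error syncing pod, skipping ... CrashLoopBackOff" pairs every ten-odd seconds without actually starting the container: each sync attempt is rejected until the 5m0s window has elapsed.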
Feb 02 11:54:53 crc kubenswrapper[4782]: I0202 11:54:53.792380 4782 generic.go:334] "Generic (PLEG): container finished" podID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" containerID="c4afb04fcac6d851963a75d7989d5b1b2023415817f09615bbb44452a14cc85d" exitCode=0
Feb 02 11:54:53 crc kubenswrapper[4782]: I0202 11:54:53.792464 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" event={"ID":"7919e98f-cc47-4f3c-9c53-6313850ea7b8","Type":"ContainerDied","Data":"c4afb04fcac6d851963a75d7989d5b1b2023415817f09615bbb44452a14cc85d"}
Feb 02 11:54:53 crc kubenswrapper[4782]: I0202 11:54:53.792890 4782 scope.go:117] "RemoveContainer" containerID="81c1275290d3d86dfddaf08310d12fc19619e616245897929ae3beaa237553d0"
Feb 02 11:54:53 crc kubenswrapper[4782]: I0202 11:54:53.793595 4782 scope.go:117] "RemoveContainer" containerID="c4afb04fcac6d851963a75d7989d5b1b2023415817f09615bbb44452a14cc85d"
Feb 02 11:54:53 crc kubenswrapper[4782]: E0202 11:54:53.793993 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8"
Feb 02 11:55:06 crc kubenswrapper[4782]: I0202 11:55:06.821016 4782 scope.go:117] "RemoveContainer" containerID="c4afb04fcac6d851963a75d7989d5b1b2023415817f09615bbb44452a14cc85d"
Feb 02 11:55:06 crc kubenswrapper[4782]: E0202 11:55:06.822172 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8"
Feb 02 11:55:18 crc kubenswrapper[4782]: I0202 11:55:18.822761 4782 scope.go:117] "RemoveContainer" containerID="c4afb04fcac6d851963a75d7989d5b1b2023415817f09615bbb44452a14cc85d"
Feb 02 11:55:18 crc kubenswrapper[4782]: E0202 11:55:18.826306 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8"
Feb 02 11:55:31 crc kubenswrapper[4782]: I0202 11:55:31.821093 4782 scope.go:117] "RemoveContainer" containerID="c4afb04fcac6d851963a75d7989d5b1b2023415817f09615bbb44452a14cc85d"
Feb 02 11:55:31 crc kubenswrapper[4782]: E0202 11:55:31.821690 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8"
Feb 02 11:55:43 crc kubenswrapper[4782]: I0202 11:55:43.822080 4782 scope.go:117] "RemoveContainer"
containerID="c4afb04fcac6d851963a75d7989d5b1b2023415817f09615bbb44452a14cc85d" Feb 02 11:55:43 crc kubenswrapper[4782]: E0202 11:55:43.822887 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:55:55 crc kubenswrapper[4782]: I0202 11:55:55.820702 4782 scope.go:117] "RemoveContainer" containerID="c4afb04fcac6d851963a75d7989d5b1b2023415817f09615bbb44452a14cc85d" Feb 02 11:55:55 crc kubenswrapper[4782]: E0202 11:55:55.821475 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:56:09 crc kubenswrapper[4782]: I0202 11:56:09.821461 4782 scope.go:117] "RemoveContainer" containerID="c4afb04fcac6d851963a75d7989d5b1b2023415817f09615bbb44452a14cc85d" Feb 02 11:56:09 crc kubenswrapper[4782]: E0202 11:56:09.822410 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:56:23 crc kubenswrapper[4782]: I0202 11:56:23.821348 4782 scope.go:117] "RemoveContainer" containerID="c4afb04fcac6d851963a75d7989d5b1b2023415817f09615bbb44452a14cc85d" Feb 02 11:56:23 crc kubenswrapper[4782]: E0202 11:56:23.822189 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:56:36 crc kubenswrapper[4782]: I0202 11:56:36.821208 4782 scope.go:117] "RemoveContainer" containerID="c4afb04fcac6d851963a75d7989d5b1b2023415817f09615bbb44452a14cc85d" Feb 02 11:56:36 crc kubenswrapper[4782]: E0202 11:56:36.821949 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:56:51 crc kubenswrapper[4782]: I0202 11:56:51.821556 4782 scope.go:117] "RemoveContainer" containerID="c4afb04fcac6d851963a75d7989d5b1b2023415817f09615bbb44452a14cc85d" Feb 02 11:56:51 crc kubenswrapper[4782]: E0202 11:56:51.822470 4782 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:57:06 crc kubenswrapper[4782]: I0202 11:57:06.821453 4782 scope.go:117] "RemoveContainer" containerID="c4afb04fcac6d851963a75d7989d5b1b2023415817f09615bbb44452a14cc85d" Feb 02 11:57:06 crc kubenswrapper[4782]: E0202 11:57:06.822345 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:57:18 crc kubenswrapper[4782]: I0202 11:57:18.822084 4782 scope.go:117] "RemoveContainer" containerID="c4afb04fcac6d851963a75d7989d5b1b2023415817f09615bbb44452a14cc85d" Feb 02 11:57:18 crc kubenswrapper[4782]: E0202 11:57:18.825464 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:57:28 crc kubenswrapper[4782]: I0202 11:57:28.707435 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-z9thr/must-gather-nv9p9"] Feb 02 11:57:28 crc kubenswrapper[4782]: E0202 11:57:28.708288 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e9c98b4-2bbb-4602-895c-b5e75a84008e" containerName="gather" Feb 02 11:57:28 crc kubenswrapper[4782]: I0202 11:57:28.708300 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e9c98b4-2bbb-4602-895c-b5e75a84008e" containerName="gather" Feb 02 11:57:28 crc kubenswrapper[4782]: E0202 11:57:28.708320 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e30d6af5-b4e0-4b72-b18b-d9c2daa0983e" containerName="extract-content" Feb 02 11:57:28 crc kubenswrapper[4782]: I0202 11:57:28.708326 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="e30d6af5-b4e0-4b72-b18b-d9c2daa0983e" containerName="extract-content" Feb 02 11:57:28 crc kubenswrapper[4782]: E0202 11:57:28.708335 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e30d6af5-b4e0-4b72-b18b-d9c2daa0983e" containerName="extract-utilities" Feb 02 11:57:28 crc kubenswrapper[4782]: I0202 11:57:28.708345 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="e30d6af5-b4e0-4b72-b18b-d9c2daa0983e" containerName="extract-utilities" Feb 02 11:57:28 crc kubenswrapper[4782]: E0202 11:57:28.708363 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e9c98b4-2bbb-4602-895c-b5e75a84008e" containerName="copy" Feb 02 11:57:28 crc kubenswrapper[4782]: I0202 11:57:28.708368 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e9c98b4-2bbb-4602-895c-b5e75a84008e" containerName="copy" Feb 02 11:57:28 crc kubenswrapper[4782]: E0202 11:57:28.708376 4782 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="e30d6af5-b4e0-4b72-b18b-d9c2daa0983e" containerName="registry-server" Feb 02 11:57:28 crc kubenswrapper[4782]: I0202 11:57:28.708382 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="e30d6af5-b4e0-4b72-b18b-d9c2daa0983e" containerName="registry-server" Feb 02 11:57:28 crc kubenswrapper[4782]: I0202 11:57:28.708570 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="e30d6af5-b4e0-4b72-b18b-d9c2daa0983e" containerName="registry-server" Feb 02 11:57:28 crc kubenswrapper[4782]: I0202 11:57:28.708587 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e9c98b4-2bbb-4602-895c-b5e75a84008e" containerName="gather" Feb 02 11:57:28 crc kubenswrapper[4782]: I0202 11:57:28.708597 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e9c98b4-2bbb-4602-895c-b5e75a84008e" containerName="copy" Feb 02 11:57:28 crc kubenswrapper[4782]: I0202 11:57:28.709530 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-z9thr/must-gather-nv9p9" Feb 02 11:57:28 crc kubenswrapper[4782]: I0202 11:57:28.717286 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-z9thr"/"default-dockercfg-rdfls" Feb 02 11:57:28 crc kubenswrapper[4782]: I0202 11:57:28.717306 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-z9thr"/"kube-root-ca.crt" Feb 02 11:57:28 crc kubenswrapper[4782]: I0202 11:57:28.717287 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-z9thr"/"openshift-service-ca.crt" Feb 02 11:57:28 crc kubenswrapper[4782]: I0202 11:57:28.740941 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2x6hn\" (UniqueName: \"kubernetes.io/projected/eb6e713d-9e8a-4c2b-85e2-32bddf5c51bb-kube-api-access-2x6hn\") pod \"must-gather-nv9p9\" (UID: \"eb6e713d-9e8a-4c2b-85e2-32bddf5c51bb\") " pod="openshift-must-gather-z9thr/must-gather-nv9p9" Feb 02 11:57:28 crc kubenswrapper[4782]: I0202 11:57:28.741046 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/eb6e713d-9e8a-4c2b-85e2-32bddf5c51bb-must-gather-output\") pod \"must-gather-nv9p9\" (UID: \"eb6e713d-9e8a-4c2b-85e2-32bddf5c51bb\") " pod="openshift-must-gather-z9thr/must-gather-nv9p9" Feb 02 11:57:28 crc kubenswrapper[4782]: I0202 11:57:28.743288 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-z9thr/must-gather-nv9p9"] Feb 02 11:57:28 crc kubenswrapper[4782]: I0202 11:57:28.844210 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2x6hn\" (UniqueName: \"kubernetes.io/projected/eb6e713d-9e8a-4c2b-85e2-32bddf5c51bb-kube-api-access-2x6hn\") pod \"must-gather-nv9p9\" (UID: \"eb6e713d-9e8a-4c2b-85e2-32bddf5c51bb\") " pod="openshift-must-gather-z9thr/must-gather-nv9p9" Feb 02 11:57:28 crc kubenswrapper[4782]: I0202 11:57:28.844300 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/eb6e713d-9e8a-4c2b-85e2-32bddf5c51bb-must-gather-output\") pod \"must-gather-nv9p9\" (UID: \"eb6e713d-9e8a-4c2b-85e2-32bddf5c51bb\") " pod="openshift-must-gather-z9thr/must-gather-nv9p9" Feb 02 11:57:28 crc kubenswrapper[4782]: I0202 11:57:28.846779 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/eb6e713d-9e8a-4c2b-85e2-32bddf5c51bb-must-gather-output\") pod \"must-gather-nv9p9\" (UID: \"eb6e713d-9e8a-4c2b-85e2-32bddf5c51bb\") " pod="openshift-must-gather-z9thr/must-gather-nv9p9" Feb 02 11:57:28 crc kubenswrapper[4782]: I0202 11:57:28.872311 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2x6hn\" (UniqueName: \"kubernetes.io/projected/eb6e713d-9e8a-4c2b-85e2-32bddf5c51bb-kube-api-access-2x6hn\") pod \"must-gather-nv9p9\" (UID: \"eb6e713d-9e8a-4c2b-85e2-32bddf5c51bb\") " pod="openshift-must-gather-z9thr/must-gather-nv9p9" Feb 02 11:57:29 crc kubenswrapper[4782]: I0202 11:57:29.040995 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-z9thr/must-gather-nv9p9" Feb 02 11:57:29 crc kubenswrapper[4782]: I0202 11:57:29.377248 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-z9thr/must-gather-nv9p9"] Feb 02 11:57:30 crc kubenswrapper[4782]: I0202 11:57:30.277623 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-z9thr/must-gather-nv9p9" event={"ID":"eb6e713d-9e8a-4c2b-85e2-32bddf5c51bb","Type":"ContainerStarted","Data":"a39aad1502d59c58497ab35252a2f69a58bda2d7e59be7d6b6e4b713820a9c05"} Feb 02 11:57:30 crc kubenswrapper[4782]: I0202 11:57:30.277935 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-z9thr/must-gather-nv9p9" event={"ID":"eb6e713d-9e8a-4c2b-85e2-32bddf5c51bb","Type":"ContainerStarted","Data":"ba1c4a5ecb3c84062bf28e29a60c32c507ff6975efa3cf7a7cba146d6318eea7"} Feb 02 11:57:30 crc kubenswrapper[4782]: I0202 11:57:30.277947 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-z9thr/must-gather-nv9p9" event={"ID":"eb6e713d-9e8a-4c2b-85e2-32bddf5c51bb","Type":"ContainerStarted","Data":"9846aff41ff9c9c8f92d0edfc667cec9c2050ffcb980a3253c381947014a8899"} Feb 02 11:57:30 crc kubenswrapper[4782]: I0202 11:57:30.296239 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-z9thr/must-gather-nv9p9" podStartSLOduration=2.296219307 podStartE2EDuration="2.296219307s" podCreationTimestamp="2026-02-02 11:57:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 11:57:30.293926201 +0000 UTC m=+4730.178118917" watchObservedRunningTime="2026-02-02 11:57:30.296219307 +0000 UTC m=+4730.180412023" Feb 02 11:57:30 crc kubenswrapper[4782]: I0202 11:57:30.829689 4782 scope.go:117] "RemoveContainer" containerID="c4afb04fcac6d851963a75d7989d5b1b2023415817f09615bbb44452a14cc85d" Feb 02 11:57:30 crc kubenswrapper[4782]: E0202 11:57:30.830006 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:57:35 crc kubenswrapper[4782]: I0202 11:57:35.053022 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-6b6qq"] Feb 02 11:57:35 crc kubenswrapper[4782]: I0202 11:57:35.057669 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6b6qq" Feb 02 11:57:35 crc kubenswrapper[4782]: I0202 11:57:35.079721 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6b6qq"] Feb 02 11:57:35 crc kubenswrapper[4782]: I0202 11:57:35.120931 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d73ba53-5789-4d1a-aa3e-57afb54a7351-catalog-content\") pod \"redhat-operators-6b6qq\" (UID: \"3d73ba53-5789-4d1a-aa3e-57afb54a7351\") " pod="openshift-marketplace/redhat-operators-6b6qq" Feb 02 11:57:35 crc kubenswrapper[4782]: I0202 11:57:35.121289 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d73ba53-5789-4d1a-aa3e-57afb54a7351-utilities\") pod \"redhat-operators-6b6qq\" (UID: \"3d73ba53-5789-4d1a-aa3e-57afb54a7351\") " pod="openshift-marketplace/redhat-operators-6b6qq" Feb 02 11:57:35 crc kubenswrapper[4782]: I0202 11:57:35.121591 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-glrg9\" (UniqueName: \"kubernetes.io/projected/3d73ba53-5789-4d1a-aa3e-57afb54a7351-kube-api-access-glrg9\") pod \"redhat-operators-6b6qq\" (UID: \"3d73ba53-5789-4d1a-aa3e-57afb54a7351\") " pod="openshift-marketplace/redhat-operators-6b6qq" Feb 02 11:57:35 crc kubenswrapper[4782]: I0202 11:57:35.238076 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d73ba53-5789-4d1a-aa3e-57afb54a7351-catalog-content\") pod \"redhat-operators-6b6qq\" (UID: \"3d73ba53-5789-4d1a-aa3e-57afb54a7351\") " pod="openshift-marketplace/redhat-operators-6b6qq" Feb 02 11:57:35 crc kubenswrapper[4782]: I0202 11:57:35.238166 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d73ba53-5789-4d1a-aa3e-57afb54a7351-utilities\") pod \"redhat-operators-6b6qq\" (UID: \"3d73ba53-5789-4d1a-aa3e-57afb54a7351\") " pod="openshift-marketplace/redhat-operators-6b6qq" Feb 02 11:57:35 crc kubenswrapper[4782]: I0202 11:57:35.238376 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-glrg9\" (UniqueName: \"kubernetes.io/projected/3d73ba53-5789-4d1a-aa3e-57afb54a7351-kube-api-access-glrg9\") pod \"redhat-operators-6b6qq\" (UID: \"3d73ba53-5789-4d1a-aa3e-57afb54a7351\") " pod="openshift-marketplace/redhat-operators-6b6qq" Feb 02 11:57:35 crc kubenswrapper[4782]: I0202 11:57:35.239417 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d73ba53-5789-4d1a-aa3e-57afb54a7351-catalog-content\") pod \"redhat-operators-6b6qq\" (UID: \"3d73ba53-5789-4d1a-aa3e-57afb54a7351\") " pod="openshift-marketplace/redhat-operators-6b6qq" Feb 02 11:57:35 crc kubenswrapper[4782]: I0202 11:57:35.239786 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d73ba53-5789-4d1a-aa3e-57afb54a7351-utilities\") pod \"redhat-operators-6b6qq\" (UID: \"3d73ba53-5789-4d1a-aa3e-57afb54a7351\") " pod="openshift-marketplace/redhat-operators-6b6qq" Feb 02 11:57:35 crc kubenswrapper[4782]: I0202 11:57:35.288476 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-glrg9\" (UniqueName: \"kubernetes.io/projected/3d73ba53-5789-4d1a-aa3e-57afb54a7351-kube-api-access-glrg9\") pod \"redhat-operators-6b6qq\" (UID: \"3d73ba53-5789-4d1a-aa3e-57afb54a7351\") " pod="openshift-marketplace/redhat-operators-6b6qq" Feb 02 11:57:35 crc kubenswrapper[4782]: I0202 11:57:35.380306 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6b6qq" Feb 02 11:57:36 crc kubenswrapper[4782]: I0202 11:57:36.022557 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6b6qq"] Feb 02 11:57:36 crc kubenswrapper[4782]: W0202 11:57:36.051262 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3d73ba53_5789_4d1a_aa3e_57afb54a7351.slice/crio-126f7567a1e7d21e2380b41a0349010aa9e59e975b3daed6c27acd2897141f92 WatchSource:0}: Error finding container 126f7567a1e7d21e2380b41a0349010aa9e59e975b3daed6c27acd2897141f92: Status 404 returned error can't find the container with id 126f7567a1e7d21e2380b41a0349010aa9e59e975b3daed6c27acd2897141f92 Feb 02 11:57:36 crc kubenswrapper[4782]: I0202 11:57:36.344116 4782 generic.go:334] "Generic (PLEG): container finished" podID="3d73ba53-5789-4d1a-aa3e-57afb54a7351" containerID="7ff98d1425dfb450105bf2caadabecdec23d9e84426ee591e487f6eb97ee471f" exitCode=0 Feb 02 11:57:36 crc kubenswrapper[4782]: I0202 11:57:36.344261 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6b6qq" event={"ID":"3d73ba53-5789-4d1a-aa3e-57afb54a7351","Type":"ContainerDied","Data":"7ff98d1425dfb450105bf2caadabecdec23d9e84426ee591e487f6eb97ee471f"} Feb 02 11:57:36 crc kubenswrapper[4782]: I0202 11:57:36.344430 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6b6qq" event={"ID":"3d73ba53-5789-4d1a-aa3e-57afb54a7351","Type":"ContainerStarted","Data":"126f7567a1e7d21e2380b41a0349010aa9e59e975b3daed6c27acd2897141f92"} Feb 02 11:57:36 crc kubenswrapper[4782]: I0202 11:57:36.346165 4782 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 02 11:57:36 crc kubenswrapper[4782]: I0202 11:57:36.388662 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-z9thr/crc-debug-f65pz"] Feb 02 11:57:36 crc kubenswrapper[4782]: I0202 11:57:36.390159 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-z9thr/crc-debug-f65pz" Feb 02 11:57:36 crc kubenswrapper[4782]: I0202 11:57:36.561458 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ede9dce9-4392-4e23-b259-19b0c8a0bf5c-host\") pod \"crc-debug-f65pz\" (UID: \"ede9dce9-4392-4e23-b259-19b0c8a0bf5c\") " pod="openshift-must-gather-z9thr/crc-debug-f65pz" Feb 02 11:57:36 crc kubenswrapper[4782]: I0202 11:57:36.561551 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kc6kz\" (UniqueName: \"kubernetes.io/projected/ede9dce9-4392-4e23-b259-19b0c8a0bf5c-kube-api-access-kc6kz\") pod \"crc-debug-f65pz\" (UID: \"ede9dce9-4392-4e23-b259-19b0c8a0bf5c\") " pod="openshift-must-gather-z9thr/crc-debug-f65pz" Feb 02 11:57:36 crc kubenswrapper[4782]: I0202 11:57:36.663667 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kc6kz\" (UniqueName: \"kubernetes.io/projected/ede9dce9-4392-4e23-b259-19b0c8a0bf5c-kube-api-access-kc6kz\") pod \"crc-debug-f65pz\" (UID: \"ede9dce9-4392-4e23-b259-19b0c8a0bf5c\") " pod="openshift-must-gather-z9thr/crc-debug-f65pz" Feb 02 11:57:36 crc kubenswrapper[4782]: I0202 11:57:36.663821 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ede9dce9-4392-4e23-b259-19b0c8a0bf5c-host\") pod \"crc-debug-f65pz\" (UID: \"ede9dce9-4392-4e23-b259-19b0c8a0bf5c\") " pod="openshift-must-gather-z9thr/crc-debug-f65pz" Feb 02 11:57:36 crc kubenswrapper[4782]: I0202 11:57:36.663915 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ede9dce9-4392-4e23-b259-19b0c8a0bf5c-host\") pod \"crc-debug-f65pz\" (UID: \"ede9dce9-4392-4e23-b259-19b0c8a0bf5c\") " pod="openshift-must-gather-z9thr/crc-debug-f65pz" Feb 02 11:57:36 crc kubenswrapper[4782]: I0202 11:57:36.687199 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kc6kz\" (UniqueName: \"kubernetes.io/projected/ede9dce9-4392-4e23-b259-19b0c8a0bf5c-kube-api-access-kc6kz\") pod \"crc-debug-f65pz\" (UID: \"ede9dce9-4392-4e23-b259-19b0c8a0bf5c\") " pod="openshift-must-gather-z9thr/crc-debug-f65pz" Feb 02 11:57:36 crc kubenswrapper[4782]: I0202 11:57:36.742555 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-z9thr/crc-debug-f65pz" Feb 02 11:57:37 crc kubenswrapper[4782]: I0202 11:57:37.356578 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-z9thr/crc-debug-f65pz" event={"ID":"ede9dce9-4392-4e23-b259-19b0c8a0bf5c","Type":"ContainerStarted","Data":"fe69ec48947e8494c70fc7778c1e7d4a1b6894069bea864a21d42aa8f068c309"} Feb 02 11:57:38 crc kubenswrapper[4782]: I0202 11:57:38.368859 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6b6qq" event={"ID":"3d73ba53-5789-4d1a-aa3e-57afb54a7351","Type":"ContainerStarted","Data":"ff88cb7dcc54d4f7c2bde564b79464025f34221fc1232ba8dc381fd7c47bb989"} Feb 02 11:57:38 crc kubenswrapper[4782]: I0202 11:57:38.370843 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-z9thr/crc-debug-f65pz" event={"ID":"ede9dce9-4392-4e23-b259-19b0c8a0bf5c","Type":"ContainerStarted","Data":"7ab2da5b25910e2979891752f1231ad021c201c3354360c45d7159f4ed4df719"} Feb 02 11:57:38 crc kubenswrapper[4782]: I0202 11:57:38.417698 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-z9thr/crc-debug-f65pz" podStartSLOduration=2.417675788 podStartE2EDuration="2.417675788s" podCreationTimestamp="2026-02-02 11:57:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 11:57:38.411730597 +0000 UTC m=+4738.295923313" watchObservedRunningTime="2026-02-02 11:57:38.417675788 +0000 UTC m=+4738.301868504" Feb 02 11:57:43 crc kubenswrapper[4782]: I0202 11:57:43.420384 4782 generic.go:334] "Generic (PLEG): container finished" podID="3d73ba53-5789-4d1a-aa3e-57afb54a7351" containerID="ff88cb7dcc54d4f7c2bde564b79464025f34221fc1232ba8dc381fd7c47bb989" exitCode=0 Feb 02 11:57:43 crc kubenswrapper[4782]: I0202 11:57:43.420470 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6b6qq" event={"ID":"3d73ba53-5789-4d1a-aa3e-57afb54a7351","Type":"ContainerDied","Data":"ff88cb7dcc54d4f7c2bde564b79464025f34221fc1232ba8dc381fd7c47bb989"} Feb 02 11:57:43 crc kubenswrapper[4782]: I0202 11:57:43.821232 4782 scope.go:117] "RemoveContainer" containerID="c4afb04fcac6d851963a75d7989d5b1b2023415817f09615bbb44452a14cc85d" Feb 02 11:57:43 crc kubenswrapper[4782]: E0202 11:57:43.821540 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:57:44 crc kubenswrapper[4782]: I0202 11:57:44.431412 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6b6qq" event={"ID":"3d73ba53-5789-4d1a-aa3e-57afb54a7351","Type":"ContainerStarted","Data":"a0d15dc15a528be29cfabeb5ff91352e3cc2a8dfe02d45694689d53e4319fcea"} Feb 02 11:57:44 crc kubenswrapper[4782]: I0202 11:57:44.466205 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-6b6qq" podStartSLOduration=1.96059737 podStartE2EDuration="9.466181675s" podCreationTimestamp="2026-02-02 11:57:35 +0000 UTC" firstStartedPulling="2026-02-02 11:57:36.345900599 +0000 UTC 
m=+4736.230093315" lastFinishedPulling="2026-02-02 11:57:43.851484904 +0000 UTC m=+4743.735677620" observedRunningTime="2026-02-02 11:57:44.455824007 +0000 UTC m=+4744.340016743" watchObservedRunningTime="2026-02-02 11:57:44.466181675 +0000 UTC m=+4744.350374401" Feb 02 11:57:45 crc kubenswrapper[4782]: I0202 11:57:45.381427 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-6b6qq" Feb 02 11:57:45 crc kubenswrapper[4782]: I0202 11:57:45.382086 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-6b6qq" Feb 02 11:57:46 crc kubenswrapper[4782]: I0202 11:57:46.435798 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-6b6qq" podUID="3d73ba53-5789-4d1a-aa3e-57afb54a7351" containerName="registry-server" probeResult="failure" output=< Feb 02 11:57:46 crc kubenswrapper[4782]: timeout: failed to connect service ":50051" within 1s Feb 02 11:57:46 crc kubenswrapper[4782]: > Feb 02 11:57:54 crc kubenswrapper[4782]: I0202 11:57:54.821745 4782 scope.go:117] "RemoveContainer" containerID="c4afb04fcac6d851963a75d7989d5b1b2023415817f09615bbb44452a14cc85d" Feb 02 11:57:54 crc kubenswrapper[4782]: E0202 11:57:54.823518 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:57:56 crc kubenswrapper[4782]: I0202 11:57:56.436366 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-6b6qq" podUID="3d73ba53-5789-4d1a-aa3e-57afb54a7351" containerName="registry-server" probeResult="failure" output=< Feb 02 11:57:56 crc kubenswrapper[4782]: timeout: failed to connect service ":50051" within 1s Feb 02 11:57:56 crc kubenswrapper[4782]: > Feb 02 11:58:04 crc kubenswrapper[4782]: I0202 11:58:04.957185 4782 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-x2mbg container/catalog-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.18:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 02 11:58:04 crc kubenswrapper[4782]: I0202 11:58:04.957945 4782 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-x2mbg" podUID="e457712f-8cc5-4167-b074-cd8713eb9989" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.18:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 02 11:58:04 crc kubenswrapper[4782]: I0202 11:58:04.959429 4782 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-x2mbg container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.18:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 02 11:58:04 crc kubenswrapper[4782]: I0202 11:58:04.959583 4782 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-x2mbg" podUID="e457712f-8cc5-4167-b074-cd8713eb9989" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.18:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 02 11:58:05 crc kubenswrapper[4782]: I0202 11:58:05.425299 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-6b6qq" Feb 02 11:58:05 crc kubenswrapper[4782]: I0202 11:58:05.481728 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-6b6qq" Feb 02 11:58:06 crc kubenswrapper[4782]: I0202 11:58:06.260185 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6b6qq"] Feb 02 11:58:06 crc kubenswrapper[4782]: I0202 11:58:06.821974 4782 scope.go:117] "RemoveContainer" containerID="c4afb04fcac6d851963a75d7989d5b1b2023415817f09615bbb44452a14cc85d" Feb 02 11:58:06 crc kubenswrapper[4782]: E0202 11:58:06.822485 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:58:07 crc kubenswrapper[4782]: I0202 11:58:07.054695 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-6b6qq" podUID="3d73ba53-5789-4d1a-aa3e-57afb54a7351" containerName="registry-server" containerID="cri-o://a0d15dc15a528be29cfabeb5ff91352e3cc2a8dfe02d45694689d53e4319fcea" gracePeriod=2 Feb 02 11:58:07 crc kubenswrapper[4782]: I0202 11:58:07.534577 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6b6qq" Feb 02 11:58:07 crc kubenswrapper[4782]: I0202 11:58:07.616457 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d73ba53-5789-4d1a-aa3e-57afb54a7351-utilities\") pod \"3d73ba53-5789-4d1a-aa3e-57afb54a7351\" (UID: \"3d73ba53-5789-4d1a-aa3e-57afb54a7351\") " Feb 02 11:58:07 crc kubenswrapper[4782]: I0202 11:58:07.616816 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-glrg9\" (UniqueName: \"kubernetes.io/projected/3d73ba53-5789-4d1a-aa3e-57afb54a7351-kube-api-access-glrg9\") pod \"3d73ba53-5789-4d1a-aa3e-57afb54a7351\" (UID: \"3d73ba53-5789-4d1a-aa3e-57afb54a7351\") " Feb 02 11:58:07 crc kubenswrapper[4782]: I0202 11:58:07.616915 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d73ba53-5789-4d1a-aa3e-57afb54a7351-catalog-content\") pod \"3d73ba53-5789-4d1a-aa3e-57afb54a7351\" (UID: \"3d73ba53-5789-4d1a-aa3e-57afb54a7351\") " Feb 02 11:58:07 crc kubenswrapper[4782]: I0202 11:58:07.618275 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3d73ba53-5789-4d1a-aa3e-57afb54a7351-utilities" (OuterVolumeSpecName: "utilities") pod "3d73ba53-5789-4d1a-aa3e-57afb54a7351" (UID: "3d73ba53-5789-4d1a-aa3e-57afb54a7351"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:58:07 crc kubenswrapper[4782]: I0202 11:58:07.670162 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d73ba53-5789-4d1a-aa3e-57afb54a7351-kube-api-access-glrg9" (OuterVolumeSpecName: "kube-api-access-glrg9") pod "3d73ba53-5789-4d1a-aa3e-57afb54a7351" (UID: "3d73ba53-5789-4d1a-aa3e-57afb54a7351"). InnerVolumeSpecName "kube-api-access-glrg9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:58:07 crc kubenswrapper[4782]: I0202 11:58:07.719264 4782 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d73ba53-5789-4d1a-aa3e-57afb54a7351-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 11:58:07 crc kubenswrapper[4782]: I0202 11:58:07.719298 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-glrg9\" (UniqueName: \"kubernetes.io/projected/3d73ba53-5789-4d1a-aa3e-57afb54a7351-kube-api-access-glrg9\") on node \"crc\" DevicePath \"\"" Feb 02 11:58:07 crc kubenswrapper[4782]: I0202 11:58:07.807799 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3d73ba53-5789-4d1a-aa3e-57afb54a7351-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3d73ba53-5789-4d1a-aa3e-57afb54a7351" (UID: "3d73ba53-5789-4d1a-aa3e-57afb54a7351"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 11:58:07 crc kubenswrapper[4782]: I0202 11:58:07.821146 4782 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d73ba53-5789-4d1a-aa3e-57afb54a7351-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 11:58:08 crc kubenswrapper[4782]: I0202 11:58:08.066631 4782 generic.go:334] "Generic (PLEG): container finished" podID="3d73ba53-5789-4d1a-aa3e-57afb54a7351" containerID="a0d15dc15a528be29cfabeb5ff91352e3cc2a8dfe02d45694689d53e4319fcea" exitCode=0 Feb 02 11:58:08 crc kubenswrapper[4782]: I0202 11:58:08.066715 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6b6qq" event={"ID":"3d73ba53-5789-4d1a-aa3e-57afb54a7351","Type":"ContainerDied","Data":"a0d15dc15a528be29cfabeb5ff91352e3cc2a8dfe02d45694689d53e4319fcea"} Feb 02 11:58:08 crc kubenswrapper[4782]: I0202 11:58:08.066767 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6b6qq" event={"ID":"3d73ba53-5789-4d1a-aa3e-57afb54a7351","Type":"ContainerDied","Data":"126f7567a1e7d21e2380b41a0349010aa9e59e975b3daed6c27acd2897141f92"} Feb 02 11:58:08 crc kubenswrapper[4782]: I0202 11:58:08.066791 4782 scope.go:117] "RemoveContainer" containerID="a0d15dc15a528be29cfabeb5ff91352e3cc2a8dfe02d45694689d53e4319fcea" Feb 02 11:58:08 crc kubenswrapper[4782]: I0202 11:58:08.067037 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6b6qq" Feb 02 11:58:08 crc kubenswrapper[4782]: I0202 11:58:08.103825 4782 scope.go:117] "RemoveContainer" containerID="ff88cb7dcc54d4f7c2bde564b79464025f34221fc1232ba8dc381fd7c47bb989" Feb 02 11:58:08 crc kubenswrapper[4782]: I0202 11:58:08.175132 4782 scope.go:117] "RemoveContainer" containerID="7ff98d1425dfb450105bf2caadabecdec23d9e84426ee591e487f6eb97ee471f" Feb 02 11:58:08 crc kubenswrapper[4782]: I0202 11:58:08.175618 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6b6qq"] Feb 02 11:58:08 crc kubenswrapper[4782]: I0202 11:58:08.207263 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-6b6qq"] Feb 02 11:58:08 crc kubenswrapper[4782]: I0202 11:58:08.266114 4782 scope.go:117] "RemoveContainer" containerID="a0d15dc15a528be29cfabeb5ff91352e3cc2a8dfe02d45694689d53e4319fcea" Feb 02 11:58:08 crc kubenswrapper[4782]: E0202 11:58:08.266516 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a0d15dc15a528be29cfabeb5ff91352e3cc2a8dfe02d45694689d53e4319fcea\": container with ID starting with a0d15dc15a528be29cfabeb5ff91352e3cc2a8dfe02d45694689d53e4319fcea not found: ID does not exist" containerID="a0d15dc15a528be29cfabeb5ff91352e3cc2a8dfe02d45694689d53e4319fcea" Feb 02 11:58:08 crc kubenswrapper[4782]: I0202 11:58:08.266557 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a0d15dc15a528be29cfabeb5ff91352e3cc2a8dfe02d45694689d53e4319fcea"} err="failed to get container status \"a0d15dc15a528be29cfabeb5ff91352e3cc2a8dfe02d45694689d53e4319fcea\": rpc error: code = NotFound desc = could not find container \"a0d15dc15a528be29cfabeb5ff91352e3cc2a8dfe02d45694689d53e4319fcea\": container with ID starting with a0d15dc15a528be29cfabeb5ff91352e3cc2a8dfe02d45694689d53e4319fcea not found: ID does not exist" Feb 02 11:58:08 crc kubenswrapper[4782]: I0202 11:58:08.266588 4782 scope.go:117] "RemoveContainer" containerID="ff88cb7dcc54d4f7c2bde564b79464025f34221fc1232ba8dc381fd7c47bb989" Feb 02 11:58:08 crc kubenswrapper[4782]: E0202 11:58:08.268794 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ff88cb7dcc54d4f7c2bde564b79464025f34221fc1232ba8dc381fd7c47bb989\": container with ID starting with ff88cb7dcc54d4f7c2bde564b79464025f34221fc1232ba8dc381fd7c47bb989 not found: ID does not exist" containerID="ff88cb7dcc54d4f7c2bde564b79464025f34221fc1232ba8dc381fd7c47bb989" Feb 02 11:58:08 crc kubenswrapper[4782]: I0202 11:58:08.268859 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ff88cb7dcc54d4f7c2bde564b79464025f34221fc1232ba8dc381fd7c47bb989"} err="failed to get container status \"ff88cb7dcc54d4f7c2bde564b79464025f34221fc1232ba8dc381fd7c47bb989\": rpc error: code = NotFound desc = could not find container \"ff88cb7dcc54d4f7c2bde564b79464025f34221fc1232ba8dc381fd7c47bb989\": container with ID starting with ff88cb7dcc54d4f7c2bde564b79464025f34221fc1232ba8dc381fd7c47bb989 not found: ID does not exist" Feb 02 11:58:08 crc kubenswrapper[4782]: I0202 11:58:08.268911 4782 scope.go:117] "RemoveContainer" containerID="7ff98d1425dfb450105bf2caadabecdec23d9e84426ee591e487f6eb97ee471f" Feb 02 11:58:08 crc kubenswrapper[4782]: E0202 11:58:08.274872 4782 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"7ff98d1425dfb450105bf2caadabecdec23d9e84426ee591e487f6eb97ee471f\": container with ID starting with 7ff98d1425dfb450105bf2caadabecdec23d9e84426ee591e487f6eb97ee471f not found: ID does not exist" containerID="7ff98d1425dfb450105bf2caadabecdec23d9e84426ee591e487f6eb97ee471f" Feb 02 11:58:08 crc kubenswrapper[4782]: I0202 11:58:08.274945 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ff98d1425dfb450105bf2caadabecdec23d9e84426ee591e487f6eb97ee471f"} err="failed to get container status \"7ff98d1425dfb450105bf2caadabecdec23d9e84426ee591e487f6eb97ee471f\": rpc error: code = NotFound desc = could not find container \"7ff98d1425dfb450105bf2caadabecdec23d9e84426ee591e487f6eb97ee471f\": container with ID starting with 7ff98d1425dfb450105bf2caadabecdec23d9e84426ee591e487f6eb97ee471f not found: ID does not exist" Feb 02 11:58:08 crc kubenswrapper[4782]: I0202 11:58:08.832681 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d73ba53-5789-4d1a-aa3e-57afb54a7351" path="/var/lib/kubelet/pods/3d73ba53-5789-4d1a-aa3e-57afb54a7351/volumes" Feb 02 11:58:18 crc kubenswrapper[4782]: I0202 11:58:18.153056 4782 generic.go:334] "Generic (PLEG): container finished" podID="ede9dce9-4392-4e23-b259-19b0c8a0bf5c" containerID="7ab2da5b25910e2979891752f1231ad021c201c3354360c45d7159f4ed4df719" exitCode=0 Feb 02 11:58:18 crc kubenswrapper[4782]: I0202 11:58:18.153342 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-z9thr/crc-debug-f65pz" event={"ID":"ede9dce9-4392-4e23-b259-19b0c8a0bf5c","Type":"ContainerDied","Data":"7ab2da5b25910e2979891752f1231ad021c201c3354360c45d7159f4ed4df719"} Feb 02 11:58:19 crc kubenswrapper[4782]: I0202 11:58:19.268255 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-z9thr/crc-debug-f65pz" Feb 02 11:58:19 crc kubenswrapper[4782]: I0202 11:58:19.313875 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-z9thr/crc-debug-f65pz"] Feb 02 11:58:19 crc kubenswrapper[4782]: I0202 11:58:19.325252 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-z9thr/crc-debug-f65pz"] Feb 02 11:58:19 crc kubenswrapper[4782]: I0202 11:58:19.443906 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kc6kz\" (UniqueName: \"kubernetes.io/projected/ede9dce9-4392-4e23-b259-19b0c8a0bf5c-kube-api-access-kc6kz\") pod \"ede9dce9-4392-4e23-b259-19b0c8a0bf5c\" (UID: \"ede9dce9-4392-4e23-b259-19b0c8a0bf5c\") " Feb 02 11:58:19 crc kubenswrapper[4782]: I0202 11:58:19.443998 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ede9dce9-4392-4e23-b259-19b0c8a0bf5c-host\") pod \"ede9dce9-4392-4e23-b259-19b0c8a0bf5c\" (UID: \"ede9dce9-4392-4e23-b259-19b0c8a0bf5c\") " Feb 02 11:58:19 crc kubenswrapper[4782]: I0202 11:58:19.444500 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ede9dce9-4392-4e23-b259-19b0c8a0bf5c-host" (OuterVolumeSpecName: "host") pod "ede9dce9-4392-4e23-b259-19b0c8a0bf5c" (UID: "ede9dce9-4392-4e23-b259-19b0c8a0bf5c"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 11:58:19 crc kubenswrapper[4782]: I0202 11:58:19.463581 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ede9dce9-4392-4e23-b259-19b0c8a0bf5c-kube-api-access-kc6kz" (OuterVolumeSpecName: "kube-api-access-kc6kz") pod "ede9dce9-4392-4e23-b259-19b0c8a0bf5c" (UID: "ede9dce9-4392-4e23-b259-19b0c8a0bf5c"). InnerVolumeSpecName "kube-api-access-kc6kz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:58:19 crc kubenswrapper[4782]: I0202 11:58:19.546198 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kc6kz\" (UniqueName: \"kubernetes.io/projected/ede9dce9-4392-4e23-b259-19b0c8a0bf5c-kube-api-access-kc6kz\") on node \"crc\" DevicePath \"\"" Feb 02 11:58:19 crc kubenswrapper[4782]: I0202 11:58:19.546237 4782 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ede9dce9-4392-4e23-b259-19b0c8a0bf5c-host\") on node \"crc\" DevicePath \"\"" Feb 02 11:58:20 crc kubenswrapper[4782]: I0202 11:58:20.171495 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fe69ec48947e8494c70fc7778c1e7d4a1b6894069bea864a21d42aa8f068c309" Feb 02 11:58:20 crc kubenswrapper[4782]: I0202 11:58:20.171554 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-z9thr/crc-debug-f65pz" Feb 02 11:58:20 crc kubenswrapper[4782]: I0202 11:58:20.563793 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-z9thr/crc-debug-xnbq8"] Feb 02 11:58:20 crc kubenswrapper[4782]: E0202 11:58:20.564566 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d73ba53-5789-4d1a-aa3e-57afb54a7351" containerName="extract-utilities" Feb 02 11:58:20 crc kubenswrapper[4782]: I0202 11:58:20.564584 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d73ba53-5789-4d1a-aa3e-57afb54a7351" containerName="extract-utilities" Feb 02 11:58:20 crc kubenswrapper[4782]: E0202 11:58:20.564593 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d73ba53-5789-4d1a-aa3e-57afb54a7351" containerName="registry-server" Feb 02 11:58:20 crc kubenswrapper[4782]: I0202 11:58:20.564599 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d73ba53-5789-4d1a-aa3e-57afb54a7351" containerName="registry-server" Feb 02 11:58:20 crc kubenswrapper[4782]: E0202 11:58:20.564615 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ede9dce9-4392-4e23-b259-19b0c8a0bf5c" containerName="container-00" Feb 02 11:58:20 crc kubenswrapper[4782]: I0202 11:58:20.564621 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="ede9dce9-4392-4e23-b259-19b0c8a0bf5c" containerName="container-00" Feb 02 11:58:20 crc kubenswrapper[4782]: E0202 11:58:20.564659 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d73ba53-5789-4d1a-aa3e-57afb54a7351" containerName="extract-content" Feb 02 11:58:20 crc kubenswrapper[4782]: I0202 11:58:20.564666 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d73ba53-5789-4d1a-aa3e-57afb54a7351" containerName="extract-content" Feb 02 11:58:20 crc kubenswrapper[4782]: I0202 11:58:20.564905 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="ede9dce9-4392-4e23-b259-19b0c8a0bf5c" containerName="container-00" Feb 02 11:58:20 crc kubenswrapper[4782]: I0202 11:58:20.564930 4782 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="3d73ba53-5789-4d1a-aa3e-57afb54a7351" containerName="registry-server" Feb 02 11:58:20 crc kubenswrapper[4782]: I0202 11:58:20.565610 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-z9thr/crc-debug-xnbq8" Feb 02 11:58:20 crc kubenswrapper[4782]: I0202 11:58:20.666245 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kh4tp\" (UniqueName: \"kubernetes.io/projected/96087c2d-a148-4006-b4d2-d02fab270407-kube-api-access-kh4tp\") pod \"crc-debug-xnbq8\" (UID: \"96087c2d-a148-4006-b4d2-d02fab270407\") " pod="openshift-must-gather-z9thr/crc-debug-xnbq8" Feb 02 11:58:20 crc kubenswrapper[4782]: I0202 11:58:20.666404 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/96087c2d-a148-4006-b4d2-d02fab270407-host\") pod \"crc-debug-xnbq8\" (UID: \"96087c2d-a148-4006-b4d2-d02fab270407\") " pod="openshift-must-gather-z9thr/crc-debug-xnbq8" Feb 02 11:58:20 crc kubenswrapper[4782]: I0202 11:58:20.767937 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kh4tp\" (UniqueName: \"kubernetes.io/projected/96087c2d-a148-4006-b4d2-d02fab270407-kube-api-access-kh4tp\") pod \"crc-debug-xnbq8\" (UID: \"96087c2d-a148-4006-b4d2-d02fab270407\") " pod="openshift-must-gather-z9thr/crc-debug-xnbq8" Feb 02 11:58:20 crc kubenswrapper[4782]: I0202 11:58:20.768112 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/96087c2d-a148-4006-b4d2-d02fab270407-host\") pod \"crc-debug-xnbq8\" (UID: \"96087c2d-a148-4006-b4d2-d02fab270407\") " pod="openshift-must-gather-z9thr/crc-debug-xnbq8" Feb 02 11:58:20 crc kubenswrapper[4782]: I0202 11:58:20.768281 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/96087c2d-a148-4006-b4d2-d02fab270407-host\") pod \"crc-debug-xnbq8\" (UID: \"96087c2d-a148-4006-b4d2-d02fab270407\") " pod="openshift-must-gather-z9thr/crc-debug-xnbq8" Feb 02 11:58:20 crc kubenswrapper[4782]: I0202 11:58:20.798382 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kh4tp\" (UniqueName: \"kubernetes.io/projected/96087c2d-a148-4006-b4d2-d02fab270407-kube-api-access-kh4tp\") pod \"crc-debug-xnbq8\" (UID: \"96087c2d-a148-4006-b4d2-d02fab270407\") " pod="openshift-must-gather-z9thr/crc-debug-xnbq8" Feb 02 11:58:20 crc kubenswrapper[4782]: I0202 11:58:20.850425 4782 scope.go:117] "RemoveContainer" containerID="c4afb04fcac6d851963a75d7989d5b1b2023415817f09615bbb44452a14cc85d" Feb 02 11:58:20 crc kubenswrapper[4782]: I0202 11:58:20.855516 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ede9dce9-4392-4e23-b259-19b0c8a0bf5c" path="/var/lib/kubelet/pods/ede9dce9-4392-4e23-b259-19b0c8a0bf5c/volumes" Feb 02 11:58:20 crc kubenswrapper[4782]: E0202 11:58:20.862112 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:58:20 crc kubenswrapper[4782]: I0202 11:58:20.891134 4782 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-z9thr/crc-debug-xnbq8" Feb 02 11:58:21 crc kubenswrapper[4782]: W0202 11:58:21.489437 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod96087c2d_a148_4006_b4d2_d02fab270407.slice/crio-682c66471470559877879166a60766767e4fa24a072636269a1a85240a45b511 WatchSource:0}: Error finding container 682c66471470559877879166a60766767e4fa24a072636269a1a85240a45b511: Status 404 returned error can't find the container with id 682c66471470559877879166a60766767e4fa24a072636269a1a85240a45b511 Feb 02 11:58:22 crc kubenswrapper[4782]: I0202 11:58:22.188897 4782 generic.go:334] "Generic (PLEG): container finished" podID="96087c2d-a148-4006-b4d2-d02fab270407" containerID="a9fd7797efa31c327d9d58648ecbf7fcdf9d4bdbfe828047ee16f1309a15a956" exitCode=0 Feb 02 11:58:22 crc kubenswrapper[4782]: I0202 11:58:22.189338 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-z9thr/crc-debug-xnbq8" event={"ID":"96087c2d-a148-4006-b4d2-d02fab270407","Type":"ContainerDied","Data":"a9fd7797efa31c327d9d58648ecbf7fcdf9d4bdbfe828047ee16f1309a15a956"} Feb 02 11:58:22 crc kubenswrapper[4782]: I0202 11:58:22.189368 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-z9thr/crc-debug-xnbq8" event={"ID":"96087c2d-a148-4006-b4d2-d02fab270407","Type":"ContainerStarted","Data":"682c66471470559877879166a60766767e4fa24a072636269a1a85240a45b511"} Feb 02 11:58:22 crc kubenswrapper[4782]: I0202 11:58:22.573422 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-z9thr/crc-debug-xnbq8"] Feb 02 11:58:22 crc kubenswrapper[4782]: I0202 11:58:22.581313 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-z9thr/crc-debug-xnbq8"] Feb 02 11:58:23 crc kubenswrapper[4782]: I0202 11:58:23.360720 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-z9thr/crc-debug-xnbq8" Feb 02 11:58:23 crc kubenswrapper[4782]: I0202 11:58:23.441607 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kh4tp\" (UniqueName: \"kubernetes.io/projected/96087c2d-a148-4006-b4d2-d02fab270407-kube-api-access-kh4tp\") pod \"96087c2d-a148-4006-b4d2-d02fab270407\" (UID: \"96087c2d-a148-4006-b4d2-d02fab270407\") " Feb 02 11:58:23 crc kubenswrapper[4782]: I0202 11:58:23.441894 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/96087c2d-a148-4006-b4d2-d02fab270407-host\") pod \"96087c2d-a148-4006-b4d2-d02fab270407\" (UID: \"96087c2d-a148-4006-b4d2-d02fab270407\") " Feb 02 11:58:23 crc kubenswrapper[4782]: I0202 11:58:23.441963 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/96087c2d-a148-4006-b4d2-d02fab270407-host" (OuterVolumeSpecName: "host") pod "96087c2d-a148-4006-b4d2-d02fab270407" (UID: "96087c2d-a148-4006-b4d2-d02fab270407"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 11:58:23 crc kubenswrapper[4782]: I0202 11:58:23.442436 4782 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/96087c2d-a148-4006-b4d2-d02fab270407-host\") on node \"crc\" DevicePath \"\"" Feb 02 11:58:23 crc kubenswrapper[4782]: I0202 11:58:23.447269 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96087c2d-a148-4006-b4d2-d02fab270407-kube-api-access-kh4tp" (OuterVolumeSpecName: "kube-api-access-kh4tp") pod "96087c2d-a148-4006-b4d2-d02fab270407" (UID: "96087c2d-a148-4006-b4d2-d02fab270407"). InnerVolumeSpecName "kube-api-access-kh4tp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:58:23 crc kubenswrapper[4782]: I0202 11:58:23.545215 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kh4tp\" (UniqueName: \"kubernetes.io/projected/96087c2d-a148-4006-b4d2-d02fab270407-kube-api-access-kh4tp\") on node \"crc\" DevicePath \"\"" Feb 02 11:58:23 crc kubenswrapper[4782]: I0202 11:58:23.802043 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-z9thr/crc-debug-vksdl"] Feb 02 11:58:23 crc kubenswrapper[4782]: E0202 11:58:23.802488 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96087c2d-a148-4006-b4d2-d02fab270407" containerName="container-00" Feb 02 11:58:23 crc kubenswrapper[4782]: I0202 11:58:23.802510 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="96087c2d-a148-4006-b4d2-d02fab270407" containerName="container-00" Feb 02 11:58:23 crc kubenswrapper[4782]: I0202 11:58:23.802741 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="96087c2d-a148-4006-b4d2-d02fab270407" containerName="container-00" Feb 02 11:58:23 crc kubenswrapper[4782]: I0202 11:58:23.803355 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-z9thr/crc-debug-vksdl" Feb 02 11:58:23 crc kubenswrapper[4782]: I0202 11:58:23.850105 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/edb9a738-cecb-460b-84b4-7b04cff0a2f5-host\") pod \"crc-debug-vksdl\" (UID: \"edb9a738-cecb-460b-84b4-7b04cff0a2f5\") " pod="openshift-must-gather-z9thr/crc-debug-vksdl" Feb 02 11:58:23 crc kubenswrapper[4782]: I0202 11:58:23.850314 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4npz4\" (UniqueName: \"kubernetes.io/projected/edb9a738-cecb-460b-84b4-7b04cff0a2f5-kube-api-access-4npz4\") pod \"crc-debug-vksdl\" (UID: \"edb9a738-cecb-460b-84b4-7b04cff0a2f5\") " pod="openshift-must-gather-z9thr/crc-debug-vksdl" Feb 02 11:58:23 crc kubenswrapper[4782]: I0202 11:58:23.952376 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4npz4\" (UniqueName: \"kubernetes.io/projected/edb9a738-cecb-460b-84b4-7b04cff0a2f5-kube-api-access-4npz4\") pod \"crc-debug-vksdl\" (UID: \"edb9a738-cecb-460b-84b4-7b04cff0a2f5\") " pod="openshift-must-gather-z9thr/crc-debug-vksdl" Feb 02 11:58:23 crc kubenswrapper[4782]: I0202 11:58:23.952832 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/edb9a738-cecb-460b-84b4-7b04cff0a2f5-host\") pod \"crc-debug-vksdl\" (UID: \"edb9a738-cecb-460b-84b4-7b04cff0a2f5\") " pod="openshift-must-gather-z9thr/crc-debug-vksdl" Feb 02 11:58:23 crc kubenswrapper[4782]: I0202 11:58:23.953025 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/edb9a738-cecb-460b-84b4-7b04cff0a2f5-host\") pod \"crc-debug-vksdl\" (UID: \"edb9a738-cecb-460b-84b4-7b04cff0a2f5\") " pod="openshift-must-gather-z9thr/crc-debug-vksdl" Feb 02 11:58:23 crc kubenswrapper[4782]: I0202 11:58:23.986506 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4npz4\" (UniqueName: \"kubernetes.io/projected/edb9a738-cecb-460b-84b4-7b04cff0a2f5-kube-api-access-4npz4\") pod \"crc-debug-vksdl\" (UID: \"edb9a738-cecb-460b-84b4-7b04cff0a2f5\") " pod="openshift-must-gather-z9thr/crc-debug-vksdl" Feb 02 11:58:24 crc kubenswrapper[4782]: I0202 11:58:24.119277 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-z9thr/crc-debug-vksdl" Feb 02 11:58:24 crc kubenswrapper[4782]: I0202 11:58:24.211381 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-z9thr/crc-debug-vksdl" event={"ID":"edb9a738-cecb-460b-84b4-7b04cff0a2f5","Type":"ContainerStarted","Data":"b0e9df1e50ca37117d28e9ddb1f196beea94ef1d7851b2597f2f7b13a32c9ca3"} Feb 02 11:58:24 crc kubenswrapper[4782]: I0202 11:58:24.215734 4782 scope.go:117] "RemoveContainer" containerID="a9fd7797efa31c327d9d58648ecbf7fcdf9d4bdbfe828047ee16f1309a15a956" Feb 02 11:58:24 crc kubenswrapper[4782]: I0202 11:58:24.215800 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-z9thr/crc-debug-xnbq8" Feb 02 11:58:24 crc kubenswrapper[4782]: I0202 11:58:24.830837 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96087c2d-a148-4006-b4d2-d02fab270407" path="/var/lib/kubelet/pods/96087c2d-a148-4006-b4d2-d02fab270407/volumes" Feb 02 11:58:25 crc kubenswrapper[4782]: I0202 11:58:25.227062 4782 generic.go:334] "Generic (PLEG): container finished" podID="edb9a738-cecb-460b-84b4-7b04cff0a2f5" containerID="1c7ae012e4d8b0b39bc93c5bf707a4721768122f6b9013428cc6daf0c85609b7" exitCode=0 Feb 02 11:58:25 crc kubenswrapper[4782]: I0202 11:58:25.227269 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-z9thr/crc-debug-vksdl" event={"ID":"edb9a738-cecb-460b-84b4-7b04cff0a2f5","Type":"ContainerDied","Data":"1c7ae012e4d8b0b39bc93c5bf707a4721768122f6b9013428cc6daf0c85609b7"} Feb 02 11:58:25 crc kubenswrapper[4782]: I0202 11:58:25.278129 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-z9thr/crc-debug-vksdl"] Feb 02 11:58:25 crc kubenswrapper[4782]: I0202 11:58:25.287313 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-z9thr/crc-debug-vksdl"] Feb 02 11:58:26 crc kubenswrapper[4782]: I0202 11:58:26.339527 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-z9thr/crc-debug-vksdl" Feb 02 11:58:26 crc kubenswrapper[4782]: I0202 11:58:26.417943 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/edb9a738-cecb-460b-84b4-7b04cff0a2f5-host\") pod \"edb9a738-cecb-460b-84b4-7b04cff0a2f5\" (UID: \"edb9a738-cecb-460b-84b4-7b04cff0a2f5\") " Feb 02 11:58:26 crc kubenswrapper[4782]: I0202 11:58:26.418302 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4npz4\" (UniqueName: \"kubernetes.io/projected/edb9a738-cecb-460b-84b4-7b04cff0a2f5-kube-api-access-4npz4\") pod \"edb9a738-cecb-460b-84b4-7b04cff0a2f5\" (UID: \"edb9a738-cecb-460b-84b4-7b04cff0a2f5\") " Feb 02 11:58:26 crc kubenswrapper[4782]: I0202 11:58:26.419485 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/edb9a738-cecb-460b-84b4-7b04cff0a2f5-host" (OuterVolumeSpecName: "host") pod "edb9a738-cecb-460b-84b4-7b04cff0a2f5" (UID: "edb9a738-cecb-460b-84b4-7b04cff0a2f5"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 11:58:26 crc kubenswrapper[4782]: I0202 11:58:26.427970 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/edb9a738-cecb-460b-84b4-7b04cff0a2f5-kube-api-access-4npz4" (OuterVolumeSpecName: "kube-api-access-4npz4") pod "edb9a738-cecb-460b-84b4-7b04cff0a2f5" (UID: "edb9a738-cecb-460b-84b4-7b04cff0a2f5"). InnerVolumeSpecName "kube-api-access-4npz4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 11:58:26 crc kubenswrapper[4782]: I0202 11:58:26.520280 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4npz4\" (UniqueName: \"kubernetes.io/projected/edb9a738-cecb-460b-84b4-7b04cff0a2f5-kube-api-access-4npz4\") on node \"crc\" DevicePath \"\"" Feb 02 11:58:26 crc kubenswrapper[4782]: I0202 11:58:26.520575 4782 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/edb9a738-cecb-460b-84b4-7b04cff0a2f5-host\") on node \"crc\" DevicePath \"\"" Feb 02 11:58:26 crc kubenswrapper[4782]: I0202 11:58:26.835999 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="edb9a738-cecb-460b-84b4-7b04cff0a2f5" path="/var/lib/kubelet/pods/edb9a738-cecb-460b-84b4-7b04cff0a2f5/volumes" Feb 02 11:58:27 crc kubenswrapper[4782]: I0202 11:58:27.245227 4782 scope.go:117] "RemoveContainer" containerID="1c7ae012e4d8b0b39bc93c5bf707a4721768122f6b9013428cc6daf0c85609b7" Feb 02 11:58:27 crc kubenswrapper[4782]: I0202 11:58:27.245302 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-z9thr/crc-debug-vksdl" Feb 02 11:58:32 crc kubenswrapper[4782]: I0202 11:58:32.826912 4782 scope.go:117] "RemoveContainer" containerID="c4afb04fcac6d851963a75d7989d5b1b2023415817f09615bbb44452a14cc85d" Feb 02 11:58:32 crc kubenswrapper[4782]: E0202 11:58:32.827802 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:58:43 crc kubenswrapper[4782]: I0202 11:58:43.821367 4782 scope.go:117] "RemoveContainer" containerID="c4afb04fcac6d851963a75d7989d5b1b2023415817f09615bbb44452a14cc85d" Feb 02 11:58:43 crc kubenswrapper[4782]: E0202 11:58:43.822151 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:58:54 crc kubenswrapper[4782]: I0202 11:58:54.821635 4782 scope.go:117] "RemoveContainer" containerID="c4afb04fcac6d851963a75d7989d5b1b2023415817f09615bbb44452a14cc85d" Feb 02 11:58:54 crc kubenswrapper[4782]: E0202 11:58:54.822341 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:59:06 crc kubenswrapper[4782]: I0202 11:59:06.821629 4782 scope.go:117] "RemoveContainer" containerID="c4afb04fcac6d851963a75d7989d5b1b2023415817f09615bbb44452a14cc85d" Feb 02 11:59:06 crc kubenswrapper[4782]: E0202 11:59:06.822364 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:59:18 crc kubenswrapper[4782]: I0202 11:59:18.821512 4782 scope.go:117] "RemoveContainer" containerID="c4afb04fcac6d851963a75d7989d5b1b2023415817f09615bbb44452a14cc85d" Feb 02 11:59:18 crc kubenswrapper[4782]: E0202 11:59:18.822296 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:59:33 crc kubenswrapper[4782]: I0202 11:59:33.800361 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-77c4d8f8d8-7qmjv_52b9ad9f-f95d-4839-9531-4f0f11ca86ff/barbican-api/0.log" Feb 02 11:59:33 crc kubenswrapper[4782]: I0202 11:59:33.821213 4782 scope.go:117] "RemoveContainer" containerID="c4afb04fcac6d851963a75d7989d5b1b2023415817f09615bbb44452a14cc85d" Feb 02 11:59:33 crc kubenswrapper[4782]: E0202 11:59:33.821569 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:59:33 crc kubenswrapper[4782]: I0202 11:59:33.949080 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-77c4d8f8d8-7qmjv_52b9ad9f-f95d-4839-9531-4f0f11ca86ff/barbican-api-log/0.log" Feb 02 11:59:33 crc kubenswrapper[4782]: I0202 11:59:33.992466 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-6b54d776c6-xrdvf_ea0f5849-bbf6-4184-8b8c-8e11cd8da661/barbican-keystone-listener/0.log" Feb 02 11:59:34 crc kubenswrapper[4782]: I0202 11:59:34.166570 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-6b54d776c6-xrdvf_ea0f5849-bbf6-4184-8b8c-8e11cd8da661/barbican-keystone-listener-log/0.log" Feb 02 11:59:34 crc kubenswrapper[4782]: I0202 11:59:34.269655 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-5bbfd966d5-c6jc5_141e9d68-e6ef-441d-aede-3bb1fdcc4d5f/barbican-worker/0.log" Feb 02 11:59:34 crc kubenswrapper[4782]: I0202 11:59:34.324092 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-5bbfd966d5-c6jc5_141e9d68-e6ef-441d-aede-3bb1fdcc4d5f/barbican-worker-log/0.log" Feb 02 11:59:34 crc kubenswrapper[4782]: I0202 11:59:34.579145 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-pdnch_14dddbe2-21a7-417a-8d21-ab97f18aef5d/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Feb 02 11:59:34 crc kubenswrapper[4782]: I0202 11:59:34.615667 4782 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ceilometer-0_5cbff496-9e10-4868-ab32-849a8b238474/ceilometer-central-agent/0.log" Feb 02 11:59:34 crc kubenswrapper[4782]: I0202 11:59:34.728988 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_5cbff496-9e10-4868-ab32-849a8b238474/ceilometer-notification-agent/0.log" Feb 02 11:59:34 crc kubenswrapper[4782]: I0202 11:59:34.777173 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_5cbff496-9e10-4868-ab32-849a8b238474/proxy-httpd/0.log" Feb 02 11:59:34 crc kubenswrapper[4782]: I0202 11:59:34.872097 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_5cbff496-9e10-4868-ab32-849a8b238474/sg-core/0.log" Feb 02 11:59:35 crc kubenswrapper[4782]: I0202 11:59:35.003734 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceph-client-edpm-deployment-openstack-edpm-ipam-s65zb_c0c31114-71d7-4d0b-9ad7-74945ed819e3/ceph-client-edpm-deployment-openstack-edpm-ipam/0.log" Feb 02 11:59:35 crc kubenswrapper[4782]: I0202 11:59:35.122212 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-tp529_df6c52bb-3b4a-4f78-94d0-edee0f68400c/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam/0.log" Feb 02 11:59:35 crc kubenswrapper[4782]: I0202 11:59:35.352062 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_3d71c3db-1389-4568-bb5e-c87dc6a60ddd/cinder-api/0.log" Feb 02 11:59:35 crc kubenswrapper[4782]: I0202 11:59:35.399550 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_3d71c3db-1389-4568-bb5e-c87dc6a60ddd/cinder-api-log/0.log" Feb 02 11:59:35 crc kubenswrapper[4782]: I0202 11:59:35.605927 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a/probe/0.log" Feb 02 11:59:35 crc kubenswrapper[4782]: I0202 11:59:35.709778 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-wnbx7"] Feb 02 11:59:35 crc kubenswrapper[4782]: E0202 11:59:35.710141 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="edb9a738-cecb-460b-84b4-7b04cff0a2f5" containerName="container-00" Feb 02 11:59:35 crc kubenswrapper[4782]: I0202 11:59:35.710152 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="edb9a738-cecb-460b-84b4-7b04cff0a2f5" containerName="container-00" Feb 02 11:59:35 crc kubenswrapper[4782]: I0202 11:59:35.710359 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="edb9a738-cecb-460b-84b4-7b04cff0a2f5" containerName="container-00" Feb 02 11:59:35 crc kubenswrapper[4782]: I0202 11:59:35.711610 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wnbx7" Feb 02 11:59:35 crc kubenswrapper[4782]: I0202 11:59:35.733813 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wnbx7"] Feb 02 11:59:35 crc kubenswrapper[4782]: I0202 11:59:35.782017 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_d2b6a5bd-a9ae-4bc9-91ed-ca1ac5d7489a/cinder-backup/0.log" Feb 02 11:59:35 crc kubenswrapper[4782]: I0202 11:59:35.825912 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c2f854e-81d7-41dc-a93a-199f54f82561-utilities\") pod \"community-operators-wnbx7\" (UID: \"1c2f854e-81d7-41dc-a93a-199f54f82561\") " pod="openshift-marketplace/community-operators-wnbx7" Feb 02 11:59:35 crc kubenswrapper[4782]: I0202 11:59:35.826197 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c2f854e-81d7-41dc-a93a-199f54f82561-catalog-content\") pod \"community-operators-wnbx7\" (UID: \"1c2f854e-81d7-41dc-a93a-199f54f82561\") " pod="openshift-marketplace/community-operators-wnbx7" Feb 02 11:59:35 crc kubenswrapper[4782]: I0202 11:59:35.826304 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqfc8\" (UniqueName: \"kubernetes.io/projected/1c2f854e-81d7-41dc-a93a-199f54f82561-kube-api-access-hqfc8\") pod \"community-operators-wnbx7\" (UID: \"1c2f854e-81d7-41dc-a93a-199f54f82561\") " pod="openshift-marketplace/community-operators-wnbx7" Feb 02 11:59:35 crc kubenswrapper[4782]: I0202 11:59:35.880976 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_c35672ba-9e13-4e6d-945a-74b4cf3ee0ff/cinder-scheduler/0.log" Feb 02 11:59:35 crc kubenswrapper[4782]: I0202 11:59:35.938069 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c2f854e-81d7-41dc-a93a-199f54f82561-catalog-content\") pod \"community-operators-wnbx7\" (UID: \"1c2f854e-81d7-41dc-a93a-199f54f82561\") " pod="openshift-marketplace/community-operators-wnbx7" Feb 02 11:59:35 crc kubenswrapper[4782]: I0202 11:59:35.938283 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqfc8\" (UniqueName: \"kubernetes.io/projected/1c2f854e-81d7-41dc-a93a-199f54f82561-kube-api-access-hqfc8\") pod \"community-operators-wnbx7\" (UID: \"1c2f854e-81d7-41dc-a93a-199f54f82561\") " pod="openshift-marketplace/community-operators-wnbx7" Feb 02 11:59:35 crc kubenswrapper[4782]: I0202 11:59:35.938664 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c2f854e-81d7-41dc-a93a-199f54f82561-utilities\") pod \"community-operators-wnbx7\" (UID: \"1c2f854e-81d7-41dc-a93a-199f54f82561\") " pod="openshift-marketplace/community-operators-wnbx7" Feb 02 11:59:35 crc kubenswrapper[4782]: I0202 11:59:35.941132 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c2f854e-81d7-41dc-a93a-199f54f82561-catalog-content\") pod \"community-operators-wnbx7\" (UID: \"1c2f854e-81d7-41dc-a93a-199f54f82561\") " pod="openshift-marketplace/community-operators-wnbx7" Feb 02 11:59:35 crc kubenswrapper[4782]: I0202 
11:59:35.941463 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c2f854e-81d7-41dc-a93a-199f54f82561-utilities\") pod \"community-operators-wnbx7\" (UID: \"1c2f854e-81d7-41dc-a93a-199f54f82561\") " pod="openshift-marketplace/community-operators-wnbx7" Feb 02 11:59:35 crc kubenswrapper[4782]: I0202 11:59:35.949185 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-6p4xn"] Feb 02 11:59:35 crc kubenswrapper[4782]: I0202 11:59:35.991960 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6p4xn" Feb 02 11:59:36 crc kubenswrapper[4782]: I0202 11:59:36.008542 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hqfc8\" (UniqueName: \"kubernetes.io/projected/1c2f854e-81d7-41dc-a93a-199f54f82561-kube-api-access-hqfc8\") pod \"community-operators-wnbx7\" (UID: \"1c2f854e-81d7-41dc-a93a-199f54f82561\") " pod="openshift-marketplace/community-operators-wnbx7" Feb 02 11:59:36 crc kubenswrapper[4782]: I0202 11:59:36.019198 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6p4xn"] Feb 02 11:59:36 crc kubenswrapper[4782]: I0202 11:59:36.039975 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1ef72bf-1fb4-445c-b98f-4140c01f1e6d-catalog-content\") pod \"redhat-marketplace-6p4xn\" (UID: \"b1ef72bf-1fb4-445c-b98f-4140c01f1e6d\") " pod="openshift-marketplace/redhat-marketplace-6p4xn" Feb 02 11:59:36 crc kubenswrapper[4782]: I0202 11:59:36.040029 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1ef72bf-1fb4-445c-b98f-4140c01f1e6d-utilities\") pod \"redhat-marketplace-6p4xn\" (UID: \"b1ef72bf-1fb4-445c-b98f-4140c01f1e6d\") " pod="openshift-marketplace/redhat-marketplace-6p4xn" Feb 02 11:59:36 crc kubenswrapper[4782]: I0202 11:59:36.040073 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdf48\" (UniqueName: \"kubernetes.io/projected/b1ef72bf-1fb4-445c-b98f-4140c01f1e6d-kube-api-access-cdf48\") pod \"redhat-marketplace-6p4xn\" (UID: \"b1ef72bf-1fb4-445c-b98f-4140c01f1e6d\") " pod="openshift-marketplace/redhat-marketplace-6p4xn" Feb 02 11:59:36 crc kubenswrapper[4782]: I0202 11:59:36.041197 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wnbx7" Feb 02 11:59:36 crc kubenswrapper[4782]: I0202 11:59:36.142820 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1ef72bf-1fb4-445c-b98f-4140c01f1e6d-utilities\") pod \"redhat-marketplace-6p4xn\" (UID: \"b1ef72bf-1fb4-445c-b98f-4140c01f1e6d\") " pod="openshift-marketplace/redhat-marketplace-6p4xn" Feb 02 11:59:36 crc kubenswrapper[4782]: I0202 11:59:36.143482 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cdf48\" (UniqueName: \"kubernetes.io/projected/b1ef72bf-1fb4-445c-b98f-4140c01f1e6d-kube-api-access-cdf48\") pod \"redhat-marketplace-6p4xn\" (UID: \"b1ef72bf-1fb4-445c-b98f-4140c01f1e6d\") " pod="openshift-marketplace/redhat-marketplace-6p4xn" Feb 02 11:59:36 crc kubenswrapper[4782]: I0202 11:59:36.143756 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1ef72bf-1fb4-445c-b98f-4140c01f1e6d-catalog-content\") pod \"redhat-marketplace-6p4xn\" (UID: \"b1ef72bf-1fb4-445c-b98f-4140c01f1e6d\") " pod="openshift-marketplace/redhat-marketplace-6p4xn" Feb 02 11:59:36 crc kubenswrapper[4782]: I0202 11:59:36.143936 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1ef72bf-1fb4-445c-b98f-4140c01f1e6d-utilities\") pod \"redhat-marketplace-6p4xn\" (UID: \"b1ef72bf-1fb4-445c-b98f-4140c01f1e6d\") " pod="openshift-marketplace/redhat-marketplace-6p4xn" Feb 02 11:59:36 crc kubenswrapper[4782]: I0202 11:59:36.144311 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1ef72bf-1fb4-445c-b98f-4140c01f1e6d-catalog-content\") pod \"redhat-marketplace-6p4xn\" (UID: \"b1ef72bf-1fb4-445c-b98f-4140c01f1e6d\") " pod="openshift-marketplace/redhat-marketplace-6p4xn" Feb 02 11:59:36 crc kubenswrapper[4782]: I0202 11:59:36.193301 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cdf48\" (UniqueName: \"kubernetes.io/projected/b1ef72bf-1fb4-445c-b98f-4140c01f1e6d-kube-api-access-cdf48\") pod \"redhat-marketplace-6p4xn\" (UID: \"b1ef72bf-1fb4-445c-b98f-4140c01f1e6d\") " pod="openshift-marketplace/redhat-marketplace-6p4xn" Feb 02 11:59:36 crc kubenswrapper[4782]: I0202 11:59:36.242106 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_c35672ba-9e13-4e6d-945a-74b4cf3ee0ff/probe/0.log" Feb 02 11:59:36 crc kubenswrapper[4782]: I0202 11:59:36.373881 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6p4xn" Feb 02 11:59:36 crc kubenswrapper[4782]: I0202 11:59:36.767258 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-volume1-0_5d7df751-5d4d-4ce4-83c9-70abd18fc7c7/cinder-volume/0.log" Feb 02 11:59:36 crc kubenswrapper[4782]: I0202 11:59:36.879811 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-volume1-0_5d7df751-5d4d-4ce4-83c9-70abd18fc7c7/probe/0.log" Feb 02 11:59:36 crc kubenswrapper[4782]: I0202 11:59:36.967960 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wnbx7"] Feb 02 11:59:37 crc kubenswrapper[4782]: I0202 11:59:37.090854 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6p4xn"] Feb 02 11:59:37 crc kubenswrapper[4782]: I0202 11:59:37.382235 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-qq7xf_23a1d5dc-9cfd-4c8a-8534-db3075d99574/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Feb 02 11:59:37 crc kubenswrapper[4782]: I0202 11:59:37.414473 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-v56zg_6dbc340f-2b20-49aa-8358-26223d367e34/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 02 11:59:37 crc kubenswrapper[4782]: I0202 11:59:37.655794 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-7d98f8586f-f76zz_cfe77ae5-55f0-440b-b0af-ef3eb1637800/init/0.log" Feb 02 11:59:37 crc kubenswrapper[4782]: I0202 11:59:37.846497 4782 generic.go:334] "Generic (PLEG): container finished" podID="1c2f854e-81d7-41dc-a93a-199f54f82561" containerID="6cd3ffd6f1e6b6383ca5e7ba4465026e7393a62b85dfecd3ce34a4d8a55f9eb1" exitCode=0 Feb 02 11:59:37 crc kubenswrapper[4782]: I0202 11:59:37.846581 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wnbx7" event={"ID":"1c2f854e-81d7-41dc-a93a-199f54f82561","Type":"ContainerDied","Data":"6cd3ffd6f1e6b6383ca5e7ba4465026e7393a62b85dfecd3ce34a4d8a55f9eb1"} Feb 02 11:59:37 crc kubenswrapper[4782]: I0202 11:59:37.846614 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wnbx7" event={"ID":"1c2f854e-81d7-41dc-a93a-199f54f82561","Type":"ContainerStarted","Data":"15e47b0c929ab147f64b266f97c9bf69f4e09af2e44441ef9ec8b37376116768"} Feb 02 11:59:37 crc kubenswrapper[4782]: I0202 11:59:37.849905 4782 generic.go:334] "Generic (PLEG): container finished" podID="b1ef72bf-1fb4-445c-b98f-4140c01f1e6d" containerID="8645283328e2870a1a7611b6e089ddd27bdceccde25a86c19a9a97956315b778" exitCode=0 Feb 02 11:59:37 crc kubenswrapper[4782]: I0202 11:59:37.849939 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6p4xn" event={"ID":"b1ef72bf-1fb4-445c-b98f-4140c01f1e6d","Type":"ContainerDied","Data":"8645283328e2870a1a7611b6e089ddd27bdceccde25a86c19a9a97956315b778"} Feb 02 11:59:37 crc kubenswrapper[4782]: I0202 11:59:37.849961 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6p4xn" event={"ID":"b1ef72bf-1fb4-445c-b98f-4140c01f1e6d","Type":"ContainerStarted","Data":"01021d3d89e76efb5bc514096c9256feb2df9b6acebd486cb66d93888f41cd54"} Feb 02 11:59:37 crc kubenswrapper[4782]: I0202 11:59:37.923004 4782 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_dnsmasq-dns-7d98f8586f-f76zz_cfe77ae5-55f0-440b-b0af-ef3eb1637800/init/0.log" Feb 02 11:59:37 crc kubenswrapper[4782]: I0202 11:59:37.947663 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_fdc86717-3e71-440c-a8f4-9cd4480e46d2/glance-httpd/0.log" Feb 02 11:59:38 crc kubenswrapper[4782]: I0202 11:59:38.146455 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-7d98f8586f-f76zz_cfe77ae5-55f0-440b-b0af-ef3eb1637800/dnsmasq-dns/0.log" Feb 02 11:59:38 crc kubenswrapper[4782]: I0202 11:59:38.173446 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_fdc86717-3e71-440c-a8f4-9cd4480e46d2/glance-log/0.log" Feb 02 11:59:38 crc kubenswrapper[4782]: I0202 11:59:38.254818 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_6c11a274-b189-4a4e-9a21-1c1d8fcc7f13/glance-httpd/0.log" Feb 02 11:59:38 crc kubenswrapper[4782]: I0202 11:59:38.487218 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_6c11a274-b189-4a4e-9a21-1c1d8fcc7f13/glance-log/0.log" Feb 02 11:59:38 crc kubenswrapper[4782]: I0202 11:59:38.518238 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-5665456548-9x6qh_306e30f3-8fe7-427e-b8ff-309a561dda88/horizon/1.log" Feb 02 11:59:38 crc kubenswrapper[4782]: I0202 11:59:38.681693 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-5665456548-9x6qh_306e30f3-8fe7-427e-b8ff-309a561dda88/horizon/0.log" Feb 02 11:59:38 crc kubenswrapper[4782]: I0202 11:59:38.885468 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6p4xn" event={"ID":"b1ef72bf-1fb4-445c-b98f-4140c01f1e6d","Type":"ContainerStarted","Data":"450ce907a26d4d32237c9d10b16a5c09915e418ac21d803b6dfb4ec52f5b606f"} Feb 02 11:59:39 crc kubenswrapper[4782]: I0202 11:59:39.053829 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-4jg96_ae3151c2-1646-4d94-93d0-df34ad53d344/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Feb 02 11:59:39 crc kubenswrapper[4782]: I0202 11:59:39.083195 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-5665456548-9x6qh_306e30f3-8fe7-427e-b8ff-309a561dda88/horizon-log/0.log" Feb 02 11:59:39 crc kubenswrapper[4782]: I0202 11:59:39.256249 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-h4png_fbf31fe9-54a8-4cc8-b0ef-a8076cf87c52/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 02 11:59:39 crc kubenswrapper[4782]: I0202 11:59:39.584374 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29500501-wcsmz_9e752213-09b8-4c8e-a5b6-9cfbf9cea168/keystone-cron/0.log" Feb 02 11:59:39 crc kubenswrapper[4782]: I0202 11:59:39.606434 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-79d66b847-whsks_df4aa6a3-22bf-459c-becf-3685a170ae22/keystone-api/0.log" Feb 02 11:59:39 crc kubenswrapper[4782]: I0202 11:59:39.875330 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_6953ab25-8ddb-4ab3-b006-116f6ad534db/kube-state-metrics/0.log" Feb 02 11:59:39 crc kubenswrapper[4782]: I0202 11:59:39.895564 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-wnbx7" event={"ID":"1c2f854e-81d7-41dc-a93a-199f54f82561","Type":"ContainerStarted","Data":"0cc04f5ddbfaa46f26e8dc201aa5973f6caabc7737dc6a19f8d2838d9f031af3"} Feb 02 11:59:39 crc kubenswrapper[4782]: I0202 11:59:39.908227 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-2r5cc"] Feb 02 11:59:39 crc kubenswrapper[4782]: I0202 11:59:39.910421 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2r5cc" Feb 02 11:59:39 crc kubenswrapper[4782]: I0202 11:59:39.944134 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2r5cc"] Feb 02 11:59:39 crc kubenswrapper[4782]: I0202 11:59:39.945206 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-872cw\" (UniqueName: \"kubernetes.io/projected/502a801a-3da6-4a10-9734-c302cb103c44-kube-api-access-872cw\") pod \"certified-operators-2r5cc\" (UID: \"502a801a-3da6-4a10-9734-c302cb103c44\") " pod="openshift-marketplace/certified-operators-2r5cc" Feb 02 11:59:39 crc kubenswrapper[4782]: I0202 11:59:39.945316 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/502a801a-3da6-4a10-9734-c302cb103c44-utilities\") pod \"certified-operators-2r5cc\" (UID: \"502a801a-3da6-4a10-9734-c302cb103c44\") " pod="openshift-marketplace/certified-operators-2r5cc" Feb 02 11:59:39 crc kubenswrapper[4782]: I0202 11:59:39.945336 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/502a801a-3da6-4a10-9734-c302cb103c44-catalog-content\") pod \"certified-operators-2r5cc\" (UID: \"502a801a-3da6-4a10-9734-c302cb103c44\") " pod="openshift-marketplace/certified-operators-2r5cc" Feb 02 11:59:40 crc kubenswrapper[4782]: I0202 11:59:40.012159 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-fjczj_9b66a766-dc87-45dd-a611-d9a30c3f327e/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Feb 02 11:59:40 crc kubenswrapper[4782]: I0202 11:59:40.047954 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-872cw\" (UniqueName: \"kubernetes.io/projected/502a801a-3da6-4a10-9734-c302cb103c44-kube-api-access-872cw\") pod \"certified-operators-2r5cc\" (UID: \"502a801a-3da6-4a10-9734-c302cb103c44\") " pod="openshift-marketplace/certified-operators-2r5cc" Feb 02 11:59:40 crc kubenswrapper[4782]: I0202 11:59:40.048122 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/502a801a-3da6-4a10-9734-c302cb103c44-utilities\") pod \"certified-operators-2r5cc\" (UID: \"502a801a-3da6-4a10-9734-c302cb103c44\") " pod="openshift-marketplace/certified-operators-2r5cc" Feb 02 11:59:40 crc kubenswrapper[4782]: I0202 11:59:40.048157 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/502a801a-3da6-4a10-9734-c302cb103c44-catalog-content\") pod \"certified-operators-2r5cc\" (UID: \"502a801a-3da6-4a10-9734-c302cb103c44\") " pod="openshift-marketplace/certified-operators-2r5cc" Feb 02 11:59:40 crc kubenswrapper[4782]: I0202 11:59:40.048717 4782 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/502a801a-3da6-4a10-9734-c302cb103c44-catalog-content\") pod \"certified-operators-2r5cc\" (UID: \"502a801a-3da6-4a10-9734-c302cb103c44\") " pod="openshift-marketplace/certified-operators-2r5cc" Feb 02 11:59:40 crc kubenswrapper[4782]: I0202 11:59:40.048780 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/502a801a-3da6-4a10-9734-c302cb103c44-utilities\") pod \"certified-operators-2r5cc\" (UID: \"502a801a-3da6-4a10-9734-c302cb103c44\") " pod="openshift-marketplace/certified-operators-2r5cc" Feb 02 11:59:40 crc kubenswrapper[4782]: I0202 11:59:40.092130 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-872cw\" (UniqueName: \"kubernetes.io/projected/502a801a-3da6-4a10-9734-c302cb103c44-kube-api-access-872cw\") pod \"certified-operators-2r5cc\" (UID: \"502a801a-3da6-4a10-9734-c302cb103c44\") " pod="openshift-marketplace/certified-operators-2r5cc" Feb 02 11:59:40 crc kubenswrapper[4782]: I0202 11:59:40.227799 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2r5cc" Feb 02 11:59:40 crc kubenswrapper[4782]: I0202 11:59:40.337281 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-api-0_2af78116-7ef2-4447-b552-7b0d2eaedf90/manila-api-log/0.log" Feb 02 11:59:40 crc kubenswrapper[4782]: I0202 11:59:40.497603 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-api-0_2af78116-7ef2-4447-b552-7b0d2eaedf90/manila-api/0.log" Feb 02 11:59:40 crc kubenswrapper[4782]: I0202 11:59:40.647961 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-scheduler-0_6e465ef3-3141-429f-927f-db1eabdff230/manila-scheduler/0.log" Feb 02 11:59:40 crc kubenswrapper[4782]: I0202 11:59:40.894329 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2r5cc"] Feb 02 11:59:40 crc kubenswrapper[4782]: I0202 11:59:40.905069 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-scheduler-0_6e465ef3-3141-429f-927f-db1eabdff230/probe/0.log" Feb 02 11:59:40 crc kubenswrapper[4782]: I0202 11:59:40.910674 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2r5cc" event={"ID":"502a801a-3da6-4a10-9734-c302cb103c44","Type":"ContainerStarted","Data":"5e5029526f819ea498c14b6bbcab80d1fcb73ca7520c14a5988e7d79495012c8"} Feb 02 11:59:40 crc kubenswrapper[4782]: I0202 11:59:40.914532 4782 generic.go:334] "Generic (PLEG): container finished" podID="b1ef72bf-1fb4-445c-b98f-4140c01f1e6d" containerID="450ce907a26d4d32237c9d10b16a5c09915e418ac21d803b6dfb4ec52f5b606f" exitCode=0 Feb 02 11:59:40 crc kubenswrapper[4782]: I0202 11:59:40.914621 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6p4xn" event={"ID":"b1ef72bf-1fb4-445c-b98f-4140c01f1e6d","Type":"ContainerDied","Data":"450ce907a26d4d32237c9d10b16a5c09915e418ac21d803b6dfb4ec52f5b606f"} Feb 02 11:59:41 crc kubenswrapper[4782]: I0202 11:59:41.176400 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-share-share1-0_04aa7a3f-6353-4317-8825-1447f8a88842/probe/0.log" Feb 02 11:59:41 crc kubenswrapper[4782]: I0202 11:59:41.223477 4782 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_manila-share-share1-0_04aa7a3f-6353-4317-8825-1447f8a88842/manila-share/0.log" Feb 02 11:59:41 crc kubenswrapper[4782]: I0202 11:59:41.624574 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-5bdf8f4745-82ddm_ab6192fa-a576-411f-8083-2d6bfa57c39f/neutron-httpd/0.log" Feb 02 11:59:41 crc kubenswrapper[4782]: I0202 11:59:41.924572 4782 generic.go:334] "Generic (PLEG): container finished" podID="1c2f854e-81d7-41dc-a93a-199f54f82561" containerID="0cc04f5ddbfaa46f26e8dc201aa5973f6caabc7737dc6a19f8d2838d9f031af3" exitCode=0 Feb 02 11:59:41 crc kubenswrapper[4782]: I0202 11:59:41.924740 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wnbx7" event={"ID":"1c2f854e-81d7-41dc-a93a-199f54f82561","Type":"ContainerDied","Data":"0cc04f5ddbfaa46f26e8dc201aa5973f6caabc7737dc6a19f8d2838d9f031af3"} Feb 02 11:59:41 crc kubenswrapper[4782]: I0202 11:59:41.931135 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6p4xn" event={"ID":"b1ef72bf-1fb4-445c-b98f-4140c01f1e6d","Type":"ContainerStarted","Data":"c03f4a921010c26cbebfcb692985c7c6ce1a9b396e0d4c57fc7b065ba36657b1"} Feb 02 11:59:41 crc kubenswrapper[4782]: I0202 11:59:41.941925 4782 generic.go:334] "Generic (PLEG): container finished" podID="502a801a-3da6-4a10-9734-c302cb103c44" containerID="ecead2bd3b56b0288f100f95f68947d94971bddb6c51b72ba3708cd1b65c3527" exitCode=0 Feb 02 11:59:41 crc kubenswrapper[4782]: I0202 11:59:41.943142 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2r5cc" event={"ID":"502a801a-3da6-4a10-9734-c302cb103c44","Type":"ContainerDied","Data":"ecead2bd3b56b0288f100f95f68947d94971bddb6c51b72ba3708cd1b65c3527"} Feb 02 11:59:41 crc kubenswrapper[4782]: I0202 11:59:41.997687 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-5bdf8f4745-82ddm_ab6192fa-a576-411f-8083-2d6bfa57c39f/neutron-api/0.log" Feb 02 11:59:42 crc kubenswrapper[4782]: I0202 11:59:42.019062 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-6p4xn" podStartSLOduration=3.502890414 podStartE2EDuration="7.019044236s" podCreationTimestamp="2026-02-02 11:59:35 +0000 UTC" firstStartedPulling="2026-02-02 11:59:37.851953209 +0000 UTC m=+4857.736145925" lastFinishedPulling="2026-02-02 11:59:41.368107041 +0000 UTC m=+4861.252299747" observedRunningTime="2026-02-02 11:59:42.017891783 +0000 UTC m=+4861.902084499" watchObservedRunningTime="2026-02-02 11:59:42.019044236 +0000 UTC m=+4861.903236952" Feb 02 11:59:42 crc kubenswrapper[4782]: I0202 11:59:42.269899 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-c2sf7_e6849945-28f4-4218-97c1-6047c2d0c368/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Feb 02 11:59:42 crc kubenswrapper[4782]: I0202 11:59:42.954969 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wnbx7" event={"ID":"1c2f854e-81d7-41dc-a93a-199f54f82561","Type":"ContainerStarted","Data":"a2f42c530aff8fd13a9d5a982e609b0b38b90ccdf1e4145763f4434429954d49"} Feb 02 11:59:43 crc kubenswrapper[4782]: I0202 11:59:43.005267 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-wnbx7" podStartSLOduration=3.304260937 podStartE2EDuration="8.00524239s" 
podCreationTimestamp="2026-02-02 11:59:35 +0000 UTC" firstStartedPulling="2026-02-02 11:59:37.848344085 +0000 UTC m=+4857.732536801" lastFinishedPulling="2026-02-02 11:59:42.549325538 +0000 UTC m=+4862.433518254" observedRunningTime="2026-02-02 11:59:43.002269174 +0000 UTC m=+4862.886461890" watchObservedRunningTime="2026-02-02 11:59:43.00524239 +0000 UTC m=+4862.889435106" Feb 02 11:59:43 crc kubenswrapper[4782]: I0202 11:59:43.167209 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_c3797650-67c5-417c-9b38-52a581a6bbd3/nova-api-log/0.log" Feb 02 11:59:43 crc kubenswrapper[4782]: I0202 11:59:43.445902 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_ea60fa1f-5751-4f93-8726-ce0c4be54577/nova-cell0-conductor-conductor/0.log" Feb 02 11:59:43 crc kubenswrapper[4782]: I0202 11:59:43.759502 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_c3797650-67c5-417c-9b38-52a581a6bbd3/nova-api-api/0.log" Feb 02 11:59:43 crc kubenswrapper[4782]: I0202 11:59:43.813523 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_c8598880-0557-414a-bbb1-b5d0cdce0738/nova-cell1-conductor-conductor/0.log" Feb 02 11:59:43 crc kubenswrapper[4782]: I0202 11:59:43.970314 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2r5cc" event={"ID":"502a801a-3da6-4a10-9734-c302cb103c44","Type":"ContainerStarted","Data":"c8947d3dae29a746aa4e9672e0b154897edb8b39e93d2d25c367fc36bdbd75ad"} Feb 02 11:59:44 crc kubenswrapper[4782]: I0202 11:59:44.258114 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_16441e1e-4564-492e-bdce-40eb2652687a/nova-cell1-novncproxy-novncproxy/0.log" Feb 02 11:59:44 crc kubenswrapper[4782]: I0202 11:59:44.432559 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-8gbpp_dc15a3e1-ea96-499f-a268-b633c15ec75b/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam/0.log" Feb 02 11:59:44 crc kubenswrapper[4782]: I0202 11:59:44.776274 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_ffbaaa30-f515-494a-94af-a7a83fb44ada/nova-metadata-log/0.log" Feb 02 11:59:44 crc kubenswrapper[4782]: I0202 11:59:44.981108 4782 generic.go:334] "Generic (PLEG): container finished" podID="502a801a-3da6-4a10-9734-c302cb103c44" containerID="c8947d3dae29a746aa4e9672e0b154897edb8b39e93d2d25c367fc36bdbd75ad" exitCode=0 Feb 02 11:59:44 crc kubenswrapper[4782]: I0202 11:59:44.981163 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2r5cc" event={"ID":"502a801a-3da6-4a10-9734-c302cb103c44","Type":"ContainerDied","Data":"c8947d3dae29a746aa4e9672e0b154897edb8b39e93d2d25c367fc36bdbd75ad"} Feb 02 11:59:45 crc kubenswrapper[4782]: I0202 11:59:45.267019 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_8c2fe596-a023-4206-979f-7f2e7bc81d0e/mysql-bootstrap/0.log" Feb 02 11:59:45 crc kubenswrapper[4782]: I0202 11:59:45.543539 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_8c2fe596-a023-4206-979f-7f2e7bc81d0e/galera/0.log" Feb 02 11:59:45 crc kubenswrapper[4782]: I0202 11:59:45.696927 4782 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_openstack-cell1-galera-0_8c2fe596-a023-4206-979f-7f2e7bc81d0e/mysql-bootstrap/0.log" Feb 02 11:59:45 crc kubenswrapper[4782]: I0202 11:59:45.774850 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_47aff64c-0afc-4b3c-9e90-cbe926943170/nova-scheduler-scheduler/0.log" Feb 02 11:59:45 crc kubenswrapper[4782]: I0202 11:59:45.825148 4782 scope.go:117] "RemoveContainer" containerID="c4afb04fcac6d851963a75d7989d5b1b2023415817f09615bbb44452a14cc85d" Feb 02 11:59:45 crc kubenswrapper[4782]: E0202 11:59:45.825382 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 11:59:46 crc kubenswrapper[4782]: I0202 11:59:46.016980 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2r5cc" event={"ID":"502a801a-3da6-4a10-9734-c302cb103c44","Type":"ContainerStarted","Data":"664df2c1a455a0d3fc93e4c7cf199e110e62e16c9396904cfafafe052884831f"} Feb 02 11:59:46 crc kubenswrapper[4782]: I0202 11:59:46.042689 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-wnbx7" Feb 02 11:59:46 crc kubenswrapper[4782]: I0202 11:59:46.043249 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-wnbx7" Feb 02 11:59:46 crc kubenswrapper[4782]: I0202 11:59:46.045463 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-2r5cc" podStartSLOduration=3.407350013 podStartE2EDuration="7.045450434s" podCreationTimestamp="2026-02-02 11:59:39 +0000 UTC" firstStartedPulling="2026-02-02 11:59:41.943954105 +0000 UTC m=+4861.828146821" lastFinishedPulling="2026-02-02 11:59:45.582054526 +0000 UTC m=+4865.466247242" observedRunningTime="2026-02-02 11:59:46.043190979 +0000 UTC m=+4865.927383705" watchObservedRunningTime="2026-02-02 11:59:46.045450434 +0000 UTC m=+4865.929643150" Feb 02 11:59:46 crc kubenswrapper[4782]: I0202 11:59:46.240927 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_827c472d-1762-4e1c-a096-2d48ca9af689/mysql-bootstrap/0.log" Feb 02 11:59:46 crc kubenswrapper[4782]: I0202 11:59:46.371746 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_827c472d-1762-4e1c-a096-2d48ca9af689/galera/0.log" Feb 02 11:59:46 crc kubenswrapper[4782]: I0202 11:59:46.374372 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-6p4xn" Feb 02 11:59:46 crc kubenswrapper[4782]: I0202 11:59:46.374410 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-6p4xn" Feb 02 11:59:46 crc kubenswrapper[4782]: I0202 11:59:46.420879 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-6p4xn" Feb 02 11:59:46 crc kubenswrapper[4782]: I0202 11:59:46.538818 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_827c472d-1762-4e1c-a096-2d48ca9af689/mysql-bootstrap/0.log" 
Feb 02 11:59:46 crc kubenswrapper[4782]: I0202 11:59:46.807678 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_7ed19b68-33c0-45b1-acbc-b6e9def4e565/openstackclient/0.log"
Feb 02 11:59:47 crc kubenswrapper[4782]: I0202 11:59:47.094863 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-wnbx7" podUID="1c2f854e-81d7-41dc-a93a-199f54f82561" containerName="registry-server" probeResult="failure" output=<
Feb 02 11:59:47 crc kubenswrapper[4782]: timeout: failed to connect service ":50051" within 1s
Feb 02 11:59:47 crc kubenswrapper[4782]: >
Feb 02 11:59:47 crc kubenswrapper[4782]: I0202 11:59:47.097457 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-6p4xn"
Feb 02 11:59:47 crc kubenswrapper[4782]: I0202 11:59:47.160497 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_ffbaaa30-f515-494a-94af-a7a83fb44ada/nova-metadata-metadata/0.log"
Feb 02 11:59:47 crc kubenswrapper[4782]: I0202 11:59:47.164618 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-kv4h8_c9cb1af6-ff01-4474-ad02-56938ef7e5a1/openstack-network-exporter/0.log"
Feb 02 11:59:47 crc kubenswrapper[4782]: I0202 11:59:47.278903 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-zs65k_e91c0f3d-db81-453d-ad0e-30aeadb66206/ovsdb-server-init/0.log"
Feb 02 11:59:47 crc kubenswrapper[4782]: I0202 11:59:47.496318 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-zs65k_e91c0f3d-db81-453d-ad0e-30aeadb66206/ovsdb-server-init/0.log"
Feb 02 11:59:47 crc kubenswrapper[4782]: I0202 11:59:47.552774 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-zs65k_e91c0f3d-db81-453d-ad0e-30aeadb66206/ovs-vswitchd/0.log"
Feb 02 11:59:47 crc kubenswrapper[4782]: I0202 11:59:47.730459 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-sv8l5_b009ca1c-fc93-4724-9275-c44039256469/ovn-controller/0.log"
Feb 02 11:59:47 crc kubenswrapper[4782]: I0202 11:59:47.772712 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-zs65k_e91c0f3d-db81-453d-ad0e-30aeadb66206/ovsdb-server/0.log"
Feb 02 11:59:48 crc kubenswrapper[4782]: I0202 11:59:48.060008 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_7a65af67-822b-44b8-a2be-a132de866a2e/openstack-network-exporter/0.log"
Feb 02 11:59:48 crc kubenswrapper[4782]: I0202 11:59:48.095500 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-sffk6_4a473fb4-7a3c-4103-bad5-570b683e6222/ovn-edpm-deployment-openstack-edpm-ipam/0.log"
Feb 02 11:59:48 crc kubenswrapper[4782]: I0202 11:59:48.188169 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_7a65af67-822b-44b8-a2be-a132de866a2e/ovn-northd/0.log"
Feb 02 11:59:48 crc kubenswrapper[4782]: I0202 11:59:48.398080 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_d8169f65-2d63-4127-8d23-ba6d56af1156/openstack-network-exporter/0.log"
Feb 02 11:59:48 crc kubenswrapper[4782]: I0202 11:59:48.490281 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_d8169f65-2d63-4127-8d23-ba6d56af1156/ovsdbserver-nb/0.log"
Feb 02 11:59:48 crc kubenswrapper[4782]: I0202 11:59:48.688030 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_572fc7c8-9560-43d0-ba3e-d3f098494878/ovsdbserver-sb/0.log"
Feb 02 11:59:48 crc kubenswrapper[4782]: I0202 11:59:48.689910 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_572fc7c8-9560-43d0-ba3e-d3f098494878/openstack-network-exporter/0.log"
Feb 02 11:59:48 crc kubenswrapper[4782]: I0202 11:59:48.877103 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-555cfb6c68-sntkc_9040c71d-579d-4f4e-99cf-bb76289b9aa3/placement-api/0.log"
Feb 02 11:59:49 crc kubenswrapper[4782]: I0202 11:59:49.075989 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-555cfb6c68-sntkc_9040c71d-579d-4f4e-99cf-bb76289b9aa3/placement-log/0.log"
Feb 02 11:59:49 crc kubenswrapper[4782]: I0202 11:59:49.160825 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_8d450a8e-fd5c-40fe-a4ff-ab265dab04df/setup-container/0.log"
Feb 02 11:59:49 crc kubenswrapper[4782]: I0202 11:59:49.502973 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_8d450a8e-fd5c-40fe-a4ff-ab265dab04df/rabbitmq/0.log"
Feb 02 11:59:49 crc kubenswrapper[4782]: I0202 11:59:49.547261 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_b5c627ac-51a8-46a5-9ccd-62072de19909/setup-container/0.log"
Feb 02 11:59:49 crc kubenswrapper[4782]: I0202 11:59:49.636272 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_8d450a8e-fd5c-40fe-a4ff-ab265dab04df/setup-container/0.log"
Feb 02 11:59:49 crc kubenswrapper[4782]: I0202 11:59:49.855003 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_b5c627ac-51a8-46a5-9ccd-62072de19909/setup-container/0.log"
Feb 02 11:59:49 crc kubenswrapper[4782]: I0202 11:59:49.856616 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_b5c627ac-51a8-46a5-9ccd-62072de19909/rabbitmq/0.log"
Feb 02 11:59:50 crc kubenswrapper[4782]: I0202 11:59:50.039486 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-72nh6_cfbbb165-d7b2-48c8-b778-5c66afa9c34d/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log"
Feb 02 11:59:50 crc kubenswrapper[4782]: I0202 11:59:50.228116 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-2r5cc"
Feb 02 11:59:50 crc kubenswrapper[4782]: I0202 11:59:50.229062 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-2r5cc"
Feb 02 11:59:50 crc kubenswrapper[4782]: I0202 11:59:50.251995 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-59ngw_6cede59e-7f51-455a-8405-3ae76f40e348/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log"
Feb 02 11:59:50 crc kubenswrapper[4782]: I0202 11:59:50.287570 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-2r5cc"
Feb 02 11:59:50 crc kubenswrapper[4782]: I0202 11:59:50.376802 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-7pvt6_e25dd29c-ad04-40c3-a682-352af21186fe/run-os-edpm-deployment-openstack-edpm-ipam/0.log"
Feb 02 11:59:50 crc kubenswrapper[4782]: I0202 11:59:50.491630 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-j5858_c80c4993-adf6-44f8-a084-21920191de7f/ssh-known-hosts-edpm-deployment/0.log"
Feb 02 11:59:50 crc kubenswrapper[4782]: I0202 11:59:50.709907 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_a5a266a5-ac00-49e1-9443-def4cebe65ad/tempest-tests-tempest-tests-runner/0.log"
Feb 02 11:59:50 crc kubenswrapper[4782]: I0202 11:59:50.901767 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_0a460d0d-7c4a-473e-9df8-ca1b1979cb25/test-operator-logs-container/0.log"
Feb 02 11:59:51 crc kubenswrapper[4782]: I0202 11:59:51.100164 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-dg4qr_03fa384d-760c-4c0a-b58f-91a876eeb3d7/validate-network-edpm-deployment-openstack-edpm-ipam/0.log"
Feb 02 11:59:51 crc kubenswrapper[4782]: I0202 11:59:51.119901 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-2r5cc"
Feb 02 11:59:52 crc kubenswrapper[4782]: I0202 11:59:52.696390 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6p4xn"]
Feb 02 11:59:52 crc kubenswrapper[4782]: I0202 11:59:52.696660 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-6p4xn" podUID="b1ef72bf-1fb4-445c-b98f-4140c01f1e6d" containerName="registry-server" containerID="cri-o://c03f4a921010c26cbebfcb692985c7c6ce1a9b396e0d4c57fc7b065ba36657b1" gracePeriod=2
Feb 02 11:59:53 crc kubenswrapper[4782]: I0202 11:59:53.124765 4782 generic.go:334] "Generic (PLEG): container finished" podID="b1ef72bf-1fb4-445c-b98f-4140c01f1e6d" containerID="c03f4a921010c26cbebfcb692985c7c6ce1a9b396e0d4c57fc7b065ba36657b1" exitCode=0
Feb 02 11:59:53 crc kubenswrapper[4782]: I0202 11:59:53.125213 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6p4xn" event={"ID":"b1ef72bf-1fb4-445c-b98f-4140c01f1e6d","Type":"ContainerDied","Data":"c03f4a921010c26cbebfcb692985c7c6ce1a9b396e0d4c57fc7b065ba36657b1"}
Feb 02 11:59:53 crc kubenswrapper[4782]: I0202 11:59:53.252179 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6p4xn"
Feb 02 11:59:53 crc kubenswrapper[4782]: I0202 11:59:53.317472 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1ef72bf-1fb4-445c-b98f-4140c01f1e6d-utilities\") pod \"b1ef72bf-1fb4-445c-b98f-4140c01f1e6d\" (UID: \"b1ef72bf-1fb4-445c-b98f-4140c01f1e6d\") "
Feb 02 11:59:53 crc kubenswrapper[4782]: I0202 11:59:53.317931 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1ef72bf-1fb4-445c-b98f-4140c01f1e6d-catalog-content\") pod \"b1ef72bf-1fb4-445c-b98f-4140c01f1e6d\" (UID: \"b1ef72bf-1fb4-445c-b98f-4140c01f1e6d\") "
Feb 02 11:59:53 crc kubenswrapper[4782]: I0202 11:59:53.318008 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cdf48\" (UniqueName: \"kubernetes.io/projected/b1ef72bf-1fb4-445c-b98f-4140c01f1e6d-kube-api-access-cdf48\") pod \"b1ef72bf-1fb4-445c-b98f-4140c01f1e6d\" (UID: \"b1ef72bf-1fb4-445c-b98f-4140c01f1e6d\") "
Feb 02 11:59:53 crc kubenswrapper[4782]: I0202 11:59:53.318159 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b1ef72bf-1fb4-445c-b98f-4140c01f1e6d-utilities" (OuterVolumeSpecName: "utilities") pod "b1ef72bf-1fb4-445c-b98f-4140c01f1e6d" (UID: "b1ef72bf-1fb4-445c-b98f-4140c01f1e6d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 11:59:53 crc kubenswrapper[4782]: I0202 11:59:53.318826 4782 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1ef72bf-1fb4-445c-b98f-4140c01f1e6d-utilities\") on node \"crc\" DevicePath \"\""
Feb 02 11:59:53 crc kubenswrapper[4782]: I0202 11:59:53.353233 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1ef72bf-1fb4-445c-b98f-4140c01f1e6d-kube-api-access-cdf48" (OuterVolumeSpecName: "kube-api-access-cdf48") pod "b1ef72bf-1fb4-445c-b98f-4140c01f1e6d" (UID: "b1ef72bf-1fb4-445c-b98f-4140c01f1e6d"). InnerVolumeSpecName "kube-api-access-cdf48". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 11:59:53 crc kubenswrapper[4782]: I0202 11:59:53.392000 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b1ef72bf-1fb4-445c-b98f-4140c01f1e6d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b1ef72bf-1fb4-445c-b98f-4140c01f1e6d" (UID: "b1ef72bf-1fb4-445c-b98f-4140c01f1e6d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 11:59:53 crc kubenswrapper[4782]: I0202 11:59:53.420352 4782 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1ef72bf-1fb4-445c-b98f-4140c01f1e6d-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 02 11:59:53 crc kubenswrapper[4782]: I0202 11:59:53.420392 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cdf48\" (UniqueName: \"kubernetes.io/projected/b1ef72bf-1fb4-445c-b98f-4140c01f1e6d-kube-api-access-cdf48\") on node \"crc\" DevicePath \"\""
Feb 02 11:59:53 crc kubenswrapper[4782]: I0202 11:59:53.701282 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-2r5cc"]
Feb 02 11:59:54 crc kubenswrapper[4782]: I0202 11:59:54.147143 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-2r5cc" podUID="502a801a-3da6-4a10-9734-c302cb103c44" containerName="registry-server" containerID="cri-o://664df2c1a455a0d3fc93e4c7cf199e110e62e16c9396904cfafafe052884831f" gracePeriod=2
Feb 02 11:59:54 crc kubenswrapper[4782]: I0202 11:59:54.147550 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6p4xn"
Feb 02 11:59:54 crc kubenswrapper[4782]: I0202 11:59:54.150057 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6p4xn" event={"ID":"b1ef72bf-1fb4-445c-b98f-4140c01f1e6d","Type":"ContainerDied","Data":"01021d3d89e76efb5bc514096c9256feb2df9b6acebd486cb66d93888f41cd54"}
Feb 02 11:59:54 crc kubenswrapper[4782]: I0202 11:59:54.150099 4782 scope.go:117] "RemoveContainer" containerID="c03f4a921010c26cbebfcb692985c7c6ce1a9b396e0d4c57fc7b065ba36657b1"
Feb 02 11:59:54 crc kubenswrapper[4782]: I0202 11:59:54.200027 4782 scope.go:117] "RemoveContainer" containerID="450ce907a26d4d32237c9d10b16a5c09915e418ac21d803b6dfb4ec52f5b606f"
Feb 02 11:59:54 crc kubenswrapper[4782]: I0202 11:59:54.224440 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6p4xn"]
Feb 02 11:59:54 crc kubenswrapper[4782]: I0202 11:59:54.241373 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-6p4xn"]
Feb 02 11:59:54 crc kubenswrapper[4782]: I0202 11:59:54.286627 4782 scope.go:117] "RemoveContainer" containerID="8645283328e2870a1a7611b6e089ddd27bdceccde25a86c19a9a97956315b778"
Feb 02 11:59:54 crc kubenswrapper[4782]: I0202 11:59:54.735352 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2r5cc"
Feb 02 11:59:54 crc kubenswrapper[4782]: I0202 11:59:54.839182 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b1ef72bf-1fb4-445c-b98f-4140c01f1e6d" path="/var/lib/kubelet/pods/b1ef72bf-1fb4-445c-b98f-4140c01f1e6d/volumes"
Feb 02 11:59:54 crc kubenswrapper[4782]: I0202 11:59:54.867692 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/502a801a-3da6-4a10-9734-c302cb103c44-catalog-content\") pod \"502a801a-3da6-4a10-9734-c302cb103c44\" (UID: \"502a801a-3da6-4a10-9734-c302cb103c44\") "
Feb 02 11:59:54 crc kubenswrapper[4782]: I0202 11:59:54.868162 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-872cw\" (UniqueName: \"kubernetes.io/projected/502a801a-3da6-4a10-9734-c302cb103c44-kube-api-access-872cw\") pod \"502a801a-3da6-4a10-9734-c302cb103c44\" (UID: \"502a801a-3da6-4a10-9734-c302cb103c44\") "
Feb 02 11:59:54 crc kubenswrapper[4782]: I0202 11:59:54.868258 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/502a801a-3da6-4a10-9734-c302cb103c44-utilities\") pod \"502a801a-3da6-4a10-9734-c302cb103c44\" (UID: \"502a801a-3da6-4a10-9734-c302cb103c44\") "
Feb 02 11:59:54 crc kubenswrapper[4782]: I0202 11:59:54.868900 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/502a801a-3da6-4a10-9734-c302cb103c44-utilities" (OuterVolumeSpecName: "utilities") pod "502a801a-3da6-4a10-9734-c302cb103c44" (UID: "502a801a-3da6-4a10-9734-c302cb103c44"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 11:59:54 crc kubenswrapper[4782]: I0202 11:59:54.877835 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/502a801a-3da6-4a10-9734-c302cb103c44-kube-api-access-872cw" (OuterVolumeSpecName: "kube-api-access-872cw") pod "502a801a-3da6-4a10-9734-c302cb103c44" (UID: "502a801a-3da6-4a10-9734-c302cb103c44"). InnerVolumeSpecName "kube-api-access-872cw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 11:59:54 crc kubenswrapper[4782]: I0202 11:59:54.943083 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/502a801a-3da6-4a10-9734-c302cb103c44-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "502a801a-3da6-4a10-9734-c302cb103c44" (UID: "502a801a-3da6-4a10-9734-c302cb103c44"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 11:59:54 crc kubenswrapper[4782]: I0202 11:59:54.970473 4782 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/502a801a-3da6-4a10-9734-c302cb103c44-utilities\") on node \"crc\" DevicePath \"\""
Feb 02 11:59:54 crc kubenswrapper[4782]: I0202 11:59:54.970510 4782 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/502a801a-3da6-4a10-9734-c302cb103c44-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 02 11:59:54 crc kubenswrapper[4782]: I0202 11:59:54.970526 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-872cw\" (UniqueName: \"kubernetes.io/projected/502a801a-3da6-4a10-9734-c302cb103c44-kube-api-access-872cw\") on node \"crc\" DevicePath \"\""
Feb 02 11:59:55 crc kubenswrapper[4782]: I0202 11:59:55.169668 4782 generic.go:334] "Generic (PLEG): container finished" podID="502a801a-3da6-4a10-9734-c302cb103c44" containerID="664df2c1a455a0d3fc93e4c7cf199e110e62e16c9396904cfafafe052884831f" exitCode=0
Feb 02 11:59:55 crc kubenswrapper[4782]: I0202 11:59:55.169731 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2r5cc" event={"ID":"502a801a-3da6-4a10-9734-c302cb103c44","Type":"ContainerDied","Data":"664df2c1a455a0d3fc93e4c7cf199e110e62e16c9396904cfafafe052884831f"}
Feb 02 11:59:55 crc kubenswrapper[4782]: I0202 11:59:55.169758 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2r5cc" event={"ID":"502a801a-3da6-4a10-9734-c302cb103c44","Type":"ContainerDied","Data":"5e5029526f819ea498c14b6bbcab80d1fcb73ca7520c14a5988e7d79495012c8"}
Feb 02 11:59:55 crc kubenswrapper[4782]: I0202 11:59:55.169764 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2r5cc"
Feb 02 11:59:55 crc kubenswrapper[4782]: I0202 11:59:55.169775 4782 scope.go:117] "RemoveContainer" containerID="664df2c1a455a0d3fc93e4c7cf199e110e62e16c9396904cfafafe052884831f"
Feb 02 11:59:55 crc kubenswrapper[4782]: I0202 11:59:55.201224 4782 scope.go:117] "RemoveContainer" containerID="c8947d3dae29a746aa4e9672e0b154897edb8b39e93d2d25c367fc36bdbd75ad"
Feb 02 11:59:55 crc kubenswrapper[4782]: I0202 11:59:55.236812 4782 scope.go:117] "RemoveContainer" containerID="ecead2bd3b56b0288f100f95f68947d94971bddb6c51b72ba3708cd1b65c3527"
Feb 02 11:59:55 crc kubenswrapper[4782]: I0202 11:59:55.241677 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-2r5cc"]
Feb 02 11:59:55 crc kubenswrapper[4782]: I0202 11:59:55.259681 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-2r5cc"]
Feb 02 11:59:55 crc kubenswrapper[4782]: I0202 11:59:55.260896 4782 scope.go:117] "RemoveContainer" containerID="664df2c1a455a0d3fc93e4c7cf199e110e62e16c9396904cfafafe052884831f"
Feb 02 11:59:55 crc kubenswrapper[4782]: E0202 11:59:55.263766 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"664df2c1a455a0d3fc93e4c7cf199e110e62e16c9396904cfafafe052884831f\": container with ID starting with 664df2c1a455a0d3fc93e4c7cf199e110e62e16c9396904cfafafe052884831f not found: ID does not exist" containerID="664df2c1a455a0d3fc93e4c7cf199e110e62e16c9396904cfafafe052884831f"
Feb 02 11:59:55 crc kubenswrapper[4782]: I0202 11:59:55.263807 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"664df2c1a455a0d3fc93e4c7cf199e110e62e16c9396904cfafafe052884831f"} err="failed to get container status \"664df2c1a455a0d3fc93e4c7cf199e110e62e16c9396904cfafafe052884831f\": rpc error: code = NotFound desc = could not find container \"664df2c1a455a0d3fc93e4c7cf199e110e62e16c9396904cfafafe052884831f\": container with ID starting with 664df2c1a455a0d3fc93e4c7cf199e110e62e16c9396904cfafafe052884831f not found: ID does not exist"
Feb 02 11:59:55 crc kubenswrapper[4782]: I0202 11:59:55.263831 4782 scope.go:117] "RemoveContainer" containerID="c8947d3dae29a746aa4e9672e0b154897edb8b39e93d2d25c367fc36bdbd75ad"
Feb 02 11:59:55 crc kubenswrapper[4782]: E0202 11:59:55.265812 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c8947d3dae29a746aa4e9672e0b154897edb8b39e93d2d25c367fc36bdbd75ad\": container with ID starting with c8947d3dae29a746aa4e9672e0b154897edb8b39e93d2d25c367fc36bdbd75ad not found: ID does not exist" containerID="c8947d3dae29a746aa4e9672e0b154897edb8b39e93d2d25c367fc36bdbd75ad"
Feb 02 11:59:55 crc kubenswrapper[4782]: I0202 11:59:55.265852 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8947d3dae29a746aa4e9672e0b154897edb8b39e93d2d25c367fc36bdbd75ad"} err="failed to get container status \"c8947d3dae29a746aa4e9672e0b154897edb8b39e93d2d25c367fc36bdbd75ad\": rpc error: code = NotFound desc = could not find container \"c8947d3dae29a746aa4e9672e0b154897edb8b39e93d2d25c367fc36bdbd75ad\": container with ID starting with c8947d3dae29a746aa4e9672e0b154897edb8b39e93d2d25c367fc36bdbd75ad not found: ID does not exist"
Feb 02 11:59:55 crc kubenswrapper[4782]: I0202 11:59:55.265879 4782 scope.go:117] "RemoveContainer"
containerID="ecead2bd3b56b0288f100f95f68947d94971bddb6c51b72ba3708cd1b65c3527" Feb 02 11:59:55 crc kubenswrapper[4782]: E0202 11:59:55.268131 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ecead2bd3b56b0288f100f95f68947d94971bddb6c51b72ba3708cd1b65c3527\": container with ID starting with ecead2bd3b56b0288f100f95f68947d94971bddb6c51b72ba3708cd1b65c3527 not found: ID does not exist" containerID="ecead2bd3b56b0288f100f95f68947d94971bddb6c51b72ba3708cd1b65c3527" Feb 02 11:59:55 crc kubenswrapper[4782]: I0202 11:59:55.268154 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ecead2bd3b56b0288f100f95f68947d94971bddb6c51b72ba3708cd1b65c3527"} err="failed to get container status \"ecead2bd3b56b0288f100f95f68947d94971bddb6c51b72ba3708cd1b65c3527\": rpc error: code = NotFound desc = could not find container \"ecead2bd3b56b0288f100f95f68947d94971bddb6c51b72ba3708cd1b65c3527\": container with ID starting with ecead2bd3b56b0288f100f95f68947d94971bddb6c51b72ba3708cd1b65c3527 not found: ID does not exist" Feb 02 11:59:56 crc kubenswrapper[4782]: I0202 11:59:56.823829 4782 scope.go:117] "RemoveContainer" containerID="c4afb04fcac6d851963a75d7989d5b1b2023415817f09615bbb44452a14cc85d" Feb 02 11:59:56 crc kubenswrapper[4782]: I0202 11:59:56.837419 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="502a801a-3da6-4a10-9734-c302cb103c44" path="/var/lib/kubelet/pods/502a801a-3da6-4a10-9734-c302cb103c44/volumes" Feb 02 11:59:57 crc kubenswrapper[4782]: I0202 11:59:57.152925 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-wnbx7" podUID="1c2f854e-81d7-41dc-a93a-199f54f82561" containerName="registry-server" probeResult="failure" output=< Feb 02 11:59:57 crc kubenswrapper[4782]: timeout: failed to connect service ":50051" within 1s Feb 02 11:59:57 crc kubenswrapper[4782]: > Feb 02 11:59:58 crc kubenswrapper[4782]: I0202 11:59:58.218943 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" event={"ID":"7919e98f-cc47-4f3c-9c53-6313850ea7b8","Type":"ContainerStarted","Data":"9febdee74f9ae9910084ea5e3d48133122e8e87f4c97f886874cc0ddea331515"} Feb 02 12:00:00 crc kubenswrapper[4782]: I0202 12:00:00.171082 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500560-rg7cc"] Feb 02 12:00:00 crc kubenswrapper[4782]: E0202 12:00:00.171989 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="502a801a-3da6-4a10-9734-c302cb103c44" containerName="extract-utilities" Feb 02 12:00:00 crc kubenswrapper[4782]: I0202 12:00:00.172005 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="502a801a-3da6-4a10-9734-c302cb103c44" containerName="extract-utilities" Feb 02 12:00:00 crc kubenswrapper[4782]: E0202 12:00:00.172020 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1ef72bf-1fb4-445c-b98f-4140c01f1e6d" containerName="registry-server" Feb 02 12:00:00 crc kubenswrapper[4782]: I0202 12:00:00.172026 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1ef72bf-1fb4-445c-b98f-4140c01f1e6d" containerName="registry-server" Feb 02 12:00:00 crc kubenswrapper[4782]: E0202 12:00:00.172045 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="502a801a-3da6-4a10-9734-c302cb103c44" containerName="extract-content" Feb 02 12:00:00 crc 
kubenswrapper[4782]: I0202 12:00:00.172052 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="502a801a-3da6-4a10-9734-c302cb103c44" containerName="extract-content" Feb 02 12:00:00 crc kubenswrapper[4782]: E0202 12:00:00.172067 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1ef72bf-1fb4-445c-b98f-4140c01f1e6d" containerName="extract-utilities" Feb 02 12:00:00 crc kubenswrapper[4782]: I0202 12:00:00.172074 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1ef72bf-1fb4-445c-b98f-4140c01f1e6d" containerName="extract-utilities" Feb 02 12:00:00 crc kubenswrapper[4782]: E0202 12:00:00.172086 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1ef72bf-1fb4-445c-b98f-4140c01f1e6d" containerName="extract-content" Feb 02 12:00:00 crc kubenswrapper[4782]: I0202 12:00:00.172093 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1ef72bf-1fb4-445c-b98f-4140c01f1e6d" containerName="extract-content" Feb 02 12:00:00 crc kubenswrapper[4782]: E0202 12:00:00.172135 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="502a801a-3da6-4a10-9734-c302cb103c44" containerName="registry-server" Feb 02 12:00:00 crc kubenswrapper[4782]: I0202 12:00:00.172142 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="502a801a-3da6-4a10-9734-c302cb103c44" containerName="registry-server" Feb 02 12:00:00 crc kubenswrapper[4782]: I0202 12:00:00.172336 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="502a801a-3da6-4a10-9734-c302cb103c44" containerName="registry-server" Feb 02 12:00:00 crc kubenswrapper[4782]: I0202 12:00:00.172367 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1ef72bf-1fb4-445c-b98f-4140c01f1e6d" containerName="registry-server" Feb 02 12:00:00 crc kubenswrapper[4782]: I0202 12:00:00.173061 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500560-rg7cc" Feb 02 12:00:00 crc kubenswrapper[4782]: I0202 12:00:00.175777 4782 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 02 12:00:00 crc kubenswrapper[4782]: I0202 12:00:00.176024 4782 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 02 12:00:00 crc kubenswrapper[4782]: I0202 12:00:00.190175 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500560-rg7cc"] Feb 02 12:00:00 crc kubenswrapper[4782]: I0202 12:00:00.284553 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-snl29\" (UniqueName: \"kubernetes.io/projected/4a2b04cd-7ec4-4adc-b5a0-5bf713a82308-kube-api-access-snl29\") pod \"collect-profiles-29500560-rg7cc\" (UID: \"4a2b04cd-7ec4-4adc-b5a0-5bf713a82308\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500560-rg7cc" Feb 02 12:00:00 crc kubenswrapper[4782]: I0202 12:00:00.284920 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4a2b04cd-7ec4-4adc-b5a0-5bf713a82308-secret-volume\") pod \"collect-profiles-29500560-rg7cc\" (UID: \"4a2b04cd-7ec4-4adc-b5a0-5bf713a82308\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500560-rg7cc" Feb 02 12:00:00 crc kubenswrapper[4782]: I0202 12:00:00.285000 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4a2b04cd-7ec4-4adc-b5a0-5bf713a82308-config-volume\") pod \"collect-profiles-29500560-rg7cc\" (UID: \"4a2b04cd-7ec4-4adc-b5a0-5bf713a82308\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500560-rg7cc" Feb 02 12:00:00 crc kubenswrapper[4782]: I0202 12:00:00.387466 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-snl29\" (UniqueName: \"kubernetes.io/projected/4a2b04cd-7ec4-4adc-b5a0-5bf713a82308-kube-api-access-snl29\") pod \"collect-profiles-29500560-rg7cc\" (UID: \"4a2b04cd-7ec4-4adc-b5a0-5bf713a82308\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500560-rg7cc" Feb 02 12:00:00 crc kubenswrapper[4782]: I0202 12:00:00.387750 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4a2b04cd-7ec4-4adc-b5a0-5bf713a82308-secret-volume\") pod \"collect-profiles-29500560-rg7cc\" (UID: \"4a2b04cd-7ec4-4adc-b5a0-5bf713a82308\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500560-rg7cc" Feb 02 12:00:00 crc kubenswrapper[4782]: I0202 12:00:00.387850 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4a2b04cd-7ec4-4adc-b5a0-5bf713a82308-config-volume\") pod \"collect-profiles-29500560-rg7cc\" (UID: \"4a2b04cd-7ec4-4adc-b5a0-5bf713a82308\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500560-rg7cc" Feb 02 12:00:00 crc kubenswrapper[4782]: I0202 12:00:00.389041 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4a2b04cd-7ec4-4adc-b5a0-5bf713a82308-config-volume\") pod 
\"collect-profiles-29500560-rg7cc\" (UID: \"4a2b04cd-7ec4-4adc-b5a0-5bf713a82308\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500560-rg7cc" Feb 02 12:00:00 crc kubenswrapper[4782]: I0202 12:00:00.398536 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4a2b04cd-7ec4-4adc-b5a0-5bf713a82308-secret-volume\") pod \"collect-profiles-29500560-rg7cc\" (UID: \"4a2b04cd-7ec4-4adc-b5a0-5bf713a82308\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500560-rg7cc" Feb 02 12:00:00 crc kubenswrapper[4782]: I0202 12:00:00.415771 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-snl29\" (UniqueName: \"kubernetes.io/projected/4a2b04cd-7ec4-4adc-b5a0-5bf713a82308-kube-api-access-snl29\") pod \"collect-profiles-29500560-rg7cc\" (UID: \"4a2b04cd-7ec4-4adc-b5a0-5bf713a82308\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500560-rg7cc" Feb 02 12:00:00 crc kubenswrapper[4782]: I0202 12:00:00.517071 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500560-rg7cc" Feb 02 12:00:01 crc kubenswrapper[4782]: I0202 12:00:01.079309 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500560-rg7cc"] Feb 02 12:00:01 crc kubenswrapper[4782]: W0202 12:00:01.092205 4782 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4a2b04cd_7ec4_4adc_b5a0_5bf713a82308.slice/crio-720630481ffd529549c57a3d5dc2c47adc0ae5126cef3ce8115fadcd36ee9a18 WatchSource:0}: Error finding container 720630481ffd529549c57a3d5dc2c47adc0ae5126cef3ce8115fadcd36ee9a18: Status 404 returned error can't find the container with id 720630481ffd529549c57a3d5dc2c47adc0ae5126cef3ce8115fadcd36ee9a18 Feb 02 12:00:01 crc kubenswrapper[4782]: I0202 12:00:01.263096 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500560-rg7cc" event={"ID":"4a2b04cd-7ec4-4adc-b5a0-5bf713a82308","Type":"ContainerStarted","Data":"7a62587383e5551f5bfde9cfb7e6d6d828dbf36bd44c98586b1a92cd5268a467"} Feb 02 12:00:01 crc kubenswrapper[4782]: I0202 12:00:01.263136 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500560-rg7cc" event={"ID":"4a2b04cd-7ec4-4adc-b5a0-5bf713a82308","Type":"ContainerStarted","Data":"720630481ffd529549c57a3d5dc2c47adc0ae5126cef3ce8115fadcd36ee9a18"} Feb 02 12:00:02 crc kubenswrapper[4782]: I0202 12:00:02.294928 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29500560-rg7cc" podStartSLOduration=2.294908134 podStartE2EDuration="2.294908134s" podCreationTimestamp="2026-02-02 12:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 12:00:02.291767234 +0000 UTC m=+4882.175959950" watchObservedRunningTime="2026-02-02 12:00:02.294908134 +0000 UTC m=+4882.179100850" Feb 02 12:00:03 crc kubenswrapper[4782]: I0202 12:00:03.283003 4782 generic.go:334] "Generic (PLEG): container finished" podID="4a2b04cd-7ec4-4adc-b5a0-5bf713a82308" containerID="7a62587383e5551f5bfde9cfb7e6d6d828dbf36bd44c98586b1a92cd5268a467" exitCode=0 Feb 02 12:00:03 crc kubenswrapper[4782]: I0202 12:00:03.283044 
4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500560-rg7cc" event={"ID":"4a2b04cd-7ec4-4adc-b5a0-5bf713a82308","Type":"ContainerDied","Data":"7a62587383e5551f5bfde9cfb7e6d6d828dbf36bd44c98586b1a92cd5268a467"} Feb 02 12:00:04 crc kubenswrapper[4782]: I0202 12:00:04.771493 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500560-rg7cc" Feb 02 12:00:04 crc kubenswrapper[4782]: I0202 12:00:04.798603 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4a2b04cd-7ec4-4adc-b5a0-5bf713a82308-config-volume\") pod \"4a2b04cd-7ec4-4adc-b5a0-5bf713a82308\" (UID: \"4a2b04cd-7ec4-4adc-b5a0-5bf713a82308\") " Feb 02 12:00:04 crc kubenswrapper[4782]: I0202 12:00:04.798800 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4a2b04cd-7ec4-4adc-b5a0-5bf713a82308-secret-volume\") pod \"4a2b04cd-7ec4-4adc-b5a0-5bf713a82308\" (UID: \"4a2b04cd-7ec4-4adc-b5a0-5bf713a82308\") " Feb 02 12:00:04 crc kubenswrapper[4782]: I0202 12:00:04.798819 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-snl29\" (UniqueName: \"kubernetes.io/projected/4a2b04cd-7ec4-4adc-b5a0-5bf713a82308-kube-api-access-snl29\") pod \"4a2b04cd-7ec4-4adc-b5a0-5bf713a82308\" (UID: \"4a2b04cd-7ec4-4adc-b5a0-5bf713a82308\") " Feb 02 12:00:04 crc kubenswrapper[4782]: I0202 12:00:04.800957 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a2b04cd-7ec4-4adc-b5a0-5bf713a82308-config-volume" (OuterVolumeSpecName: "config-volume") pod "4a2b04cd-7ec4-4adc-b5a0-5bf713a82308" (UID: "4a2b04cd-7ec4-4adc-b5a0-5bf713a82308"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 12:00:04 crc kubenswrapper[4782]: I0202 12:00:04.807760 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a2b04cd-7ec4-4adc-b5a0-5bf713a82308-kube-api-access-snl29" (OuterVolumeSpecName: "kube-api-access-snl29") pod "4a2b04cd-7ec4-4adc-b5a0-5bf713a82308" (UID: "4a2b04cd-7ec4-4adc-b5a0-5bf713a82308"). InnerVolumeSpecName "kube-api-access-snl29". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 12:00:04 crc kubenswrapper[4782]: I0202 12:00:04.808541 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a2b04cd-7ec4-4adc-b5a0-5bf713a82308-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "4a2b04cd-7ec4-4adc-b5a0-5bf713a82308" (UID: "4a2b04cd-7ec4-4adc-b5a0-5bf713a82308"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 12:00:04 crc kubenswrapper[4782]: I0202 12:00:04.900818 4782 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4a2b04cd-7ec4-4adc-b5a0-5bf713a82308-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 02 12:00:04 crc kubenswrapper[4782]: I0202 12:00:04.900844 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-snl29\" (UniqueName: \"kubernetes.io/projected/4a2b04cd-7ec4-4adc-b5a0-5bf713a82308-kube-api-access-snl29\") on node \"crc\" DevicePath \"\"" Feb 02 12:00:04 crc kubenswrapper[4782]: I0202 12:00:04.900853 4782 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4a2b04cd-7ec4-4adc-b5a0-5bf713a82308-config-volume\") on node \"crc\" DevicePath \"\"" Feb 02 12:00:05 crc kubenswrapper[4782]: I0202 12:00:05.302739 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500560-rg7cc" event={"ID":"4a2b04cd-7ec4-4adc-b5a0-5bf713a82308","Type":"ContainerDied","Data":"720630481ffd529549c57a3d5dc2c47adc0ae5126cef3ce8115fadcd36ee9a18"} Feb 02 12:00:05 crc kubenswrapper[4782]: I0202 12:00:05.302836 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="720630481ffd529549c57a3d5dc2c47adc0ae5126cef3ce8115fadcd36ee9a18" Feb 02 12:00:05 crc kubenswrapper[4782]: I0202 12:00:05.302891 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500560-rg7cc" Feb 02 12:00:05 crc kubenswrapper[4782]: I0202 12:00:05.393054 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500515-svbzd"] Feb 02 12:00:05 crc kubenswrapper[4782]: I0202 12:00:05.397999 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500515-svbzd"] Feb 02 12:00:06 crc kubenswrapper[4782]: I0202 12:00:06.110492 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-wnbx7" Feb 02 12:00:06 crc kubenswrapper[4782]: I0202 12:00:06.166118 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-wnbx7" Feb 02 12:00:06 crc kubenswrapper[4782]: I0202 12:00:06.258160 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_17f9dd31-25b9-4b3f-82a6-12096f36308a/memcached/0.log" Feb 02 12:00:06 crc kubenswrapper[4782]: I0202 12:00:06.834402 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49267abf-7f15-4460-bbc4-d7b0cc162817" path="/var/lib/kubelet/pods/49267abf-7f15-4460-bbc4-d7b0cc162817/volumes" Feb 02 12:00:06 crc kubenswrapper[4782]: I0202 12:00:06.915393 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wnbx7"] Feb 02 12:00:07 crc kubenswrapper[4782]: I0202 12:00:07.317164 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-wnbx7" podUID="1c2f854e-81d7-41dc-a93a-199f54f82561" containerName="registry-server" containerID="cri-o://a2f42c530aff8fd13a9d5a982e609b0b38b90ccdf1e4145763f4434429954d49" gracePeriod=2 Feb 02 12:00:07 crc kubenswrapper[4782]: I0202 12:00:07.807547 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wnbx7" Feb 02 12:00:07 crc kubenswrapper[4782]: I0202 12:00:07.892610 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c2f854e-81d7-41dc-a93a-199f54f82561-utilities\") pod \"1c2f854e-81d7-41dc-a93a-199f54f82561\" (UID: \"1c2f854e-81d7-41dc-a93a-199f54f82561\") " Feb 02 12:00:07 crc kubenswrapper[4782]: I0202 12:00:07.892790 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hqfc8\" (UniqueName: \"kubernetes.io/projected/1c2f854e-81d7-41dc-a93a-199f54f82561-kube-api-access-hqfc8\") pod \"1c2f854e-81d7-41dc-a93a-199f54f82561\" (UID: \"1c2f854e-81d7-41dc-a93a-199f54f82561\") " Feb 02 12:00:07 crc kubenswrapper[4782]: I0202 12:00:07.892850 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c2f854e-81d7-41dc-a93a-199f54f82561-catalog-content\") pod \"1c2f854e-81d7-41dc-a93a-199f54f82561\" (UID: \"1c2f854e-81d7-41dc-a93a-199f54f82561\") " Feb 02 12:00:07 crc kubenswrapper[4782]: I0202 12:00:07.902055 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1c2f854e-81d7-41dc-a93a-199f54f82561-utilities" (OuterVolumeSpecName: "utilities") pod "1c2f854e-81d7-41dc-a93a-199f54f82561" (UID: "1c2f854e-81d7-41dc-a93a-199f54f82561"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 12:00:07 crc kubenswrapper[4782]: I0202 12:00:07.947126 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1c2f854e-81d7-41dc-a93a-199f54f82561-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1c2f854e-81d7-41dc-a93a-199f54f82561" (UID: "1c2f854e-81d7-41dc-a93a-199f54f82561"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 12:00:07 crc kubenswrapper[4782]: I0202 12:00:07.995120 4782 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c2f854e-81d7-41dc-a93a-199f54f82561-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 12:00:07 crc kubenswrapper[4782]: I0202 12:00:07.995165 4782 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c2f854e-81d7-41dc-a93a-199f54f82561-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 12:00:08 crc kubenswrapper[4782]: I0202 12:00:08.331810 4782 generic.go:334] "Generic (PLEG): container finished" podID="1c2f854e-81d7-41dc-a93a-199f54f82561" containerID="a2f42c530aff8fd13a9d5a982e609b0b38b90ccdf1e4145763f4434429954d49" exitCode=0 Feb 02 12:00:08 crc kubenswrapper[4782]: I0202 12:00:08.331871 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wnbx7" event={"ID":"1c2f854e-81d7-41dc-a93a-199f54f82561","Type":"ContainerDied","Data":"a2f42c530aff8fd13a9d5a982e609b0b38b90ccdf1e4145763f4434429954d49"} Feb 02 12:00:08 crc kubenswrapper[4782]: I0202 12:00:08.331915 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wnbx7" event={"ID":"1c2f854e-81d7-41dc-a93a-199f54f82561","Type":"ContainerDied","Data":"15e47b0c929ab147f64b266f97c9bf69f4e09af2e44441ef9ec8b37376116768"} Feb 02 12:00:08 crc kubenswrapper[4782]: I0202 12:00:08.331937 4782 scope.go:117] "RemoveContainer" containerID="a2f42c530aff8fd13a9d5a982e609b0b38b90ccdf1e4145763f4434429954d49" Feb 02 12:00:08 crc kubenswrapper[4782]: I0202 12:00:08.331970 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wnbx7" Feb 02 12:00:08 crc kubenswrapper[4782]: I0202 12:00:08.357962 4782 scope.go:117] "RemoveContainer" containerID="0cc04f5ddbfaa46f26e8dc201aa5973f6caabc7737dc6a19f8d2838d9f031af3" Feb 02 12:00:08 crc kubenswrapper[4782]: I0202 12:00:08.506229 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c2f854e-81d7-41dc-a93a-199f54f82561-kube-api-access-hqfc8" (OuterVolumeSpecName: "kube-api-access-hqfc8") pod "1c2f854e-81d7-41dc-a93a-199f54f82561" (UID: "1c2f854e-81d7-41dc-a93a-199f54f82561"). InnerVolumeSpecName "kube-api-access-hqfc8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 12:00:08 crc kubenswrapper[4782]: I0202 12:00:08.507197 4782 scope.go:117] "RemoveContainer" containerID="6cd3ffd6f1e6b6383ca5e7ba4465026e7393a62b85dfecd3ce34a4d8a55f9eb1" Feb 02 12:00:08 crc kubenswrapper[4782]: I0202 12:00:08.510738 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hqfc8\" (UniqueName: \"kubernetes.io/projected/1c2f854e-81d7-41dc-a93a-199f54f82561-kube-api-access-hqfc8\") on node \"crc\" DevicePath \"\"" Feb 02 12:00:08 crc kubenswrapper[4782]: I0202 12:00:08.584370 4782 scope.go:117] "RemoveContainer" containerID="a2f42c530aff8fd13a9d5a982e609b0b38b90ccdf1e4145763f4434429954d49" Feb 02 12:00:08 crc kubenswrapper[4782]: E0202 12:00:08.584847 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a2f42c530aff8fd13a9d5a982e609b0b38b90ccdf1e4145763f4434429954d49\": container with ID starting with a2f42c530aff8fd13a9d5a982e609b0b38b90ccdf1e4145763f4434429954d49 not found: ID does not exist" containerID="a2f42c530aff8fd13a9d5a982e609b0b38b90ccdf1e4145763f4434429954d49" Feb 02 12:00:08 crc kubenswrapper[4782]: I0202 12:00:08.584877 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a2f42c530aff8fd13a9d5a982e609b0b38b90ccdf1e4145763f4434429954d49"} err="failed to get container status \"a2f42c530aff8fd13a9d5a982e609b0b38b90ccdf1e4145763f4434429954d49\": rpc error: code = NotFound desc = could not find container \"a2f42c530aff8fd13a9d5a982e609b0b38b90ccdf1e4145763f4434429954d49\": container with ID starting with a2f42c530aff8fd13a9d5a982e609b0b38b90ccdf1e4145763f4434429954d49 not found: ID does not exist" Feb 02 12:00:08 crc kubenswrapper[4782]: I0202 12:00:08.584900 4782 scope.go:117] "RemoveContainer" containerID="0cc04f5ddbfaa46f26e8dc201aa5973f6caabc7737dc6a19f8d2838d9f031af3" Feb 02 12:00:08 crc kubenswrapper[4782]: E0202 12:00:08.585120 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0cc04f5ddbfaa46f26e8dc201aa5973f6caabc7737dc6a19f8d2838d9f031af3\": container with ID starting with 0cc04f5ddbfaa46f26e8dc201aa5973f6caabc7737dc6a19f8d2838d9f031af3 not found: ID does not exist" containerID="0cc04f5ddbfaa46f26e8dc201aa5973f6caabc7737dc6a19f8d2838d9f031af3" Feb 02 12:00:08 crc kubenswrapper[4782]: I0202 12:00:08.585141 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0cc04f5ddbfaa46f26e8dc201aa5973f6caabc7737dc6a19f8d2838d9f031af3"} err="failed to get container status \"0cc04f5ddbfaa46f26e8dc201aa5973f6caabc7737dc6a19f8d2838d9f031af3\": rpc error: code = NotFound desc = could not find container \"0cc04f5ddbfaa46f26e8dc201aa5973f6caabc7737dc6a19f8d2838d9f031af3\": container with ID starting with 0cc04f5ddbfaa46f26e8dc201aa5973f6caabc7737dc6a19f8d2838d9f031af3 not found: ID does not exist" Feb 02 12:00:08 crc kubenswrapper[4782]: I0202 12:00:08.585154 4782 scope.go:117] "RemoveContainer" containerID="6cd3ffd6f1e6b6383ca5e7ba4465026e7393a62b85dfecd3ce34a4d8a55f9eb1" Feb 02 12:00:08 crc kubenswrapper[4782]: E0202 12:00:08.585366 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6cd3ffd6f1e6b6383ca5e7ba4465026e7393a62b85dfecd3ce34a4d8a55f9eb1\": container with ID starting with 6cd3ffd6f1e6b6383ca5e7ba4465026e7393a62b85dfecd3ce34a4d8a55f9eb1 not found: ID does not 
exist" containerID="6cd3ffd6f1e6b6383ca5e7ba4465026e7393a62b85dfecd3ce34a4d8a55f9eb1" Feb 02 12:00:08 crc kubenswrapper[4782]: I0202 12:00:08.585392 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6cd3ffd6f1e6b6383ca5e7ba4465026e7393a62b85dfecd3ce34a4d8a55f9eb1"} err="failed to get container status \"6cd3ffd6f1e6b6383ca5e7ba4465026e7393a62b85dfecd3ce34a4d8a55f9eb1\": rpc error: code = NotFound desc = could not find container \"6cd3ffd6f1e6b6383ca5e7ba4465026e7393a62b85dfecd3ce34a4d8a55f9eb1\": container with ID starting with 6cd3ffd6f1e6b6383ca5e7ba4465026e7393a62b85dfecd3ce34a4d8a55f9eb1 not found: ID does not exist" Feb 02 12:00:08 crc kubenswrapper[4782]: I0202 12:00:08.723806 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wnbx7"] Feb 02 12:00:08 crc kubenswrapper[4782]: I0202 12:00:08.738982 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-wnbx7"] Feb 02 12:00:08 crc kubenswrapper[4782]: I0202 12:00:08.835968 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c2f854e-81d7-41dc-a93a-199f54f82561" path="/var/lib/kubelet/pods/1c2f854e-81d7-41dc-a93a-199f54f82561/volumes" Feb 02 12:00:27 crc kubenswrapper[4782]: I0202 12:00:27.192099 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_930f346981e03e3c24c980e9ea06410e1be418518114e573bd8f4b33ab2ckrh_120b307b-b163-4e00-be79-cacf3e7e84e1/util/0.log" Feb 02 12:00:27 crc kubenswrapper[4782]: I0202 12:00:27.450106 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_930f346981e03e3c24c980e9ea06410e1be418518114e573bd8f4b33ab2ckrh_120b307b-b163-4e00-be79-cacf3e7e84e1/util/0.log" Feb 02 12:00:27 crc kubenswrapper[4782]: I0202 12:00:27.475589 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_930f346981e03e3c24c980e9ea06410e1be418518114e573bd8f4b33ab2ckrh_120b307b-b163-4e00-be79-cacf3e7e84e1/pull/0.log" Feb 02 12:00:27 crc kubenswrapper[4782]: I0202 12:00:27.498100 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_930f346981e03e3c24c980e9ea06410e1be418518114e573bd8f4b33ab2ckrh_120b307b-b163-4e00-be79-cacf3e7e84e1/pull/0.log" Feb 02 12:00:27 crc kubenswrapper[4782]: I0202 12:00:27.655737 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_930f346981e03e3c24c980e9ea06410e1be418518114e573bd8f4b33ab2ckrh_120b307b-b163-4e00-be79-cacf3e7e84e1/util/0.log" Feb 02 12:00:27 crc kubenswrapper[4782]: I0202 12:00:27.666153 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_930f346981e03e3c24c980e9ea06410e1be418518114e573bd8f4b33ab2ckrh_120b307b-b163-4e00-be79-cacf3e7e84e1/pull/0.log" Feb 02 12:00:27 crc kubenswrapper[4782]: I0202 12:00:27.740971 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_930f346981e03e3c24c980e9ea06410e1be418518114e573bd8f4b33ab2ckrh_120b307b-b163-4e00-be79-cacf3e7e84e1/extract/0.log" Feb 02 12:00:28 crc kubenswrapper[4782]: I0202 12:00:28.318284 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-8d874c8fc-vj4sh_bfafd643-4798-4519-934d-8ec3e2e677d9/manager/0.log" Feb 02 12:00:28 crc kubenswrapper[4782]: I0202 12:00:28.350956 4782 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7b6c4d8c5f-5ngrn_0aa487d3-a703-4ed6-a44c-bc40eb8272ce/manager/0.log" Feb 02 12:00:28 crc kubenswrapper[4782]: I0202 12:00:28.523682 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-6d9697b7f4-5vj4j_9ba082c6-4f91-48d6-b5ec-198f46abc135/manager/0.log" Feb 02 12:00:28 crc kubenswrapper[4782]: I0202 12:00:28.629576 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-8886f4c47-v7tzl_b03fe987-deab-47e7-829a-b822ab061f20/manager/0.log" Feb 02 12:00:28 crc kubenswrapper[4782]: I0202 12:00:28.773589 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-69d6db494d-fkwh5_7fa679ab-d8ad-4dae-9488-c9bbc93ae5d7/manager/0.log" Feb 02 12:00:28 crc kubenswrapper[4782]: I0202 12:00:28.880539 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5fb775575f-7z5k7_224f30b2-1084-4934-8d06-67975a9776ad/manager/0.log" Feb 02 12:00:29 crc kubenswrapper[4782]: I0202 12:00:29.192031 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-5f4b8bd54d-v94dv_6a74bdcf-4aaf-4fd7-b24d-7cb1d47d1f27/manager/0.log" Feb 02 12:00:29 crc kubenswrapper[4782]: I0202 12:00:29.279633 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-79955696d6-nsx4j_009bc68d-5c70-42ca-9008-152206fd954d/manager/0.log" Feb 02 12:00:29 crc kubenswrapper[4782]: I0202 12:00:29.461886 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-84f48565d4-w7gld_6b276ac2-533f-43c9-94a1-f0d0e4eb6993/manager/0.log" Feb 02 12:00:29 crc kubenswrapper[4782]: I0202 12:00:29.539447 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-7dd968899f-scr7v_f44c1b55-d189-42dd-9187-90d9e0713790/manager/0.log" Feb 02 12:00:29 crc kubenswrapper[4782]: I0202 12:00:29.730550 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-67bf948998-n88d6_3624e93f-9208-4f82-9f55-12381a637262/manager/0.log" Feb 02 12:00:29 crc kubenswrapper[4782]: I0202 12:00:29.889594 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-585dbc889-l9q78_216a79cc-1b33-43f7-81ff-400a3b6f3d00/manager/0.log" Feb 02 12:00:30 crc kubenswrapper[4782]: I0202 12:00:30.074020 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-55bff696bd-v8zfh_ab3a96ec-3e51-4147-9a58-6596f2c3ad5c/manager/0.log" Feb 02 12:00:30 crc kubenswrapper[4782]: I0202 12:00:30.096042 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-6687f8d877-r9dkb_7e19a281-abaa-462e-abc7-add4acff7865/manager/0.log" Feb 02 12:00:30 crc kubenswrapper[4782]: I0202 12:00:30.324284 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-59c4b45c4dtmpbf_6c7ac81b-49d3-493d-a794-1cffe78eba5e/manager/0.log" Feb 02 12:00:30 crc kubenswrapper[4782]: I0202 12:00:30.595616 4782 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_openstack-operator-controller-init-68b945c8c7-jwf5m_c12a72da-af7d-4f2e-b15d-bb90fa6bd818/operator/0.log" Feb 02 12:00:30 crc kubenswrapper[4782]: I0202 12:00:30.758410 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-ml428_504a2863-da7c-4a03-b973-0f687ca20746/registry-server/0.log" Feb 02 12:00:31 crc kubenswrapper[4782]: I0202 12:00:31.107827 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-788c46999f-9ls2x_2f8b3b48-0c03-4922-8966-a3aaca8ebce3/manager/0.log" Feb 02 12:00:31 crc kubenswrapper[4782]: I0202 12:00:31.155719 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5b964cf4cd-dmncd_6ac6c6b4-9123-4c39-b26f-b07880c1a6c6/manager/0.log" Feb 02 12:00:31 crc kubenswrapper[4782]: I0202 12:00:31.462844 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-jjztq_83a0d24e-3e0c-4d9a-b735-77c74ceec664/operator/0.log" Feb 02 12:00:31 crc kubenswrapper[4782]: I0202 12:00:31.547223 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-68fc8c869-xnzl4_1661d177-41b5-4df5-886f-f3cb7abd1047/manager/0.log" Feb 02 12:00:31 crc kubenswrapper[4782]: I0202 12:00:31.822668 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-64b5b76f97-ckl5m_c617a97c-fec4-418c-818a-250919ea6882/manager/0.log" Feb 02 12:00:31 crc kubenswrapper[4782]: I0202 12:00:31.918571 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-56f8bfcd9f-82nk8_0fd2f609-78f1-4f82-b405-35b5312baf0d/manager/0.log" Feb 02 12:00:32 crc kubenswrapper[4782]: I0202 12:00:32.012308 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-6b655fd757-r6hxp_5844bcff-6d6e-4cf4-89af-dfecfc748869/manager/0.log" Feb 02 12:00:32 crc kubenswrapper[4782]: I0202 12:00:32.074501 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-564965969-k7t28_127c9a45-7187-4afb-bb45-c34a45e67e4e/manager/0.log" Feb 02 12:00:56 crc kubenswrapper[4782]: I0202 12:00:56.449444 4782 scope.go:117] "RemoveContainer" containerID="f26205a7a090662d7627013616952d20f36db3d708e8b6aa67a214bacd583878" Feb 02 12:00:58 crc kubenswrapper[4782]: I0202 12:00:58.640874 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-wqm6f_2fdb9068-c8eb-4a1d-b4ab-c3f2ed70e4c1/control-plane-machine-set-operator/0.log" Feb 02 12:00:58 crc kubenswrapper[4782]: I0202 12:00:58.693472 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-5br4b_063dd8d0-356e-4c11-96fd-6ecee1f28da8/kube-rbac-proxy/0.log" Feb 02 12:00:58 crc kubenswrapper[4782]: I0202 12:00:58.787284 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-5br4b_063dd8d0-356e-4c11-96fd-6ecee1f28da8/machine-api-operator/0.log" Feb 02 12:01:00 crc kubenswrapper[4782]: I0202 12:01:00.162591 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29500561-wlnfp"] Feb 02 12:01:00 crc kubenswrapper[4782]: 
E0202 12:01:00.163437 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a2b04cd-7ec4-4adc-b5a0-5bf713a82308" containerName="collect-profiles" Feb 02 12:01:00 crc kubenswrapper[4782]: I0202 12:01:00.163455 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a2b04cd-7ec4-4adc-b5a0-5bf713a82308" containerName="collect-profiles" Feb 02 12:01:00 crc kubenswrapper[4782]: E0202 12:01:00.163468 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c2f854e-81d7-41dc-a93a-199f54f82561" containerName="extract-utilities" Feb 02 12:01:00 crc kubenswrapper[4782]: I0202 12:01:00.163476 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c2f854e-81d7-41dc-a93a-199f54f82561" containerName="extract-utilities" Feb 02 12:01:00 crc kubenswrapper[4782]: E0202 12:01:00.163509 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c2f854e-81d7-41dc-a93a-199f54f82561" containerName="registry-server" Feb 02 12:01:00 crc kubenswrapper[4782]: I0202 12:01:00.163518 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c2f854e-81d7-41dc-a93a-199f54f82561" containerName="registry-server" Feb 02 12:01:00 crc kubenswrapper[4782]: E0202 12:01:00.163533 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c2f854e-81d7-41dc-a93a-199f54f82561" containerName="extract-content" Feb 02 12:01:00 crc kubenswrapper[4782]: I0202 12:01:00.163541 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c2f854e-81d7-41dc-a93a-199f54f82561" containerName="extract-content" Feb 02 12:01:00 crc kubenswrapper[4782]: I0202 12:01:00.163813 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a2b04cd-7ec4-4adc-b5a0-5bf713a82308" containerName="collect-profiles" Feb 02 12:01:00 crc kubenswrapper[4782]: I0202 12:01:00.163832 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c2f854e-81d7-41dc-a93a-199f54f82561" containerName="registry-server" Feb 02 12:01:00 crc kubenswrapper[4782]: I0202 12:01:00.172687 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29500561-wlnfp" Feb 02 12:01:00 crc kubenswrapper[4782]: I0202 12:01:00.175580 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29500561-wlnfp"] Feb 02 12:01:00 crc kubenswrapper[4782]: I0202 12:01:00.270320 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phzft\" (UniqueName: \"kubernetes.io/projected/7aa564ec-fd60-4e15-a333-00c20608ec39-kube-api-access-phzft\") pod \"keystone-cron-29500561-wlnfp\" (UID: \"7aa564ec-fd60-4e15-a333-00c20608ec39\") " pod="openstack/keystone-cron-29500561-wlnfp" Feb 02 12:01:00 crc kubenswrapper[4782]: I0202 12:01:00.270476 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7aa564ec-fd60-4e15-a333-00c20608ec39-fernet-keys\") pod \"keystone-cron-29500561-wlnfp\" (UID: \"7aa564ec-fd60-4e15-a333-00c20608ec39\") " pod="openstack/keystone-cron-29500561-wlnfp" Feb 02 12:01:00 crc kubenswrapper[4782]: I0202 12:01:00.270666 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7aa564ec-fd60-4e15-a333-00c20608ec39-config-data\") pod \"keystone-cron-29500561-wlnfp\" (UID: \"7aa564ec-fd60-4e15-a333-00c20608ec39\") " pod="openstack/keystone-cron-29500561-wlnfp" Feb 02 12:01:00 crc kubenswrapper[4782]: I0202 12:01:00.270819 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7aa564ec-fd60-4e15-a333-00c20608ec39-combined-ca-bundle\") pod \"keystone-cron-29500561-wlnfp\" (UID: \"7aa564ec-fd60-4e15-a333-00c20608ec39\") " pod="openstack/keystone-cron-29500561-wlnfp" Feb 02 12:01:00 crc kubenswrapper[4782]: I0202 12:01:00.372601 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7aa564ec-fd60-4e15-a333-00c20608ec39-config-data\") pod \"keystone-cron-29500561-wlnfp\" (UID: \"7aa564ec-fd60-4e15-a333-00c20608ec39\") " pod="openstack/keystone-cron-29500561-wlnfp" Feb 02 12:01:00 crc kubenswrapper[4782]: I0202 12:01:00.372726 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7aa564ec-fd60-4e15-a333-00c20608ec39-combined-ca-bundle\") pod \"keystone-cron-29500561-wlnfp\" (UID: \"7aa564ec-fd60-4e15-a333-00c20608ec39\") " pod="openstack/keystone-cron-29500561-wlnfp" Feb 02 12:01:00 crc kubenswrapper[4782]: I0202 12:01:00.372838 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-phzft\" (UniqueName: \"kubernetes.io/projected/7aa564ec-fd60-4e15-a333-00c20608ec39-kube-api-access-phzft\") pod \"keystone-cron-29500561-wlnfp\" (UID: \"7aa564ec-fd60-4e15-a333-00c20608ec39\") " pod="openstack/keystone-cron-29500561-wlnfp" Feb 02 12:01:00 crc kubenswrapper[4782]: I0202 12:01:00.372959 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7aa564ec-fd60-4e15-a333-00c20608ec39-fernet-keys\") pod \"keystone-cron-29500561-wlnfp\" (UID: \"7aa564ec-fd60-4e15-a333-00c20608ec39\") " pod="openstack/keystone-cron-29500561-wlnfp" Feb 02 12:01:00 crc kubenswrapper[4782]: I0202 12:01:00.413796 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7aa564ec-fd60-4e15-a333-00c20608ec39-combined-ca-bundle\") pod \"keystone-cron-29500561-wlnfp\" (UID: \"7aa564ec-fd60-4e15-a333-00c20608ec39\") " pod="openstack/keystone-cron-29500561-wlnfp" Feb 02 12:01:00 crc kubenswrapper[4782]: I0202 12:01:00.416447 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7aa564ec-fd60-4e15-a333-00c20608ec39-fernet-keys\") pod \"keystone-cron-29500561-wlnfp\" (UID: \"7aa564ec-fd60-4e15-a333-00c20608ec39\") " pod="openstack/keystone-cron-29500561-wlnfp" Feb 02 12:01:00 crc kubenswrapper[4782]: I0202 12:01:00.417326 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-phzft\" (UniqueName: \"kubernetes.io/projected/7aa564ec-fd60-4e15-a333-00c20608ec39-kube-api-access-phzft\") pod \"keystone-cron-29500561-wlnfp\" (UID: \"7aa564ec-fd60-4e15-a333-00c20608ec39\") " pod="openstack/keystone-cron-29500561-wlnfp" Feb 02 12:01:00 crc kubenswrapper[4782]: I0202 12:01:00.417509 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7aa564ec-fd60-4e15-a333-00c20608ec39-config-data\") pod \"keystone-cron-29500561-wlnfp\" (UID: \"7aa564ec-fd60-4e15-a333-00c20608ec39\") " pod="openstack/keystone-cron-29500561-wlnfp" Feb 02 12:01:00 crc kubenswrapper[4782]: I0202 12:01:00.511693 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29500561-wlnfp" Feb 02 12:01:01 crc kubenswrapper[4782]: I0202 12:01:01.014035 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29500561-wlnfp"] Feb 02 12:01:01 crc kubenswrapper[4782]: I0202 12:01:01.756319 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29500561-wlnfp" event={"ID":"7aa564ec-fd60-4e15-a333-00c20608ec39","Type":"ContainerStarted","Data":"b7169486e5ac45b8766630850cf6d99e05138fec07cba7881059a5a96e2ff8b6"} Feb 02 12:01:01 crc kubenswrapper[4782]: I0202 12:01:01.756986 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29500561-wlnfp" event={"ID":"7aa564ec-fd60-4e15-a333-00c20608ec39","Type":"ContainerStarted","Data":"f26055ff4300e33bae45d01c0e09490da1d2d8ba8a243f0d5e424909fd518bed"} Feb 02 12:01:01 crc kubenswrapper[4782]: I0202 12:01:01.780613 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29500561-wlnfp" podStartSLOduration=1.7805892829999999 podStartE2EDuration="1.780589283s" podCreationTimestamp="2026-02-02 12:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 12:01:01.774992692 +0000 UTC m=+4941.659185408" watchObservedRunningTime="2026-02-02 12:01:01.780589283 +0000 UTC m=+4941.664781989" Feb 02 12:01:06 crc kubenswrapper[4782]: I0202 12:01:06.294742 4782 generic.go:334] "Generic (PLEG): container finished" podID="7aa564ec-fd60-4e15-a333-00c20608ec39" containerID="b7169486e5ac45b8766630850cf6d99e05138fec07cba7881059a5a96e2ff8b6" exitCode=0 Feb 02 12:01:06 crc kubenswrapper[4782]: I0202 12:01:06.294818 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29500561-wlnfp" event={"ID":"7aa564ec-fd60-4e15-a333-00c20608ec39","Type":"ContainerDied","Data":"b7169486e5ac45b8766630850cf6d99e05138fec07cba7881059a5a96e2ff8b6"} Feb 02 12:01:07 crc 
kubenswrapper[4782]: I0202 12:01:07.779742 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29500561-wlnfp" Feb 02 12:01:07 crc kubenswrapper[4782]: I0202 12:01:07.927813 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7aa564ec-fd60-4e15-a333-00c20608ec39-config-data\") pod \"7aa564ec-fd60-4e15-a333-00c20608ec39\" (UID: \"7aa564ec-fd60-4e15-a333-00c20608ec39\") " Feb 02 12:01:07 crc kubenswrapper[4782]: I0202 12:01:07.927997 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-phzft\" (UniqueName: \"kubernetes.io/projected/7aa564ec-fd60-4e15-a333-00c20608ec39-kube-api-access-phzft\") pod \"7aa564ec-fd60-4e15-a333-00c20608ec39\" (UID: \"7aa564ec-fd60-4e15-a333-00c20608ec39\") " Feb 02 12:01:07 crc kubenswrapper[4782]: I0202 12:01:07.928046 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7aa564ec-fd60-4e15-a333-00c20608ec39-combined-ca-bundle\") pod \"7aa564ec-fd60-4e15-a333-00c20608ec39\" (UID: \"7aa564ec-fd60-4e15-a333-00c20608ec39\") " Feb 02 12:01:07 crc kubenswrapper[4782]: I0202 12:01:07.928124 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7aa564ec-fd60-4e15-a333-00c20608ec39-fernet-keys\") pod \"7aa564ec-fd60-4e15-a333-00c20608ec39\" (UID: \"7aa564ec-fd60-4e15-a333-00c20608ec39\") " Feb 02 12:01:07 crc kubenswrapper[4782]: I0202 12:01:07.965246 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7aa564ec-fd60-4e15-a333-00c20608ec39-kube-api-access-phzft" (OuterVolumeSpecName: "kube-api-access-phzft") pod "7aa564ec-fd60-4e15-a333-00c20608ec39" (UID: "7aa564ec-fd60-4e15-a333-00c20608ec39"). InnerVolumeSpecName "kube-api-access-phzft". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 12:01:07 crc kubenswrapper[4782]: I0202 12:01:07.967314 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7aa564ec-fd60-4e15-a333-00c20608ec39-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "7aa564ec-fd60-4e15-a333-00c20608ec39" (UID: "7aa564ec-fd60-4e15-a333-00c20608ec39"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 12:01:07 crc kubenswrapper[4782]: I0202 12:01:07.978181 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7aa564ec-fd60-4e15-a333-00c20608ec39-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7aa564ec-fd60-4e15-a333-00c20608ec39" (UID: "7aa564ec-fd60-4e15-a333-00c20608ec39"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 12:01:08 crc kubenswrapper[4782]: I0202 12:01:08.002904 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7aa564ec-fd60-4e15-a333-00c20608ec39-config-data" (OuterVolumeSpecName: "config-data") pod "7aa564ec-fd60-4e15-a333-00c20608ec39" (UID: "7aa564ec-fd60-4e15-a333-00c20608ec39"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 12:01:08 crc kubenswrapper[4782]: I0202 12:01:08.030208 4782 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7aa564ec-fd60-4e15-a333-00c20608ec39-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 02 12:01:08 crc kubenswrapper[4782]: I0202 12:01:08.030239 4782 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7aa564ec-fd60-4e15-a333-00c20608ec39-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 12:01:08 crc kubenswrapper[4782]: I0202 12:01:08.030250 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-phzft\" (UniqueName: \"kubernetes.io/projected/7aa564ec-fd60-4e15-a333-00c20608ec39-kube-api-access-phzft\") on node \"crc\" DevicePath \"\"" Feb 02 12:01:08 crc kubenswrapper[4782]: I0202 12:01:08.030258 4782 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7aa564ec-fd60-4e15-a333-00c20608ec39-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 12:01:08 crc kubenswrapper[4782]: I0202 12:01:08.310280 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29500561-wlnfp" event={"ID":"7aa564ec-fd60-4e15-a333-00c20608ec39","Type":"ContainerDied","Data":"f26055ff4300e33bae45d01c0e09490da1d2d8ba8a243f0d5e424909fd518bed"} Feb 02 12:01:08 crc kubenswrapper[4782]: I0202 12:01:08.310320 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f26055ff4300e33bae45d01c0e09490da1d2d8ba8a243f0d5e424909fd518bed" Feb 02 12:01:08 crc kubenswrapper[4782]: I0202 12:01:08.310350 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29500561-wlnfp" Feb 02 12:01:15 crc kubenswrapper[4782]: I0202 12:01:15.271340 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-vcnls_9890a2a1-2fba-4553-87eb-0b70bdc93730/cert-manager-controller/0.log" Feb 02 12:01:16 crc kubenswrapper[4782]: I0202 12:01:16.109722 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-9h9rr_d3ae0a8e-231d-4be5-aa1e-ac35dfbabe4a/cert-manager-webhook/0.log" Feb 02 12:01:16 crc kubenswrapper[4782]: I0202 12:01:16.209489 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-jdfqk_49141326-2954-4715-aaa9-86641ac21fa9/cert-manager-cainjector/0.log" Feb 02 12:01:29 crc kubenswrapper[4782]: I0202 12:01:29.626635 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-5zmc7_00048f8e-9669-413d-b215-6a787d5270c0/nmstate-console-plugin/0.log" Feb 02 12:01:29 crc kubenswrapper[4782]: I0202 12:01:29.822530 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-wjctm_3cf88c2a-32c2-4bd3-8832-b480fbfd1afe/nmstate-handler/0.log" Feb 02 12:01:29 crc kubenswrapper[4782]: I0202 12:01:29.940052 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-djhxz_a30862c2-daa1-42d6-8815-aabc8387e789/kube-rbac-proxy/0.log" Feb 02 12:01:29 crc kubenswrapper[4782]: I0202 12:01:29.997879 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-djhxz_a30862c2-daa1-42d6-8815-aabc8387e789/nmstate-metrics/0.log" Feb 02 12:01:30 crc 
Feb 02 12:01:30 crc kubenswrapper[4782]: I0202 12:01:30.071302 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-pfjs6_371da653-9a38-424f-9069-14e251c45e1b/nmstate-operator/0.log" Feb 02 12:01:30 crc kubenswrapper[4782]: I0202 12:01:30.241238 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-jpc2k_cbf5ad9f-00e3-4b3b-b9b3-37b49e909c7a/nmstate-webhook/0.log" Feb 02 12:01:59 crc kubenswrapper[4782]: I0202 12:01:59.408619 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-wxfg2_1d7526eb-b4a4-4ba7-917c-cef512d2dc6a/controller/0.log" Feb 02 12:01:59 crc kubenswrapper[4782]: I0202 12:01:59.411149 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-wxfg2_1d7526eb-b4a4-4ba7-917c-cef512d2dc6a/kube-rbac-proxy/0.log" Feb 02 12:01:59 crc kubenswrapper[4782]: I0202 12:01:59.663803 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-s297l_ef8673fb-6fdf-4c32-a573-3583f4188d97/cp-frr-files/0.log" Feb 02 12:01:59 crc kubenswrapper[4782]: I0202 12:01:59.801102 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-s297l_ef8673fb-6fdf-4c32-a573-3583f4188d97/cp-reloader/0.log" Feb 02 12:01:59 crc kubenswrapper[4782]: I0202 12:01:59.857922 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-s297l_ef8673fb-6fdf-4c32-a573-3583f4188d97/cp-frr-files/0.log" Feb 02 12:01:59 crc kubenswrapper[4782]: I0202 12:01:59.897249 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-s297l_ef8673fb-6fdf-4c32-a573-3583f4188d97/cp-reloader/0.log" Feb 02 12:01:59 crc kubenswrapper[4782]: I0202 12:01:59.919375 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-s297l_ef8673fb-6fdf-4c32-a573-3583f4188d97/cp-metrics/0.log" Feb 02 12:02:00 crc kubenswrapper[4782]: I0202 12:02:00.091276 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-s297l_ef8673fb-6fdf-4c32-a573-3583f4188d97/cp-reloader/0.log" Feb 02 12:02:00 crc kubenswrapper[4782]: I0202 12:02:00.141058 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-s297l_ef8673fb-6fdf-4c32-a573-3583f4188d97/cp-frr-files/0.log" Feb 02 12:02:00 crc kubenswrapper[4782]: I0202 12:02:00.145145 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-s297l_ef8673fb-6fdf-4c32-a573-3583f4188d97/cp-metrics/0.log" Feb 02 12:02:00 crc kubenswrapper[4782]: I0202 12:02:00.145446 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-s297l_ef8673fb-6fdf-4c32-a573-3583f4188d97/cp-metrics/0.log" Feb 02 12:02:00 crc kubenswrapper[4782]: I0202 12:02:00.394022 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-s297l_ef8673fb-6fdf-4c32-a573-3583f4188d97/cp-frr-files/0.log" Feb 02 12:02:00 crc kubenswrapper[4782]: I0202 12:02:00.396357 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-s297l_ef8673fb-6fdf-4c32-a573-3583f4188d97/cp-metrics/0.log" Feb 02 12:02:00 crc kubenswrapper[4782]: I0202 12:02:00.415070 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-s297l_ef8673fb-6fdf-4c32-a573-3583f4188d97/cp-reloader/0.log" Feb 02 12:02:00 crc kubenswrapper[4782]: I0202 12:02:00.436342 4782 log.go:25] "Finished parsing
log file" path="/var/log/pods/metallb-system_frr-k8s-s297l_ef8673fb-6fdf-4c32-a573-3583f4188d97/controller/0.log" Feb 02 12:02:00 crc kubenswrapper[4782]: I0202 12:02:00.576232 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-s297l_ef8673fb-6fdf-4c32-a573-3583f4188d97/frr-metrics/0.log" Feb 02 12:02:00 crc kubenswrapper[4782]: I0202 12:02:00.663707 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-s297l_ef8673fb-6fdf-4c32-a573-3583f4188d97/kube-rbac-proxy-frr/0.log" Feb 02 12:02:00 crc kubenswrapper[4782]: I0202 12:02:00.703681 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-s297l_ef8673fb-6fdf-4c32-a573-3583f4188d97/kube-rbac-proxy/0.log" Feb 02 12:02:00 crc kubenswrapper[4782]: I0202 12:02:00.879515 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-s297l_ef8673fb-6fdf-4c32-a573-3583f4188d97/reloader/0.log" Feb 02 12:02:00 crc kubenswrapper[4782]: I0202 12:02:00.957591 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-8zl72_a3b12ebe-32d3-4d07-b723-64cd83951d38/frr-k8s-webhook-server/0.log" Feb 02 12:02:01 crc kubenswrapper[4782]: I0202 12:02:01.306167 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-75c875dcc7-xxjwm_46c800cc-f0c4-4bb1-9714-0f9e5f904bc9/manager/0.log" Feb 02 12:02:01 crc kubenswrapper[4782]: I0202 12:02:01.521058 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-758b4c4d7b-vvspt_78f09d2d-237b-4474-b4b8-f59f49997e44/webhook-server/0.log" Feb 02 12:02:01 crc kubenswrapper[4782]: I0202 12:02:01.619767 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-w7rg8_7dcb22a8-d257-446a-8264-63b33c40e24a/kube-rbac-proxy/0.log" Feb 02 12:02:02 crc kubenswrapper[4782]: I0202 12:02:02.135777 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-s297l_ef8673fb-6fdf-4c32-a573-3583f4188d97/frr/0.log" Feb 02 12:02:02 crc kubenswrapper[4782]: I0202 12:02:02.304893 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-w7rg8_7dcb22a8-d257-446a-8264-63b33c40e24a/speaker/0.log" Feb 02 12:02:17 crc kubenswrapper[4782]: I0202 12:02:17.532999 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcgnpv6_499d9fd2-e479-4774-ad4b-aaefa3ac9026/util/0.log" Feb 02 12:02:17 crc kubenswrapper[4782]: I0202 12:02:17.774337 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcgnpv6_499d9fd2-e479-4774-ad4b-aaefa3ac9026/pull/0.log" Feb 02 12:02:17 crc kubenswrapper[4782]: I0202 12:02:17.787021 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcgnpv6_499d9fd2-e479-4774-ad4b-aaefa3ac9026/pull/0.log" Feb 02 12:02:17 crc kubenswrapper[4782]: I0202 12:02:17.829430 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcgnpv6_499d9fd2-e479-4774-ad4b-aaefa3ac9026/util/0.log" Feb 02 12:02:18 crc kubenswrapper[4782]: I0202 12:02:18.063875 4782 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcgnpv6_499d9fd2-e479-4774-ad4b-aaefa3ac9026/pull/0.log" Feb 02 12:02:18 crc kubenswrapper[4782]: I0202 12:02:18.078485 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcgnpv6_499d9fd2-e479-4774-ad4b-aaefa3ac9026/util/0.log" Feb 02 12:02:18 crc kubenswrapper[4782]: I0202 12:02:18.153882 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcgnpv6_499d9fd2-e479-4774-ad4b-aaefa3ac9026/extract/0.log" Feb 02 12:02:18 crc kubenswrapper[4782]: I0202 12:02:18.587227 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713l46mq_c86f666c-8701-45f8-a488-85b4052a02db/util/0.log" Feb 02 12:02:18 crc kubenswrapper[4782]: I0202 12:02:18.814262 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713l46mq_c86f666c-8701-45f8-a488-85b4052a02db/util/0.log" Feb 02 12:02:18 crc kubenswrapper[4782]: I0202 12:02:18.859065 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713l46mq_c86f666c-8701-45f8-a488-85b4052a02db/pull/0.log" Feb 02 12:02:18 crc kubenswrapper[4782]: I0202 12:02:18.875366 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713l46mq_c86f666c-8701-45f8-a488-85b4052a02db/pull/0.log" Feb 02 12:02:19 crc kubenswrapper[4782]: I0202 12:02:19.124427 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713l46mq_c86f666c-8701-45f8-a488-85b4052a02db/pull/0.log" Feb 02 12:02:19 crc kubenswrapper[4782]: I0202 12:02:19.141085 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713l46mq_c86f666c-8701-45f8-a488-85b4052a02db/util/0.log" Feb 02 12:02:19 crc kubenswrapper[4782]: I0202 12:02:19.141317 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713l46mq_c86f666c-8701-45f8-a488-85b4052a02db/extract/0.log" Feb 02 12:02:19 crc kubenswrapper[4782]: I0202 12:02:19.419032 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-vnt75_c80d3e09-03c8-40f0-a4dd-474da2b5d31d/extract-utilities/0.log" Feb 02 12:02:19 crc kubenswrapper[4782]: I0202 12:02:19.606031 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-vnt75_c80d3e09-03c8-40f0-a4dd-474da2b5d31d/extract-content/0.log" Feb 02 12:02:19 crc kubenswrapper[4782]: I0202 12:02:19.628947 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-vnt75_c80d3e09-03c8-40f0-a4dd-474da2b5d31d/extract-utilities/0.log" Feb 02 12:02:19 crc kubenswrapper[4782]: I0202 12:02:19.693931 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-vnt75_c80d3e09-03c8-40f0-a4dd-474da2b5d31d/extract-content/0.log" Feb 02 12:02:19 crc kubenswrapper[4782]: I0202 12:02:19.924235 4782 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-vnt75_c80d3e09-03c8-40f0-a4dd-474da2b5d31d/extract-utilities/0.log" Feb 02 12:02:19 crc kubenswrapper[4782]: I0202 12:02:19.926884 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-vnt75_c80d3e09-03c8-40f0-a4dd-474da2b5d31d/extract-content/0.log" Feb 02 12:02:20 crc kubenswrapper[4782]: I0202 12:02:20.213246 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qsk6j_a435172a-875e-47e1-8c17-fad9fe2a0baf/extract-utilities/0.log" Feb 02 12:02:20 crc kubenswrapper[4782]: I0202 12:02:20.624341 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-vnt75_c80d3e09-03c8-40f0-a4dd-474da2b5d31d/registry-server/0.log" Feb 02 12:02:20 crc kubenswrapper[4782]: I0202 12:02:20.631375 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qsk6j_a435172a-875e-47e1-8c17-fad9fe2a0baf/extract-content/0.log" Feb 02 12:02:20 crc kubenswrapper[4782]: I0202 12:02:20.660980 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qsk6j_a435172a-875e-47e1-8c17-fad9fe2a0baf/extract-utilities/0.log" Feb 02 12:02:20 crc kubenswrapper[4782]: I0202 12:02:20.676801 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qsk6j_a435172a-875e-47e1-8c17-fad9fe2a0baf/extract-content/0.log" Feb 02 12:02:20 crc kubenswrapper[4782]: I0202 12:02:20.857758 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qsk6j_a435172a-875e-47e1-8c17-fad9fe2a0baf/extract-utilities/0.log" Feb 02 12:02:20 crc kubenswrapper[4782]: I0202 12:02:20.859312 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qsk6j_a435172a-875e-47e1-8c17-fad9fe2a0baf/extract-content/0.log" Feb 02 12:02:21 crc kubenswrapper[4782]: I0202 12:02:21.146893 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-wv9v8_a044a9d0-6c97-46c4-980a-e5d9940e9f74/marketplace-operator/0.log" Feb 02 12:02:21 crc kubenswrapper[4782]: I0202 12:02:21.364119 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-g864k_9e046c0e-cea4-45b0-8952-1fc5edb01ff5/extract-utilities/0.log" Feb 02 12:02:21 crc kubenswrapper[4782]: I0202 12:02:21.642463 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-g864k_9e046c0e-cea4-45b0-8952-1fc5edb01ff5/extract-content/0.log" Feb 02 12:02:21 crc kubenswrapper[4782]: I0202 12:02:21.691358 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qsk6j_a435172a-875e-47e1-8c17-fad9fe2a0baf/registry-server/0.log" Feb 02 12:02:21 crc kubenswrapper[4782]: I0202 12:02:21.779807 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-g864k_9e046c0e-cea4-45b0-8952-1fc5edb01ff5/extract-utilities/0.log" Feb 02 12:02:21 crc kubenswrapper[4782]: I0202 12:02:21.782856 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-g864k_9e046c0e-cea4-45b0-8952-1fc5edb01ff5/extract-content/0.log" Feb 02 12:02:22 crc kubenswrapper[4782]: I0202 12:02:22.046257 4782 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-g864k_9e046c0e-cea4-45b0-8952-1fc5edb01ff5/extract-utilities/0.log" Feb 02 12:02:22 crc kubenswrapper[4782]: I0202 12:02:22.106739 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-g864k_9e046c0e-cea4-45b0-8952-1fc5edb01ff5/extract-content/0.log" Feb 02 12:02:22 crc kubenswrapper[4782]: I0202 12:02:22.258432 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-g864k_9e046c0e-cea4-45b0-8952-1fc5edb01ff5/registry-server/0.log" Feb 02 12:02:22 crc kubenswrapper[4782]: I0202 12:02:22.380740 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-h2hxh_fe57942f-8b6f-4400-8ed5-6fb054a514bf/extract-utilities/0.log" Feb 02 12:02:22 crc kubenswrapper[4782]: I0202 12:02:22.861219 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-h2hxh_fe57942f-8b6f-4400-8ed5-6fb054a514bf/extract-utilities/0.log" Feb 02 12:02:22 crc kubenswrapper[4782]: I0202 12:02:22.927287 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-h2hxh_fe57942f-8b6f-4400-8ed5-6fb054a514bf/extract-content/0.log" Feb 02 12:02:22 crc kubenswrapper[4782]: I0202 12:02:22.951780 4782 patch_prober.go:28] interesting pod/machine-config-daemon-bhdgk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 12:02:22 crc kubenswrapper[4782]: I0202 12:02:22.951845 4782 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 12:02:22 crc kubenswrapper[4782]: I0202 12:02:22.968454 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-h2hxh_fe57942f-8b6f-4400-8ed5-6fb054a514bf/extract-content/0.log" Feb 02 12:02:23 crc kubenswrapper[4782]: I0202 12:02:23.142464 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-h2hxh_fe57942f-8b6f-4400-8ed5-6fb054a514bf/extract-utilities/0.log" Feb 02 12:02:23 crc kubenswrapper[4782]: I0202 12:02:23.639715 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-h2hxh_fe57942f-8b6f-4400-8ed5-6fb054a514bf/extract-content/0.log" Feb 02 12:02:24 crc kubenswrapper[4782]: I0202 12:02:24.253015 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-h2hxh_fe57942f-8b6f-4400-8ed5-6fb054a514bf/registry-server/0.log" Feb 02 12:02:52 crc kubenswrapper[4782]: I0202 12:02:52.951516 4782 patch_prober.go:28] interesting pod/machine-config-daemon-bhdgk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 12:02:52 crc kubenswrapper[4782]: I0202 12:02:52.952213 4782 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" 
podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 12:03:22 crc kubenswrapper[4782]: I0202 12:03:22.951326 4782 patch_prober.go:28] interesting pod/machine-config-daemon-bhdgk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 12:03:22 crc kubenswrapper[4782]: I0202 12:03:22.952054 4782 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 12:03:22 crc kubenswrapper[4782]: I0202 12:03:22.952113 4782 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" Feb 02 12:03:22 crc kubenswrapper[4782]: I0202 12:03:22.953277 4782 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9febdee74f9ae9910084ea5e3d48133122e8e87f4c97f886874cc0ddea331515"} pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 12:03:22 crc kubenswrapper[4782]: I0202 12:03:22.953444 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" containerName="machine-config-daemon" containerID="cri-o://9febdee74f9ae9910084ea5e3d48133122e8e87f4c97f886874cc0ddea331515" gracePeriod=600 Feb 02 12:03:23 crc kubenswrapper[4782]: I0202 12:03:23.499221 4782 generic.go:334] "Generic (PLEG): container finished" podID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" containerID="9febdee74f9ae9910084ea5e3d48133122e8e87f4c97f886874cc0ddea331515" exitCode=0 Feb 02 12:03:23 crc kubenswrapper[4782]: I0202 12:03:23.499302 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" event={"ID":"7919e98f-cc47-4f3c-9c53-6313850ea7b8","Type":"ContainerDied","Data":"9febdee74f9ae9910084ea5e3d48133122e8e87f4c97f886874cc0ddea331515"} Feb 02 12:03:23 crc kubenswrapper[4782]: I0202 12:03:23.499527 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" event={"ID":"7919e98f-cc47-4f3c-9c53-6313850ea7b8","Type":"ContainerStarted","Data":"379e1dba944d1de9bb8a49c563f91b71b17bb1a0a72b6d17fc6ff4d47dd10640"} Feb 02 12:03:23 crc kubenswrapper[4782]: I0202 12:03:23.499550 4782 scope.go:117] "RemoveContainer" containerID="c4afb04fcac6d851963a75d7989d5b1b2023415817f09615bbb44452a14cc85d" Feb 02 12:03:56 crc kubenswrapper[4782]: I0202 12:03:56.584277 4782 scope.go:117] "RemoveContainer" containerID="7ab2da5b25910e2979891752f1231ad021c201c3354360c45d7159f4ed4df719" Feb 02 12:05:01 crc kubenswrapper[4782]: I0202 12:05:01.864878 4782 generic.go:334] "Generic (PLEG): container finished" podID="eb6e713d-9e8a-4c2b-85e2-32bddf5c51bb" containerID="ba1c4a5ecb3c84062bf28e29a60c32c507ff6975efa3cf7a7cba146d6318eea7" exitCode=0 Feb 02 12:05:01 crc 
Feb 02 12:05:01 crc kubenswrapper[4782]: I0202 12:05:01.865038 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-z9thr/must-gather-nv9p9" event={"ID":"eb6e713d-9e8a-4c2b-85e2-32bddf5c51bb","Type":"ContainerDied","Data":"ba1c4a5ecb3c84062bf28e29a60c32c507ff6975efa3cf7a7cba146d6318eea7"} Feb 02 12:05:01 crc kubenswrapper[4782]: I0202 12:05:01.866037 4782 scope.go:117] "RemoveContainer" containerID="ba1c4a5ecb3c84062bf28e29a60c32c507ff6975efa3cf7a7cba146d6318eea7" Feb 02 12:05:02 crc kubenswrapper[4782]: I0202 12:05:02.899269 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-z9thr_must-gather-nv9p9_eb6e713d-9e8a-4c2b-85e2-32bddf5c51bb/gather/0.log" Feb 02 12:05:16 crc kubenswrapper[4782]: I0202 12:05:16.506274 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-z9thr/must-gather-nv9p9"] Feb 02 12:05:16 crc kubenswrapper[4782]: I0202 12:05:16.506998 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-z9thr/must-gather-nv9p9" podUID="eb6e713d-9e8a-4c2b-85e2-32bddf5c51bb" containerName="copy" containerID="cri-o://a39aad1502d59c58497ab35252a2f69a58bda2d7e59be7d6b6e4b713820a9c05" gracePeriod=2 Feb 02 12:05:16 crc kubenswrapper[4782]: I0202 12:05:16.518273 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-z9thr/must-gather-nv9p9"] Feb 02 12:05:16 crc kubenswrapper[4782]: I0202 12:05:16.986278 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-z9thr_must-gather-nv9p9_eb6e713d-9e8a-4c2b-85e2-32bddf5c51bb/copy/0.log" Feb 02 12:05:16 crc kubenswrapper[4782]: I0202 12:05:16.986992 4782 generic.go:334] "Generic (PLEG): container finished" podID="eb6e713d-9e8a-4c2b-85e2-32bddf5c51bb" containerID="a39aad1502d59c58497ab35252a2f69a58bda2d7e59be7d6b6e4b713820a9c05" exitCode=143 Feb 02 12:05:16 crc kubenswrapper[4782]: I0202 12:05:16.987027 4782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9846aff41ff9c9c8f92d0edfc667cec9c2050ffcb980a3253c381947014a8899" Feb 02 12:05:17 crc kubenswrapper[4782]: I0202 12:05:17.044068 4782 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-z9thr_must-gather-nv9p9_eb6e713d-9e8a-4c2b-85e2-32bddf5c51bb/copy/0.log" Feb 02 12:05:17 crc kubenswrapper[4782]: I0202 12:05:17.045122 4782 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-must-gather-z9thr/must-gather-nv9p9" Feb 02 12:05:17 crc kubenswrapper[4782]: I0202 12:05:17.206665 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2x6hn\" (UniqueName: \"kubernetes.io/projected/eb6e713d-9e8a-4c2b-85e2-32bddf5c51bb-kube-api-access-2x6hn\") pod \"eb6e713d-9e8a-4c2b-85e2-32bddf5c51bb\" (UID: \"eb6e713d-9e8a-4c2b-85e2-32bddf5c51bb\") " Feb 02 12:05:17 crc kubenswrapper[4782]: I0202 12:05:17.206854 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/eb6e713d-9e8a-4c2b-85e2-32bddf5c51bb-must-gather-output\") pod \"eb6e713d-9e8a-4c2b-85e2-32bddf5c51bb\" (UID: \"eb6e713d-9e8a-4c2b-85e2-32bddf5c51bb\") " Feb 02 12:05:17 crc kubenswrapper[4782]: I0202 12:05:17.400692 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eb6e713d-9e8a-4c2b-85e2-32bddf5c51bb-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "eb6e713d-9e8a-4c2b-85e2-32bddf5c51bb" (UID: "eb6e713d-9e8a-4c2b-85e2-32bddf5c51bb"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 12:05:17 crc kubenswrapper[4782]: I0202 12:05:17.412549 4782 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/eb6e713d-9e8a-4c2b-85e2-32bddf5c51bb-must-gather-output\") on node \"crc\" DevicePath \"\"" Feb 02 12:05:17 crc kubenswrapper[4782]: I0202 12:05:17.565512 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb6e713d-9e8a-4c2b-85e2-32bddf5c51bb-kube-api-access-2x6hn" (OuterVolumeSpecName: "kube-api-access-2x6hn") pod "eb6e713d-9e8a-4c2b-85e2-32bddf5c51bb" (UID: "eb6e713d-9e8a-4c2b-85e2-32bddf5c51bb"). InnerVolumeSpecName "kube-api-access-2x6hn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 12:05:17 crc kubenswrapper[4782]: I0202 12:05:17.615197 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2x6hn\" (UniqueName: \"kubernetes.io/projected/eb6e713d-9e8a-4c2b-85e2-32bddf5c51bb-kube-api-access-2x6hn\") on node \"crc\" DevicePath \"\"" Feb 02 12:05:17 crc kubenswrapper[4782]: I0202 12:05:17.994225 4782 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-z9thr/must-gather-nv9p9" Feb 02 12:05:18 crc kubenswrapper[4782]: I0202 12:05:18.832452 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb6e713d-9e8a-4c2b-85e2-32bddf5c51bb" path="/var/lib/kubelet/pods/eb6e713d-9e8a-4c2b-85e2-32bddf5c51bb/volumes" Feb 02 12:05:52 crc kubenswrapper[4782]: I0202 12:05:52.951413 4782 patch_prober.go:28] interesting pod/machine-config-daemon-bhdgk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 12:05:52 crc kubenswrapper[4782]: I0202 12:05:52.952049 4782 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 12:05:57 crc kubenswrapper[4782]: I0202 12:05:57.134799 4782 scope.go:117] "RemoveContainer" containerID="a39aad1502d59c58497ab35252a2f69a58bda2d7e59be7d6b6e4b713820a9c05" Feb 02 12:05:57 crc kubenswrapper[4782]: I0202 12:05:57.166087 4782 scope.go:117] "RemoveContainer" containerID="ba1c4a5ecb3c84062bf28e29a60c32c507ff6975efa3cf7a7cba146d6318eea7" Feb 02 12:06:22 crc kubenswrapper[4782]: I0202 12:06:22.952466 4782 patch_prober.go:28] interesting pod/machine-config-daemon-bhdgk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 12:06:22 crc kubenswrapper[4782]: I0202 12:06:22.953083 4782 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 12:06:52 crc kubenswrapper[4782]: I0202 12:06:52.950944 4782 patch_prober.go:28] interesting pod/machine-config-daemon-bhdgk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 12:06:52 crc kubenswrapper[4782]: I0202 12:06:52.951538 4782 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 12:06:52 crc kubenswrapper[4782]: I0202 12:06:52.951602 4782 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" Feb 02 12:06:52 crc kubenswrapper[4782]: I0202 12:06:52.952805 4782 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"379e1dba944d1de9bb8a49c563f91b71b17bb1a0a72b6d17fc6ff4d47dd10640"} pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" containerMessage="Container machine-config-daemon failed liveness probe, will 
be restarted" Feb 02 12:06:52 crc kubenswrapper[4782]: I0202 12:06:52.952861 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" containerName="machine-config-daemon" containerID="cri-o://379e1dba944d1de9bb8a49c563f91b71b17bb1a0a72b6d17fc6ff4d47dd10640" gracePeriod=600 Feb 02 12:06:53 crc kubenswrapper[4782]: E0202 12:06:53.075072 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 12:06:53 crc kubenswrapper[4782]: I0202 12:06:53.806901 4782 generic.go:334] "Generic (PLEG): container finished" podID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" containerID="379e1dba944d1de9bb8a49c563f91b71b17bb1a0a72b6d17fc6ff4d47dd10640" exitCode=0 Feb 02 12:06:53 crc kubenswrapper[4782]: I0202 12:06:53.807136 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" event={"ID":"7919e98f-cc47-4f3c-9c53-6313850ea7b8","Type":"ContainerDied","Data":"379e1dba944d1de9bb8a49c563f91b71b17bb1a0a72b6d17fc6ff4d47dd10640"} Feb 02 12:06:53 crc kubenswrapper[4782]: I0202 12:06:53.807346 4782 scope.go:117] "RemoveContainer" containerID="9febdee74f9ae9910084ea5e3d48133122e8e87f4c97f886874cc0ddea331515" Feb 02 12:06:53 crc kubenswrapper[4782]: I0202 12:06:53.808092 4782 scope.go:117] "RemoveContainer" containerID="379e1dba944d1de9bb8a49c563f91b71b17bb1a0a72b6d17fc6ff4d47dd10640" Feb 02 12:06:53 crc kubenswrapper[4782]: E0202 12:06:53.808458 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 12:07:06 crc kubenswrapper[4782]: I0202 12:07:06.826042 4782 scope.go:117] "RemoveContainer" containerID="379e1dba944d1de9bb8a49c563f91b71b17bb1a0a72b6d17fc6ff4d47dd10640" Feb 02 12:07:06 crc kubenswrapper[4782]: E0202 12:07:06.826787 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 12:07:20 crc kubenswrapper[4782]: I0202 12:07:20.826360 4782 scope.go:117] "RemoveContainer" containerID="379e1dba944d1de9bb8a49c563f91b71b17bb1a0a72b6d17fc6ff4d47dd10640" Feb 02 12:07:20 crc kubenswrapper[4782]: E0202 12:07:20.827101 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 12:07:33 crc kubenswrapper[4782]: I0202 12:07:33.821171 4782 scope.go:117] "RemoveContainer" containerID="379e1dba944d1de9bb8a49c563f91b71b17bb1a0a72b6d17fc6ff4d47dd10640" Feb 02 12:07:33 crc kubenswrapper[4782]: E0202 12:07:33.822987 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 12:07:44 crc kubenswrapper[4782]: I0202 12:07:44.823754 4782 scope.go:117] "RemoveContainer" containerID="379e1dba944d1de9bb8a49c563f91b71b17bb1a0a72b6d17fc6ff4d47dd10640" Feb 02 12:07:44 crc kubenswrapper[4782]: E0202 12:07:44.826233 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 12:07:57 crc kubenswrapper[4782]: I0202 12:07:57.821516 4782 scope.go:117] "RemoveContainer" containerID="379e1dba944d1de9bb8a49c563f91b71b17bb1a0a72b6d17fc6ff4d47dd10640" Feb 02 12:07:57 crc kubenswrapper[4782]: E0202 12:07:57.822448 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 12:08:12 crc kubenswrapper[4782]: I0202 12:08:12.821970 4782 scope.go:117] "RemoveContainer" containerID="379e1dba944d1de9bb8a49c563f91b71b17bb1a0a72b6d17fc6ff4d47dd10640" Feb 02 12:08:12 crc kubenswrapper[4782]: E0202 12:08:12.823069 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 12:08:27 crc kubenswrapper[4782]: I0202 12:08:27.821076 4782 scope.go:117] "RemoveContainer" containerID="379e1dba944d1de9bb8a49c563f91b71b17bb1a0a72b6d17fc6ff4d47dd10640" Feb 02 12:08:27 crc kubenswrapper[4782]: E0202 12:08:27.822059 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" 
podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 12:08:38 crc kubenswrapper[4782]: I0202 12:08:38.821917 4782 scope.go:117] "RemoveContainer" containerID="379e1dba944d1de9bb8a49c563f91b71b17bb1a0a72b6d17fc6ff4d47dd10640" Feb 02 12:08:38 crc kubenswrapper[4782]: E0202 12:08:38.822699 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 12:08:51 crc kubenswrapper[4782]: I0202 12:08:51.822455 4782 scope.go:117] "RemoveContainer" containerID="379e1dba944d1de9bb8a49c563f91b71b17bb1a0a72b6d17fc6ff4d47dd10640" Feb 02 12:08:51 crc kubenswrapper[4782]: E0202 12:08:51.823346 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 12:08:52 crc kubenswrapper[4782]: I0202 12:08:52.571776 4782 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-97jpf"] Feb 02 12:08:52 crc kubenswrapper[4782]: E0202 12:08:52.572984 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb6e713d-9e8a-4c2b-85e2-32bddf5c51bb" containerName="gather" Feb 02 12:08:52 crc kubenswrapper[4782]: I0202 12:08:52.573021 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb6e713d-9e8a-4c2b-85e2-32bddf5c51bb" containerName="gather" Feb 02 12:08:52 crc kubenswrapper[4782]: E0202 12:08:52.573061 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb6e713d-9e8a-4c2b-85e2-32bddf5c51bb" containerName="copy" Feb 02 12:08:52 crc kubenswrapper[4782]: I0202 12:08:52.573078 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb6e713d-9e8a-4c2b-85e2-32bddf5c51bb" containerName="copy" Feb 02 12:08:52 crc kubenswrapper[4782]: E0202 12:08:52.573141 4782 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7aa564ec-fd60-4e15-a333-00c20608ec39" containerName="keystone-cron" Feb 02 12:08:52 crc kubenswrapper[4782]: I0202 12:08:52.573159 4782 state_mem.go:107] "Deleted CPUSet assignment" podUID="7aa564ec-fd60-4e15-a333-00c20608ec39" containerName="keystone-cron" Feb 02 12:08:52 crc kubenswrapper[4782]: I0202 12:08:52.573604 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb6e713d-9e8a-4c2b-85e2-32bddf5c51bb" containerName="gather" Feb 02 12:08:52 crc kubenswrapper[4782]: I0202 12:08:52.573637 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb6e713d-9e8a-4c2b-85e2-32bddf5c51bb" containerName="copy" Feb 02 12:08:52 crc kubenswrapper[4782]: I0202 12:08:52.573712 4782 memory_manager.go:354] "RemoveStaleState removing state" podUID="7aa564ec-fd60-4e15-a333-00c20608ec39" containerName="keystone-cron" Feb 02 12:08:52 crc kubenswrapper[4782]: I0202 12:08:52.576517 4782 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-97jpf" Feb 02 12:08:52 crc kubenswrapper[4782]: I0202 12:08:52.584354 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-97jpf"] Feb 02 12:08:52 crc kubenswrapper[4782]: I0202 12:08:52.636877 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5920eb34-75f7-406e-ad26-e9b911490db7-utilities\") pod \"redhat-operators-97jpf\" (UID: \"5920eb34-75f7-406e-ad26-e9b911490db7\") " pod="openshift-marketplace/redhat-operators-97jpf" Feb 02 12:08:52 crc kubenswrapper[4782]: I0202 12:08:52.637207 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5920eb34-75f7-406e-ad26-e9b911490db7-catalog-content\") pod \"redhat-operators-97jpf\" (UID: \"5920eb34-75f7-406e-ad26-e9b911490db7\") " pod="openshift-marketplace/redhat-operators-97jpf" Feb 02 12:08:52 crc kubenswrapper[4782]: I0202 12:08:52.637618 4782 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69bs8\" (UniqueName: \"kubernetes.io/projected/5920eb34-75f7-406e-ad26-e9b911490db7-kube-api-access-69bs8\") pod \"redhat-operators-97jpf\" (UID: \"5920eb34-75f7-406e-ad26-e9b911490db7\") " pod="openshift-marketplace/redhat-operators-97jpf" Feb 02 12:08:52 crc kubenswrapper[4782]: I0202 12:08:52.740082 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5920eb34-75f7-406e-ad26-e9b911490db7-catalog-content\") pod \"redhat-operators-97jpf\" (UID: \"5920eb34-75f7-406e-ad26-e9b911490db7\") " pod="openshift-marketplace/redhat-operators-97jpf" Feb 02 12:08:52 crc kubenswrapper[4782]: I0202 12:08:52.740258 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-69bs8\" (UniqueName: \"kubernetes.io/projected/5920eb34-75f7-406e-ad26-e9b911490db7-kube-api-access-69bs8\") pod \"redhat-operators-97jpf\" (UID: \"5920eb34-75f7-406e-ad26-e9b911490db7\") " pod="openshift-marketplace/redhat-operators-97jpf" Feb 02 12:08:52 crc kubenswrapper[4782]: I0202 12:08:52.740346 4782 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5920eb34-75f7-406e-ad26-e9b911490db7-utilities\") pod \"redhat-operators-97jpf\" (UID: \"5920eb34-75f7-406e-ad26-e9b911490db7\") " pod="openshift-marketplace/redhat-operators-97jpf" Feb 02 12:08:52 crc kubenswrapper[4782]: I0202 12:08:52.740859 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5920eb34-75f7-406e-ad26-e9b911490db7-utilities\") pod \"redhat-operators-97jpf\" (UID: \"5920eb34-75f7-406e-ad26-e9b911490db7\") " pod="openshift-marketplace/redhat-operators-97jpf" Feb 02 12:08:52 crc kubenswrapper[4782]: I0202 12:08:52.740854 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5920eb34-75f7-406e-ad26-e9b911490db7-catalog-content\") pod \"redhat-operators-97jpf\" (UID: \"5920eb34-75f7-406e-ad26-e9b911490db7\") " pod="openshift-marketplace/redhat-operators-97jpf" Feb 02 12:08:52 crc kubenswrapper[4782]: I0202 12:08:52.764388 4782 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-69bs8\" (UniqueName: \"kubernetes.io/projected/5920eb34-75f7-406e-ad26-e9b911490db7-kube-api-access-69bs8\") pod \"redhat-operators-97jpf\" (UID: \"5920eb34-75f7-406e-ad26-e9b911490db7\") " pod="openshift-marketplace/redhat-operators-97jpf" Feb 02 12:08:52 crc kubenswrapper[4782]: I0202 12:08:52.919992 4782 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-97jpf" Feb 02 12:08:53 crc kubenswrapper[4782]: I0202 12:08:53.431403 4782 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-97jpf"] Feb 02 12:08:53 crc kubenswrapper[4782]: I0202 12:08:53.942894 4782 generic.go:334] "Generic (PLEG): container finished" podID="5920eb34-75f7-406e-ad26-e9b911490db7" containerID="66eb035ab1e47d22b6b5cefb7371487273faa641ab2559afb4c78d58662002ad" exitCode=0 Feb 02 12:08:53 crc kubenswrapper[4782]: I0202 12:08:53.942942 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-97jpf" event={"ID":"5920eb34-75f7-406e-ad26-e9b911490db7","Type":"ContainerDied","Data":"66eb035ab1e47d22b6b5cefb7371487273faa641ab2559afb4c78d58662002ad"} Feb 02 12:08:53 crc kubenswrapper[4782]: I0202 12:08:53.943200 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-97jpf" event={"ID":"5920eb34-75f7-406e-ad26-e9b911490db7","Type":"ContainerStarted","Data":"296ebc2e681ea3733b9ebca9bc767200dba25960154ba291eb676d4f72f46714"} Feb 02 12:08:53 crc kubenswrapper[4782]: I0202 12:08:53.946698 4782 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 02 12:08:55 crc kubenswrapper[4782]: I0202 12:08:55.960366 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-97jpf" event={"ID":"5920eb34-75f7-406e-ad26-e9b911490db7","Type":"ContainerStarted","Data":"1633dde7d7f336b731938b9611eae9d290d5cb3e8cf7c4e11dd59f6962ad3308"} Feb 02 12:08:59 crc kubenswrapper[4782]: I0202 12:08:59.995241 4782 generic.go:334] "Generic (PLEG): container finished" podID="5920eb34-75f7-406e-ad26-e9b911490db7" containerID="1633dde7d7f336b731938b9611eae9d290d5cb3e8cf7c4e11dd59f6962ad3308" exitCode=0 Feb 02 12:08:59 crc kubenswrapper[4782]: I0202 12:08:59.995319 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-97jpf" event={"ID":"5920eb34-75f7-406e-ad26-e9b911490db7","Type":"ContainerDied","Data":"1633dde7d7f336b731938b9611eae9d290d5cb3e8cf7c4e11dd59f6962ad3308"} Feb 02 12:09:01 crc kubenswrapper[4782]: I0202 12:09:01.298823 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-97jpf" event={"ID":"5920eb34-75f7-406e-ad26-e9b911490db7","Type":"ContainerStarted","Data":"d4600ab11764f055af451ee2ab21eafd3f1ea562b22882b9b1f1d6e92f7b7430"} Feb 02 12:09:01 crc kubenswrapper[4782]: I0202 12:09:01.330749 4782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-97jpf" podStartSLOduration=2.74793639 podStartE2EDuration="9.330702074s" podCreationTimestamp="2026-02-02 12:08:52 +0000 UTC" firstStartedPulling="2026-02-02 12:08:53.946394884 +0000 UTC m=+5413.830587600" lastFinishedPulling="2026-02-02 12:09:00.529160568 +0000 UTC m=+5420.413353284" observedRunningTime="2026-02-02 12:09:01.314909558 +0000 UTC m=+5421.199102274" watchObservedRunningTime="2026-02-02 12:09:01.330702074 +0000 UTC m=+5421.214894780" Feb 02 12:09:02 crc 
Feb 02 12:09:02 crc kubenswrapper[4782]: I0202 12:09:02.920516 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-97jpf" Feb 02 12:09:02 crc kubenswrapper[4782]: I0202 12:09:02.920898 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-97jpf" Feb 02 12:09:03 crc kubenswrapper[4782]: I0202 12:09:03.821578 4782 scope.go:117] "RemoveContainer" containerID="379e1dba944d1de9bb8a49c563f91b71b17bb1a0a72b6d17fc6ff4d47dd10640" Feb 02 12:09:03 crc kubenswrapper[4782]: E0202 12:09:03.822191 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 12:09:03 crc kubenswrapper[4782]: I0202 12:09:03.972260 4782 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-97jpf" podUID="5920eb34-75f7-406e-ad26-e9b911490db7" containerName="registry-server" probeResult="failure" output=< Feb 02 12:09:03 crc kubenswrapper[4782]: timeout: failed to connect service ":50051" within 1s Feb 02 12:09:03 crc kubenswrapper[4782]: > Feb 02 12:09:12 crc kubenswrapper[4782]: I0202 12:09:12.967851 4782 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-97jpf" Feb 02 12:09:13 crc kubenswrapper[4782]: I0202 12:09:13.017941 4782 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-97jpf" Feb 02 12:09:13 crc kubenswrapper[4782]: I0202 12:09:13.214235 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-97jpf"] Feb 02 12:09:14 crc kubenswrapper[4782]: I0202 12:09:14.440429 4782 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-97jpf" podUID="5920eb34-75f7-406e-ad26-e9b911490db7" containerName="registry-server" containerID="cri-o://d4600ab11764f055af451ee2ab21eafd3f1ea562b22882b9b1f1d6e92f7b7430" gracePeriod=2 Feb 02 12:09:14 crc kubenswrapper[4782]: I0202 12:09:14.918676 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-97jpf"
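The startup-probe failure above (`timeout: failed to connect service ":50051" within 1s`) is the registry-server's gRPC port not yet accepting connections; nine seconds later the startup and readiness probes flip to healthy, and the catalog pod is then deleted again almost immediately. A stdlib sketch of the reachability half of that check; the real probe looks like a gRPC health RPC (the message resembles grpc_health_probe output, which is an assumption here), so a plain TCP dial is only an approximation:

```go
package main

import (
	"fmt"
	"net"
	"time"
)

// tcpReady approximates the ":50051 within 1s" check from the log by
// verifying the port accepts connections. The actual probe appears to be
// a gRPC health RPC; plain TCP reachability is a simplification.
func tcpReady(addr string, timeout time.Duration) error {
	conn, err := net.DialTimeout("tcp", addr, timeout)
	if err != nil {
		return fmt.Errorf("failed to connect service %q within %v: %w", addr, timeout, err)
	}
	return conn.Close()
}

func main() {
	if err := tcpReady("127.0.0.1:50051", time.Second); err != nil {
		fmt.Println("startup probe:", err)
	}
}
```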
Need to start a new one" pod="openshift-marketplace/redhat-operators-97jpf" Feb 02 12:09:15 crc kubenswrapper[4782]: I0202 12:09:15.070143 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-69bs8\" (UniqueName: \"kubernetes.io/projected/5920eb34-75f7-406e-ad26-e9b911490db7-kube-api-access-69bs8\") pod \"5920eb34-75f7-406e-ad26-e9b911490db7\" (UID: \"5920eb34-75f7-406e-ad26-e9b911490db7\") " Feb 02 12:09:15 crc kubenswrapper[4782]: I0202 12:09:15.070288 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5920eb34-75f7-406e-ad26-e9b911490db7-catalog-content\") pod \"5920eb34-75f7-406e-ad26-e9b911490db7\" (UID: \"5920eb34-75f7-406e-ad26-e9b911490db7\") " Feb 02 12:09:15 crc kubenswrapper[4782]: I0202 12:09:15.070320 4782 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5920eb34-75f7-406e-ad26-e9b911490db7-utilities\") pod \"5920eb34-75f7-406e-ad26-e9b911490db7\" (UID: \"5920eb34-75f7-406e-ad26-e9b911490db7\") " Feb 02 12:09:15 crc kubenswrapper[4782]: I0202 12:09:15.071450 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5920eb34-75f7-406e-ad26-e9b911490db7-utilities" (OuterVolumeSpecName: "utilities") pod "5920eb34-75f7-406e-ad26-e9b911490db7" (UID: "5920eb34-75f7-406e-ad26-e9b911490db7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 12:09:15 crc kubenswrapper[4782]: I0202 12:09:15.078435 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5920eb34-75f7-406e-ad26-e9b911490db7-kube-api-access-69bs8" (OuterVolumeSpecName: "kube-api-access-69bs8") pod "5920eb34-75f7-406e-ad26-e9b911490db7" (UID: "5920eb34-75f7-406e-ad26-e9b911490db7"). InnerVolumeSpecName "kube-api-access-69bs8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 12:09:15 crc kubenswrapper[4782]: I0202 12:09:15.172922 4782 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5920eb34-75f7-406e-ad26-e9b911490db7-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 12:09:15 crc kubenswrapper[4782]: I0202 12:09:15.172957 4782 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-69bs8\" (UniqueName: \"kubernetes.io/projected/5920eb34-75f7-406e-ad26-e9b911490db7-kube-api-access-69bs8\") on node \"crc\" DevicePath \"\"" Feb 02 12:09:15 crc kubenswrapper[4782]: I0202 12:09:15.198295 4782 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5920eb34-75f7-406e-ad26-e9b911490db7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5920eb34-75f7-406e-ad26-e9b911490db7" (UID: "5920eb34-75f7-406e-ad26-e9b911490db7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 12:09:15 crc kubenswrapper[4782]: I0202 12:09:15.275278 4782 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5920eb34-75f7-406e-ad26-e9b911490db7-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 12:09:15 crc kubenswrapper[4782]: I0202 12:09:15.451193 4782 generic.go:334] "Generic (PLEG): container finished" podID="5920eb34-75f7-406e-ad26-e9b911490db7" containerID="d4600ab11764f055af451ee2ab21eafd3f1ea562b22882b9b1f1d6e92f7b7430" exitCode=0 Feb 02 12:09:15 crc kubenswrapper[4782]: I0202 12:09:15.451292 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-97jpf" event={"ID":"5920eb34-75f7-406e-ad26-e9b911490db7","Type":"ContainerDied","Data":"d4600ab11764f055af451ee2ab21eafd3f1ea562b22882b9b1f1d6e92f7b7430"} Feb 02 12:09:15 crc kubenswrapper[4782]: I0202 12:09:15.451347 4782 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-97jpf" event={"ID":"5920eb34-75f7-406e-ad26-e9b911490db7","Type":"ContainerDied","Data":"296ebc2e681ea3733b9ebca9bc767200dba25960154ba291eb676d4f72f46714"} Feb 02 12:09:15 crc kubenswrapper[4782]: I0202 12:09:15.451373 4782 scope.go:117] "RemoveContainer" containerID="d4600ab11764f055af451ee2ab21eafd3f1ea562b22882b9b1f1d6e92f7b7430" Feb 02 12:09:15 crc kubenswrapper[4782]: I0202 12:09:15.451595 4782 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-97jpf" Feb 02 12:09:15 crc kubenswrapper[4782]: I0202 12:09:15.477891 4782 scope.go:117] "RemoveContainer" containerID="1633dde7d7f336b731938b9611eae9d290d5cb3e8cf7c4e11dd59f6962ad3308" Feb 02 12:09:15 crc kubenswrapper[4782]: I0202 12:09:15.498695 4782 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-97jpf"] Feb 02 12:09:15 crc kubenswrapper[4782]: I0202 12:09:15.512387 4782 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-97jpf"] Feb 02 12:09:15 crc kubenswrapper[4782]: I0202 12:09:15.513505 4782 scope.go:117] "RemoveContainer" containerID="66eb035ab1e47d22b6b5cefb7371487273faa641ab2559afb4c78d58662002ad" Feb 02 12:09:15 crc kubenswrapper[4782]: I0202 12:09:15.579631 4782 scope.go:117] "RemoveContainer" containerID="d4600ab11764f055af451ee2ab21eafd3f1ea562b22882b9b1f1d6e92f7b7430" Feb 02 12:09:15 crc kubenswrapper[4782]: E0202 12:09:15.580422 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d4600ab11764f055af451ee2ab21eafd3f1ea562b22882b9b1f1d6e92f7b7430\": container with ID starting with d4600ab11764f055af451ee2ab21eafd3f1ea562b22882b9b1f1d6e92f7b7430 not found: ID does not exist" containerID="d4600ab11764f055af451ee2ab21eafd3f1ea562b22882b9b1f1d6e92f7b7430" Feb 02 12:09:15 crc kubenswrapper[4782]: I0202 12:09:15.580471 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d4600ab11764f055af451ee2ab21eafd3f1ea562b22882b9b1f1d6e92f7b7430"} err="failed to get container status \"d4600ab11764f055af451ee2ab21eafd3f1ea562b22882b9b1f1d6e92f7b7430\": rpc error: code = NotFound desc = could not find container \"d4600ab11764f055af451ee2ab21eafd3f1ea562b22882b9b1f1d6e92f7b7430\": container with ID starting with d4600ab11764f055af451ee2ab21eafd3f1ea562b22882b9b1f1d6e92f7b7430 not found: ID does not exist" Feb 02 12:09:15 crc 
kubenswrapper[4782]: I0202 12:09:15.580505 4782 scope.go:117] "RemoveContainer" containerID="1633dde7d7f336b731938b9611eae9d290d5cb3e8cf7c4e11dd59f6962ad3308" Feb 02 12:09:15 crc kubenswrapper[4782]: E0202 12:09:15.580933 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1633dde7d7f336b731938b9611eae9d290d5cb3e8cf7c4e11dd59f6962ad3308\": container with ID starting with 1633dde7d7f336b731938b9611eae9d290d5cb3e8cf7c4e11dd59f6962ad3308 not found: ID does not exist" containerID="1633dde7d7f336b731938b9611eae9d290d5cb3e8cf7c4e11dd59f6962ad3308" Feb 02 12:09:15 crc kubenswrapper[4782]: I0202 12:09:15.580955 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1633dde7d7f336b731938b9611eae9d290d5cb3e8cf7c4e11dd59f6962ad3308"} err="failed to get container status \"1633dde7d7f336b731938b9611eae9d290d5cb3e8cf7c4e11dd59f6962ad3308\": rpc error: code = NotFound desc = could not find container \"1633dde7d7f336b731938b9611eae9d290d5cb3e8cf7c4e11dd59f6962ad3308\": container with ID starting with 1633dde7d7f336b731938b9611eae9d290d5cb3e8cf7c4e11dd59f6962ad3308 not found: ID does not exist" Feb 02 12:09:15 crc kubenswrapper[4782]: I0202 12:09:15.580972 4782 scope.go:117] "RemoveContainer" containerID="66eb035ab1e47d22b6b5cefb7371487273faa641ab2559afb4c78d58662002ad" Feb 02 12:09:15 crc kubenswrapper[4782]: E0202 12:09:15.581284 4782 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"66eb035ab1e47d22b6b5cefb7371487273faa641ab2559afb4c78d58662002ad\": container with ID starting with 66eb035ab1e47d22b6b5cefb7371487273faa641ab2559afb4c78d58662002ad not found: ID does not exist" containerID="66eb035ab1e47d22b6b5cefb7371487273faa641ab2559afb4c78d58662002ad" Feb 02 12:09:15 crc kubenswrapper[4782]: I0202 12:09:15.581306 4782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"66eb035ab1e47d22b6b5cefb7371487273faa641ab2559afb4c78d58662002ad"} err="failed to get container status \"66eb035ab1e47d22b6b5cefb7371487273faa641ab2559afb4c78d58662002ad\": rpc error: code = NotFound desc = could not find container \"66eb035ab1e47d22b6b5cefb7371487273faa641ab2559afb4c78d58662002ad\": container with ID starting with 66eb035ab1e47d22b6b5cefb7371487273faa641ab2559afb4c78d58662002ad not found: ID does not exist" Feb 02 12:09:15 crc kubenswrapper[4782]: I0202 12:09:15.822197 4782 scope.go:117] "RemoveContainer" containerID="379e1dba944d1de9bb8a49c563f91b71b17bb1a0a72b6d17fc6ff4d47dd10640" Feb 02 12:09:15 crc kubenswrapper[4782]: E0202 12:09:15.823208 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8" Feb 02 12:09:16 crc kubenswrapper[4782]: I0202 12:09:16.834304 4782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5920eb34-75f7-406e-ad26-e9b911490db7" path="/var/lib/kubelet/pods/5920eb34-75f7-406e-ad26-e9b911490db7/volumes" Feb 02 12:09:30 crc kubenswrapper[4782]: I0202 12:09:30.828815 4782 scope.go:117] "RemoveContainer" containerID="379e1dba944d1de9bb8a49c563f91b71b17bb1a0a72b6d17fc6ff4d47dd10640" 
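The `ContainerStatus from runtime service failed` / `DeleteContainer returned error` pairs above are benign: kubelet is re-checking containers it has already removed, and a NotFound from the runtime means the desired state already holds. A sketch of that treat-NotFound-as-done pattern; `cleanupContainer` and its `removeContainer` callback are hypothetical stand-ins for the CRI call, and the example assumes the google.golang.org/grpc module is available:

```go
package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// cleanupContainer treats NotFound from the runtime as success: the
// container is already gone, which is exactly the state we wanted.
// removeContainer is a hypothetical stand-in for the CRI RemoveContainer RPC.
func cleanupContainer(id string, removeContainer func(string) error) error {
	if err := removeContainer(id); err != nil {
		if status.Code(err) == codes.NotFound {
			return nil // already removed; nothing left to do
		}
		return fmt.Errorf("removing container %s: %w", id, err)
	}
	return nil
}

func main() {
	alreadyGone := func(id string) error {
		return status.Errorf(codes.NotFound, "could not find container %q", id)
	}
	fmt.Println(cleanupContainer("d4600ab11764", alreadyGone)) // <nil>
}
```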
Feb 02 12:09:30 crc kubenswrapper[4782]: E0202 12:09:30.829460 4782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bhdgk_openshift-machine-config-operator(7919e98f-cc47-4f3c-9c53-6313850ea7b8)\"" pod="openshift-machine-config-operator/machine-config-daemon-bhdgk" podUID="7919e98f-cc47-4f3c-9c53-6313850ea7b8"
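This tail of the log is one long CrashLoopBackOff cycle for machine-config-daemon: each `RemoveContainer` / `Error syncing pod` pair is the sync loop re-evaluating the pod while the restart backoff holds, and `back-off 5m0s` says the per-container backoff has reached its cap. A sketch of the commonly documented policy (10s base, doubling each consecutive crash, 5m cap); the exact constants are assumed from kubelet's documented behavior, not read from this log:

```go
package main

import (
	"fmt"
	"time"
)

// restartDelay models the commonly documented kubelet restart backoff:
// 10s base, doubling per consecutive crash, capped at 5 minutes. A sketch
// of the policy, not kubelet's actual backoff implementation.
func restartDelay(consecutiveCrashes int) time.Duration {
	const base, maxDelay = 10 * time.Second, 5 * time.Minute
	d := base
	for i := 0; i < consecutiveCrashes && d < maxDelay; i++ {
		d *= 2
	}
	if d > maxDelay {
		d = maxDelay
	}
	return d
}

func main() {
	for n := 0; n <= 6; n++ {
		fmt.Printf("crash %d -> wait %v\n", n, restartDelay(n))
	}
	// 10s, 20s, 40s, 1m20s, 2m40s, 5m0s, 5m0s, matching "back-off 5m0s" above
}
```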